Last time, in D for Detectors, we looked at some of the applications of physics that you might encounter during your day. In this post, we’re going to find out more about something I can guarantee you use almost every day- electricity!
A reliable source of electricity is something we could easily take for granted, but how does it actually work? Well, surprise surprise, it all comes back to our friend- the humble atom. More specifically, electricity is the flow of electrons – remember them?
Electrons are the tiny negatively charged subatomic particles which whizz around the outside of the nucleus. In some materials, such as copper, the outer electrons break off quite easily and their movement through the material (a copper wire for example) creates an electric current.
Of course, not all materials can conduct electricity. I’m sure you remember the experiment in school where you tried to complete a circuit with different materials- they didn’t all work. Rubber, for example, holds onto its electrons pretty tightly so they can’t easily flow. This means rubber can’t conduct electricity.
Now back to that circuit experiment you may have tried. Your copper wires alone aren’t enough to light the bulb- you also need a source of power, like a battery. The battery is a source of “pushing power” to move the electrons along. The official name for this is the electromotive force (EMF) but it’s more commonly known as voltage.
One way of thinking about electricity is a bit like a Mexican wave in a stadium. It usually takes a few people acting together to get it started- that’s the battery, and then the wave (or energy) transfers through each person and onto the next. The moving wave is a bit like the moving electrons.
Time for a fun fact now- did you know that the band AC/DC can actually teach us more about electricity? Their name came from seeing a symbol on their sister’s electric sewing machine. The symbol stood for Alternating Current/Direct Current and meant that the machine could work with either type of electrical current.
Direct Current is best explained by the Mexican wave analogy used above. All the electrons move in the same direction. This type of current is used in toys and small gadgets. Larger machines tend to require Alternating Current.
The electrons forming an Alternating Current reverse direction many times a second- twice per cycle, so 100 to 120 reversals a second for the 50 or 60 Hz mains supplies used around the world. It’s a bit difficult to imagine how that could create a current, but remember it’s all about the transfer of energy. A power source (for mains electricity, a generator rather than a battery) still provides the push, but these electrons don’t run in a straight line- they run on the spot instead. It doesn’t work in quite the same way as Direct Current but it still requires loose electrons and they still need energy.
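If you’d like to see that back-and-forth in numbers, here’s a quick Python sketch of an idealised AC supply. The 50 Hz frequency and 325 V peak (roughly UK mains, 230 V RMS) are illustrative values I’ve picked, not properties of every AC supply:

```python
import math

def ac_voltage(t, freq_hz=50.0, peak_volts=325.0):
    """Instantaneous voltage of an idealised AC supply at time t (seconds).

    50 Hz and 325 V peak are illustrative mains-like values.
    """
    return peak_volts * math.sin(2 * math.pi * freq_hz * t)

# At 50 Hz, one full back-and-forth cycle takes 1/50 s = 20 ms,
# so the push on the electrons reverses every 10 ms.
quarter = 1 / (4 * 50)            # 5 ms: sine wave at its positive peak
print(ac_voltage(quarter))        # close to +325 V
print(ac_voltage(3 * quarter))    # close to -325 V: electrons pushed the other way
```

The sign flipping between the two printed values is the whole story of AC: same wire, same electrons, push alternating in direction.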
See you next time for a convenient start of summer post- F for Flying!
In this post, we will learn a little more about the world’s largest particle accelerator, the Large Hadron Collider (LHC).
Do you remember a few years ago when there was a lot of fuss about a black hole being created? I know there were a few nervous questions fired at our unfortunate science teacher that day.
The reason for all that fuss is situated as much as 175 metres underground, on the border between France and Switzerland. The LHC is a circular tunnel with a circumference of 27 kilometres, where scientists and engineers are working to solve some of the big unanswered questions in physics.
But how does this machine work, and how is it going to help?
Well, as the name suggests, colliding is a pretty big part of the whole idea. The machine usually uses protons (the positive subatomic particles)- remember them? If not, we’ve already covered a bit about what protons are. It can also sometimes use whole nuclei of lead atoms instead.
Naturally, it’s not as easy as just throwing them at each other and observing what happens. The particles are so small that the chance of getting a successful collision is described by the creators themselves as about as likely as “firing two needles 10 kilometres apart with such precision that they meet halfway.”
Hmmmm, not easy stuff then.
To add to the complications, the particles must be going at almost the speed of light to have enough impact when (or if) they do collide for anything to happen. The Large Hadron Collider is designed to help speed up the particles using thousands of seriously strong magnets. These magnets actually have a pretty impressive name, officially they are superconducting electromagnets. There’s your dinner party lingo for the day!
With all that whizzing around and accelerating you’d imagine things are getting pretty hot down there, right? Well, no. Another complication arises. In order for the electromagnets to work they must be kept at -271.3 °C. That’s colder than outer space! In order to achieve this, a complicated cooling system is in place which uses liquid helium to keep things chilly.
I’m beginning to understand why this project is such a big deal.
But what is it all for?
Well, sometimes science for science’s sake is a good enough reason to conduct an experiment. However, when setting up that experiment costs an estimated £6.2 billion and involves over 10,000 scientists and engineers in an international collaboration, you need a slightly better excuse.
The team at the European Organisation for Nuclear Research (CERN) have certainly got more than one decent reason for this mammoth undertaking. They hope to answer some fundamental questions about the structure of space and time, to better understand forces which are part of our lives every day and even to discover brand new particles. In July 2012 the team announced the discovery of the Higgs boson, a particle which will now be studied intensively to help answer some of these big questions.
The Large Hadron Collider is at the forefront of some of the most profound scientific discoveries of our time and we should certainly stay tuned for more exciting discoveries. If you’re interested in finding out more visit the CERN website which even includes a virtual tour of the tunnel itself.
In this post, we will look at the basic structure of the atom. But first, what are atoms?
Atoms are the building blocks of the world- think of Lego. If the whole world was made of Lego an atom is that tiny single square block. Imagine how many of those tiny blocks would be needed to build the whole world and all the people, animals and stuff inside it… That’s a lot of Lego, and there are a lot of atoms. A single grain of sand contains millions of these tiny particles.
For a long time atoms were thought to be the smallest piece of the puzzle. Then in 1897 a scientist named J.J. Thomson identified an even smaller particle which helps to make an atom.
Thomson made the discovery when he was experimenting with mysterious beams of particles called cathode rays. By firing cathode rays through electric and magnetic fields and measuring how the beams’ paths bent, Thomson realised that the rays were made of tiny, negatively charged particles – around 1/2000th the mass of a hydrogen atom. He named the particles ‘corpuscles’, but we know them today as electrons. But Thomson’s discovery doesn’t tell the whole story of what we came to know about atomic structure.
In fact, we have another scientist to thank for that. In 1909, the New Zealand-born physicist Ernest Rutherford fired positively charged radioactive particles at a thin sheet of gold foil, and measured the different paths they took. He was testing J.J. Thomson’s ‘plum pudding’ model, which proposed that atoms were made up of electrons sitting happily inside a positive sphere that held them together.
Rutherford noticed that most of the particles passed straight through the atoms, but a tiny proportion were deflected back. That meant that instead of being plum puddings, each atom was made up of a small positive nucleus, surrounded by orbiting electrons, with lots of empty space between them. And so, the basic model for an atom was born!
A typical illustration of an atom will show a ball in the middle surrounded by orbits- but what is going on in there? The ball in the middle is the nucleus which Rutherford discovered, and inside the nucleus are protons and neutrons. The things whizzing round the outside are the electrons. Protons, neutrons and electrons are known as sub-atomic particles- now that’s an impressive dinner party phrase.
It might look like electrons are in a messy, complicated cloud but they are actually very precisely arranged. Around each nucleus are different shells, or energy levels, which have space for a different number of electrons.
The very first energy level around the nucleus can only hold 2 electrons. In an atom of the element Helium both of the spaces in the first energy level are filled by an electron. So, using Helium as an example, what else is in there? To work that out you should know that it is really important for an atom to be balanced. Each electron carries a negative charge so to balance Helium we now need two positive charges. Fortunately, protons carry a positive charge each. So we’ve got two negative electrons whizzing around the outside and two positive protons snug in the nucleus, and the charges are balanced. So, are we finished?
Almost, don’t forget the third component, neutrons. Luckily, they are- you guessed it- neutral, so it’s okay for the number of them to vary between different forms of the same element. Most of the time, Helium has two neutrons and with two of everything it is nicely balanced and known as stable.
Helium is a nice example with small numbers, but not all elements are quite so compact. Take Uranium, for example: with 92 protons, it needs a matching 92 electrons (and a lot of neutrons) to balance.
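The balancing act above is really just counting. Here’s a tiny Python sketch of the idea- protons count +1 each, electrons -1 each, and neutrons don’t count at all:

```python
def net_charge(protons, electrons):
    """Each proton contributes +1, each electron -1; neutrons contribute 0."""
    return protons - electrons

# Helium: 2 protons, 2 electrons -> balanced, a neutral atom
print(net_charge(2, 2))    # 0

# Uranium needs all 92 electrons to balance its 92 protons
print(net_charge(92, 92))  # 0

# Strip an electron away and you get a charged ion, not a neutral atom
print(net_charge(2, 1))    # 1
```

Any result other than zero means the atom is no longer neutral- it has become an ion.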
That’s a very basic introduction to atoms- and for sticking with it, you’ve earned yourself another terrible joke! What a treat…
A neutron walked into a bar and asked for a drink.
“How much?” asked the neutron.
The bartender replied, “For you, no charge!”
Hopefully, that joke will make a bit more sense now you know your stuff about subatomic particles and their charges.
Quick, without looking it up: how many elements are there on the periodic table?
If I had asked that question before the first hydrogen bomb exploded in 1952, the answer would have been 98. In that year, humans succeeded in synthesizing the first element that the crucibles of stars and supernovae hadn’t supplied to Earth: Einsteinium.
Since then, we’ve been busy bees, building bigger atoms by smashing protons and neutrons together. 63 years after the first atoms of Einsteinium were found in fallout near an atoll in the Pacific, there are, according to the International Union of Pure and Applied Chemistry (IUPAC), 114 official, named elements. Those 16 additional elements were not easy to make, but we’re far from done.
I want to tell you the story of the outer reaches of the periodic table. The tale involves magic (no, seriously… there’s an important concept called magic numbers) and a legendary island in the midst of a terribly unstable sea (again, not just metaphors here… chemists have theorized about an island of stability that lies in a sea of instability), but the edge of the chemical world is dark and full of terrors. Before making our way to the brink, I need to prepare you with the tools you need to wade out into the sea of instability to find the island of stability.
So what is an “element” anyways?
Atoms, the basic building blocks of matter, are made up of electrons whizzing around in clouds around central nuclei. A nucleus is made of positively charged protons and neutrons without a charge. An atom is said to be a particular element because of the number of protons it has. If an atom has one proton, it will be called hydrogen. A hydrogen with extra neutrons or electrons will still be hydrogen, but as soon as another proton is introduced, you’ve gone and made yourself a helium.
In that sense, atoms are just like me with breakfast: change up the cereal or the fruit but touch the coffee and I turn into a whole different person.
At this point you might be asking yourself what the point of neutrons or electrons is if they have no effect on the name of an atom. The utility of electrons is pretty obvious: the tiny, whizzing balls of negative charge allow atoms to bind together and, because atoms are mostly empty space, their mutual repulsion is what gives matter the illusion of being solid.
Neutrons and Why We Need Them
The role of neutrons is a little bit less obvious. They have almost all the same properties as protons (similar mass, same size, made up of quarks) but lack a charge. That similarity means they feel the strong nuclear force, just like protons, while their lack of charge means they escape the electromagnetic force. The strong force acts only at very short distances and, like its name suggests, is very strong. The electromagnetic force, as Paula Abdul suggested in the 80s, pushes like charges apart and pulls opposite charges together.
That means protons have a problem if they want to live together in a nucleus. Protons are by definition positively charged and would be repelled by each other if it were up to the electromagnetic force alone. This is where neutrons come in.
Neutrons act like nuclear glue: they exert extra strong nuclear force pressure to keep protons together without any electromagnetic effects. Small nuclei don’t need much glue: Helium has 2 protons, 2 neutrons; Lithium has 3 protons and 4 neutrons. Bigger nuclei need a lot more glue (e.g. gold – 79 protons, 118 neutrons, lead – 82 protons, 126 neutrons).
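To see the “more glue” trend in numbers, here’s a quick Python sketch using the proton and neutron counts quoted above. The ratio of neutrons to protons creeps up as nuclei get bigger:

```python
# (protons, neutrons) for the nuclei mentioned above
nuclei = {
    "helium":  (2, 2),
    "lithium": (3, 4),
    "gold":    (79, 118),
    "lead":    (82, 126),
}

for name, (p, n) in nuclei.items():
    # neutrons-per-proton: the "glue" ratio
    print(f"{name}: {n / p:.2f} neutrons per proton")
```

Helium gets by with one neutron per proton, while lead needs roughly one and a half- more glue for a bigger, more crowded nucleus.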
Charting the Nuclear Waters
The question soon became: how big can we go?
It has long been known that any element with more than 82 protons (anything past lead on the periodic table) will be inherently unstable. It will decay radioactively by shedding protons and neutrons until a stable configuration is reached.
Radioactive elements are still elements, though. They just don’t stick around for as long. Typically, heavier atoms are less stable. Just ask Livermorium, whose atoms have a half-life of only 60 milliseconds.
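What does a 60-millisecond half-life actually mean? Here’s a small sketch using standard half-life arithmetic (nothing Livermorium-specific about the maths, only the 60 ms figure quoted above):

```python
def fraction_remaining(elapsed_ms, half_life_ms=60.0):
    """Fraction of a radioactive sample left after `elapsed_ms` milliseconds.

    60 ms is the Livermorium half-life quoted above.
    """
    return 0.5 ** (elapsed_ms / half_life_ms)

print(fraction_remaining(60))    # 0.5 -> half the atoms gone after one half-life
print(fraction_remaining(600))   # after 0.6 s (ten half-lives), about 0.1% left
```

Ten half-lives is only six tenths of a second, and by then 999 out of every 1024 atoms have already decayed.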
If you start to graph the stability of atoms according to their number of protons and neutrons, it quickly becomes apparent that larger nuclei need proportionally more neutrons to be stable.
Another trend that scientists noticed is that there appear to be particular numbers of neutrons or protons that make for unusually stable atoms. Those numbers, as of 2007, are 2, 8, 20, 28, 50, 82, and 126. They have been dubbed “magic numbers”. Atoms with a magic number of protons and a magic number of neutrons, like Helium (2 and 2) or Calcium (20 and 20), are said to be “double magic”.
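The “double magic” check is simple enough to write down. Here’s a minimal Python sketch using the magic numbers listed above:

```python
MAGIC_NUMBERS = {2, 8, 20, 28, 50, 82, 126}

def is_double_magic(protons, neutrons):
    """An atom is double magic when both counts are magic numbers."""
    return protons in MAGIC_NUMBERS and neutrons in MAGIC_NUMBERS

print(is_double_magic(2, 2))     # True  -> Helium-4
print(is_double_magic(20, 20))   # True  -> Calcium-40
print(is_double_magic(82, 126))  # True  -> Lead-208
print(is_double_magic(3, 4))     # False -> Lithium-7, perfectly stable but not magic
```

Note the Lithium example: plenty of nuclei are stable without being magic- the magic numbers just mark the extra-stable configurations.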
The Island of Stability
In the 60s, it was suggested that beyond the current range of the periodic table lies a set of theoretical atoms that could be very large and very stable. With a “magic” number of protons and neutrons, the atomic components could be arranged in just such a way as to maximize the glueyness of neutrons and spread out the repulsion of protons.
The metaphor was so vivid that it soon became adopted and has been used ever since. Scientists even talk of landing on the shores of the island, but its oases still lie undiscovered and unspoilt.
Before they were confirmed to be synthesized in the lab, the edges of the periodic table were given temporary names according to a set of naming conventions. These rules are a strange hybrid of Greek and Latin roots for the element’s number.
The most recent elements to make the jump from Latin-Greek hybrid to English names, and the latest official additions to the periodic table, were Flerovium (element 114, previously ununquadium – quad is Latin, tetra would be Greek) and Livermorium (element 116, previously ununhexium – hex is Greek, sex would be Latin).
The synthesis of element 117, Ununheptium, was announced in 2014, but IUPAC is still reviewing the findings. The chemistry world continues its search for Ununoctium and speculation about its properties varies from unusually stable to unusually reactive.
One thing is for certain: when it is synthesized, it won’t last long.
We all know that CO2 emissions are warming the planet. Or at least, most of us do. What often goes unreported is the effect of carbon dioxide on the world’s oceans. A lot of the CO2 that we pump into the air makes its way into the water and is making it more and more difficult for shelled creatures like sea urchins, lobsters, and coral to survive.
In order to understand why this happens, we need to go back to secondary school chemistry.
Don’t worry, I’ll make sure Jared doesn’t pick on you.
The first lesson we need to recall is about acids. What is an acid?
Acids are compounds that have free hydrogen ions floating around. These hydrogen ions are quite reactive, so the more free hydrogen you have floating around, the more reactive your compound. Acidity is usually measured in pH, which stands for the “power of hydrogen”. pH is measured on a scale (creatively named the “pH scale”) that ranges from 0 to 14.
Compounds that get a 0 on the scale are exceedingly acidic, meaning they have an abundance of free-floating hydrogen ions to give away. Compounds that rate 7 are perfectly neutral, like distilled water. Compounds on the other end, near 14, are called “basic” or “alkaline” and instead of having lots of hydrogen ions to give away, they have all sorts of space for hydrogen ions. This makes them reactive because they can strip hydrogen from things that don’t usually want to give it away (like Edward Norton’s hand in Fight Club).
The other confusing bit to remember is that the pH scale is logarithmic, meaning that each number you jump actually indicates a multiplication by 10. For example, something with pH 3 (like soda) is 100 times more acidic than something with pH 5 (like coffee). This means if a large body of water (like the ocean) shifts by even a small pH number, the effect can be very large.
The second lesson we need to recall is about equilibrium.
In chemistry, everything tends towards balance. If you combine equally strong acids and bases, they will react together until the result has a pH that is in between. You might also get a volcano-themed science fair demonstration.
When CO2 combines with water (H2O), they form carbonic acid (H2CO3). The carbonic acid will break up (dissociate) into bicarbonate (HCO3⁻) and a hydrogen ion (H⁺). In a basic environment, the bicarbonate will dissociate further into carbonate (CO3²⁻), leaving two hydrogen ions (2H⁺) in total.
We can visualize this path with a chemical equation:
H2CO3 ⇌ H⁺ + HCO3⁻ ⇌ 2H⁺ + CO3²⁻
Where this path stops depends on the environment it is in. In an acidic environment, the balance will tend towards the left, with more hydrogen bound up with the carbonate (because there is no space in the solution for more free hydrogen). In a basic environment, the balance will tip to the right, releasing more hydrogen and freeing up the carbonate.
Currently, the pH of the ocean sits at about 8.1 (slightly alkaline). Because of this, there is plenty of carbonate available for creepy-crawly-shellfish to use to build their homes. Crustaceans and corals combine the free carbonate with calcium to form calcium carbonate (aka limestone, chalk, and Tums). They can’t use bicarbonate (HCO3⁻) or carbonic acid (H2CO3) and find it hard to form anything at all in an acidic environment.
This means that as we add CO2 to the water, we create more carbonic acid and contribute to the acidity of the ocean, dropping its pH. Not only does this make it hard for the little guys down there trying to make a living, but it also endangers the big chompers that eat the little guys.
A recent review found that even under the most optimistic emissions scenario, the ocean’s pH is likely to drop to 7.95, affecting 7-12% of marine species that build shells. Under a high emissions scenario, the pH will go down to 7.8, affecting 21-32% of those species.
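Those pH numbers look tiny until you remember the logarithmic scale. A quick sketch of what the review’s scenarios mean in terms of free hydrogen ions:

```python
def acidity_increase(ph_before, ph_after):
    """Factor by which free hydrogen ions increase when pH drops."""
    return 10 ** (ph_before - ph_after)

# The review's scenarios, relative to today's ocean pH of ~8.1:
print(acidity_increase(8.1, 7.95))  # ~1.4x more acidic (optimistic scenario)
print(acidity_increase(8.1, 7.8))   # ~2x more acidic (high-emissions scenario)
```

So the “high emissions” scenario means an ocean with roughly twice as many free hydrogen ions as today’s- a huge chemical shift hiding behind a 0.3 drop in pH.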
In order to keep track of the progress of this acidification, researchers from Exeter have proposed using satellites to monitor hard-to-reach bits of the ocean.
Regardless of the pace of the change, scientists agree one thing is certain: the oceans will become less hospitable for shell-builders. The superficial impact of this for humans will be rising prices on shellfish, but there will be much deeper ramifications throughout marine ecosystems.
Say goodbye to foil floating hearts on Valentines, shimmering floating shamrocks on St. Patty’s, and the prospect of tying thousands of balloons to your house and abducting a neighbourhood boy scout. The world’s Helium reserve is going to run out, and sooner than you might think.
Helium is the universe’s second most abundant element and we’ve never had real cause to worry about it before, so what has changed that we need to start hoarding Helium? The short answer: the U.S. is selling off their strategic reserve and getting out of the Helium game, meaning prices are going to skyrocket as demand outstrips supply.
The longer answer begins with the fact that Helium has always been a non-renewable resource here on Earth. It is produced underground by radioactive materials like Uranium and Thorium and then floats up into the atmosphere and out into space unless it gets trapped by natural gas in the Earth’s crust. Once the radioactive materials decay and release Helium, there is no putting it back.
When we extract this gas, we can collect the Helium and use it to fill party balloons, make our voices squeaky, pack fuel into rockets, or cool superconducting electromagnets to four degrees above absolute zero (-269C).
MRIs and the LHC capitalize on the unique properties of Helium: it is inert and has the lowest boiling point of any element, allowing it to bring the temperature of metals down enough to make them superconducting. These more scientific uses of the substance have ballooned (pun kind of intended) in the past two decades, putting real pressure on producers.
We don’t think of Helium as scarce, partially because of its perceived strategic value in the 1920s. The U.S. felt that airships were the way of the future and so set up a government-owned strategic reserve in 1925. Given that the only real demand on this stockpile was the occasional rocket test or airship, this reserve built up over 70 years.
The U.S. government has long dominated the world Helium market (in 2006, U.S. helium reserves accounted for two thirds of the world’s total) and has been gradually selling off reserves, keeping prices artificially low. Maintaining the infrastructure to keep and distribute the gas isn’t cheap though, and the government wants out.
In 1996, Congress mandated the shutdown of the world’s largest (and only) strategic helium reserve by 2013. This was delayed by a last-minute law passed by Congress which averted a dreaded “helium cliff” that would have seen MRIs go silent. The new shut-down date is 2021.
Algeria and Qatar are trying to pick up the slack in time, but prices are rising by as much as a factor of 2.5 every year. Some scientists think that before long, a simple Helium-filled party balloon will cost upwards of $100.
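To see how quickly a factor of 2.5 a year compounds, here’s a small sketch. The $1 starting price is entirely made up for illustration- the point is the shape of the curve, not the dollar amounts:

```python
def projected_price(start_price, yearly_factor, years):
    """Compound a price by a fixed factor each year (hypothetical figures)."""
    return start_price * yearly_factor ** years

# If a balloon's helium cost $1 today and prices really multiplied
# by 2.5 every year:
for years in range(7):
    print(years, round(projected_price(1.00, 2.5, years), 2))
```

On those (hypothetical) numbers, the $100 party balloon is only about five or six years away.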
If Slate’s Nina Rastogi’s calculations about the number of balloons required to lift Carl’s house in Up are to be believed, that would put Carl’s Helium bill at nearly one billion dollars. If there is going to be an Up2, either someone’s going to have to be a billionaire, or they might just have to risk it with Hydrogen.
Deep in Antarctica, right on top of the geographic South Pole, there is a research station that peers back in time to the very beginning of our universe. Named the Amundsen-Scott Station, it is home to instruments such as the creatively named South Pole Telescope (SPT), the Keck Array, and the BICEP experiments.
The temperature is currently sitting at about -30C and it’s the height of summer. The sun won’t set at the station until March 23rd and once it sets, it won’t rise again until September. So why the heck (or, one might say…Keck) would we build an observatory there?
Because the temperature is so low and the altitude is so high (2743m) at the South Pole, the air is thin and dry, reducing blurriness normally caused by the atmosphere. There are no cities nearby to cause light pollution and there are months of nonstop night, allowing for continuous observation. It’s an astronomer’s dream. Except the nearly-constant -30C temperatures. And the remoteness. But otherwise, dreamlike.
So what are astronomers looking for all the way down there at the end of the world? They are searching for clues as to how the universe started. Ever heard of the Big Bang Theory? No, not these clowns, the theory about the beginnings of the universe. Although, come to think of it, the theory is actually pretty well summed up by the first line of the Barenaked Ladies’ theme song to the Big Bang Theory (yes, those clowns):
Our whole universe was in a hot dense state,
Then nearly fourteen billion years ago expansion started. Wait…
That’s really the core of the theory: everything used to be really hot and dense and now it’s not. What happened in between is what the astronomers at the South Pole are trying to figure out.
Astronomy is awesome because when we look up, we are actually looking back in time. The distances involved are so great that it can take years (or billions of years) for light to reach us. So, what if we just looked as far as we could, wouldn’t we be able to see the Big Bang happening? What would that even look like?
Unfortunately, because everything was so hot and dense right at the start of the universe, nothing could stick together, so the universe was just a soup of energetic particles. Any light that was emitted was bounced around like the light from a flashlight in thick fog. About 380,000 years after the Big Bang, the universe had cooled and expanded enough for nuclei to capture electrons and form atoms. Atoms are mostly empty space, which means that unless they are packed very close together, like in a solid or liquid, they are transparent. What resulted was light spreading pretty evenly throughout the universe, starting 13.7996 billion years ago. This is what is called the Cosmic Microwave Background Radiation (CMBR). Cosmic because it comes from space, Microwave because it has lost a lot of energy since the Big Bang and now matches the glow of something only 2.7 degrees above absolute zero, Background because it is there no matter which direction you look, and Radiation because it is light.
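Why “Microwave”, exactly? If we treat the CMBR as the glow of a body at 2.7 K (which is the standard way to think about it), Wien’s displacement law tells us where its light peaks. A quick sketch:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength(temp_kelvin):
    """Wavelength at which a blackbody at this temperature glows brightest."""
    return WIEN_B / temp_kelvin

# The CMBR today, at 2.7 degrees above absolute zero:
print(peak_wavelength(2.7))  # ~1.1e-3 m, i.e. about a millimetre
```

A wavelength of about a millimetre sits squarely in the microwave band- hence the M in CMBR.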
So, no matter how far you try to look, this microwave glow is all that you see. It is all that can be seen because it is the oldest light that escaped. Sounds kind of disappointing, but astronomers think that the map of that glow (what some refer to as the baby picture of the universe) holds clues to what happened before.
If there was inflation, faster-than-light expansion of space and time (again, check out my essay on the history of the universe if you’re confused), that process should have produced gravitational waves.
“Woah, woah, woah. Hold up. I understand gravity, apples falling on heads, etc etc… How the Keck could there be gravity waves?”
One of Einstein’s key contributions to science was the understanding that space and time are linked and that they are influenced by mass. He described space-time as a fabric that could be warped by the presence of mass. All that gravity is, he said, is the curvature of space-time around mass. A simplified way to understand this is by thinking of space-time as a trampoline. If you put a mass on the trampoline, it will create a depression. The heavier the mass, the more extreme the depression. Now, if you have an extreme depression and move it very quickly back and forth, it will create waves in the same way that a moving hand in a pool will create waves. Astronomers think that inflation must have created gravity waves with a very specific signature. They also think that very heavy stars moving quickly, like binary neutron stars, would create these gravitational waves.
If (or, once they are discovered for sure, when) gravity waves pass through you, it is space itself which is expanding and contracting. You are not moving, but as the wave passes through your arm, your arm will be closer to your body than it was before and time for it will move slower.
The thing about gravity, though, is that it is by far the weakest of the fundamental interactions (Electromagnetic, Weak, and Strong being the other, stronger ones). By a factor of about a nonillion (1 with 30 zeroes after it). This makes the waves it creates very difficult to detect. While your arm is probably having a taste of timelordery as you read this, there is no way you could possibly feel it. Gravity waves are not interesting for how they make us feel, but rather for the challenge of detecting them, for the possible confirmation of our current physical model, and for what they can tell us about the origin of the universe.
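You can get a feel for gravity’s feebleness with one back-of-the-envelope calculation. The exact “weakness factor” depends on which forces and particles you compare (the nonillion figure above compares gravity with the weak interaction); here’s the classic comparison for two protons, where the separation cancels out of the ratio entirely:

```python
# Physical constants (SI units, rounded)
K_E = 8.988e9      # Coulomb constant, N*m^2/C^2
G   = 6.674e-11    # gravitational constant, N*m^2/kg^2
Q_P = 1.602e-19    # proton charge, C
M_P = 1.673e-27    # proton mass, kg

# Electric repulsion vs gravitational attraction between two protons.
# Both forces fall off as 1/r^2, so r^2 cancels out of the ratio.
ratio = (K_E * Q_P ** 2) / (G * M_P ** 2)
print(f"{ratio:.1e}")  # ~1.2e36: electric repulsion dwarfs gravitational pull
```

For two protons, electromagnetism wins by a factor of about 10^36- which is why a passing gravity wave squeezing your arm is so spectacularly hard to notice.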
So let’s come back to the barren, frigid wasteland of Antarctica and the astronomers freezing their buns off for science. BICEP2, the second iteration of the Background Imaging of Cosmic Extragalactic Polarization experiment, stared at the CMBR, looking for patterns in the light. These patterns, called b-mode polarization, can be produced by gravity waves, but also by interstellar dust.
In order to cancel out the effect of dust, the BICEP2 team used data from Planck, a European satellite launched in 2009 with a very similar mission: to study the early universe. Whereas BICEP2 could only look at one particular wavelength with high sensitivity, Planck could look in a few different wavelengths but didn’t have quite as much sensitivity for these b-modes. Dust doesn’t leave the same polarization patterns in light at different wavelengths, so by comparing the results from different wavelengths from Planck, the BICEP2 team concluded that the b-modes weren’t from dust and so had to be from gravity waves from the early universe. Proof of inflation! Proof of the standard model! A possible Nobel Prize!
So, understandably excited and with a positive result in hand, there was a big announcement at the Harvard-Smithsonian in March of last year. Unfortunately, the data they used was preliminary. In September, new data was released and the effect of dust seems to have been larger than they thought. The team reduced the confidence in their findings but still stood by a significant result. Just last month, in January 2015, another set of data was released that makes the BICEP2 findings inconclusive.
It seems the team jumped the gun a little bit, were blinded by the impact of their apparent discovery, and had too much confidence in preliminary data. The result of all this is that there is still no direct evidence of inflation or of gravitational waves and the teams at Planck and BICEP are going to work together now with the strengths of their instruments. Within a few years, the effect of dust should be able to be cancelled out and we will be able to see whether we were right about the beginning of the universe. And all the frostbite will have been worth it.
The year is 1791. On a crisp autumn morning in South London, Margaret Faraday (née Hastwell), the wife of a blacksmith, gives birth to her third child. With her husband, son, and daughter crowded around, she decides to name the newborn Michael. Michael Faraday.
Margaret had a lot on her plate, what with two young children, a husband who was often sick, and quite a few bills to pay. She probably didn’t have much time or energy for idle thought or daydreaming. I doubt if she much considered what Michael might do with his life other than get by. There is no way it occurred to her that Michael would grow up to revolutionize the world of physics, make electricity a viable source of mechanical energy, and inspire countless scientists, engineers, and young people (including but not limited to Einstein, Rutherford, and this young science communicator, 223 years later). But that is exactly what he would do.
Faraday went to elementary school and learned to read and write, but by the time he turned 13, he had to start work in order to help his parents make ends meet. He was apprenticed to a local bookbinder and spent the next 7 years diligently mending books. But that wasn’t all he was doing. He was also reading. Over those 7 years, Faraday read voraciously and became interested in science, particularly the topics of chemistry and electricity. Luckily for him, George Riebau, the bookbinder to whom he was apprenticed, took an interest in young Faraday’s education and bought him tickets to lectures by Humphry Davy at the Royal Institution in 1812. This was only shortly after Davy had discovered calcium and chlorine through electrolysis. Davy was a big name in science at the time, comparable to today’s Stephen Hawking, Neil deGrasse Tyson, or Jane Goodall, so it was with wide eyes that young Faraday attended. He was so blown away by what he saw and heard that he faithfully wrote notes and drew diagrams. These meticulous notes would prove to be his ticket into Davy’s lab.
Later that year, Faraday sent a letter to Davy asking for a job and attached a few of his notes. Davy was impressed and so interviewed young Faraday, but ultimately rejected the eager young fellow, saying “Science [is] a harsh mistress, and in a pecuniary point of view but poorly rewarding those who devote themselves to her service.” Translation: “Sorry, I don’t have space for you in my lab, but just to let you know… Science really isn’t very profitable.” A few months later, one of Davy’s assistants got in a fight and was fired, so guess who got a call? That’s right, Mikey F.
Not only did Faraday get a spot in Davy’s lab, but he also got to go on a European tour with Mr. and Mrs. Davy. Pretty sweet deal, right? On the eighteen-month journey, Faraday got to meet the likes of Ampère and Volta. If those names are ringing distant bells, it should be no surprise. Those eminent continental scientists give their names to the standard units of electric current (the ampere) and potential difference (the volt). Reinvigorated, 22-year-old Faraday returned to London and took up a post at the Royal Institution as Davy’s assistant.
The next two decades saw Faraday make great advances in chemistry, including discovering benzene, liquefying gases, and exploring the properties of chlorine. He didn’t get much chance to focus on electricity, however, until 1821. In that year, Faraday started experimenting with chemical batteries, copper wire, and magnets. Building on the work of Hans Christian Ørsted, Faraday’s work was some of the first to show that light, electricity, and magnetism are all inextricably linked (we now know that they are all manifestations of the electromagnetic force). He was a dedicated experimentalist and between 1821 and 1831, he effectively invented the first electric motor and, later, the first electric generator. These two inventions form the basis for much of today’s modern power system. The electric motor that opens your garage door as well as wind and hydro-electric generators work on the exact same principle that was discovered by Faraday back in 1831: electromagnetic induction.
Faraday’s insight was that an electric current flowing through a wire near a magnet could make the magnet (or the wire) move. He also found that the reverse was true: moving a magnet near a conductor creates a flow of electrons, an electric current. The experiment is actually quite simple and you can even try it at home. Induction enables the transformation of energy between mechanical, electrical, and magnetic forms. Before Faraday, electricity was seen simply as a novelty. Since Faraday, we’ve been able to use it for all sorts of things. Like writing science blogs!
[While he was definitely a gifted scientist, Faraday knew next to nothing about mathematics. He observed, took careful notes, and had an intuition for how to design experiments, but could not formalize his theories in mathematical language. He would have to wait for James Clerk Maxwell, a young Scottish prodigy, to do the math and formalize Faraday’s Law in the 1860s.]
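For the curious, the law Maxwell eventually formalized can be written in one compact line. In modern notation (which Faraday himself never used), Faraday’s law of induction says that the voltage induced in a loop of wire equals the rate at which the magnetic flux through that loop changes, with a minus sign (Lenz’s law) telling us the induced current opposes the change:

```latex
% Faraday's law of induction, in modern notation
\mathcal{E} = -\frac{d\Phi_B}{dt},
\qquad
\Phi_B = \int_S \mathbf{B} \cdot d\mathbf{A}
```

Here $\mathcal{E}$ is the induced electromotive force (the “pushing power” on the electrons) and $\Phi_B$ is the magnetic flux, roughly how much magnetic field passes through the loop. Wave a magnet faster, or use a stronger one, and $\Phi_B$ changes faster, so you get a bigger voltage. That single relationship is what every generator, from a bicycle dynamo to a hydroelectric dam, is built on.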
Faraday continued his work on electricity and gained all sorts of recognition, including medals, honorary degrees, and prestigious positions. This increased pressure may have been to blame for a nervous breakdown in 1839. He took a few years off, but by 1845 he was back at it, showing that a strong magnet could rotate the polarization of light (an effect that now bears his name). He discovered little else after the 1850s, but continued to lecture and participate in the scientific community.
So not only can Faraday be considered one of the fathers of the modern world because of his breakthroughs in electricity, but he can also be considered one of the fathers of modern popular science communication. In 1825, he decided to give a series of Christmas lectures at the Royal Institution, specifically aimed at children and non-specialists. He gave these lectures every year until his death in 1867 and was renowned as a charismatic, engaging speaker. He tried to explain the science behind everyday phenomena, and in 1860 gave a famous lecture on the candle, something which everyone had used but which few actually understood. The Christmas lectures continue to this day, and, carrying on Faraday’s legacy, the Royal Institution is one of the UK’s leading science communication organizations.
It is not simply that Faraday was a great scientist and lecturer, nor that he managed to escape poverty in 19th century England to become world-renowned. Michael Faraday’s story is so great because by all accounts, he deserved every bit of success he gained. One biographer, Thomas Martin, wrote in 1934:
He was by any sense and by any standard a good man; and yet his goodness was not of the kind that make others uncomfortable in his presence. His strong personal sense of duty did not take the gaiety out of his life. … his virtues were those of action, not of mere abstention
It’s no wonder that Einstein had a picture of him up in his office. I think I might just print one off myself.