A point not mentioned by the article: the electrons in a metal at room temperature are already moving very quickly due to their thermal energy (at the order of 100km/s) — much faster than the speeds quoted in the article, which is what's called the "drift velocity".
This thermal motion is essentially random, and the electrons constantly scatter off the nuclei every which way, so it cancels out and doesn't create a net current.
So, it's less that the electrons gently move under the influence of an electric field, and more that the field introduces a slight bias in the existing thermal motion.
E: To clarify in case it may have been unclear, this is unrelated to the speed of propagation of the electric field, which as the article says is the speed of light.
It probably should be noted that the reason the drift velocity is so much lower than the thermal velocity is that electrons cannot move very far before they collide with an atom, which changes their direction.
The mean distance they move in a copper wire between collisions is about 4×10⁻⁸ meters (40 nanometers). At 100 km/s it would take about 0.4 picoseconds to travel that distance.
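A quick sanity check of those numbers (using the rough order-of-magnitude values above):

```python
# Rough numbers from above: ~40 nm mean free path, ~100 km/s thermal speed
mean_free_path = 4e-8    # m, mean distance between collisions in copper
thermal_speed = 1e5      # m/s, order-of-magnitude thermal speed of electrons

time_between_collisions = mean_free_path / thermal_speed
print(f"~{time_between_collisions * 1e12:.1f} ps between collisions")  # ~0.4 ps
```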
That reminds me of something interesting about biological cells. The molecules in that thick stew might be bouncing around at 20-250 miles per hour, depending on their size.
More importantly, this means that any given molecule will, in a very short time, bump into almost every other molecule in the cell. Thus, the answer to the question, how does protein X find fuel, or resource Y, or a protein Z it needs? Simple: it doesn't - it eventually just bumps into it. Similarly, in simple cells, there may be no need to transport stuff from place to place - chemical diffusion is fast enough!
Meanwhile, larger cells benefit massively from speeding up those processes, so there exist lots of structures to group molecules together and transport them where they're needed.
Similarly, organelles group related processes together. For example, DNA-related proteins are in the nucleus, not inefficiently bouncing off the cell wall.
Which helps explain why mitochondria were so beneficial. Keeping a bunch of related machinery tightly clustered together makes more efficient use of each individual protein and allows quick replacement of anything that gets damaged. ATP, however, can diffuse through the cell just fine, because so many processes use it that extra copies aren't a major issue.
> electrons cannot move very far before they collide with an atom which changes their direction
And yet we know it’s mostly just empty space. I’m assuming it’s more because the electromagnetic force is particularly strong at those scales rather than a straight-up “collision”, right?
I believe particle colliders really do overcome the forces and have the particles touch and annihilate. But sure, at our level it’s all normal low-energy electromagnetic stuff, and nothing ever really touches anything else.
I would really argue that "electromagnetic stuff" is what "touches" means in the first place.
Atoms aren't really empty. They're electron clouds with an extremely dense core of protons and neutrons, but the electron clouds are what we care about.
I think that’s just arguing over the imprecision of language. Are we referring to the human sense which is the electromagnetic field or are we referring to the abstract mathematical concept of whether two line segments touch which is what the particle accelerator does or matter/antimatter. Both definitions are valid imho.
I think it's really hard to define this fully. The most relevant parts are probably the Pauli exclusion principle (which prevents two fermions from occupying the same point in space if they are otherwise identical), and the size of elementary particles (which, as far as we can tell, is 0). Overall this means that you can get two fermions arbitrarily close, but not to overlapping positions. Since they have no known dimension, they can't "touch" unless they completely overlap, which they can't. And since the electromagnetic repulsion increases rapidly as they get closer and closer together, I think the parent poster is right: regardless of inertia, they will always have some kind of electromagnetic interaction that will change their course or nature before actually "touching".
To add a bit of a complication, if two particles that have mass, even one as tiny as the electron, come closer together than some minimum, general relativity predicts that they would collapse into a (really really tiny) black hole. Of course, we don't know how and if general relativity really works at this level, so the relevance of this is unclear.
No, it's just different field interactions. None of this is tangible in the way that billiard balls are tangible. With particle physics there are no "things" that can "touch" - just different events in different particle fields which are more or less likely at different energies and spatial separations.
You can even take it up a level and say that interaction events are the only things that really exist. That's more or less what Copenhagen QM boils down to. It is not at all a given that "particles" even exist between interactions.
Intuition insists that if something happens here and something apparently related happens over there and you can move here and there around to make a line or curve, then something physical is moving between here and there.
But actually - no. Not necessarily. All you have are ghost traces of an apparent chain of causality. And you can play with those traces experimentally to make them do incredibly weird shit in very surprising ways.
I'm so tired of this. If atoms are "empty space", then there's no such thing as non-empty space, which makes the concept meaningless. Electron clouds have all the properties we want from non-empty space, mainly excluding other objects made of non-empty space, so let's just admit that's what non-empty space is and move past this pseudo-profound silliness.
I don't know how established this is, but I'm partial to the "vacuum is an elastic, non-linear medium, and particles are waves trapped in the non-linearity - self-confining energy" hypothesis. IANAPhysicist, but this really seems to make a lot of sense and is quite elegant:
And with this, there's no empty space in atoms, as the vacuum perturbations from each particle spread out to infinity, so inside the atom there's more energy in those perturbations than outside - the space is less empty inside than outside.
But what is the nature of the "perturbations"? Relativistically self-interacting knots/manifolds? Can you take the elastic medium and fold it onto itself (through higher dimensions?) in a manner similar to (but not the same as) a microscopic black hole to stabilize it into a particle?
Either I'm going schizo or I should read some real books on the subject.
As I understand it, the idea in the video is that if vacuum can be seen as non-linear elastic medium, then with the right functions for elasticity and stress, you can create conditions where EM waves at high enough frequency will hit the "sweet spot", a local minimum, where the energy gets confined in space by those non-linear properties, and can't leave without supplying additional energy. And such confined energy seems to behave like you'd expect particles to.
Again, I am not a physicist, so I'm probably wrong in understanding what half of the words I used above mean. But I do understand the idea of multiple forces creating semi-stable states that are "energy traps". For instance, the balance between electric repulsion and attraction from the strong nuclear force is what defines how tightly bound the nucleus of any given atom is (a.k.a. the "nuclear binding energy"). The nucleus can't expand or split apart, nor can it contract, unless you supply additional energy. When you do - say, by hitting a large atom with high-speed neutrons to break past the strong-force attraction, or smashing two small nuclei together at high speed to overcome electric repulsion - the other force takes over, resulting in a rather spectacular release of energy[0] as matter finds a new stable configuration.
So I feel the idea here is similar, but with stress and elasticity in place of strong and electric forces - if there's enough energy propagating through some space, the wave gets trapped in a spatially-confined region instead of dispersing into the medium.
Ye olde Rutherford experiment nicely illustrates the points of 'yes it is all empty space' and why that is both non-obvious and profound. Nothing pseudo about it.
> Ye olde Rutherford experiment nicely illustrates the points of 'yes it is all empty space'
No, it does not. It illustrates that the portion of an atom that strongly scatters alpha particles occupies a small fraction of the atom's volume. Asserting that alpha particle scattering is the only correct definition of the threshold between emptiness and non-emptiness is an awfully strange position to take.
When interacting with other electrons at energy scales relevant to human life, i.e. maybe ten electron volts and down, an electron cloud is somewhere between solid (core electrons especially in heavy atoms) and squishy (valence electrons). Give a rock the ol' "I refute it thus!" and your toes will report that that rock's electrons are quite competently occupying their space.
Ask an alpha particle at nuclear energy scales a million times greater, and that electron cloud is like a swarm of gnats to a speeding car. The electrons only interact with the alpha electromagnetically, and it takes another object that interacts through the strong nuclear force to really bother a nucleus (most of the time).
Ask a neutrino, and it will tell you that all the nuclei in the Sun are but a wisp of fog.
This is a good example of how approximations valid in one regime fail in another but are not therefore useless. Rutherford's experiment reveals that there is more going on with matter than what can be probed by fingers and microscopes and chemical reactions, but it in no way invalidates those other observations.
That's a defensible perspective, I guess, but the fact remains that you don't fall through your chair, and that's all we really want from a description of "solid" matter. Turns out your chair behaves weirdly when you shoot alpha particles through it, but your bottom isn't made of alpha particles, so what does that prove as far as solidity goes? I suppose it proves that solidity is a non-fundamental property of matter, and that is profound, but that doesn't mean we need to change the name. It just seems like a semantic cul-de-sac to me.
You can sit on a pincushion filled very densely with needles equally well as on a solid wooden chair. Yet the needles clearly have more space between them than a solid block of wood. Your personal inability to penetrate or see into that space doesn’t mean the space isn’t there. Indeed, how do you know if there’s a microscopic crack? You look at photons reflecting and amplify them into your optical range. How do we know whether or not there’s space in the atom range? We use an electron microscope. It’s still a microscope - a tool to objectively pierce the physical realm in ways that you can’t with your own senses.
None of that, even to the extent it's true, makes a lick of difference to my point. You definitely don't need to lecture me about how scientific instruments supply info my biological senses can't. I know how subatomic particles work. I'm trying to make a point about obfuscatory terminology.
Just to clarify, the high speed at which electrons generally move around in metal is called "Fermi velocity". Like you said, since it's random, on average it cancels out to 0. When applying an electric field, the electrons achieve a non-zero average velocity which is called the "drift velocity".
I hadn't heard of the term "Fermi velocity", but if it is the classical velocity derived from the Fermi energy, that should only be a lower bound on the average speed of the electrons in the metal.
That said, I don't remember the orders of magnitude here, the relevant question being how cold "room temperature" is with respect to the Fermi level -- a quick Google suggests quite cold, in which case the Fermi velocity should be a very good approximation.
In which case, it's interesting to realise that the motion of the electron gas comes mainly from Pauli exclusion rather than thermal noise! It's not a result I would have expected.
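For a rough check of "quite cold": here's the classical Fermi velocity computed from copper's textbook Fermi energy of about 7 eV (an assumed value, not from the thread), compared against k_B·T at room temperature:

```python
from math import sqrt

eV = 1.602e-19          # J per electron volt
E_F = 7.0 * eV          # J, approximate Fermi energy of copper (assumed)
m_e = 9.109e-31         # kg, electron mass
k_B = 1.381e-23         # J/K, Boltzmann constant

v_F = sqrt(2 * E_F / m_e)          # classical speed at the Fermi energy
thermal_eV = k_B * 300 / eV        # k_B * T at 300 K, expressed in eV

print(f"Fermi velocity ~ {v_F / 1e6:.1f} x 10^6 m/s")
print(f"k_B*T at 300 K ~ {thermal_eV:.3f} eV vs E_F ~ 7 eV")
```

With k_B·T two orders of magnitude below the Fermi energy, the electron gas is indeed deeply degenerate at room temperature, which is why the Fermi velocity dominates over thermal motion.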
> which as the article says is the speed of light.
This is pedantic because there's practically no difference. But just to be pedantic, it's not the speed of light and I'd argue it's not usually even close to the speed of light. In communications we are talking about anywhere from 60% to 80% the speed of light through most mediums.
To be even more pedantic, it is 60-80% of the speed of light through vacuum. Because the speed of light in those media is 60-80% of the speed of light in vacuum.
For most use cases that's true. But for a surprisingly large number of use cases that's not true at all.
I was thinking of using a high-frequency trading example, but here's a case that's a bit more normal: stadium concert audio for a live band. Stadiums are big enough that you need to deal with latency issues for audio, because the back of the venue will get its audio a bit later than the front of the venue (assuming the sound is wired so the board is in the front of the floor and no adjustments are made). That is obviously disconcerting to the audience. Adjustments usually are made to handle this problem.
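To put a number on the stadium case (with made-up but plausible venue dimensions):

```python
speed_of_sound = 343.0   # m/s in air at ~20 degrees C
venue_depth = 150.0      # m from front-of-house speakers to the back row (hypothetical)

delay = venue_depth / speed_of_sound
print(f"~{delay * 1000:.0f} ms of acoustic lag at the back")
```

Roughly 440 ms, which is far beyond what listeners perceive as simultaneous, hence the delay towers and timing adjustments.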
If it was instant we wouldn’t have latency and the three generals problem would be somewhat trivial (ordering would be mildly interesting in an algorithmic sense, but practically meaningless).
we were hungry so we decided to discuss this philosophically over a pasta lunch, but we deadlocked with our forks and that dispute kept us from our main topic
Confused. Are you saying that, if we took a light bulb (off) and a metal rod (0 charge) beside one another, then were somehow able to turn the light bulb on and apply a charge to that rod at the same time, while also having a detector that can sense a photon and a change in electric field some equal distance away from the bulb and rod - then the photon (from the bulb) would reach our detector before the detection of the change in electric field (from the rod)?
Let's suppose the medium is just plain air, and not particularly humid.
The speed of fibre optic cabling has more to do with signal integrity and bandwidth than any difference in propagation delay. In fact, in some cases a signal can travel faster on a copper wire (0.8c) than it can through an optical fibre (0.6c).
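A quick comparison of one-way propagation delay over a hypothetical 100 km link, using the velocity factors mentioned above:

```python
c = 299_792_458          # m/s, speed of light in vacuum
link_length = 100_000    # m, a hypothetical 100 km link

for medium, velocity_factor in [("copper (0.8c)", 0.8), ("fibre (0.6c)", 0.6)]:
    delay = link_length / (velocity_factor * c)
    print(f"{medium}: {delay * 1e6:.0f} us one-way")
```

About 417 µs vs 556 µs: a real difference, though for most links the fibre still wins overall on bandwidth and signal integrity.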
No, the two would arrive at the same time. What the poster above was saying is that the speed of EM propagating through copper is much lower than c, which is the speed of EM in a vacuum.
Overall the picture is like this:
- massless particles like photons travel at speed c in empty space, in a straight line
- massive particles like electrons travel at a speed slightly less than c even in empty space, in a straight line
- inside a medium, any particle traveling in a straight line will quickly bounce off/be absorbed and re-emitted in a new direction because of some field given off by another particle; so, the average speed at which particles actually traverse through the material is lower than c. How low depends on properties of the medium, mainly how dense it is; for copper and most metals commonly used in electronics, this varies between 60-80% of c.
- When applying a potential difference to a metal wire, the electrons (which normally move at speeds very close to c in the empty space between atoms, in random directions that amount to an overall speed of 0 along the wire) start collectively moving in the direction of the potential difference (towards the positive terminal) at a very low average speed called the "drift speed". This is caused by their normally completely random bounces now being biased in the direction of the electric field;
- However, the current in the wire (the EM radiation) moves extremely fast along the wire, at the same speed that light moves (on average) through the wire. You can think of this as being caused by photons (since EM waves are photons) moving much more easily than electrons through the wire, simply because they don't have a charge of their own and so don't get caught so easily by other atoms as electrons do.
So you have 5 speeds relevant to this: the speed of light/photons/EM waves in vacuum (c), the speed of an electron in vacuum (very close to c), the average speed of all electrons in a metal wire without any electric potential (0), the average speed of all electrons in a metal wire with an electric potential (drift velocity, very small), and the speed of EM waves in a wire (60-80% of c in typical conductors).
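For concreteness on how small the drift speed is: it follows from v_d = I / (n·e·A), and with typical textbook values for copper (assumed here, not from the thread) it comes out well under a millimeter per second:

```python
I = 1.0                # A, current through the wire
n = 8.5e28             # free electrons per m^3 in copper (textbook value, assumed)
e = 1.602e-19          # C, elementary charge
A = 1e-6               # m^2, a 1 mm^2 cross-section

v_drift = I / (n * e * A)
print(f"drift speed ~ {v_drift * 1000:.3f} mm/s")
```

Around 0.07 mm/s: an electron would take hours to traverse a few meters of wire, even though the signal arrives in nanoseconds.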
Edit: one more note that complicates this picture: the EM waves in a charged wire don't really move inside the wire, or not entirely - they move mostly around the wire - which means that their actual speed depends not (just) on the material from which the wire is made, but also on the insulation outside the wire. That is, the EM field will propagate at a different fraction of c for a copper wire than for an aluminum wire; but also for a copper wire wrapped in plastic versus one exposed directly to the air.
So if I understand this correctly, the information is moving along the wire, at a speed between 60 to 100% of c.
This brings me to a Veritasium video of a few years back that I didn't quite understand at the time. [1]
The claim being made was that if you connected a light bulb to a switch, with 300'000 km of wire left and right, and if the switch and the light bulb were 1 m apart, the light would turn on after (1 m)/c seconds, i.e. about 3.3 nanoseconds.
But this would imply that the information travel is not along the wires, but straight from the switch to the light bulb?
In theory the change in electric field will induce a small current in the other wire, and their magical science lamp turns on at any non-zero electricity. Whether the wires are connected or not at the far end doesn’t matter.
They never clarified how strong the other current would be.
"Information" is a bit of a weird way to put it, but it may be correct. Still, in case this is very intentional wording: the information about the electrical circuit being lit up (that is, any photon emitted at the moment and location the voltage appeared, such as hitting a switch to close a circuit) still travels at c in all directions. It's only the (main) electric current that travels at 60-80% of c, and along the wire.
Also, the phenomenon being discussed in the Veritasium video is slightly different, and that video sort of mixes up two things to make things more mysterious.
The simple explanation for what is going on in the video is that whenever you have a voltage change in a circuit, such as closing it with a switch, you get a tiny bit of radio waves being emitted radially outwards from that point out *.
In the Veritasium experiment, with a very precise measurement tool, they could detect the tiny radio pulse emitted by the switch at the moment it was closed, far before the main current reached the same point moving along the long wires.
The other thing he was mixing this up with is the Poynting vector, which is a mathematical object that represents the direction along which power (as in, work, the thing you measure in Watts, not the actual current, which you'd measure in Amperes) propagates in an electrical circuit. It's not clear at all that the Poynting vector is a meaningful aspect of reality and not just some mathematical tool. It does happen to coincide with the direction that the radio waves propagate as described earlier, so some physicists do like to interpret it as a deeper, physically real thing that is related, but you can just as easily ignore it.
Note that when I say that it's "not real", this is similar to how I would describe the Lagrangian and Hamiltonian of a classical mechanics system. It's an extremely useful mathematical tool and it predicts the behavior of the system perfectly well, and is mathematically equivalent to Newton's laws of motion. But while the laws of motion describe real direct aspects of reality, such as objects having mass and momentum, the Lagrangian doesn't really represent an actual thing in physical reality.
The above comes with a gigantic caveat: in QM, the Lagrangian and Hamiltonian correspond to the wave function, which is, to the best of our current understanding, the real underlying thing behind reality, with particles and waves and localized things being the artificial objects we introduce merely for convenience.
* This is normally completely negligible for DC current, but for AC current, where the current switches direction all the time, a relatively large amount of radio waves gets radiated radially out from the wires along their whole length. This can lead to problems such as AC power lines causing significant radio interference. This is also normally described as "parasitic", since the power those waves carry is lost from the transmission line, which is designed to carry as much as possible of the power generated at the source towards the destination.
They travel at the same speed, in the same medium. c is the speed of light in a vacuum, all EM radiation travels slower in any other medium.
The complicating factor here is that when you have electricity flowing in a wire, the fields are generally mostly outside the conductor, not in it. That is, the signal propagation delay depends more on what you are using as an insulator around the wire than on the material of the wire. This has had practical consequences in the past; if you replace the insulating jacket of one wire in a twisted pair with a slightly different material, on long runs it will ruin your signal.
EM radiation, whether radio or visible light is photons, and photons only have one speed. However, electrical conduction is the movement of electrons, not photons.
> EM radiation, whether radio or visible light is photons
Is the only part that's not wrong in your post.
Photons only have one speed in empty space. They slow down when traveling through any medium other than vacuum.
> However, electrical conduction is the movement of electrons, not photons.
Electrons move because they influence each other through their fields, which are transmitted by photons. Electrical conduction happens at the speed of the fields, not at the speed of the electrons. When you push one more electron into one side of a conductor, an electron flows out the other side when the fields reach the other side, not when the electron does.
(As an analogy, consider a rubber hose full of steel balls. When you slowly push an additional ball in from one side, another ball starts to fall out of the other side as you push the first ball in, perceptually instantaneously⁰, regardless of the speed at which you are pushing the new ball in.)
(0): After a delay of (length of tube)/(speed of sound in steel)
I'm not a physicist, but as far as I know, outside of general relativity electromagnetic perturbations always travel at the speed of light (i.e. to affirm that photons always travel at c is correct).
It's only after the fields interact with electrical charges (atoms and their electrons for example) that a secondary field is induced as these charges begin to oscillate. This field will add over the original field, "shielding" an external observer from the original oscillation and apparently slowing down the propagation of electromagnetic waves.
"In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted c."
I would expect that in air, that the photon from the light source and the perturbance of the electric field from the charge to reach the detector at the same time.
It moves at the same speed as light (it is light), but not at the "speed of light" (c). Light/EM waves move slower than c in air (and much slower than c in a copper wire).
>electrons constantly scatter off the nuclei every which way, so it cancels out and doesn't create a net current
no, not because it's every which way.
it doesn't create net current because if, randomly, net charge moves in some direction, the resultant electric field will put pressure on the random movement to bring it back to equilibrium, 0.
Btw, Grok explained the point you brought up rather well. I am personally finding AI to be better at explaining concepts than writing code with imaginary API calls.
"Random Motion: Even without current, electrons are jiggling around at high speeds (~10⁶ m/s at room temperature) due to thermal energy. The electric field just adds a slight bias to this chaotic motion, resulting in the net drift."
that's a really helpful clarification about drift velocity vs. thermal motion... it's easy to get those mixed up. the analogy i always think of is a crowded dance floor - everyone's moving fast, but not really going anywhere until there's a general push in one direction.
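the dance-floor picture can be sketched as a toy simulation: huge random kicks plus a tiny constant bias, with the average velocity converging on the bias rather than the kick size. the numbers here are purely illustrative, not physical values:

```python
import random

random.seed(42)
# Unitless toy model: random "thermal" kicks of size 100, drift bias of 1
thermal, bias, steps = 100.0, 1.0, 1_000_000

displacement = sum(random.choice([-1.0, 1.0]) * thermal + bias
                   for _ in range(steps))
mean_velocity = displacement / steps
print(f"mean velocity ~ {mean_velocity:.2f} (bias was {bias}, kicks were +/-{thermal})")
```

the individual "speeds" are a hundred times the bias, but the average lands right around the bias - everyone's moving fast, the crowd drifts slowly.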
Is this related to how some materials become superconductors at low temperature? Does the slowing down of this electron flux improve the material's ability to conduct electricity or is there some other mechanism at play?
Superconductivity is fascinating. I don't know how people were able to come up with the explanations. Crudely, the reduced temperature means less jiggling of the metal lattice. This in turn makes it possible for the nuclei to be pushed around by electrons to form essentially sound waves (phonons) in the lattice (think of the lattice compressing and expanding due to interplay with electrons). At a certain temperature, and therefore a certain frequency of lattice oscillation, electrons pair up to form "Cooper pairs" - they move in concert due to the lattice movement. What's crazy is that Cooper pairs become a sort of pseudoparticle, and their quantum behaviour is different from regular electrons. Cooper pairs have integer spin (as opposed to half-integer spin), so they no longer obey the Pauli exclusion principle, and the electrons basically form one giant condensate that extends through the whole material and can all occupy the same lowest-energy quantum state.
Similar to superconductors, there are superinsulators, which have an infinite resistance so that no electric current passes through. The phenomenon of superinsulation can be regarded as an exact dual to superconductivity.
- Artur Ekert, basically the father of Quantum Cryptography has an amazing course for free on youtube: https://www.youtube.com/@ArturEkert . It's a very precise and understandable explanation of quantum computing, and some of the math that is involved with quantum mechanics.
- If you have hours to spare, watch Richard Behiel's videos on Youtube. He's like the 3Blue1Brown of Quantum Physics. His latest video on superconductivity and the Higgs Field is almost 5 hours long (!!!) https://youtu.be/DkH1citHtgs?si=-yQNYDu9TlTpE1A0 . It builds on his other videos, so I'd recommend starting at the beginning.
My background is materials science, so there might be another meaning physicists use that I'm not familiar with, but my very simplified understanding is that phonons are vibrations within a crystal lattice. These vibrations exist at all temperatures above absolute zero, so I don't think it is accurate to say phonons form due to reduced temperature.
> all the electrons in the entire material basically form one giant condensate
Very not my field, but perhaps that's "all the paired electrons"? Brief ai-ing (do we have a verb yet?) suggests only some small fraction of conduction electrons form pairs, let alone all the rest.
Another way to put it: Cooper pairs become _bound_ objects. And so their energy levels are quantized, and to excite them, you need to apply more energy than is available from the thermal motion of atoms in a superconductor.
This doesn't happen with unpaired free electrons because their energy spectrum is pretty close to continuous.
Electrons are actually delocalised in a metal: rather than point particles bouncing around the nuclei like a pinball, they're more like waves that ripple and diffract around them. This means that to good approximation, the electrons pass right through each other. Because of this, I don't expect the electron motion to affect resistance much.
What definitely affects resistance is the vibration of the nuclei lattice, in which thermal energy is also stored. This vibration makes the electrons more likely to scatter. This means even in a non-superconducting metal, resistivity drops as you get colder.
The special thing about superconductors is that there's a temperature where the resistivity suddenly drops to zero. (If you look up "superconductivity resistance against temperature", you'll see some graphs showing what I mean.)
I don't know exactly the details of why this happens, but it has something to do with Cooper pairs. Electrons in these states are also sensitive to being knocked out and bumped up to regular conducting states by thermal noise.
On a whim, I bought a book called There Are No Electrons at a used book store, some years ago.
The idea of the book is that we spend lots of time teaching students various incorrect and inconsistent models for how electricity works, that also don’t optimally build intuition for working with the stuff.
The book’s remedy is to say “forget all that: here’s a wrong model that is good at building intuition for working with electricity, and if you’re not planning to go for a physics PhD, that’s much better for you than the other wrong models”
I don’t know enough about electricity to evaluate whether this was a good idea or well executed, but it’s an interesting approach.
“Trees” are not a coherent phylogenetic category. On the evolutionary tree of plants, trees are regularly interspersed with things that are absolutely, 100% not trees. This means that, for instance, either:

- The common ancestor of a maple and a mulberry tree was not a tree.
- The common ancestor of a stinging nettle and a strawberry plant was a tree.
- And this is true for most trees or non-trees that you can think of.
And it’s not just ancestors! One thing that blew me away after moving from Finland to Germany was that while willows in Finland are predominantly shrubs or bushes, and maples are trees, in Germany willows grow to be trees, but maples mostly stay as vines.
There's a book, "Why Fish Don't Exist", which is pretty good. Evolutionarily, my understanding is that things are usually named after their ancestry (phylogenetically?), while fish are basically "has fish shape and is in water", which becomes awkward. Lungfish and coelacanths are more closely related to humans than to salmon.
I do embedded stuff and spend a lot of time looking at electric signals.
The attitude I have about knowing stuff is asking whether I can use it predictively or for design. I feel that for most attempts at explaining how electricity actually works, the answer is no. And the amount of torment I'd have to subject my brain to in order to make that a yes is higher than I'd like.
There is a story of a student taking an oral exam at Oxford or Cambridge many many years ago.
Examiner: "What is electricity?"
Student: "Oh, I do know, I mean I used to know, but now I've forgotten."
Examiner: "How very unfortunate. In the whole of history only two people have known what electricity is - the Creator and yourself. And now one of the two has forgotten."
It depends how deep you go. I mean, it's the force between charged particles and their movements, but if you go too deep into how the laws of physics got there and what electrons are made of, you get stuck.
That goes along with the other old saying, "All models are false, but some models are useful." We don't know what electricity is in the sense that if you keep asking "But why does object X cause (or experience) effect Y?", you will eventually reach a point where we don't know the answer.
In that sense, we don't know what anything is. But we can still use it. And because everything we learn seems to become useful sooner or later, it doesn't pay to stop asking.
I'm not sure that was the point of the poster's question about electricity, because I've heard the same assertion made by science writers and such.
Our current BFF, ChatGPT, says the question is about "charge" in that we don't know why particles have a charge. So what is a "charge" and why? Gravity is also presented as a thing we don't fundamentally (ontologically) know about. Interesting!
And not disagreeing with the desire to keep asking, nor with the desire to find a final answer. The author of the article puts it fairly well:
We don’t have philosophically satisfying insights into the universe at subatomic scales...there’s no straightforward explanation of what a bound electron actually does: it’s not orbiting the nucleus or spinning around its own axis in any conventional sense. Most simply, it just exists as a particular distribution of an electrostatic field in space.
I got told what electricity is in primary school, in middle school, in high school, and in university.
Every time I understood it less.
I even watched some videos where people interviewed physics professors, to explain what it really is, and the explanations only got more convoluted.
Seemingly not because those people were bad at explaining, but because if you want to explain it as correctly as possible, it just isn't intuitive at all.
AlphaPhoenix's measurements, experiments and visualisations really help! They show some things that are normally not even taught, and even electrical engineers will appreciate seeing them. The video he made about this is very good, and worth watching if you wonder how electricity propagates from one end of a conductor to the other.
And at each level the degree of confidence seems to drop. When you get to the point where the explanation includes an electron being made from changes in a quantum field, and a quantum field is probability, it starts to feel like there’s nothing underpinning reality.
Gravity is a thing, and we can feel it under us. But we can’t touch it, or see it.
Intuitively, things that we can touch and see should be qualitatively different, but this intuition is wrong. It turns out it’s just big weak fields like gravity, and small strong fields like electromagnetism.
Ugh, this is one thing I really hate about academia. There's this weird fixation on trying to make things intuitive or tangible at every step along the way. The result is that you get five hundred very wrong/sort of wrong/okayish explanations that all differ enough that we still score the student wrong during examinations, because they chose the wrong tutorial to follow.
Sometimes, things are just hard to understand. Sooner or later students are going to have to face that, so why do we delay the inevitable?
Because there's a lot you can do with basic knowledge of a subject.
Very few people need to know the subatomic behaviour of electromagnetic fields that make electricity work, but all of us need to know that it travels in wires and it can kill you.
I'm particularly talking about in the context of teaching someone what electrons are and how they behave at the subatomic level. High level overview? Sure, use analogies or models that aren't 100% accurate. But if we're talking about students in a university, then it shouldn't really be an issue to get into the nitty gritty.
I see this as well when it comes to teaching programming to freshman CS students. For some reason, we've strayed away from lower level languages like C and don't introduce the low level details until students are quite far into their curriculum. Abstracting away the details just muddies the waters in my opinion.
As a recent graduate who did both since I switched from a very theory first uni to a practice first uni I find starting with a high level language much better.
If you're trying to learn "class, method, extends, static, var/Integer/int, interface, abstract, virtual, different exceptions, recursion and loops, etc.", having to also learn about memory doesn't help, and if you try to do basic pointers first it becomes kind of like spell chanting: you just start trying different combinations of * and & until it works. Partially because you're a bit overwhelmed, and partially because it seems like useless knowledge.
The more I worked the more I started to appreciate subjects like operating systems, algorithms, etc but at the time of doing them they seemed too theoretical since they were way above my practical knowledge and useless to the projects I was doing. "Why would I need to know how to build a file system/compiler/etc? Why in the hell would I ever do that?"
The other aspect is that if you start with learning the theoretical side you end up worrying you won't be able to code by the end after a bit. For example you've been there for 2 semesters and while you can talk about low level subjects you've barely done a todo list/calculator/chess. If you start with the more high level things by the third semester you can definitely be working part time.
This is from the POV of doing CS to start working as a dev and not do academia.
It's confusing because all of the words we use in the field make it seem akin to water flowing (current), whereas the physical phenomenon goes far beyond the movement of individual electrons.
The problem with (or the advantage of) the water-flowing analogy, or more broadly the discrete element model, is that it explains reality well enough to be used in most practical situations. Schematics are ubiquitous; yes, they are "fake", but they are also usually "correct enough". Kind of like how Bohr's incorrect model of electrons orbiting the nucleus actually does explain the emission spectra (up to a point).
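As an illustration of how much mileage the lumped/discrete-element model gives you, here's a minimal sketch (with made-up component values) of the kind of prediction it makes, an RC discharge, with no field theory in sight:

```python
import math

# Lumped-element sketch: an RC discharge, the kind of "fake but correct
# enough" prediction the schematic/water-hose model gives you.
# Component values below are illustrative, not from any real circuit.
R = 10_000.0   # ohms
C = 100e-9     # farads (100 nF)
tau = R * C    # time constant, seconds (here 1 ms)

def v_cap(t, v0=5.0):
    """Capacitor voltage while discharging through R."""
    return v0 * math.exp(-t / tau)

# After one time constant the voltage has dropped to ~36.8% of V0.
print(round(v_cap(tau) / 5.0, 3))  # -> 0.368
```

No electron ever appears in that calculation, yet it predicts what a scope will show to well within component tolerances.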
There is one commonly used concept that requires understanding electricity correctly, and not just as a combination of waterhoses and gizmos. It's impedance, and it directly corresponds to the "controversial" experiment that Veritasium is proposing in his video. Impedance breaks the pipe-of-electrons analogy.
Yes you can keep going a bit further, but that's still the lumped element model. The problem is when you analyze something like a transmission line - like that circuit presented in the video. Or a PCB with very fast signals where you have to understand that the energy moves through the insulator, not through the conductor, or the circuit will not work.
One way to look at this is that there is no such thing as a hose for electricity. It cannot be confined to a conductor, even if it is also wrapped by an insulator. It is only mostly confined. And this is not some failure of materials engineering that we may overcome one day, this is just how this stuff works.
BTW, the answer in the video is 1/c seconds, i.e. the time light takes to travel one meter. And the lightbulb will experience a current determined by the impedance of the transmission line. Then the fields will wrap around the ends, at which point the circuit will start stabilizing around the resistance of the load. It can take a few back-and-forth reflections for the current to stabilize.
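That back-and-forth settling can be sketched with a textbook bounce-diagram iteration. The impedances below are illustrative assumptions, not the values from the video:

```python
# Bounce-diagram sketch of a transmission line settling toward its DC value.
# Source, line impedance, and load values are illustrative assumptions.
Vs, Rs = 1.0, 50.0   # source voltage and source resistance
Z0 = 100.0           # characteristic impedance of the line
RL = 300.0           # load (the "lightbulb")

gamma_src = (Rs - Z0) / (Rs + Z0)    # reflection coefficient at the source
gamma_load = (RL - Z0) / (RL + Z0)   # reflection coefficient at the load

v_load = 0.0
incident = Vs * Z0 / (Rs + Z0)       # first wave launched down the line
for _ in range(50):                  # 50 round trips is plenty here
    v_load += incident * (1 + gamma_load)  # wave arrives, partly reflects
    incident *= gamma_load * gamma_src     # what comes back for the next pass

print(round(v_load, 6))  # -> 0.857143, i.e. Vs * RL / (Rs + RL)
```

The first arrival is set purely by Z0 (the bulb lights immediately at the "wrong" brightness); successive reflections walk it toward the resistive steady state.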
I have gone down this hole many times before, and while it is kind of possible (the equations for a capacitor and an inductor are basically just a spring and a flywheel), it just creates really convoluted images that won't fit well (or will be too convoluted) when you try to integrate them into wider electronics.
Yes, all pipes leak, even DC pipes, in two almost entirely independent ways (electric and magnetic). And the vector product of these leakages (the Poynting vector) is what actually transmits power. Note that this energy transfer happens entirely outside of the conductor.
Sounds bizarre, right? That's why this is mostly ignored unless it can't be, for example in very fast circuits.
> Seemingly not because those people were bad at explaining, but because if you want to explain it as correctly as possible, it just isn't intuitive at all.
Sometimes it's a whole lot easier to speak the truth than to speak in analogies.
My favorite thing about electrical theory is that all this business about the flow of energy going from + to - is the idea of "electron holes" flowing, instead of the actual electrons! Basically all of electronics and electricity uses the hole-flow convention. It seems weird to me that we don't use the electron-flow convention (aka reality), but then again I'm a weird guy.
We say B is electrised positively; A negatively: or rather B is electrised plus and A minus ... These terms we may use until your philosophers give us better.
Here A and B are Franklin's buddies, standing on insulating plates while one of them rubs a glass tube with a piece of, if I remember rightly (can't find the proper source), "buckskin". Then they reach out to join hands and a spark crosses the gap.
Problem is, it isn't even clear from the experiment which of A and B really was negatively charged, because it turns out the charge depends on the nature of the "buckskin" (or whatever term he used), and how hairy, furry, or possibly even leathery it was. The resulting charge could be positive or negative, depending. So he defined the terms, but didn't even clearly assign them to direction of electron flow.
Here leather is above glass, and fur is below it. He was definitely rubbing glass with something like leather or fur, but the resulting charge depends on where in the series that thing was relative to glass.
Historically, several studies have suggested that insulators could be ordered based on the sign of charge they exchange, from the most positive to the most negative. For instance, if glass charges positively to ceramic and ceramic does the same to wood, then glass (usually) charges positively to wood. Thus, glass, ceramic, and wood would form a so-called "triboelectric series."
The problem with these triboelectric series, according to Waitukaitis, is that different researchers get different orderings, and sometimes even the same researcher does not get the same order twice when they redo their own experiment.
There are many cases where there are triangles: material A is positive when rubbed against B, B is positive when rubbed against C, and C is positive when rubbed against A, an issue mentioned by Shaw in 1914.[29] This cannot be explained by a linear series; cyclic series are inconsistent with the empirical triboelectric series.[75] Furthermore, there are many cases where charging occurs with contacts between two pieces of the same material.[76][77][47]
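A linear triboelectric series is just a total order, so Shaw's A-B-C triangle is exactly the situation where no such order can exist. A tiny sketch (material names are placeholders), treating each "X charges positive against Y" observation as a graph edge and asking a topological sort for a consistent series:

```python
# Sketch: Shaw's 1914 "triangle" as a directed graph. An entry means
# "this material charges positive when rubbed against its predecessors".
# A linear triboelectric series is exactly a topological order of this
# graph, which is impossible once the observations contain a cycle.
from graphlib import TopologicalSorter, CycleError

observations = {
    "A": {"B"},   # A positive against B
    "B": {"C"},   # B positive against C
    "C": {"A"},   # C positive against A -> closes the cycle
}

try:
    order = list(TopologicalSorter(observations).static_order())
    print("linear series:", order)
except CycleError:
    print("no linear series can reproduce these observations")
```

This is the formal version of the complaint above: cyclic data simply cannot be flattened into one ranked list, no matter how carefully the experiments are done.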
I guessed it meant "rubbing to create electricity", like a Van de Graaff generator.
Uh, "tribbing" is a sexual act that women can do.
that's how i figured it out, but:
> 1965, "study of friction," from tribo-, a word-forming element in physics with the sense "friction," from Greek tribos "rubbing," from tribein "to rub, rub down, wear away" (from PIE root *tere- (1) "to rub, turn") + -logy. Related: Tribologist; tribological.
I remember reading an early book on the topic where the author describes two kinds of electricity: "glass-electricity" and "resin-electricity". The experiments seemed to involve rubbing either glass or hardened resin (amber?) with something. The author (it wasn't Franklin) concluded, after a series of experiments, that this produces two different "kinds" of electricity which seem to cancel each other out.
Funnily enough, we still don’t fully understand the mechanism by which static electricity is built up when rubbing things together. Professor Merrifield covers this in a very approachable way here: https://youtu.be/0UZb07imNLU. Skip to around 6:00 (or watch the whole thing, it’s well worth it)
> We say B is electrised positively; A negatively: or rather B is electrised plus and A minus ... These terms we may use until your philosophers give us better.
I can relate. This is just a quick hack to get to production, we can always rewrite it later!
But that’s because of historical precedent, if you weren’t aware of this then congrats for being one of today’s lucky 10,000!
Ben Franklin arbitrarily picked the positive anode as the starting point when coming up with the idea of electricity flowing, long before we had any understanding of atomic theory.
It wouldn’t make sense to just invert everything after we discovered that electrons are the actual fundamental charge carriers.
I had a similarly funny discussion with a proper engineering student about refrigeration insulation, and why it isn't more natural to describe it as temporarily trapping cold, given the inevitable nature of heat/entropy.
I remember when I was very young I asked my dad if a refrigerator works by moving cold into the box or heat out of the box and he told me it was the latter. It’s a strong memory because he also said something to the effect of that being a good question and at that age, comments like that had an impact on me.
> about flow of energy going from + to - is the idea of "electron holes" flowing
You mean flow of charge.
My favourite thing about electricity is how the actual energy is transferred on the outside of the wires, in both the directions of positive and negative charge. Resistance is the portion of the energy that accidentally enters the wire. The energy flux inside the wires -- and on the surface of the wires -- is zero. Just outside their surface it is very high.
A capacitor wouldn't work if the energy came from its poles. No, the energy used to charge it enters from the side. This is so counter-intuitive!
Charge flows in the same medium as energy. There's a current term in the equation for electrical power for a reason.
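For what it's worth, the surface-flux claim can be checked numerically: for a DC-carrying resistive wire, the Poynting vector integrated over the wire's cylindrical surface comes out to exactly the I²R dissipation (a standard textbook identity; the numbers below are illustrative):

```python
import math

# Check: for a DC-carrying resistive wire, the Poynting vector integrated
# over the wire's surface equals the I^2*R heat dissipated inside it.
# All numbers are illustrative.
I = 2.0    # amps through the wire
R = 0.5    # total resistance of the wire, ohms
L = 1.0    # wire length, meters
r = 1e-3   # wire radius, meters

E_axial = I * R / L            # E-field along the surface (V/m)
H_phi = I / (2 * math.pi * r)  # magnetic field circling the wire (A/m)
S = E_axial * H_phi            # Poynting magnitude, pointing INTO the wire

power_in = S * 2 * math.pi * r * L  # flux through the cylindrical surface
print(power_in, I**2 * R)           # -> 2.0 2.0 (watts)
```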
That bullshit model about electricity flowing around the wires is good for generating Youtube engagement, but it doesn't represent the actual physics, makes things impossible to calculate, doesn't lead to any intuitive understanding, and makes things impossible to learn. Or, in other words, the model is bullshit.
DC current flows entirely in the wires (up to at least "parts per billion" precision), as does energy, because energy flows at the same place current flows. AC current leaks. Everybody knows that, how it leaks is well known, and there are plenty of resources to calculate almost everything around it.
Exactly. If energy flowed around the wires and resistance was “the portion of the energy that accidentally entered the wire” then larger wires would have larger resistance (due to a higher chance of “stray energy entering the wire”) than smaller wires, which is clearly incorrect. It really is a terrible model in almost every way.
When I was young and had just finished my EE training, I ran into a guy working with electrical systems in cars. He claimed that wires have resistance (true enough), but also that the thicker the wire, the more resistance, so when he wired up cars (for stereos or extra lights etc.) he would use wires as thin as possible in order to reduce the resistance.
I didn't even bother asking why he hadn't thought of just removing the wire altogether.
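The standard lumped model makes the opposite prediction: R = ρL/A, so resistance falls as the cross-section grows. A quick sketch using copper's resistivity (the wire dimensions are made up):

```python
import math

# R = rho * L / A: resistance drops as the wire gets thicker.
rho = 1.68e-8   # ohm*meter, copper at room temperature
L = 5.0         # meters of cable

def resistance(diameter_mm):
    area = math.pi * (diameter_mm * 1e-3 / 2) ** 2  # cross-section in m^2
    return rho * L / area

thin = resistance(0.5)   # ~0.43 ohm
thick = resistance(2.0)  # ~0.027 ohm
print(thin > thick)      # -> True: the thicker wire has LESS resistance
```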
I have expressed this repeatedly: the assignment of a negative value to the electron and a positive value to the proton has probably slowed humanity by a decade.
Anybody who needs to know this detail to design or repair a device, already knows it. There are no simple enough devices where this is relevant for casual repair or modification. You can fix your DC devices such as cellphones and your AC appliances such as refrigerators, and even design your own phones and refrigerators, without knowing in which direction the fundamental particles flow.
I started tutoring other kids in grade school and eventually got paid for it as a side gig later in life. If I ended up covering electronics and/or general electricity, I saw the same thing I saw as an undergrad taking EE courses: confusion. A lot of people found the situation counter-intuitive. It required extra mental labor for them. Hence, multiplied against millions of people, there's overall lost time.
And this isn't just in young people, either. Knew a guy who swore up and down that the "electron holes" really represent positrons.
Bad notation, weird syntax, poor choices in variable names, and so on, all of these are a collective drag which could be streamlined away.
This was pretty clear and readable, I guess. But the most succinct explanation of electricity that I know of is from Stephen Leacock:
> Electricity is of two kinds, positive and negative. The difference is, I presume, that one comes a little more expensive, but is more durable; the other is a cheaper thing, but the moths get into it.
Except in extremely exceptional circumstances, it is not useful to try to reason about the behavior of individual electrons when looking at electricity. Treat it as electric fields and you'll be fine. The fields represent non-quantum-scale reality much better than electron-level models do.
As a notable example, macroscopically electricity is totally symmetric - positive current flows the same as negative current does. There are components that exhibit asymmetric behavior but they can be arbitrarily oriented so it doesn't really matter.
You're broadly correct when it comes to designing practical circuits.
But if your aim is to answer questions like "why do different colour LEDs need different resistors" or "what does it mean for something to be a semiconductor" pretty soon people will start talking about 'electrons' and 'holes' and 'band gaps' and 'depletion regions'.
It is true that this is how we tend to discuss these things, but the important thing in practical circuits (even semiconductor circuits) is how they behave, not why they behave that way.
Really there has been only limited success in discovering new semiconducting technologies motivated from first principles -- it's mostly intuition and dabbling and experimentation that has yielded the advancements. To some degree of course motivated by models of behavior, but it's very easy to ignore all the blind alleys that theory has led down.
Mostly you're better off understanding that semiconductors work by "magic" and knowing what the response curves look like (or building mental approximations and heuristics) and otherwise just treating things in terms of currents and voltages (and fields at the lower level as necessary).
> Most simply, it [electron] just exists as a particular distribution of an electrostatic field in space.
Best simple description of an electron I think I’ve heard yet. I wish we would drop all the dumb analogies. From a kid’s perspective (at least what I can recall from high school), these macroscopic analogies mislead you into thinking the laws of physics work differently than what humanity’s best models of physics actually predict.
For instance, I never liked the sense of “arbitrariness” I felt while learning about the periodic table in K-12 school. The diagonal rule. Hund’s rule. The exception to Hund’s rule. And so on. Don’t even get me started on organic chemistry. But if someone had told me “Forget about billiard balls and wave/particle duality. Our best models consist of solutions to simple and beautiful equations that are extremely difficult to solve”, then that would have made a lot more sense to me.
The author of the article describes the truth as “weird math”. I don’t think that’s necessarily the case. Unitarity is aesthetic—it just “feels right”. The correspondence of atomic orbitals to irreducible representations of symmetry groups is beautiful. Why don’t we teach that to kids? You don’t have to go into the mathematical details of group theory, but just let them know these odd shapes originate from symmetry constraints. Much better than my reaction to seeing an illustration of a d_z^2 orbital in high school. I remember thinking “What the heck is that? This subject makes no sense.”
All of education is “lies to children”. Those lies start off as brazen untruths, designed to get the basic concepts across, and then we, iteratively, make things more and more accurate as time goes by. Eventually, if you go far enough in a particular discipline, you’ll reach the boundary of our knowledge, and that’s when things start to get “fun”.
I did a physics PhD. I still never got a really good answer to a question I asked in year-1 senior-school (11 years old, for non-Brits)… “What, exactly, is a positive charge ?” The waviness of the hands diminished over time, but it never really went away.
The lie I was told: "it has more electrons than it can stably accommodate, so it will give some up to anything around that can accommodate more°".
What's the next level where that breaks down?
°Though the Benjamin-Franklin-reversed-the-signs thing I learned about for the first time up-thread has me thoroughly confused. Positive charge means it... Has fewer?
Asking what charge, momentum or energy (or other conserved quantities like QCD color) in Physics basically boils down to something that is invariant under some symmetry.
Momentum and energy may feel more intuitive, but I'm not sure they really are, especially within QM.
I'm not sure if we have any deeper explanations than these symmetries.
Btw, questions formed like "What is X?" can have this kind of problem in any domain, especially if we expect some answer that is both intuitive and provides an essentialist explanation.
For instance "What is consciousness?". "What is intelligence?", "What is the meaning of life?"
What I've come to think, is that these questions come from the same type of mistake:
As any Physicist would know, the world as described by Physics and the world as we intuitively start to understand it as small children are quite different, especially at scales far removed from our senses (like in QM or Cosmology).
Humans simply don't have access to the full extent of reality, nor would our brains be able to do something useful with it if we had it, since we don't have anything near the processing power to comprehend it.
What we're always stuck in is an inner world model that is some kind of rough representation of the outside world. Now let's assume the outside world actually EXISTS, even if we don't know all that much about it. Physics is just a hint of this mismatch. If we simply let go of the assumption that there is a close correspondence between our internal model of the world and the actual world, we no longer have an obligation to form strict correspondences between objects within our internal simplified simulation and the outside world.
Now we're prepared for the next step: To understand that there probably is a REASON why we have this internal representation: It's there for evolutionary purposes. It helps us act in a world. Even for concepts that do not have a 1:1 correspondence with something in the Physical world, they may very well have correspondences to aspects of the world we're simply not able to comprehend otherwise. For instance, fully understanding what "consciousness" represents (how it emerges) may not even be possible without extreme amounts of computational power (the compute part may be irreducible).
Concepts like charge are similar, except that we DO (through some advanced math) have some kind of ability to build mental models that DO (perhaps) capture what gives rise to it in the Physical world.
But it still will not map onto our intuition in a way that gives us the feeling of "understanding" what it "is". It kind of feels like "consciousness is an emergent property of sufficiently large-scale computational systems that build world models that include themselves". That still doesn't correspond to how we "feel" that consciousness "is".
But if we simply stop insisting on full correspondence between the intuitive representation of the world and the "real" (or rather, the one represented through accumulated scientific knowledge), but instead realize that the intuition MAY still be useful, we not only avoid stress related to the disconnect, we even allow ourselves to bring back concepts (like "free will") into our intuitive world model without worrying about whether it's "real".
This provides two benefits:
1) We are "allowed" to use concepts that we know are not 100% accurate representations, and even have good reason to believe they're fairly useful simplifications of aspects of the world that ARE real, but too complex for us to grasp (like QM charge for a 5-year-old).
2) As opposed to idealists (who think the inner world is primary), we don't fall into the trap of applying those concepts out of context. Many idealist philosophies and ideologies can fail catastrophically by treating such simplified ideas as fundamental axioms from which they can deduce all sorts of absurdities.
They're incredibly useful tools for thinking about things. You don't need or want QM to perform most reasoning tasks. Even MO is often overkill.
I agree though that we should lead with the truth - that these models you're being taught are useful abstractions but ultimately wrong. That each successive model brings with it more accuracy and nuance but is more difficult to comprehend.
A particular strength of that approach is that after making it to QM at the end it leaves you wondering what's next. It really drives home the point that the map is not the territory and that all we as humans can ever actually have is a succession of maps.
> Forget about billiard balls and wave/particle duality.
Actually that one is rather important. QM wave functions really do collapse. Things really do switch from behaving like a wave to behaving like a particle. This fact has significant effects on behavior.
> these odd shapes originate from symmetry constraints
Well they might fit those constraints, but can they really be said to originate from them? Is there actual cause and effect there? The answer to that would require understanding what gave rise to the phenomenon to begin with.
Like all simple descriptions, it's wrong. It [the electron] is a quantum of the electron field, which is a fermionic matter field (spin 1/2) subject to the Pauli exclusion principle.
Electromagnetism is one of the four fundamental forces; it is mediated by photons, the basic quanta of a bosonic field (spin 1), which are electrically neutral.
The interaction of these two fields is depicted via Feynman diagrams.
The macroscopically observed electrostatic field of a charged capacitor is mediated by the superposition of virtual zero-frequency (ν = 0) photons, which are off-shell and non-radiative. The field's energy arises from the cumulative effect of infinitely many virtual photon exchanges. Whether virtual photons are "real" is debatable, and confuses those who prefer intuition to computation.
I agree that we should teach the quantum basis of things earlier. I just think a lot of people don't know it, and we don't have a good curriculum for kids to start with.
We'd also need to revamp some of the math, chemistry, and physics curricula to build on the quantum basis of things.
When I was taught the planetary model of atoms I was quite suspicious. My reaction went from "you can't be serious..." to "are you making fun of me..." to "just because you have a hammer...".
A few years later, orbitals were introduced as, essentially, motion blurs formed by our little zippy guy. I approached our teacher and asked: what if that "motion blur" is all there is, and the billiard-ball electron is just a bed-time story? That's an adequate way to think about it, he said. Electron mass gets more difficult to explain to school kids in this line of thinking.
> We don’t have philosophically satisfying insights into the universe at subatomic scales. We have quantum mechanics: a set of equations that are good at predicting the behavior of elementary particles, but that don’t line up with our intuition about the macroscopic world.
Our analogies and intuitions are based on our macroscopic experienced reality, which seems to be an entirely emergent phenomenon built on those strange behaviors described by quantum mechanics. If those insights ever do come, I don’t believe they’ll correspond to anything prewired into our brains or experienced in our lives, and they will never be remotely satisfying.
> If those insights ever do come, I don’t believe they’ll correspond to anything prewired into our brains or experienced in our lives, and will never be remotely satisfying.
I don't know why you'd assume that.
Euler's identity certainly doesn't correspond to anything prewired, and yet it's very satisfying.
The philosophical insights will be satisfying if they are simple and elegant. Our theory of biological evolution through natural selection isn't remotely prewired either, but that doesn't stop it from being one of the most philosophically satisfying theories we've come up with.
For introductory articles like this I also find it helpful to know that the whole positive/negative thing is arbitrary. In fact, the assignment of "negative" to electrons arose from a mistaken interpretation by Benjamin Franklin of one of his experiments. So if you're wondering why gaining the primary mobile charge carrier makes things more negative, blame Ben Franklin!
> Pay no mind: it’s enough to say that most nuclei on Earth were formed through nuclear fusion in stars and won’t undergo any change on the timescales of interest to electronics — or to terrestrial life.
It's impossible for me to overstate how awesome this is. And how hard it is for me to truly grok.
Yes, and in particular that it's in all the "ordinary stuff" around you - wood, air, glass, sand. Reality is truly extraordinary upon any closer examination.
Great article! I particularly like this paragraph:
>It’s important to note that while the charge equalization process is fast, the drift of individual electrons is not. The field propagates at close to the speed of light in vacuum (circa 300,000 km/s); individual electrons in a copper wire typically slither at speeds measured in centimeters per hour or less. A crude analogy is the travel of sound waves in air: if you yell at someone, they will hear you long before any single air molecule makes it from here to there.
So basically electricity flows like a Newton's cradle. But this leaves one nagging question: what is the nature of the delay? This question also arises when considering the microscopic cause of the index of refraction for light[1]. If you take a simple atom, like hydrogen, and shine a light on it of a particular frequency, I understand that the electron will jump to a higher energy level, and then fall back down. But what governs the delay between these jumps? And also, how is it that, in general, light will continue propagating in the same direction? That is, there seems to be some state-erasure, or else the electron would have to "remember" more details about the photon that excited it. (And who knows? Maybe the electron does "remember" the incident photon through some sort of distortion of the quantum field which governs the electron's motion.) The same question applies to electron flow: what are the parameters that determine the speed of electricity in a conductor, and how does it work?
1. 3blue1brown recently did a great video describing how light "slowing down" can be explained by imagining that each layer of the material introduces its own phase shift to incoming light. Apparently this is an argument Feynman used in his Lectures. But Grant didn't explain the nature of the phase shift! https://www.youtube.com/watch?v=KTzGBJPuJwM
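The "centimeters per hour" figure in the quoted paragraph can be reproduced from the standard drift-velocity relation I = nqAv. The numbers below are common textbook values for copper:

```python
# Drift velocity estimate for copper: v = I / (n * q * A).
# Standard textbook values; the 1 A / 1 mm^2 scenario is illustrative.
I = 1.0        # amps
n = 8.5e28     # free electrons per m^3 in copper
q = 1.602e-19  # electron charge, coulombs
A = 1e-6       # 1 mm^2 cross-section, in m^2

v = I / (n * q * A)           # meters per second, ~7.3e-5
print(round(v * 3600 * 100))  # -> 26, i.e. roughly 26 cm per hour
```

Compare that with the ~100 km/s thermal speeds mentioned elsewhere in the thread: the drift is a vanishingly small bias on top of the random motion.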
What governs the delay between one ball hitting the cradle and the opposite ball going up?
It's the electrical equivalent of the same thing. Specifically, electricity is delayed by the material absorbing it "elastically" for a short time before emitting it back. This is usually modeled as a capacitance and inductance on the medium.
> And also, how is it that, in general, light will continue propagating in the same direction?
It actually doesn't. It mostly follows the medium. That's why you can bend your wires and they keep working.
But if your question is why it doesn't go "backwards": it does, but there's an electrical potential there pushing your electrons in the other direction.
>It actually doesn't. It mostly follows the medium. That's why you can bend your wires and they keep working.
Sorry, it's my fault for introducing light into a discussion about electric current. In fiber optics I believe they add "cladding" to achieve "total internal reflection" that somehow keeps the light going - not sure how it stays coherent though! And in electronics, I assume that the boundary of the conductor with non-conductor (e.g. air) provides a similar function. I've heard that conductors conduct almost entirely on their surface, another curious effect I'd like to understand, and I'd also be curious if any applications use hollow tubes to conduct large currents and save on weight.
> And in electronics, I assume that the boundary of the conductor with non-conductor (e.g. air) provides a similar function.
Electricity inside a conductor works more like a sound wave the article talks about than optical fiber. It's not coherent or directed, you have a high "pressure" on one end pushing the electrons, and they push each other forward as a consequence. There is no care about reflections (up until radio frequencies), it just moves into the direction of less "pressure" through the medium. (Even in RF, but on those the reflections cause noise.)
Optical fiber depends a lot on conservation of momentum. Electrical current has none of that. Even the reflections are caused by the "elastic absorption" of the medium, and don't behave like a collision.
> I've heard that conductors conduct almost entirely on their surface
That's not really right. Conductors conduct through their entire cross-section, unless you have high frequencies. At high frequencies there is magnetic interaction between the electrons, so they are pushed out of the conductor's center, but this is not a universal thing.
And then, at high frequencies you don't use hollow tubes. You use thin wires, insulated from each other, woven in a way that every wire spends the same length in the middle of the bundle.
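The "pushed out of the center" effect (the skin effect) can be put on a rough quantitative footing with the standard skin-depth formula δ = sqrt(2ρ/ωμ). A minimal sketch, using textbook values for copper that are not from this thread:

```python
import math

# Skin depth: the depth at which current density falls to 1/e of its
# surface value. Textbook constants for copper (assumed, not from thread).
RHO_CU = 1.68e-8            # resistivity of copper, ohm*m
MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m (~copper's permeability)

def skin_depth(freq_hz: float) -> float:
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * RHO_CU / (omega * MU_0))

# At mains frequency, the skin depth is larger than most wires, so the
# whole cross-section conducts; at RF, current hugs a thin surface layer.
print(f"60 Hz : {skin_depth(60) * 1000:.1f} mm")   # ~8.4 mm
print(f"1 MHz : {skin_depth(1e6) * 1e6:.0f} um")   # ~65 um
```

This is why thin, individually insulated strands (Litz wire) help at high frequency: each strand is thinner than the skin depth, so its full cross-section stays useful.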
Ever since seeing this time traveling meme [0], I'm constantly thinking about how I would be the worst time traveler ever. Maybe I could explain flight? But not electricity, definitely not the engineering needed to make it usable.
As someone who studied physics and electronics for many years, I still appreciate an article like this for reminding me how profoundly weird science is. Working day to day with the equations and practical applications of electricity gives you a false sense of confidence that we actually have any fucking clue what’s going on.
> The field propagates at close to the speed of light in vacuum (circa 300,000 km/s); individual electrons in a copper wire typically slither at speeds measured in centimeters per hour or less.
It would be worth mentioning why it happens as it's quite interesting.
I'm no physicist, but if I remember correctly, light is a form of electromagnetic radiation, which means EMR travels at the speed of light. In that case, it's not surprising that an electrostatic field can travel at a similar speed.
From my understanding the quote is talking about electrostatic effects that occur when electrons move to fill a void/go away from a negatively charged area. Since the force that makes electrons repel each other is very weak, I think it makes sense. But note that it mentions "a single electron." Voltage deals with a difference in immense scales of electrons, so I assume the effect and speed would be different in practical cases.
If people are going to bring up Veritasium and AlphaPhoenix, then I'm going to want to say something like:
ElectroBOOM and AvE and bigclive have done more to further my understanding of electricity, both theoretically and practically, than everything else combined.
I’ve always wondered if the electrons bound to a nucleus are somehow bound to the element they were attached to.
Changing an element from Hydrogen to Helium or any other variant of conversion seems like it breaks an especially solid confluence. Each proton determines the atomic number, and there is a corresponding electron for each proton after all.
They may float around the universe, but could they still “belong” to the element they were formed with, bound to be impacted in some way when that element converts to another (in a stellar reactor for example).
This would mean electrons are somehow unique most likely, but stranger things have been observed.
To be fair, what he proposed isn't (immediately obviously) mutually exclusive with your points. If it were true it would be almost impossible to detect experimentally.
So it gets tossed on the stack with all the other complex-and-unfalsifiable theories for which no evidence exists.
> To be fair, what he proposed isn't (immediately obviously) mutually exclusive with your points. If it were true it would be almost impossible to detect experimentally.
The obviousness or lack thereof is subjective, but the exclusivity is firmly established. The absolute indistinguishability of particles is deeply woven into quantum mechanics; you don't get a Pauli exclusion principle without it, for example. If the particles remembered their previous lives, and an electron that used to be tied to an iron nucleus weren't completely identical to one that used to be stuck to a carbon nucleus, all of quantum mechanics as we know it would be impossible.
I don't see why? They could be indistinguishable from our perspective while mysteriously being affected in some way if certain things happened to their "partner". We can experimentally set an upper bound on the permissible weirdness but I don't think we can eliminate the possibility.
Experimentally you'd be attempting to detect inexplicable single particle events above some level of rarity. You'd have access to only one side of the pair - you can't tell which one the other side is even if it's right in front of you (and it almost certainly isn't). So there's no discernible (to you) trigger for these events you're trying to detect. So you'd be trying to correlate frequency counts with bulk conditions as averaged across more or less the entire universe.
In the same vein as the God of the gaps the phenomenon could always be hiding below the noise floor.
Electricity finally made some sense to me when I wrote down the basic high school electricity equations and derived the basic energy and charge units from them.
That's how I finally understood that power is energy rate (Joules per unit time), current is charge rate (electrons per unit time), and voltage is energy per unit charge (Joules per electron). Voltage makes a lot more sense as the excitation of electrons; them wanting to jump gaps and go to a lower energy place feels more intuitive in that framing.
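Those relations can be written out as a tiny worked example (the 9 V, 0.5 A, 10 s figures are made up for illustration):

```python
# Voltage: energy per unit charge (J/C). Current: charge per unit time (C/s).
# Power: energy per unit time (J/s). Illustrative numbers only.
volts = 9.0       # J/C
amps = 0.5        # C/s
seconds = 10.0

charge = amps * seconds    # C moved: 5.0
power = volts * amps       # W delivered: 4.5
energy = power * seconds   # J total: 45.0

# The two routes to energy agree: P*t == V*Q
assert energy == volts * charge
print(charge, power, energy)   # 5.0 4.5 45.0
```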
> A crude analogy is the travel of sound waves in air: if you yell at someone, they will hear you long before any single air molecule makes it from here to there.
Isn’t this a very good analogy? What’s so crude about it?
In pop-science articles and videos, the concept is usually illustrated using the dated but intuitive model developed by Niels Bohr around 1913. The model envisions electrons that travel around the nucleus in circular orbits:
Please stop teaching the history of what we used to think the atom looked like. We’ve reached the point where we spend 99% of the material teaching what we know the atom doesn’t look like and very little on what it does look like. Even this author offers a picture for what it doesn’t look like and nothing for what it does. Physicists should know the value of a good picture/mental model better than anyone else.
I challenge you to go on Wikipedia and find the article/space for the current understanding of the atom. Was that hard for you to find? Would a curious high schooler have enough information within that article/space to learn everything you now know (textbooks are too expensive and inaccessible for high schoolers to rely on, physics websites are hit/miss). Is this how you would prefer to have been taught?
The picture of electrons sitting in states (or the empty states without electrons) is not used for historical reasons. It's used because the next level of explanation requires partial differential equations.
A point not mentioned by the article: the electrons in a metal at room temperature are already moving very quickly due to their thermal energy (on the order of 100 km/s), much faster than the speeds quoted in the article, which refer to the "drift velocity".
This thermal motion is essentially random, and the electrons constantly scatter off the nuclei every which way, so it cancels out and doesn't create a net current.
So, it's less that the electrons gently move under the influence of an electric field, and more that the field introduces a slight bias in the existing thermal motion.
E: To clarify in case it may have been unclear, this is unrelated to the speed of propagation of the electric field, which as the article says is the speed of light.
It probably should be noted that the reason the drift velocity is so much lower than velocity due to thermal motion is that electrons cannot move very far before they collide with an atom which changes their direction.
The mean distance they move in a copper wire between collisions is about 4×10⁻⁸ meters (40 nanometers). At 100 km/s it would take 0.4 picoseconds to travel that distance.
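The arithmetic above can be sanity-checked in a few lines (both input values are taken from the comment itself):

```python
# Mean time between electron-lattice collisions in copper
mean_free_path = 4e-8   # m, ~40 nm between collisions (from the comment)
thermal_speed = 1e5     # m/s, the ~100 km/s order quoted above

collision_time = mean_free_path / thermal_speed
print(f"{collision_time * 1e12:.1f} ps")   # 0.4 ps, matching the comment
```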
That reminds me of something interesting about biological cells. The molecules in that thick stew might be bouncing around at 20-250 miles per hour, depending on their size.
More importantly, this means that any given molecule will, in a very short time, bump into almost every other molecule in the cell. Thus, the answer to the question, how does protein X find fuel, or resource Y, or a protein Z it needs? Simple: it doesn't - it eventually just bumps into it. Similarly, in simple cells, there may be no need to transport stuff from place to place - chemical diffusion is fast enough!
Meanwhile, larger cells benefit massively from speeding up those processes, so there exist lots of structures to group and/or transport different molecules.
https://en.wikipedia.org/wiki/Golgi_apparatus
Similarly, organelles group related processes together. So, for example, DNA-related proteins are in the nucleus, not inefficiently bouncing off the cell wall.
Which helps explain why mitochondria were so beneficial. Keeping a bunch of related machinery tightly clustered together makes more efficient use of each individual protein and lets the cell quickly replace anything that gets damaged. ATP, however, can diffuse through the cell just fine, because so many processes use it that extra copies aren't a major issue.
What’s really mind blowing is how fast the gluons inside the protons inside those molecules are moving.
How fast are those moving?
Speed of light.
That's really fast.
> electrons cannot move very far before they collide with an atom which changes their direction
And yet we know it's mostly just empty space. I'm assuming it's more because of the electromagnetic force being particularly strong at those scales rather than a straight up "collision", right?
Isn't a straight up physical "collision" just a strong interaction between the electromagnetic field of multiple particles?
I believe particle colliders really overcome the forces and have the particles touch and annihilate. But sure, at our level it's all normal low-energy electromagnetic stuff and nothing ever really touches anything else.
I would really argue that "electromagnetic stuff" is what "touches" means in the first place.
Atoms aren't really empty. They're electron clouds with an extremely dense core of protons and neutrons, but the electron clouds are what we care about.
I think that’s just arguing over the imprecision of language. Are we referring to the human sense which is the electromagnetic field or are we referring to the abstract mathematical concept of whether two line segments touch which is what the particle accelerator does or matter/antimatter. Both definitions are valid imho.
I think it's really hard to define this fully. The most relevant parts are probably the Pauli exclusion principle (which prevents two fermions from occupying the same point in space if they are otherwise identical), and the size of elementary particles (which, as far as we can tell, is 0). Overall this means that you can get two fermions arbitrarily close, but not to overlapping positions. Since they have no known dimension, they can't "touch" unless they are completely overlapping; and since they can't overlap, and the electromagnetic repulsion increases rapidly as you get closer and closer together, I think the parent poster is right - regardless of inertia, they will always have some kind of electromagnetic interaction that will change their course or nature before actually "touching".
To add a bit of a complication, if two particles that have mass, even one as tiny as the electron, come closer together than some minimum, general relativity predicts that they would collapse into a (really really tiny) black hole. Of course, we don't know how and if general relativity really works at this level, so the relevance of this is unclear.
No, it's just different field interactions. None of this is tangible in the way that billiard balls are tangible. With particle physics there are no "things" that can "touch" - just different events in different particle fields which are more or less likely at different energies and spatial separations.
You can even take it up a level and say that interaction events are the only things that really exist. That's more or less what Copenhagen QM boils down to. It is not at all a given that "particles" even exist between interactions.
Intuition insists that if something happens here and something apparently related happens over there and you can move here and there around to make a line or curve, then something physical is moving between here and there.
But actually - no. Not necessarily. All you have are ghost traces of an apparent chain of causality. And you can play with those traces experimentally to make them do incredibly weird shit in very surprising ways.
When does anything really touch?
never!
So all atoms are married?
Virtual particles are. Usually they wind up touching though, which always destroys both of them.
> mostly just empty space
I'm so tired of this. If atoms are "empty space", then there's no such thing as non-empty space, which makes the concept meaningless. Electron clouds have all the properties we want from non-empty space, mainly excluding other objects made of non-empty space, so let's just admit that's what non-empty space is and move past this pseudo-profound silliness.
I don't know how established this is, but I'm partial to the "vacuum is an elastic, non-linear medium, and particles are waves trapped in the non-linearity - self-confining energy" hypothesis. IANAPhysicist, but this really seems to make a lot of sense and is quite elegant:
https://www.youtube.com/watch?v=tMP5Pbx8I4s
And with this, there's no empty space in atoms, as the vacuum perturbations from each particle spread out to infinity, so inside the atom there's more energy in those perturbations than outside - the space is less empty inside than outside.
But what is the nature of the "perturbations"? Relativistically self-interacting knots/manifolds? Can you take the elastic medium and fold it onto itself (through higher dimensions?) in a manner similar to (but not the same as) a microscopic black hole to stabilize it into a particle?
Either I'm going schizo or I should read some real books on the subject.
Apparently they're just the same stuff that light is - i.e. presumably this: https://en.wikipedia.org/wiki/Vacuum#Quantum_mechanics
As I understand it, the idea in the video is that if vacuum can be seen as non-linear elastic medium, then with the right functions for elasticity and stress, you can create conditions where EM waves at high enough frequency will hit the "sweet spot", a local minimum, where the energy gets confined in space by those non-linear properties, and can't leave without supplying additional energy. And such confined energy seems to behave like you'd expect particles to.
Again, I am not a physicist, so I'm probably wrong in understanding what half of the words I used above mean. But I do understand the idea of multiple forces creating semi-stable states that are "energy traps". For instance, the balance between electric repulsion and attraction from the strong nuclear force is what defines how tightly bound the nucleus of any given atom is (aka the "nuclear binding energy"). The nucleus can't expand or split apart, nor can it contract, unless you supply additional energy. When you do - say, you hit a large atom with high-speed neutrons to break past strong-force attraction, or smash two small nuclei together at high speeds to overcome electric repulsion - the other force takes over, resulting in a rather spectacular release of energy[0] as matter finds a new stable configuration.
So I feel the idea here is similar, but with stress and elasticity in place of strong and electric forces - if there's enough energy propagating through some space, the wave gets trapped in a spatially-confined region instead of dispersing into the medium.
--
[0] - https://www.marketbusinessnews.com/wp-content/uploads/2014/0...
Ye olde Rutherford experiment nicely illustrates the points of 'yes it is all empty space' and why that is both non-obvious and profound. Nothing pseudo about it.
> Ye olde Rutherford experiment nicely illustrates the points of 'yes it is all empty space'
No, it does not. It illustrates that the portion of an atom that strongly scatters alpha particles occupies a small fraction of the atom's volume. Asserting that alpha particle scattering is the only correct definition of the threshold between emptiness and non-emptiness is an awfully strange position to take.
When interacting with other electrons at energy scales relevant to human life, i.e. maybe ten electron volts and down, an electron cloud is somewhere between solid (core electrons especially in heavy atoms) and squishy (valence electrons). Give a rock the ol' "I refute it thus!" and your toes will report that that rock's electrons are quite competently occupying their space.
Ask an alpha particle at nuclear energy scales a million times greater, and that electron cloud is like a swarm of gnats to a speeding car. The electrons only interact with the alpha electromagnetically, and it takes another object that interacts through the strong nuclear force to really bother a nucleus (most of the time).
Ask a neutrino, and it will tell you that all the nuclei in the Sun are but a wisp of fog.
This is a good example of how approximations valid in one regime fail in another but are not therefore useless. Rutherford's experiment reveals that there is more going on with matter than what can be probed by fingers and microscopes and chemical reactions, but it in no way invalidates those other observations.
That's a defensible perspective, I guess, but the fact remains that you don't fall through your chair, and that's all we really want from a description of "solid" matter. Turns out your chair behaves weirdly when you shoot alpha particles through it, but your bottom isn't made of alpha particles, so what does that prove as far as solidity goes? I suppose it proves that solidity is a non-fundamental property of matter, and that is profound, but that doesn't mean we need to change the name. It just seems like a semantic cul-de-sac to me.
I think of it more like a pincushion. A needle will go right into it with very little resistance. One’s hand or fingertip — not so much.
Right. And no one confuses a pincushion for empty space. They just also don't confuse it with adamantium.
You can sit on a pincushion filled very densely with needles as well as on a solid wooden chair. Yet the needles clearly have more space between them than a solid block of wood. Your personal inability to penetrate or see into that space doesn't mean the space isn't there. Indeed, how do you know if there's a microscopic crack? You look at photons reflecting and amplify them into your optical range. How do we know whether or not there's space in the atom range? We use an electron microscope. It's still a microscope - a tool to objectively pierce the physical realm in ways that you can't with your own senses.
None of that, even to the extent it's true, makes a lick of difference to my point. You definitely don't need to lecture me about how scientific instruments supply info my biological senses can't. I know how subatomic particles work. I'm trying to make a point about obfuscatory terminology.
Ever heard of the Casimir Effect?
Just to clarify, the high speed at which electrons generally move around in metal is called "Fermi velocity". Like you said, since it's random, on average it cancels out to 0. When applying an electric field, the electrons achieve a non-zero average velocity which is called the "drift velocity".
I hadn't heard of the term "Fermi velocity", but if it is the classical velocity derived from the Fermi energy, that should only be a lower bound on the average speed of the electrons in the metal.
That said, I don't remember the orders of magnitude here, the relevant question being how cold "room temperature" is with respect to the Fermi level -- a quick Google suggests quite cold, in which case the Fermi velocity should be a very good approximation.
In which case, it's interesting to realise that the motion of the electron gas comes mainly from Pauli exclusion rather than thermal noise! It's not a result I would have expected.
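For the curious, the estimate can be sketched classically as v_F = sqrt(2E_F/m_e), using the textbook Fermi energy of copper (~7 eV, an assumed value not stated in the thread):

```python
import math

# Fermi velocity estimate for copper: v_F = sqrt(2 * E_F / m_e).
# E_F ~ 7 eV is a textbook value for copper (assumption, not from thread).
E_F_EV = 7.0
EV = 1.602e-19    # joules per electron-volt
M_E = 9.109e-31   # electron mass, kg

v_f = math.sqrt(2 * E_F_EV * EV / M_E)
print(f"{v_f / 1000:.0f} km/s")   # ~1570 km/s
```

That this comes out an order of magnitude above the ~100 km/s thermal scale supports the point: the electron gas's motion is dominated by Pauli exclusion, not thermal noise.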
> which as the article says is the speed of light.
This is pedantic because there's practically no difference. But just to be pedantic, it's not the speed of light and I'd argue it's not usually even close to the speed of light. In communications we are talking about anywhere from 60% to 80% of the speed of light through most media.
To be even more pedantic, it is 60-80% of the speed of light through vacuum. Because the speed of light in those media is 60-80% of the speed of light in vacuum.
To be even less pedantic, it's really fast and basically instant.
For most use cases that's true. But for a surprisingly large number of use cases that's not true at all.
I was thinking of using a high-frequency trading example, but here's a case that's a bit more normal: stadium concert audio for a live band. Stadiums are big enough that you need to deal with latency issues for audio, because the back of the venue will get its audio a bit later than the front of the venue (assuming the sound is wired so the board is in the front of the floor and no adjustments are made). That is obviously disconcerting to the audience. Adjustments usually are made to handle this problem.
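To put rough numbers on the stadium example (the venue depth and the 0.7c signal factor are illustrative assumptions, not from the thread):

```python
# Audio delay to the back of a venue vs. electrical signal delay
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
SPEED_OF_LIGHT = 3e8     # m/s in vacuum

venue_depth = 150.0      # m, assumed front-of-floor to back of stadium

audio_delay = venue_depth / SPEED_OF_SOUND            # ~0.44 s, clearly audible
signal_delay = venue_depth / (0.7 * SPEED_OF_LIGHT)   # well under a microsecond

print(f"{audio_delay * 1000:.0f} ms vs {signal_delay * 1e9:.0f} ns")
```

Hundreds of milliseconds of acoustic lag against sub-microsecond electrical lag is why delay towers are time-aligned to the sound, not the wire.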
If it was instant we wouldn’t have latency and the three generals problem would be somewhat trivial (ordering would be mildly interesting in an algorithmic sense, but practically meaningless).
we were hungry so we decided to discuss this philosophically over a pasta lunch, but we deadlocked with our forks and that dispute kept us from our main topic
Confused. Are you saying that, if we took a light bulb (off) and a metal rod (0 charge) beside one another,
Then we were somehow able to turn the light bulb on and apply a charge to that rod at the same time, while also having a detector that can sense a photon and a change in electric field some equal distance away from the bulb and rod.
Then the photon (from the bulb) would reach our detector before the detection of the change in electric field (from the rod)?
Let's suppose the medium is just plain air, and not particularly humid.
Yes. As a more common example, fiber optic cabling is known to transmit information significantly faster than copper.
The speed of fibre optic cabling has more to do with signal integrity and bandwidth than any difference in propagation delay. In fact, in some cases a signal can travel faster on a copper wire (0.8c) than it can through an optical fibre (0.6c).
I am sorry if my experimental setup was not clear.
Copper is clearly a different medium than fiber optic.
That is why I stipulated a single medium for the experiment.
No, the two would arrive at the same time. What the poster above was saying is that the speed of EM propagating through copper is much lower than c, which is the speed of EM in a vacuum.
Overall the picture is like this:
- massless particles like photons travel at speed c in empty space, in a straight line
- massive particles like electrons travel at a speed slightly less than c even in empty space, in a straight line
- inside a medium, any particle traveling in a straight line will quickly bounce off/be absorbed and re-emitted in a new direction because of some field given off by another particle; so, the average speed at which particles actually traverse through the material is lower than c. How low depends on properties of the medium, mainly how dense it is; for copper and most metals commonly used in electronics, this varies between 60-80% of c.
- When applying a potential difference to a metal wire, the electrons which normally are moving at speeds very close to c in the empty space between atoms in random directions (amounting to an overall speed of 0 along the wire) will start collectively moving in the direction of the potential difference (towards the positive node), at a very low average speed called "drift speed"; this is caused by their normally completely random bounces now being biased in the direction of the electric field;
- However, the current in the wire (the EM radiation) moves extremely fast along the wire, at the same speed that light moves (on average) through the wire. You can think of this as being caused by photons (since EM waves are photons) moving much more easily than electrons through the wire, simply because they don't have a charge of their own and so don't get caught so easily by other atoms as electrons do.
So you have 5 speeds relevant to this: the speed of light/photons/EM waves in vacuum (c), the speed of an electron in vacuum (very close to c), the average speed of all electrons in a metal wire without any electric potential (0), the average speed of all electrons in a metal wire with an electric potential (drift velocity, very small), and the speed of EM waves in a wire (60-80% of c in typical conductors).
Edit: one more note that complicates this picture, but the EM waves in a charged wire don't really move inside the wire, or not entirely - they move mostly around the wire - which means that their actual speed depends not (just) on the material from which the wire is made, but also the insulation outside the wire. That is, the EM field will propagate a different fraction of c for a copper wire than for an aluminum wire; but also for a copper wire wrapped in plastic versus one exposed directly in the air.
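The drift speed in that list can be estimated from the standard relation I = nAev_d, using textbook values for copper (carrier density and wire gauge are assumptions, not from the thread):

```python
# Drift velocity: v_d = I / (n * A * e)
N = 8.5e28       # conduction electrons per m^3 in copper (textbook value)
E = 1.602e-19    # elementary charge, C
AREA = 1e-6      # m^2, i.e. an assumed 1 mm^2 wire
CURRENT = 1.0    # A, assumed

v_drift = CURRENT / (N * AREA * E)
print(f"{v_drift * 1e3:.3f} mm/s, or {v_drift * 3600 * 100:.0f} cm/hour")
```

This lands at a few tens of centimeters per hour, consistent with the article's "centimeters per hour or less" for individual electrons.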
So if I understand this correctly, the information is moving along the wire at a speed between 60 and 100% of c.
This brings me to a Veritasium video of a few years back that I didn't quite understand at the time. [1]
The claim being made was that if you connected a light bulb to a switch, with 300,000 km of wire left and right, and if the switch and the light bulb were 1 m away, the light would turn on in 1 m/c seconds (a few nanoseconds).
But this would imply that the information travel is not along the wires, but straight from the switch to the light bulb?
[1] https://www.youtube.com/watch?v=bHIhgxav9LY
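A quick sketch of the two time scales in that thought experiment (the 0.7c propagation speed along the wire is an assumed typical value, not something the video specifies):

```python
# Field jumping the 1 m gap vs. current traversing the full wire length
C = 3e8                  # m/s, speed of light in vacuum
gap = 1.0                # m between switch and bulb
wire_length = 2 * 3e8    # m, i.e. 300,000 km of wire on each side

direct_time = gap / C                  # ~3.3 ns for the field to cross the gap
along_wire = wire_length / (0.7 * C)   # ~2.9 s for current along the wire

print(f"{direct_time * 1e9:.1f} ns vs {along_wire:.1f} s")
```

The video's point is that *some* current (however tiny) appears at the bulb on the nanosecond time scale via the field across the gap, long before the steady-state current established along the wire arrives.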
Iirc I thought they were a bit disingenuous.
In theory the change in electric field will induce a small current in the other wire, and their magical science lamp turns on at any non-zero electricity. Whether the wires are connected or not at the far end doesn’t matter.
They never clarified how strong the other current would be.
Information is a bit weird as a way to put it, but it may be correct. Still, in case this is very intentional wording: the information about the electrical circuit being lit up (that is, any photon emitted at the moment and location that the voltage appeared, such as hitting a switch to close a circuit) still travels at c in all directions. It's only the (main) electric current that travels at 60-80% of c and along the wire.
Also, the phenomenon being discussed in the Veritasium video is slightly different, and that video sort of mixes up two things to make things more mysterious.
The simple explanation for what is going on in the video is that whenever you have a voltage change in a circuit, such as closing it with a switch, you get a tiny bit of radio waves being emitted radially outwards from that point*.
In the Veritasium experiment, with a very precise measurement tool, they could detect the tiny radio pulse emitted by the switch at the moment it was closed, far before the main current reached the same point moving along the long wires.
The other thing he was mixing this up with is the Poynting vector, which is a mathematical object that represents the direction along which power (as in, work, the thing you measure in Watts, not the actual current, which you'd measure in Amperes) propagates in an electrical circuit. It's not clear at all that the Poynting vector is a meaningful aspect of reality and not just some mathematical tool. It does happen to coincide with the direction that the radio waves propagate as described earlier, so some physicists do like to interpret it as a deeper, physically real thing that is related, but you can just as easily ignore it.
Note that when I say that it's "not real", this is similar to how I would describe the Lagrangian and Hamiltonian of a classical mechanics system. It's an extremely useful mathematical tool and it predicts the behavior of the system perfectly well, and is mathematically equivalent to Newton's laws of motion. But while the laws of motion describe real direct aspects of reality, such as objects having mass and momentum, the Lagrangian doesn't really represent an actual thing in physical reality.
The above comes with a gigantic caveat: in QM, the Lagrangian and Hamiltonian correspond to the wave function, which is, to the best of our current understanding, the real underlying thing behind reality, with particles and waves and localized things being the artificial objects we introduce merely for convenience.
* This is normally completely negligible for DC current, but for AC current, where the current switches direction all the time, a relatively high amount of radio waves gets generated radially out from the wires along their whole length. This can lead to problems such as AC power lines causing significant radio interference. This is also normally described as "parasitic", since the power those waves carry is lost from the transmission line, which is designed to carry as much as possible of the power generated at the source towards the destination.
Yes. What's confusing there?
u/cogman10 comment reads to me as if they are saying that EM radiation and light travel at different speeds.
They travel at the same speed, in the same medium. c is the speed of light in a vacuum, all EM radiation travels slower in any other medium.
The complicating factor here is that when you have electricity flowing in a wire, the fields are generally mostly outside the conductor, not in it. That is, the signal propagation delay depends more on what you are using as an insulator around the wire than on the material of the wire. This has had practical consequences in the past; if you replace the insulating jacket of one wire in a twisted pair with a slightly different material, on long runs it will ruin your signal.
EM radiation, whether radio or visible light is photons, and photons only have one speed. However, electrical conduction is the movement of electrons, not photons.
> EM radiation, whether radio or visible light is photons
Is the only part that's not wrong in your post.
Photons only have one speed in empty space. They slow down when traveling through any medium other than vacuum.
> However, electrical conduction is the movement of electrons, not photons.
Electrons move because they influence each other through their fields, which are transmitted by photons. Electrical conduction happens at the speed of the fields, not at the speed of the electrons. When you push one more electron into one side of a conductor, an electron flows out the other side when the fields reach the other side, not when the electron does.
(As an analogy, consider a rubber hose full of steel balls. When you slowly push an additional ball in from one side, another ball starts to fall out of the other side as you push the first ball in, perceptually instantaneously⁰, regardless of the speed at which you push the new ball in.)
(0): After a delay of (length of tube)/(speed of sound in steel)
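To put numbers on that footnote, here is a quick sketch (the 1 m hose length is an assumed example; the speed of sound in steel is an approximate textbook figure):

```python
# The push propagates through the steel balls as a compression wave,
# so the far ball starts moving after length / (speed of sound in steel).
length_m = 1.0                # assumed hose length
sound_in_steel_m_s = 5900.0   # approximate longitudinal speed of sound in steel
delay_s = length_m / sound_in_steel_m_s
print(f"{delay_s * 1e3:.2f} ms")  # a fraction of a millisecond for a 1 m hose
```

The point of the analogy survives the numbers: the delay depends on the wave speed in the medium, not on how fast you push the new ball in.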
I'm not a physicist, but as far as I know, outside of general relativity electromagnetic perturbations always travel at the speed of light (i.e. to affirm that photons always travel at c is correct).
It's only after the fields interact with electrical charges (atoms and their electrons for example) that a secondary field is induced as these charges begin to oscillate. This field will add over the original field, "shielding" an external observer from the original oscillation and apparently slowing down the propagation of electromagnetic waves.
There's a very good video by 3Blue1Brown that explains this kind of weird concept way better than I could: https://www.youtube.com/watch?v=KTzGBJPuJwM
Yes, you can see one of the best people do a demonstration of it https://m.youtube.com/watch?v=2Vrhk5OjBP8
It's close enough to c that you should just use c but it can be observed that it's less.
https://en.wikipedia.org/wiki/Electromagnetic_radiation
"In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted c."
I would expect that in air, the photon from the light source and the perturbation of the electric field from the charge would reach the detector at the same time.
Yes, essentially.
The perturbation of the electrical field causes EM radiation (radio), which moves at the speed of light.
It moves at the same speed as light (it is light), but not at the "speed of light" (c). Light/EM waves move slower than c in air (and much slower than c in a copper wire).
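A rough calculation of those speeds (the refractive index of air and the coax velocity factor are typical illustrative values, not properties of any specific cable):

```python
c = 299_792_458.0  # speed of light in vacuum, m/s

# Phase velocity in a medium: v = c / n
n_air = 1.0003                 # approximate refractive index of air
v_air = c / n_air              # only slightly slower than c

# In cables this is usually quoted as a "velocity factor" instead:
velocity_factor = 0.66         # typical for solid-polyethylene coax (illustrative)
v_cable = c * velocity_factor  # roughly two-thirds of c

print(f"air:   {v_air / 1e3:,.0f} km/s")
print(f"cable: {v_cable / 1e3:,.0f} km/s")
```

So "slower than c" spans a huge range: barely measurable in air, but a large fraction of c is lost in a typical cable.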
Correct, you're detecting electromagnetic fields in the same medium.
>electrons constantly scatter off the nuclei every which way, so it cancels out and doesn't create a net current
no, not because it's every which way.
it doesn't create a net current because if, randomly, net charge moves in some direction, the resultant electric field pushes back on the random movement to bring it back to equilibrium, zero.
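The snap back to equilibrium is extremely fast. As an order-of-magnitude sketch, the charge relaxation time of a conductor is τ = ε₀/σ (using the standard conductivity of copper):

```python
EPS0 = 8.854e-12    # vacuum permittivity, F/m
SIGMA_CU = 5.96e7   # electrical conductivity of copper, S/m

# Any net charge density inside a conductor decays roughly as exp(-t / tau)
tau_s = EPS0 / SIGMA_CU
print(f"{tau_s:.1e} s")  # on the order of 1e-19 s
```

That timescale is far shorter than anything thermal, which is why random charge imbalances never build up into a current.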
Btw, Grok explained the point you brought up rather well. I am personally finding AI to be better at explaining concepts than writing code with imaginary API calls.
"Random Motion: Even without current, electrons are jiggling around at high speeds (~10⁶ m/s at room temperature) due to thermal energy. The electric field just adds a slight bias to this chaotic motion, resulting in the net drift."
The difference is that if you ask it to write code, you'll find its mistakes, but if you ask it to explain neutron stars, you won't.
interestingly this is also true with most popular internet/youtube personalities...
Another good reason why we don't consider "watched a bunch of youtube videos" to be a credentialed education.
that's a really helpful clarification about drift velocity vs. thermal motion... it's easy to get those mixed up. the analogy i always think of is a crowded dance floor - everyone's moving fast, but not really going anywhere until there's a general push in one direction.
Kind of like a chaotic crowd suddenly leaning slightly in one direction rather than an orderly march
This feels like a very nice intuition to have thank you for explaining!
Is this related to how some materials become superconductors at low temperature? Does the slowing down of this electron flux improve the material's ability to conduct electricity or is there some other mechanism at play?
Superconductivity is fascinating. I don't know how people were able to come up with the explanations. Crudely, the reduced temperature means less jiggling of the metal lattice. This in turn makes it possible for the nuclei to be pushed around by electrons to form essentially sound waves (phonons) in the lattice (think of the lattice compressing and expanding due to interplay with electrons). At a certain temperature and therefore a certain frequency of lattice oscillation, electrons pair up to form "Cooper pairs" - they move in concert due to the lattice movement. What's crazy is that Cooper pairs become a sort of pseudoparticle, and their quantum behaviour is different from that of regular electrons. Cooper pairs have integer spin (as opposed to half-integer spin), so they no longer obey the Pauli exclusion principle, and all the electrons in the entire material basically form one giant condensate that extends through the whole material and can all occupy the same lowest energy quantum state.
Similar to superconductors, there are superinsulators, which have an infinite resistance so that no electric current passes through. The phenomenon of superinsulation can be regarded as an exact dual to superconductivity.
https://en.wikipedia.org/wiki/Superinsulator
That is the BEST explanation of superconductivity I have ever heard.
Thanks for the kind words! For anyone curious to dive deeper into the craziness that is quantum mechanics I can highly recommend a few resources:
- Sean M Carroll's work, in particular his Biggest Ideas in the Universe books: https://www.preposterousuniverse.com/biggestideas/
- Artur Ekert, basically the father of Quantum Cryptography has an amazing course for free on youtube: https://www.youtube.com/@ArturEkert . It's a very precise and understandable explanation of quantum computing, and some of the math that is involved with quantum mechanics.
- If you have hours to spare, watch Richard Behiel's videos on Youtube. He's like the 3Blue1Brown of Quantum Physics. His latest video on superconductivity and the Higgs Field is almost 5 hours long (!!!) https://youtu.be/DkH1citHtgs?si=-yQNYDu9TlTpE1A0 . It builds on his other videos, so I'd recommend starting at the beginning.
My background is materials science, so there might be another meaning physicists use that I'm not familiar with, but my very simplified understanding/explanation of phonons is that they are vibrations within a crystal lattice. These vibrations exist at all temperatures above absolute zero, so I don't think it is accurate to say phonons form due to reduced temperature.
> all the electrons in the entire material basically form one giant condensate
Very not my field, but perhaps that's "all the paired electrons"? Brief ai-ing (do we have a verb yet?) suggests only some small fraction of conduction electrons form pairs, let alone all the rest.
Only Cooper pairs can condense, but in a superconducting material at the critical temperature Cooper pairs account for nearly all free electrons.
"Slop pass"?
Another way to put it: Cooper pairs become _bound_ objects. And so their energy levels are quantized, and to excite them, you need to apply more energy than is available from the thermal motion of atoms in a superconductor.
This doesn't happen with unpaired free electrons because their energy spectrum is pretty close to continuous.
Electrons are actually delocalised in a metal: rather than point particles bouncing around the nuclei like a pinball, they're more like waves that ripple and diffract around them. This means that to good approximation, the electrons pass right through each other. Because of this, I don't expect the electron motion to affect resistance much.
What definitely affects resistance is the vibration of the nuclei lattice, in which thermal energy is also stored. This vibration makes the electrons more likely to scatter. This means even in a non-superconducting metal, resistivity drops as you get colder.
The special thing about superconductors is that there's a temperature where the resistivity suddenly drops to zero. (If you look up "superconductivity resistance against temperature", you'll see some graphs showing what I mean.)
I don't know exactly the details of why this happens, but it has something to do with Cooper pairs. Electrons in these states are also sensitive to being knocked out and bumped up to regular conducting states by thermal noise.
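That normal-metal behavior (resistivity falling smoothly with temperature, but never to zero) can be sketched with the standard linear temperature-coefficient model; ρ at 20 °C and α are textbook values for copper, and the linear model is only a rough approximation far above the superconducting regime:

```python
def copper_resistivity(temp_c, rho_20=1.68e-8, alpha=0.0039):
    """Linear approximation: rho(T) = rho_20 * (1 + alpha * (T - 20 C))."""
    return rho_20 * (1 + alpha * (temp_c - 20.0))

# Resistivity shrinks steadily as the metal cools, but for a normal
# metal it never drops to zero the way a superconductor's does.
for t in (100, 20, -50, -150):
    print(f"{t:5d} C -> {copper_resistivity(t):.2e} ohm*m")
```

The superconducting transition is qualitatively different: instead of this gradual slide, the resistivity falls discontinuously to zero at the critical temperature.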
On a whim, I bought a book called There Are No Electrons at a used book store, some years ago.
The idea of the book is that we spend lots of time teaching students various incorrect and inconsistent models for how electricity works, that also don’t optimally build intuition for working with the stuff.
The book’s remedy is to say “forget all that: here’s a wrong model that is good at building intuition for working with electricity, and if you’re not planning to go for a physics PhD, that’s much better for you than the other wrong models”
I don’t know enough about electricity to evaluate whether this was a good idea or well executed, but it’s an interesting approach.
https://goodreads.com/book/show/304551.There_Are_No_Electron...
Could you give a gist of such a wrong model?
If that's the book I am thinking of, everything is an analogy with little aliens called greenies that bounce on trampolines.
I couldn't get through it. I got to just past the holes part.
It is written well, so might be worth a shot.
Angry pixies.
Or the plum pudding thing. Oh, and phlogiston.
The classical model of an atom with circulating dots representing electrons. I believe those dots must be more like waves.
“There is no X” is an interesting trope. I’ve heard it with Fish and Trees as well.
Would these not exist due to taxonomic reasons (trees are just big plants, fish are too many things to be one thing) or for other reasons?
Trees are also too many things to be one thing.
https://eukaryotewritesblog.com/2021/05/02/theres-no-such-th... crops up on HN periodically.
And it’s not just ancestors! One thing that blew me away after moving from Finland to Germany was that while willows in Finland are predominantly shrubs or bushes, and maples are trees, in Germany willows grow to be trees, but maples mostly stay as vines.
There's a book, "Why Fish Don't Exist", which is pretty good. Evolutionarily, my understanding is that things are usually named after their ancestry (phylogenetically?), while fish are basically... "has a fish shape and is in water", which becomes awkward. Lungfish and coelacanths are a lot more closely related to humans than to salmon.
At the end of the day, every model is "wrong" to some extent
I do embedded stuff and spend a lot of time looking at electric signals.
The attitude I have about knowing stuff is asking whether I can use it predictively or for design. I feel that for most attempts at explaining how electricity actually works the answer is no. And the amount of torment I'd have to subject my brain to in order to make that a yes is higher than I'd like.
There is a story of a student taking an oral exam at Oxford or Cambridge many many years ago.
Examiner: "What is electricity?"
Student: "Oh, I do know, I mean I used to know, but now I've forgotten."
Examiner: "How very unfortunate. In the whole of history only two people have known what electricity is - the Creator and yourself. And now one of the two has forgotten."
That reminds me of my introduction to linguistics class, where I promptly learned that no one knows what a word is.
It's funny how the deeper you go into the subject, the more you realize that even the best explanations are just increasingly refined approximations
I've heard it said that we don't know what electricity is or how it works. To what extent or in what sense is that true?
It depends how deep you go. I mean it's the force between charged particles and their movements but if you go too deep into how did the laws of physics get there and what are electrons made of you get stuck.
Why is there anything at all?
If we were ever to actually work that out, you can be sure it would all disappear and immediately be replaced with something even more comprehensive.
I've got a theory it's all maths and is there in the same way that two plus two equals four, because it has to. Not that it gets you anywhere really.
So the answer to why is there anything at all? is:
It wasn't a choice.
That goes along with the other old saying, "All models are false, but some models are useful." We don't know what electricity is in the sense that if you keep asking "But why does object X cause (or experience) effect Y?", you will eventually reach a point where we don't know the answer.
In that sense, we don't know what anything is. But we can still use it. And because everything we learn seems to become useful sooner or later, it doesn't pay to stop asking.
I'm not sure that was the point of the poster's question about electricity, because I've heard the same assertion made by science writers and such.
Our current BFF, ChatGPT, says the question is about "charge" in that we don't know why particles have a charge. So what is a "charge" and why? Gravity is also presented as a thing we don't fundamentally (ontologically) know about. Interesting!
And not disagreeing with the desire to keep asking, nor with the desire to find a final answer. The author of the article puts it fairly well:
We don’t have philosophically satisfying insights into the universe at subatomic scales...there’s no straightforward explanation of what a bound electron actually does: it’s not orbiting the nucleus or spinning around its own axis in any conventional sense. Most simply, it just exists as a particular distribution of an electrostatic field in space.
The answer to "why?" has to stop somewhere.
Why?
Because 'unknowable' will stop you cold.
And many unknowns are practically unanswerable.
But don't worry, you won't exhaust the findable.
There's a finite amount of time until the heat death of the universe and a finite number of atoms to ask and answer the question.
I got told what electricity is in primary school, in middle school, in high school, and in university.
Every time I understood it less.
I even watched some videos where people interviewed physics professors, to explain what it really is, and the explanations only got more convoluted.
Seemingly not because those people were bad at explaining, but because if you want to explain it as correctly as possible, it just isn't intuitive at all.
AlphaPhoenix's measurements, experiments and visualisations really help! They show some things that are normally not even taught, and even electrical engineers will appreciate seeing this. This video he made about it is very good, and worth watching if you wonder about how electricity propagates from one end of a conductor to the other.
https://www.youtube.com/watch?v=2AXv49dDQJw
Yes, this video and the one before called "An intuitive approach for understanding electricity" are absolutely great.
He's done a handful of similar videos, and they're all by far the best explanations of electricity I've ever encountered.
And at each level the degree of confidence seems to be dropping. When you get to the point where the explanation includes an electron being made from changes in a quantum field, and a quantum field is probability, it starts to feel like there's nothing underpinning reality.
"It's all just information" is a valid take on modern physics, with quite a few adherents.
> nothing underpinning reality
Gravity is a thing, and we can feel it under us. But we can’t touch it, or see it.
Intuitively things that we can touch and see should be qualitatively different, but this intuition is wrong. Turns out it’s just big weak fields like gravity, and small strong fields like electromagnetism.
> and we can feel it under us.
I think that we don't feel gravity directly, just our atoms resisting being squashed.
> it starts to feel like there’s nothing underpinning reality.
"Lock it up, they're starting to catch on."
;-)
Impostor Syndrome.
Ugh, this is one thing I really hate about academia. There's this weird fixation on trying to make things intuitive or tangible at every step along the way. The result is that you get five hundred very wrong/sort of wrong/okayish explanations that all differ enough to make it so we score the student wrong during examinations anyway because they chose the wrong tutorial to follow.
Sometimes, things are just hard to understand. Sooner or later students are going to have to face that, so why do we delay the inevitable?
Because there's a lot you can do with basic knowledge of a subject.
Very few people need to know the subatomic behaviour of electromagnetic fields that make electricity work, but all of us need to know that it travels in wires and it can kill you.
I'm particularly talking about in the context of teaching someone what electrons are and how they behave at the subatomic level. High level overview? Sure, use analogies or models that aren't 100% accurate. But if we're talking about students in a university, then it shouldn't really be an issue to get into the nitty gritty.
I see this as well when it comes to teaching programming to freshman CS students. For some reason, we've strayed away from lower level languages like C and don't introduce the low level details until students are quite far into their curriculum. Abstracting away the details just muddies the waters in my opinion.
As a recent graduate who did both, since I switched from a very theory-first uni to a practice-first uni, I find starting with a high-level language much better.
If you're trying to learn "class, method, extends, static, var/Integer/int, interface, abstract, virtual, different exceptions, recursion and loops, etc" having to also learn about memory is not helping and if you try to do basic pointers first it becomes kind of like spell chanting. You just start trying different combinations of * and & until it works. Partially because you're a bit overwhelmed and partially because it seems like useless knowledge.
The more I worked the more I started to appreciate subjects like operating systems, algorithms, etc but at the time of doing them they seemed too theoretical since they were way above my practical knowledge and useless to the projects I was doing. "Why would I need to know how to build a file system/compiler/etc? Why in the hell would I ever do that?"
The other aspect is that if you start with the theoretical side, after a while you end up worrying you won't be able to code by the end. For example, you've been there for 2 semesters, and while you can talk about low-level subjects, you've barely done a todo list/calculator/chess. If you start with the more high-level things, by the third semester you can definitely be working part time.
This is from the POV of doing CS to start working as a dev and not do academia.
Yep, that's the paradox of learning: sometimes the more you know, the more you realize you don't know
it's confusing because all of the words we use in the field make it seem like it's akin to water flowing (current) whereas the physical phenomenon is far beyond the movements of individual electrons.
The problem with (or the advantage of) the water flowing analogy, or even more broadly the discrete element model, is that it explains reality well enough to be used in most practical situations. Schematics are ubiquitous, yes they are "fake", but they are also usually "correct enough". Kind of like the incorrect Bohr's model of electrons orbiting the nucleus actually does explain the emission spectra (up to a point).
But there is an accessible video that explains electricity pretty well. Veritasium - The Big Misconception About Electricity: https://www.youtube.com/watch?v=bHIhgxav9LY
There is one commonly used concept that requires understanding electricity correctly, and not just as a combination of water hoses and gizmos. It's impedance, and it directly corresponds to the "controversial" experiment that Veritasium is proposing in his video. Impedance breaks the pipe-of-electrons analogy.
Are you sure we can't explain impedance with the water analogy?
You would have to start with alternating current water, since "DC" water maps to DC, where impedance = resistance.
Once you've got alternating water, you can add inductance (inertia) and capacitance (rubber diaphragm tanks) and I think it all works out.
It's just that we don't have a good intuition for alternating water current so it's not a very useful analogy in that case.
Yes you can keep going a bit further, but that's still the lumped element model. The problem is when you analyze something like a transmission line - like that circuit presented in the video. Or a PCB with very fast signals where you have to understand that the energy moves through the insulator, not through the conductor, or the circuit will not work.
One way to look at this is that there is no such thing as a hose for electricity. It cannot be confined to a conductor, even if it is also wrapped by an insulator. It is only mostly confined. And this is not some failure of materials engineering that we may overcome one day, this is just how this stuff works.
BTW, the answer in the video is 1/c seconds, i.e. one meter worth of speed of light. And the lightbulb will experience current determined by the impedance of the transmission line. Then the fields will do a full wraparound the ends, at which point the circuit will start stabilizing around the resistance of the load. It can take a few back-and-forth iterations to stabilize the current.
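Numerically, the initial delay and first-pass current look something like this (the 1 m gap is from the video; the battery voltage and the characteristic impedance of the open-wire line are assumed figures for illustration):

```python
c = 299_792_458.0  # speed of light, m/s

gap_m = 1.0                    # 1 m between battery and bulb (from the video)
initial_delay_s = gap_m / c    # when the bulb first sees any current
print(f"{initial_delay_s * 1e9:.2f} ns")  # a few nanoseconds

# Before the wave wraps around the ends, the source sees the line's
# characteristic impedance, not the bulb's resistance.
V = 12.0    # assumed battery voltage
Z0 = 500.0  # assumed characteristic impedance of the open-wire line, ohms
initial_current_A = V / Z0
print(f"{initial_current_A * 1e3:.0f} mA flows before the circuit stabilizes")
```

Only after several round trips does the current settle toward V divided by the load resistance, as the comment above describes.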
I have gone down this hole many times before, and while it is kind of possible (the equations for a capacitor and an inductor are basically just a spring and a flywheel), it just creates really convoluted images that won't fit well (or will be too complicated) when you try to integrate them into wider electronics.
Antennas are sprinklers in the water analogy.
And when looking at alternating current, all the pipes are kind of leaky?
Yes, all pipes leak, even DC pipes, in two almost entirely independent ways (electric and magnetic). And the vector product of these leakages (the Poynting vector) is what actually transmits power. Note that this energy transfer happens entirely outside of the conductor.
Sounds bizarre, right? That's why this is mostly ignored unless it can't be, for example in very fast circuits.
It's not a bad analogy if you also consider the pressure wave. That travels a whole lot faster than the water molecules.
>>Seemingly not because those people were bad at explaining, but because if you want to explain it as correctly as possible, it just isn't intuitive at all.
Sometimes its a whole lot easier to speak the truth than to speak in analogies.
My favorite thing about electrical theory is that all this business about flow of energy going from + to - is the idea of "electron holes" flowing, instead of the actual electrons! Basically all of electronics and electricity uses hole flow convention. It seems weird to me we don't use electron-flow convention (aka: reality), but then again I'm a weird guy.
The terms are due to Ben Franklin:
We say B is electrised positively; A negatively: or rather B is electrised plus and A minus ... These terms we may use until your philosophers give us better.
Here A and B are Franklin's buddies, standing on insulating plates while one of them rubs a glass tube with a piece of, if I remember rightly (can't find the proper source), "buckskin". Then they reach out to join hands and a spark crosses the gap.
Problem is, it isn't even clear from the experiment which of A and B really was negatively charged, because it turns out the charge depends on the nature of the "buckskin" (or whatever term he used), and how hairy, furry, or possibly even leathery it was. The resulting charge could be positive or negative, depending. So he defined the terms, but didn't even clearly assign them to direction of electron flow.
Edit: the ambiguity is shown in this picture:
https://en.wikipedia.org/wiki/Triboelectric_effect#/media/Fi...
Here leather is above glass, and fur is below it. He was definitely rubbing glass with something like leather or fur, but the resulting charge depends on where in the series that thing was relative to glass.
> but the resulting charge depends on where in the series that thing was relative to glass
You'd think we would understand the science of contact/static/tribo electricity by now... And yet this posted 1 day ago: "Static electricity depends on materials' contact history" https://phys.org/news/2025-02-static-electricity-materials-c...
Discuss: https://news.ycombinator.com/item?id=43134657 and https://en.wikipedia.org/wiki/Triboelectric_effect#Explanati...
We don't even really understand the "tribo-" part alone. Never mind with electricity added.
I guessed it meant "rubbing to create electricity" like a van De Graaf.
Uh, "tribbing" is a sexual act that women can do.
that's how i figured it out, but:
> 1965, "study of friction," from tribo-, a word-forming element in physics with the sense "friction," from Greek tribos "rubbing," from tribein "to rub, rub down, wear away" (from PIE root *tere- (1) "to rub, turn") + -logy. Related: Tribologist; tribological.
Woah. Well that just adds to my confusion!
I just came in here thinking hole flow was weird but wow, I need to learn some things!
I remember reading an early book on the topic where the author describes two kinds of electricity: "glass-electricity" and "resin-electricity". The experiments seemed to involve rubbing either glass or hardened resin (amber?) with something. The author (it wasn't Franklin) concluded, after a series of experiments, that this produces two different "kinds" of electricity which seem to cancel each other out.
Edit: I think I found the author: https://en.wikipedia.org/wiki/Charles_Fran%C3%A7ois_de_Ciste...
His wikipedia page seems to confirm he discovered there are two kinds of electricity and named them "vitreous" and "resinous".
And then once you realize that the ancient Greek name for amber was “ἤλεκτρον” or “electron”…
Funnily enough, we still don’t fully understand the mechanism by which static electricity is built up when rubbing things together. Professor Merrifield covers this in a very approachable way here: https://youtu.be/0UZb07imNLU. Skip to around 6:00 (or watch the whole thing, it’s well worth it)
> We say B is electrised positively; A negatively: or rather B is electrised plus and A minus ... These terms we may use until your philosophers give us better.
I can relate. This is just a quick hack to get to production, we can always rewrite it later!
As usual, xkcd has a relevant comic about this: https://xkcd.com/567/
Found it from the horse's mouth, finally: "We rub our tubes with buckſkin".
https://archive.org/details/experimentsobser00fran_0/page/17...
Don't know where Randall get "silk" from.
Silk is usually used in school demonstrations in place of buckskin.
This makes the robot apocalypse happen at an increased rate. All the advancements made without this stupid error infecting everything.
But that’s because of historical precedent, if you weren’t aware of this then congrats for being one of today’s lucky 10,000!
Ben Franklin arbitrarily picked the positive anode as the starting point when coming up with the idea of electricity flowing, long before we had any understanding of atomic theory.
It wouldn’t make sense to just invert everything after we discovered that electrons are the actual fundamental charge carriers.
I had a similarly funny discussion with a proper engineering student about refrigeration insulation, and why it isn't just as natural to describe it as temporarily trapping cold, given the inevitable nature of heat/entropy.
I remember when I was very young I asked my dad if a refrigerator works by moving cold into the box or heat out of the box and he told me it was the latter. It’s a strong memory because he also said something to the effect of that being a good question and at that age, comments like that had an impact on me.
It seems to me like it's an artifact of how it was all discovered and initially implemented.
like scientists didn't realize electrons were flowing in the opposite direction, but engineers already had working electrical devices
> about flow of energy going from + to - is the idea of "electron holes" flowing
You mean flow of charge.
My favourite thing about electricity is how the actual energy is transferred on the outside of the wires, in both the directions of positive and negative charge. Resistance is the portion of the energy that accidentally enters the wire. The energy flux inside the wires -- and on the surface of the wires -- is zero. Just outside their surface it is very high.
A capacitor wouldn't work if the energy came from its poles. No, the energy used to charge it enters from the side. This is so counter-intuitive!
Charge flows on the same medium as energy. There's current on the equation of electrical power for a reason.
That bullshit model about electricity flowing around the wires is good for generating Youtube engagement, but it doesn't represent the actual physics, makes things impossible to calculate, doesn't lead to any intuitive understanding, and makes things impossible to learn. Or, in other words, the model is bullshit.
DC current flows entirely in the wires (up to at least "parts per billion" precision), as does energy, because energy flows at the same place current flows. AC current leaks. Everybody knows that, how it leaks is well known, and there are plenty of resources to calculate almost everything around it.
Exactly. If energy flowed around the wires and resistance was “the portion of the energy that accidentally entered the wire” then larger wires would have larger resistance (due to a higher chance of “stray energy entering the wire”) than smaller wires, which is clearly incorrect. It really is a terrible model in almost every way.
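The geometry of resistance makes the point directly: R = ρL/A, so a thicker wire has less resistance, not more (the copper resistivity is the standard value; lengths and diameters are assumed examples):

```python
import math

def wire_resistance(length_m, diameter_m, resistivity=1.68e-8):
    """R = rho * L / A for a round wire (rho defaults to copper, ohm*m)."""
    area = math.pi * (diameter_m / 2) ** 2
    return resistivity * length_m / area

thin = wire_resistance(5.0, 0.5e-3)   # 5 m of 0.5 mm diameter wire
thick = wire_resistance(5.0, 2.0e-3)  # 5 m of 2.0 mm diameter wire
print(f"thin:  {thin:.3f} ohm")
print(f"thick: {thick:.3f} ohm")
```

Quadrupling the diameter cuts the resistance by a factor of sixteen, which is exactly the opposite of what a "stray energy entering the wire" picture would predict.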
When I was young and had just finished my EE training I ran into a guy working with electrical systems in cars.. he claimed that wires have resistance (true enough), but also that the thicker the wire, the more resistance.. so when he wired up cars (for stereo or extra lights etc) he would use as thin wires as possible in order to reduce the resistance.
I didn't even bother asking why he hadn't thought of just removing the wire altogether.
> accidentally
Well, we may see it this way, but there's nothing accidental in it, it's just an inherent property of every conductor (except superconductors).
And it's wild for me that we've been teaching it this way for so long just because of a historical guess made before electrons were even discovered
It’s weird in semiconductor physics too because the electrons flow uphill through voltage potentials
I have expressed this repeatedly: the assignment of a negative value to the electron and a positive value to the proton has probably slowed humanity by a decade.
Anybody who needs to know this detail to design or repair a device, already knows it. There are no simple enough devices where this is relevant for casual repair or modification. You can fix your DC devices such as cellphones and your AC appliances such as refrigerators, and even design your own phones and refrigerators, without knowing in which direction the fundamental particles flow.
It didn’t
Could you give one example
In terms of pedagogy, it is a problem.
I started tutoring other kids in grade school and eventually got paid for it as a side gig later in life. If I ended up covering electronics and/or general electricity, I saw the same thing I saw as an undergrad taking EE courses: confusion. A lot of people found the situation counter-intuitive. It required extra mental labor for them. Hence, multiplied against millions of people, there's overall lost time.
And this isn't just in young people, either. Knew a guy who swore up and down that the "electron holes" really represent positrons.
Bad notation, weird syntax, poor choices in variable names, and so on, all of these are a collective drag which could be streamlined away.
Seems a pretty trivial change, like, hot water's on the left tap not the right tap.
Humans are more than capable of handling that minor conceptual change. Life routinely throws far more challenging changes to adapt to.
Nothing fundamentally changes because of Franklin's incorrect guess. It's a "the post-it note says 'there' instead of 'here'" kind of thing.
Certainly not as grandiose and melodramatic as putting humanity back by a decade.
This was pretty clear and readable, I guess. But the most succinct explanation of electricity that I know of is from Stephen Leacock:
> Electricity is of two kinds, positive and negative. The difference is, I presume, that one comes a little more expensive, but is more durable; the other is a cheaper thing, but the moths get into it.
And that's sort of all I need to know.
Except in extremely exceptional circumstances, it is not useful to try to reason about the behavior of electrons when looking at electricity. Treat it as electrical fields and you'll be fine. The fields represent reality at the non-quantum scale much better than particle-level models do.
As a notable example, macroscopically electricity is totally symmetric - positive current flows the same as negative current does. There are components that exhibit asymmetric behavior but they can be arbitrarily oriented so it doesn't really matter.
You're broadly correct when it comes to designing practical circuits.
But if your aim is to answer questions like "why do different colour LEDs need different resistors" or "what does it mean for something to be a semiconductor" pretty soon people will start talking about 'electrons' and 'holes' and 'band gaps' and 'depletion regions'.
It is true that this is how we tend to discuss these things, but the important thing in practical circuits (even semiconductor circuits) is how they behave, not why they behave that way.
Really there has been only limited success in discovering new semiconducting technologies motivated from first principles -- it's mostly intuition and dabbling and experimentation that has yielded the advancements. To some degree of course motivated by models of behavior, but it's very easy to ignore all the blind alleys that theory has led down.
Mostly you're better off understanding that semiconductors work by "magic" and knowing what the response curves look like (or building mental approximations and heuristics) and otherwise just treating things in terms of currents and voltages (and fields at the lower level as necessary).
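To illustrate the "behavior, not why" point with the LED question from upthread: in practice you just read the forward voltage off the datasheet and size the resistor from Ohm's law. A sketch (the forward voltages are typical ballpark figures, not exact; the 5 V supply and 20 mA target are arbitrary choices):

```python
# Why different-colour LEDs want different resistors: the forward
# voltage V_f tracks the band gap (higher-energy photons -> higher V_f),
# so the series resistor R = (V_supply - V_f) / I differs per colour.
SUPPLY_V = 5.0    # supply rail, volts
TARGET_I = 0.020  # desired LED current, 20 mA

forward_voltage = {  # typical forward voltages at ~20 mA (ballpark)
    "red": 1.8,
    "green": 2.1,
    "blue": 3.2,
}

for colour, vf in forward_voltage.items():
    r = (SUPPLY_V - vf) / TARGET_I
    print(f"{colour:>5}: V_f = {vf:.1f} V -> R = {r:.0f} ohm")
```

Note that nothing here requires understanding band gaps; knowing "blue LEDs drop more voltage" is enough to do the engineering.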
> Most simply, it [electron] just exists as a particular distribution of an electrostatic field in space.
Best simple description of an electron I think I’ve heard yet. I wish we would drop all the dumb analogies. From a kid’s perspective (at least what I can recall from high school), these macroscopic analogies mislead you into thinking the laws of physics work differently than what humanity’s best models of physics actually predict.
For instance, I never liked the sense of “arbitrariness” I felt while learning about the periodic table in K-12 school. The diagonal rule. Hund’s rule. The exception to Hund’s rule. And so on. Don’t even get me started on organic chemistry. But if someone had told me “Forget about billiard balls and wave/particle duality. Our best models consist of solutions to simple and beautiful equations that are extremely difficult to solve”, then that would have made a lot more sense to me.
The author of the article describes the truth as “weird math”. I don’t think that’s necessarily the case. Unitarity is aesthetic—it just “feels right”. The correspondence of atomic orbitals to irreducible representations of symmetry groups is beautiful. Why don’t we teach that to kids? You don’t have to go into the mathematical details of group theory, but just let them know these odd shapes originate from symmetry constraints. Much better than my reaction to seeing an illustration of a d_z^2 orbital in high school. I remember thinking “What the heck is that? This subject makes no sense.”
All of education is “lies to children”. Those lies start off as brazen untruths, designed to get the basic concepts across, and then we, iteratively, make things more and more accurate as time goes by. Eventually, if you go far enough in a particular discipline, you’ll reach the boundary of our knowledge, and that’s when things start to get “fun”.
I did a physics PhD. I still never got a really good answer to a question I asked in year-1 senior-school (11 years old, for non-Brits)… “What, exactly, is a positive charge ?” The waviness of the hands diminished over time, but it never really went away.
There is an old story I read years ago that went something like:
When you are in grade school science you are told that a car can just be modeled as a cube.
In high school you learn you can use 3 cubes.
In college you learn you can do with hundreds of cubes.
In post-doc you learn to do it with billions of points and fractal levels of interaction.
When writing the text book for grade school after years in physics academia, you write:
"A car can be sufficiently modeled as a cube".
The lie I was told: "it has more electrons than it can stably accommodate, so it will give some up to anything around that can accommodate more°".
What's the next level where that breaks down?
°Though the Benjamin-Franklin-reversed-the-signs thing I learned about for the first time up-thread has me thoroughly confused. Positive charge means it... Has fewer?
Asking what charge, momentum or energy (or other conserved quantities like QCD color) in Physics basically boils down to something that is invariant under some symmetry.
Momentum and energy may feel more intuitive, but I'm not sure they really are, especially within QM.
I'm not sure if we have any deeper explanations than these symmetries.
Btw, questions formed like "What is X?" can have this kind of problem in any domain, especially if we expect some answer that is both intuitive and provides an essentialist explanation.
For instance "What is consciousness?". "What is intelligence?", "What is the meaning of life?"
What I've come to think, is that these questions come from the same type of mistake:
As any Physicist would know, the world as described by Physics and the world as we intuitively start to understand it as small children are quite different, especially at scales far removed from our senses (like in QM or Cosmology).
Humans simply don't have access to the full extent of reality, nor would our brains be able to do anything useful with it if we had it, since we don't have anywhere near the processing power to comprehend it.
What we're always stuck in is an inner world model that is some kind of rough representation of the outside world. Now let's assume the outside world actually EXISTS, even if we don't know all that much about it. Physics is just a hint of this mismatch. If we simply let go of the assumption that there is a close correspondence between our internal model of the world and the actual world, we no longer have an obligation to form strict correspondences between objects within our internal simplified simulation and the outside world.
Now we're prepared for the next step: To understand that there probably is a REASON why we have this internal representation: It's there for evolutionary purposes. It helps us act in a world. Even for concepts that do not have a 1:1 correspondence with something in the Physical world, they may very well have correspondences to aspects of the world we're simply not able to comprehend otherwise. For instance, fully understanding what "consciousness" represents (how it emerges) may not even be possible without extreme amounts of computational power (the compute part may be irreducible).
Concepts like charge are similar, except that we DO (through some advanced math) have some kind of ability to build mental models that DO (perhaps) capture what gives rise to it in the Physical world.
But it still will not map onto our intuition in a way that gives us the feeling of "understanding" what it "is". It kind of feels like "consciousness is an emergent property of sufficiently large-scale computational systems that build world models that include themselves". Still doesn't correspond to how we "feel" that consciousness "is".
But if we simply stop insisting on full correspondence between the intuitive representation of the world and the "real" one (or rather, the one represented through accumulated scientific knowledge), and instead realize that the intuition MAY still be useful, we not only avoid stress related to the disconnect, we even allow ourselves to bring back concepts (like "free will") into our intuitive world model without worrying about whether they're "real".
This provides two benefits:
1) We are "allowed" to use concepts that we know are not 100% accurate representations, and even have good reason to believe they're fairly useful simplifications of aspects of the world that ARE real, but too complex for us to grasp (like QM charge for a 5-year-old).
2) As opposed to idealists (who think the inner world is primary), we don't fall into the trap of applying those concepts out of context. Many idealist philosophies and ideologies can fail catastrophically by treating such simplified ideas as fundamental axioms from which they can deduce all sorts of absurdities.
> I wish we would drop all the dumb analogies.
They're incredibly useful tools for thinking about things. You don't need or want QM to perform most reasoning tasks. Even MO is often overkill.
I agree though that we should lead with the truth - that these models you're being taught are useful abstractions but ultimately wrong. That each successive model brings with it more accuracy and nuance but is more difficult to comprehend.
A particular strength of that approach is that after making it to QM at the end it leaves you wondering what's next. It really drives home the point that the map is not the territory and that all we as humans can ever actually have is a succession of maps.
> Forget about billiard balls and wave/particle duality.
Actually that one is rather important. QM wave functions really do collapse. Things really do switch from behaving like a wave to behaving like a particle. This fact has significant effects on behavior.
> these odd shapes originate from symmetry constraints
Well they might fit those constraints, but can they really be said to originate from them? Is there actual cause and effect there? The answer to that would require understanding what gave rise to the phenomenon to begin with.
Like all simple descriptions, it's wrong. It [the electron] is a quantum of the electron field, a fermionic matter field (spin 1/2) subject to the Pauli exclusion principle.
Electromagnetism is one of the four fundamental forces; it is mediated by photons, the quanta of a bosonic field (spin 1), which are electrically neutral.
The interaction of these two fields is depicted via Feynman diagrams.
The macroscopically observed electrostatic field of a charged capacitor is mediated by the superposition of virtual zero-frequency (ν = 0) photons, which are off-shell and non-radiative. The field's energy arises from the cumulative effect of infinitely many virtual photon exchanges. Whether virtual photons are "real" is debatable, and confuses those who prefer intuition to computation.
https://en.wikipedia.org/wiki/Virtual_particle
I agree that we should teach the quantum basis of things earlier. I just think a lot of people don't know it, and we don't have a good curriculum for kids to start with.
We'd also need to revamp some of the math, chemistry, and physics curricula to build on the quantum basis of things.
When I was taught the planetary model of atoms I was quite suspicious. My reaction was - you can't be serious ... to ... are you making fun of me ... to... just because you have a hammer ...
A few years later orbitals were introduced as, essentially, motion blurs formed by our little zippy guy. I approached our teacher and asked what if that 'motion blur' is all there is, and the billiard-ball electron is just a bed-time story. That's an adequate way to think of it, he said. Electron mass gets more difficult to explain to school kids in this line of thinking.
> We don’t have philosophically satisfying insights into the universe at subatomic scales. We have quantum mechanics: a set of equations that are good at predicting the behavior of elementary particles, but that don’t line up with our intuition about the macroscopic world.
Our analogies and intuitions are based off of our macroscopic experienced reality, this seems to be an entirely emergent phenomenon based on those strange behaviors described by quantum mechanics. If those insights ever do come, I don’t believe they’ll correspond to anything prewired into our brains or experienced in our lives, and will never be remotely satisfying.
> If those insights ever do come, I don’t believe they’ll correspond to anything prewired into our brains or experienced in our lives, and will never be remotely satisfying.
I don't know why you'd assume that.
Euler's identity certainly doesn't correspond to anything prewired, and yet it's very satisfying.
The philosophical insights will be satisfying if they are simple and elegant. Our theory of biological evolution through natural selection isn't remotely prewired either, but that doesn't stop it from being one of the most philosophically satisfying theories we've come up with.
For introductory articles like this I also find it helpful to know that the whole positive / negative thing is arbitrary. In fact the assignment of "negative" to electrons arose from a mistaken interpretation by Benjamin Franklin of one of his experiments. So if you're wondering why gaining the primary mobile charge carrier makes things more negative, blame Ben Franklin!
I wouldn't call it "mistaken". It was an arbitrary choice at the time, nothing but a naming convention.
Why does this otherwise excellent series always depict electrons as red and protons as blue when everybody knows it’s the other way round?
If you have ever looked at an old transmission electron microscope with a viewing screen, you know electrons are green ; )
Electrons are blue and oxygen is red. Don't be ridiculous.
I thought everyone knew that neutrons are blue and electrons are yellow.
I refuse to believe neutrons can be any color other than white.
> Pay no mind: it’s enough to say that most nuclei on Earth were formed through nuclear fusion in stars and won’t undergo any change on the timescales of interest to electronics — or to terrestrial life.
It's impossible for me to overstate how awesome this is. And how hard it is for me to truly grok.
Yes, and in particular that it's in all the "ordinary stuff" around you - wood, air, glass, sand. Reality is truly extraordinary upon any closer examination.
Great article! I particularly like this paragraph:
>It’s important to note that while the charge equalization process is fast, the drift of individual electrons is not. The field propagates at close to the speed of light in vacuum (circa 300,000 km/s); individual electrons in a copper wire typically slither at speeds measured in centimeters per hour or less. A crude analogy is the travel of sound waves in air: if you yell at someone, they will hear you long before any single air molecule makes it from here to there.
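The "centimeters per hour" figure in that paragraph can be checked with a one-liner: drift velocity follows from v_d = I / (n·e·A). A rough sketch (the carrier density is the standard textbook value for copper; the 1 A current and 1 mm² cross-section are arbitrary example choices):

```python
# Back-of-envelope drift velocity: v_d = I / (n * e * A).
N_COPPER = 8.5e28     # free electrons per m^3 in copper (textbook value)
E_CHARGE = 1.602e-19  # elementary charge, coulombs

def drift_velocity(current_a: float, area_m2: float) -> float:
    """Average electron drift speed in m/s for a given current and cross-section."""
    return current_a / (N_COPPER * E_CHARGE * area_m2)

v = drift_velocity(1.0, 1e-6)  # 1 A through a 1 mm^2 wire
print(f"drift velocity: {v * 1000:.3f} mm/s "
      f"(~{v * 3600 * 100:.0f} cm per hour)")
```

For these example numbers the drift comes out at tens of centimeters per hour; smaller currents or thicker wires give proportionally slower drift.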
So basically electricity flows like a Newton's cradle. But this leaves one nagging question: what is the nature of the delay? This question also arises when considering the microscopic cause of index-of-refraction for light[1]. If you take a simple atom, like hydrogen, and shine a light on it of a particular frequency, I understand that the electron will jump to a higher energy level, and then fall back down. But what governs the delay between these jumps? And also, how is it that, in general, light will continue propagating in the same direction? That is, there seems to be some state-erasure or else the electron would have to "remember" more details about the photon that excited it. (And who knows? Maybe the electron does "remember" the incident photon through some sort of distortion of the quantum field which governs the electron's motion.) The same question applies to electron flow - what are the parameters that determine the speed of electricity in a conductor, and how does it work?
1. 3blue1brown recently did a great video describing how light "slowing down" can be explained by imagining that each layer of the material introduces its own phase shift to incoming light. Apparently this is an argument Feynman used in his Lectures. But Grant didn't explain the nature of the phase shift! https://www.youtube.com/watch?v=KTzGBJPuJwM
> But what governs the delay between these jumps?
What governs the delay between one ball hitting the cradle and the opposite ball going up?
It's the electrical equivalent of the same thing. Specifically, electricity is delayed by the material absorbing it "elastically" for a short time before emitting it back. This is usually modeled as a capacitance and inductance on the medium.
> And also, how is it that, in general, light will continue propagating in the same direction?
It actually doesn't. It mostly follows the medium. That's why you can bend your wires and they keep working.
But if your question is why it doesn't go "backwards", they go, but there's an electrical potential there pushing your electrons on the other direction.
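To put a number on the "capacitance and inductance on the medium" model mentioned above: for a transmission line with distributed inductance L′ and capacitance C′ per metre, the signal speed is v = 1/√(L′C′). A sketch using ballpark per-metre values for RG-58-style coax (the exact figures vary by cable and are assumptions here):

```python
# Signal speed on a transmission line from its distributed L and C.
import math

L_PER_M = 250e-9   # henries per metre (approximate, RG-58-ish)
C_PER_M = 100e-12  # farads per metre (approximate)

v = 1 / math.sqrt(L_PER_M * C_PER_M)   # propagation speed, m/s
z0 = math.sqrt(L_PER_M / C_PER_M)      # characteristic impedance, ohms

print(f"propagation speed ~{v / 3e8:.2f} c")
print(f"characteristic impedance ~{z0:.0f} ohm")
```

That comes out at roughly two-thirds of c, which matches the velocity factor typically quoted for that kind of cable, and the same two numbers give the familiar 50 Ω characteristic impedance.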
>It actually doesn't. It mostly follows the medium. That's why you can bend your wires and they keep working.
Sorry, it's my fault for introducing light into a discussion about electric current. In fiber optics I believe they add "cladding" to achieve "total internal reflection" that somehow keeps the light going - not sure how it stays coherent though! And in electronics, I assume that the boundary of the conductor with non-conductor (e.g. air) provides a similar function. I've heard that conductors conduct almost entirely on their surface, another curious effect I'd like to understand, and I'd also be curious if any applications use hollow tubes to conduct large currents and save on weight.
> And in electronics, I assume that the boundary of the conductor with non-conductor (e.g. air) provides a similar function.
Electricity inside a conductor works more like the sound wave the article talks about than like optical fiber. It's not coherent or directed: you have a high "pressure" on one end pushing the electrons, and they push each other forward as a consequence. There is no care about reflections (up until radio frequencies), it just moves in the direction of less "pressure" through the medium. (Even in RF, but there the reflections cause noise.)
Optical fiber depends a lot on conservation of momentum. Electrical current has none of that. Even the reflections are caused by the "elastic absorption" of the medium, and don't behave like a collision.
> I've heard that conductors conduct almost entirely on their surface
That's not really right. Conductors conduct through their entire cross-section unless you have high frequencies. At high frequencies the magnetic interaction between the electrons pushes the current out of the conductor's center (the skin effect), but this is not a universal thing.
And then, at high frequencies you don't use hollow tubes. You use thin wires, insulated from each other, braided so that every wire spends the same length in the middle of the bundle (so-called litz wire).
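The frequency dependence mentioned above can be quantified with the skin-depth formula δ = √(ρ/(π·f·μ)), the depth at which current density falls to 1/e of its surface value. A sketch with standard values for copper (the chosen frequencies are arbitrary examples):

```python
# Skin depth in copper: at 50 Hz it's ~9 mm (so mains wiring uses its
# full cross-section); at 1 MHz it's ~65 um, which is why RF uses the
# braided multi-strand construction described above.
import math

RHO_COPPER = 1.68e-8       # resistivity of copper, ohm * m
MU_0 = 4 * math.pi * 1e-7  # vacuum permeability (copper is ~non-magnetic)

def skin_depth(freq_hz: float) -> float:
    return math.sqrt(RHO_COPPER / (math.pi * freq_hz * MU_0))

for f in (50, 1e4, 1e6):
    print(f"{f:>9.0f} Hz: delta = {skin_depth(f) * 1000:.3f} mm")
```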
Ever since seeing this time traveling meme [0], I'm constantly thinking about how I would be the worst time traveler ever. Maybe I could explain flight? But not electricity, definitely not the engineering needed to make it usable.
[0] https://cheezburger.com/9253930240/this-electricity-business...
> what is electricity?
My favorite answers are:
* http://amasci.com/miscon/whatis.html
* https://blog.rootsofprogress.org/the-significance-of-electri...
As someone who studied physics and electronics for many years, I still appreciate an article like this for reminding me how profoundly weird science is. Working day to day with the equations and practical applications of electricity gives you a false sense of confidence that we actually have any fucking clue what’s going on.
*how profoundly weird reality is. (◕‿ ◠)
Or is it "how profoundly weird this simulation is"? we'll never know!
> The field propagates at close to the speed of light in vacuum (circa 300,000 km/s); individual electrons in a copper wire typically slither at speeds measured in centimeters per hour or less.
It would be worth mentioning why it happens as it's quite interesting.
I'm no physicist, but if I remember correctly, light is a form of electromagnetic radiation, which means EMR travels at the speed of light. In that case, it's not surprising that an electrostatic field can travel at a similar speed.
From my understanding the quote is talking about electrostatic effects that occur when electrons move to fill a void/go away from a negatively charged area. Since the force that makes electrons repel each other is very weak, I think it makes sense. But note that it mentions "a single electron." Voltage deals with a difference in immense scales of electrons, so I assume the effect and speed would be different in practical cases.
There's also the field theory of electricity: https://youtube.com/watch?v=bHIhgxav9LY
It’s an eye opening alternative explanation to the electrons flowing like a chain theory of this article.
If people are going to bring up Veritasium and AlphaPhoenix, then I'm going to want to say something like:
ElectroBOOM and AvE and bigclive have done more to further my understanding of electricity, both theoretically and practically, than everything else combined.
https://www.youtube.com/@arduinoversusevil2025
https://www.youtube.com/@ElectroBOOM
https://www.youtube.com/@bigclivedotcom
I’ve always wondered if the electrons bound to a nucleus are somehow bound to the element they were attached to.
Changing an element from Hydrogen to Helium or any other variant of conversion seems like it breaks an especially solid confluence. Each proton determines the atomic number, and there is a corresponding electron for each proton after all.
They may float around the universe, but could they still “belong” to the element they were formed with, bound to be impacted in some way when that element converts to another (in a stellar reactor for example).
This would mean electrons are somehow unique most likely, but stranger things have been observed.
I'm not sure exactly what you're imagining, but there are two principles that are related to your idea.
1. Particles are indistinguishable from each other. It's a very deep principle, i.e. a lot of stuff relies on this being true.
2. States and particles can absolutely be entangled ("bound") to each other, but it tends to be pretty fragile.
https://en.wikipedia.org/wiki/Indistinguishable_particles
https://en.wikipedia.org/wiki/Quantum_entanglement
To be fair, what he proposed isn't (immediately obviously) mutually exclusive with your points. If it were true it would be almost impossible to detect experimentally.
So it gets tossed on the stack with all the other complex-and-unfalsifiable theories for which no evidence exists.
It might make for an amusing sci-fi plot though.
> To be fair, what he proposed isn't (immediately obviously) mutually exclusive with your points. If it were true it would be almost impossible to detect experimentally.
The obviousness or lack thereof is subjective, but the exclusivity is firmly established. The absolute indistinguishability of particles is deeply woven into quantum mechanics; you don't get a Pauli exclusion principle without it, for example. If the particles remembered their previous lives, and an electron that used to be tied to an iron nucleus weren't completely identical to one that used to be stuck to a carbon nucleus, all of quantum mechanics as we know it would be impossible.
I don't see why? They could be indistinguishable from our perspective while mysteriously being affected in some way if certain things happened to their "partner". We can experimentally set an upper bound on the permissible weirdness but I don't think we can eliminate the possibility.
Experimentally you'd be attempting to detect inexplicable single particle events above some level of rarity. You'd have access to only one side of the pair - you can't tell which one the other side is even if it's right in front of you (and it almost certainly isn't). So there's no discernible (to you) trigger for these events you're trying to detect. So you'd be trying to correlate frequency counts with bulk conditions as averaged across more or less the entire universe.
In the same vein as the God of the gaps the phenomenon could always be hiding below the noise floor.
Relevant:
How Electricity Actually Works by Veritasium ~ https://www.youtube.com/watch?v=oI_X2cMHNe0
Electricity finally made some sense to me when I wrote down the basic high school electricity equations and derived the basic energy and charge units from them.
That's how I finally understood that power is energy rate (Joules per unit time), current is charge rate (electrons per unit time), and voltage is energy per unit charge (Joules per electron). Voltage makes a lot more sense as the excitation of electrons; them wanting to jump gaps and go to a lower energy place feels more intuitive in that framing.
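Those unit relations can be written out directly; the current and voltage below are arbitrary example values (roughly a phone charging from a Li-ion cell):

```python
# power = energy / time, current = charge / time, voltage = energy / charge.
E_CHARGE = 1.602e-19  # coulombs per electron

current_a = 0.5  # 0.5 A = 0.5 coulombs per second
voltage_v = 3.7  # 3.7 V = 3.7 joules per coulomb

power_w = voltage_v * current_a               # joules per second
electrons_per_s = current_a / E_CHARGE        # charge rate, in electron counts
energy_per_electron_j = voltage_v * E_CHARGE  # joules per electron

print(f"power: {power_w:.2f} W")
print(f"electrons per second: {electrons_per_s:.3g}")
print(f"energy per electron: {energy_per_electron_j:.3g} J")
```

The third line is the same quantity as the "electron-volt" scaled to a 3.7 V potential, which is why the eV is such a convenient energy unit at atomic scales.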
> A crude analogy is the travel of sound waves in air: if you yell at someone, they will hear you long before any single air molecule makes it from here to there.
Isn’t this a very good analogy? What’s so crude about it?
I used to read W. Beaty, the guy from http://amasci.com/
Still, getting electricity right is not easy.
In pop-science articles and videos, the concept is usually illustrated using the dated but intuitive model developed by Niels Bohr around 1918. The model envisions electrons that travel around the nucleus in circular orbits:
Please stop teaching the history of what we used to think the atom looked like. We’ve reached the point where we spend 99% of the material teaching what we know the atom doesn’t look like and very little on what it does look like. Even this author offers a picture for what it doesn’t look like and nothing for what it does. Physicists should know the value of a good picture/mental model better than anyone else.
I challenge you to go on Wikipedia and find the article/space for the current understanding of the atom. Was that hard for you to find? Would a curious high schooler have enough information within that article/space to learn everything you now know (textbooks are too expensive and inaccessible for high schoolers to rely on, physics websites are hit/miss). Is this how you would prefer to have been taught?
The picture of electrons sitting in states (or the empty states without electrons) is not used for historical reasons. It's used because the next level of explanation requires partial differential equations.