Inertial frame of reference (Wikipedia)

Parts from the Wikipedia article are quoted in black. My comments follow in bold, colored italics.

Inertial frame of reference – Wikipedia

An inertial frame of reference, in classical physics, is a frame of reference in which bodies, whose net force acting upon them is zero, are not accelerated, that is they are at rest or they move at a constant velocity in a straight line. In analytical terms, it is a frame of reference that describes time and space homogeneously, isotropically, and in a time-independent manner. Conceptually, in classical physics and special relativity, the physics of a system in an inertial frame have no causes external to the system. An inertial frame of reference may also be called an inertial reference frame, inertial frame, Galilean reference frame, or inertial space.

As described in the paper, The Electromagnetic Cycle, “The electromagnetic cycles collapse into a continuum of very high frequencies in our material domain, which provides the absolute and independent character to the space and time that we perceive.”

The “inertial frame of reference” of classical physics describes only the space and time occupied by matter. It does not describe the space and time that is not occupied by matter.

All inertial frames are in a state of constant, rectilinear motion with respect to one another; an accelerometer moving with any of them would detect zero acceleration. Measurements in one inertial frame can be converted to measurements in another by a simple transformation (the Galilean transformation in Newtonian physics and the Lorentz transformation in special relativity). In general relativity, in any region small enough for the curvature of spacetime and tidal forces to be negligible, one can find a set of inertial frames that approximately describe that region.

As described in the paper, The Problem of Inertia, “The uniform drift velocity results naturally from the innate acceleration of disturbance balanced by its inertia. The higher is the inertia, the smaller is the velocity. Matter may be looked upon as a “disturbance” of large inertia. Therefore, black holes of very large inertial mass shall have almost negligible velocity. On the other hand, bodies with little inertial mass shall have higher velocities.”

These inertial frames are valid for the material domain only. They are described by Newton’s laws of motion. The velocities in this domain are extremely small compared to the velocity of light. These material velocities are described in relation to each other by simple Galilean transformations. Acceleration applied to a body changes its velocity only for the duration of that acceleration. In the absence of acceleration, the original velocity restores itself if the inertia of the body has not changed.
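The Galilean and Lorentz transformations mentioned above can be compared directly in code. A minimal Python sketch (the function names are mine, for illustration); at everyday material velocities the two transformations agree to many decimal places:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact SI value)

def galilean(x: float, t: float, v: float) -> tuple[float, float]:
    """Coordinates of event (x, t) in a frame moving at velocity v (Newtonian)."""
    return x - v * t, t  # time is absolute in Newtonian mechanics

def lorentz(x: float, t: float, v: float) -> tuple[float, float]:
    """Coordinates of event (x, t) in a frame moving at velocity v (special relativity)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C**2)

# An event 1 km away after 1 s, seen from a frame moving at 30 m/s:
xg, tg = galilean(1000.0, 1.0, 30.0)  # (970.0, 1.0)
xl, tl = lorentz(1000.0, 1.0, 30.0)   # differs only far past the decimal point
```

This is why Galilean transformations suffice for material velocities: the correction factor differs from 1 by roughly (v/c)², which is about 10⁻¹⁴ at 30 m/s.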

In a non-inertial reference frame in classical physics and special relativity, the physics of a system vary depending on the acceleration of that frame with respect to an inertial frame, and the usual physical forces must be supplemented by fictitious forces. In contrast, systems in non-inertial frames in general relativity don’t have external causes, because of the principle of geodesic motion. In classical physics, for example, a ball dropped towards the ground does not go exactly straight down because the Earth is rotating, which means the frame of reference of an observer on Earth is not inertial. The physics must account for the Coriolis effect—in this case thought of as a force—to predict the horizontal motion. Another example of such a fictitious force associated with rotating reference frames is the centrifugal effect, or centrifugal force.

An accelerating non-inertial frame is changing in inertia. Therefore, additional forces appear in that frame to balance that additional inertia.
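The centrifugal and Coriolis forces described in the quoted passage can be written out for the simple case of a frame rotating at a constant rate about the z-axis. A plain-Python sketch (the function names are mine, for illustration):

```python
# Fictitious forces on a particle of mass m in a frame rotating at angular
# rate omega (rad/s) about the z-axis. Position r and velocity v are 2-D
# vectors (x, y) measured in the rotating frame.

def centrifugal(omega: float, m: float, r: tuple[float, float]) -> tuple[float, float]:
    # F = m * omega^2 * r, directed outward from the rotation axis
    return m * omega**2 * r[0], m * omega**2 * r[1]

def coriolis(omega: float, m: float, v: tuple[float, float]) -> tuple[float, float]:
    # F = -2 m (omega_z x v); with omega along +z this rotates v by -90 degrees
    return 2.0 * m * omega * v[1], -2.0 * m * omega * v[0]

# A particle moving outward along +x in the rotating frame is deflected
# toward -y -- the Coriolis deflection felt on a rotating disc:
fx, fy = coriolis(1.0, 1.0, (1.0, 0.0))  # (0.0, -2.0)
```

Both terms vanish when omega is zero, which is exactly the absence of fictitious forces that identifies an inertial frame.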



The motion of a body can only be described relative to something else—other bodies, observers, or a set of space-time coordinates. These are called frames of reference. If the coordinates are chosen badly, the laws of motion may be more complex than necessary. For example, suppose a free body that has no external forces acting on it is at rest at some instant. In many coordinate systems, it would begin to move at the next instant, even though there are no forces on it. However, a frame of reference can always be chosen in which it remains stationary. Similarly, if space is not described uniformly or time independently, a coordinate system could describe the simple flight of a free body in space as a complicated zig-zag in its coordinate system. Indeed, an intuitive summary of inertial frames can be given as: In an inertial reference frame, the laws of mechanics take their simplest form.

It is not true that the motion of a body can only be described relative to something else. A body’s absolute motion may be described in terms of its inertia. In the absence of externally applied forces, the velocities of two bodies differ because of the difference in their inertia. The velocities become equal when the difference in inertia is balanced by externally applied forces.

The motion of any non-accelerating body may be chosen as a frame of reference. Another body with the same motion will appear at rest in this frame of reference. Such arbitrary frames of reference provide fictitious motion on a relative basis. Absolute motion may be perceived only in a frame of zero inertia.

In an inertial frame, Newton’s first law, the law of inertia, is satisfied: Any free motion has a constant magnitude and direction. Newton’s second law for a particle takes the form…

All observers agree on the real forces, F; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present.

In an inertial frame, a body has no acceleration. Its absolute motion is determined by its inertia. Its apparent velocity and direction are determined by the frame of reference being used. The body is imparted acceleration by a force. The acceleration is proportional to the force applied. The proportionality constant is called the “mass” of the body.

The “mass” of the body is an aspect of its inertia. It shows how “pinned” the body is in space. We know from experience that any rotating motion pins a body in space. Therefore, the “mass” of a body may be looked upon as representing some rotating frame of reference.

In Newton’s time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space. In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton’s laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces, for example, the Coriolis force and the centrifugal force. Two interesting experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket. In both cases, application of Newton’s second law would not work for the rotating observer without invoking centrifugal and Coriolis forces to account for their observations (tension in the case of the spheres; parabolic water surface in the case of the rotating bucket).

The fixed stars represent a reference frame of infinite inertia. The absolute motion of such a reference frame is almost zero. An object with lesser inertia will be seen to be in motion in this reference frame. The lesser the inertia of an object, the greater its motion shall be. A rotating frame of reference shall also be rotating with respect to the fixed stars. In that rotating frame of reference there will be inertial forces that are not fictitious but real, such as the parabolic water surface in the case of the rotating bucket.

As we now know, the fixed stars are not fixed. Those that reside in the Milky Way turn with the galaxy, exhibiting proper motions. Those outside our galaxy (such as nebulae once mistaken for stars) participate in their own motion as well, partly due to the expansion of the universe, and partly due to peculiar velocities. The Andromeda galaxy is on a collision course with the Milky Way at a speed of 117 km/s. The concept of inertial frames of reference is no longer tied to either the fixed stars or to absolute space. Rather, the identification of an inertial frame is based upon the simplicity of the laws of physics in the frame. In particular, the absence of fictitious forces is their identifying property…

The identification of an inertial frame is based upon the absence of unexplained force or acceleration.



A brief comparison of inertial frames in special relativity and in Newtonian mechanics, and of the role of absolute space, follows.

A set of frames where the laws of physics are simple

According to the first postulate of special relativity, all physical laws take their simplest form in an inertial frame, and there exist multiple inertial frames interrelated by uniform translation:

Special principle of relativity: If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold good in relation to any other system of coordinates K’ moving in uniform translation relatively to K.
— Albert Einstein: The foundation of the general theory of relativity, Section A, §1

The special principle of relativity seems to consider inertial frames of reference in the material domain only.

This simplicity manifests in that inertial frames have self-contained physics without the need for external causes, while physics in non-inertial frames have external causes. The principle of simplicity can be used within Newtonian physics as well as in special relativity; see Nagel and also Blagojević.

The laws of Newtonian mechanics do not always hold in their simplest form…If, for instance, an observer is placed on a disc rotating relative to the earth, he/she will sense a ‘force’ pushing him/her toward the periphery of the disc, which is not caused by any interaction with other bodies. Here, the acceleration is not the consequence of the usual force, but of the so-called inertial force. Newton’s laws hold in their simplest form only in a family of reference frames, called inertial frames. This fact represents the essence of the Galilean principle of relativity: The laws of mechanics have the same form in all inertial frames.
— Milutin Blagojević: Gravitation and Gauge Symmetries, p. 4

Only the laws of mechanics have the same form in all inertial frames because they operate on a relative basis in the material domain (the continuum of very high frequencies).

In practical terms, the equivalence of inertial reference frames means that scientists within a box moving uniformly cannot determine their absolute velocity by any experiment (otherwise the differences would set up an absolute standard reference frame). According to this definition, supplemented with the constancy of the speed of light, inertial frames of reference transform among themselves according to the Poincaré group of symmetry transformations, of which the Lorentz transformations are a subgroup. In Newtonian mechanics, which can be viewed as a limiting case of special relativity in which the speed of light is infinite, inertial frames of reference are related by the Galilean group of symmetries.

Absolute motion shall be visible only in a frame of reference of zero inertia. Newtonian mechanics uses light as the reference point of “infinite velocity” for the material domain. This is adequate except on the cosmological scale, where the finite speed of light generates anomalies. Special relativity accounts for the finite velocity of light and explains the cosmological anomalies. Special relativity is adequate except on the atomic scale, where the finite inertia of light generates anomalies.

Absolute space

Newton posited an absolute space considered well approximated by a frame of reference stationary relative to the fixed stars. An inertial frame was then one in uniform translation relative to absolute space. However, some scientists (called “relativists” by Mach), even at the time of Newton, felt that absolute space was a defect of the formulation, and should be replaced.

As explained in the paper, The Electromagnetic Cycle, space and time may be treated as absolute in the material domain only. The doubts entered only where cosmic dimensions were involved in which light’s finite velocity could not be ignored.

Indeed, the expression inertial frame of reference (German: Inertialsystem) was coined by Ludwig Lange in 1885, to replace Newton’s definitions of “absolute space and time” by a more operational definition. As translated by Iro, Lange proposed the following definition:

A reference frame in which a mass point thrown from the same point in three different (non co-planar) directions follows rectilinear paths each time it is thrown, is called an inertial frame.

A discussion of Lange’s proposal can be found in Mach.

The inadequacy of the notion of “absolute space” in Newtonian mechanics is spelled out by Blagojević:

  • The existence of absolute space contradicts the internal logic of classical mechanics since, according to Galilean principle of relativity, none of the inertial frames can be singled out.
  • Absolute space does not explain inertial forces since they are related to acceleration with respect to any one of the inertial frames.
  • Absolute space acts on physical objects by inducing their resistance to acceleration but it cannot be acted upon.
— Milutin Blagojević: Gravitation and Gauge Symmetries, p. 5

“Absolute space” is actually the reference frame of zero inertia. Newton approximated “absolute space” as the background of fixed stars. But fixed stars provide a reference frame of infinite inertia and not of zero inertia.

The utility of operational definitions was carried much further in the special theory of relativity. Some historical background including Lange’s definition is provided by DiSalle, who says in summary:

The original question, “relative to what frame of reference do the laws of motion hold?” is revealed to be wrongly posed. For the laws of motion essentially determine a class of reference frames, and (in principle) a procedure for constructing them.
— Robert DiSalle Space and Time: Inertial Frames

Inertial frames are frames of constant inertia. Acceleration is always related to change in inertia. Physicists have overlooked the concept of zero inertia.


Newton’s inertial frame of reference

Within the realm of Newtonian mechanics, an inertial frame of reference, or inertial reference frame, is one in which Newton’s first law of motion is valid. However, the principle of special relativity generalizes the notion of inertial frame to include all physical laws, not simply Newton’s first law.

Newton viewed the first law as valid in any reference frame that is in uniform motion relative to the fixed stars; that is, neither rotating nor accelerating relative to the stars. Today the notion of “absolute space” is abandoned, and an inertial frame in the field of classical mechanics is defined as:

An inertial frame of reference is one in which the motion of a particle not subject to forces is in a straight line at constant speed.

An inertial frame of reference has its true basis in zero inertia of EMPTINESS, and not in the infinite inertia of fixed stars.

Hence, with respect to an inertial frame, an object or body accelerates only when a physical force is applied, and (following Newton’s first law of motion), in the absence of a net force, a body at rest will remain at rest and a body in motion will continue to move uniformly—that is, in a straight line and at constant speed. Newtonian inertial frames transform among each other according to the Galilean group of symmetries.

In the material domain, the level of inertia is so high that, compared to it, the differences in the inertia of material bodies, and their effect on velocities, can be ignored.

If this rule is interpreted as saying that straight-line motion is an indication of zero net force, the rule does not identify inertial reference frames because straight-line motion can be observed in a variety of frames. If the rule is interpreted as defining an inertial frame, then we have to be able to determine when zero net force is applied. The problem was summarized by Einstein:

The weakness of the principle of inertia lies in this, that it involves an argument in a circle: a mass moves without acceleration if it is sufficiently far from other bodies; we know that it is sufficiently far from other bodies only by the fact that it moves without acceleration.
— Albert Einstein: The Meaning of Relativity, p. 58

The weakness here is the implicit assumption that the relative uniform motion remains constant in the absence of external forces. This assumption ignores the influence of inertia on motion.

There are several approaches to this issue. One approach is to argue that all real forces drop off with distance from their sources in a known manner, so we have only to be sure that a body is far enough away from all sources to ensure that no force is present. A possible issue with this approach is the historically long-lived view that the distant universe might affect matters (Mach’s principle). Another approach is to identify all real sources for real forces and account for them. A possible issue with this approach is that we might miss something, or account inappropriately for their influence, perhaps, again, due to Mach’s principle and an incomplete understanding of the universe. A third approach is to look at the way the forces transform when we shift reference frames. Fictitious forces, those that arise due to the acceleration of a frame, disappear in inertial frames, and have complicated rules of transformation in general cases. On the basis of universality of physical law and the request for frames where the laws are most simply expressed, inertial frames are distinguished by the absence of such fictitious forces…

A source of “force” is a body’s inertia, from which the body cannot be separated. We cannot assume all inertial frames to have the same inertia.


Separating non-inertial from inertial reference frames


Inertial and non-inertial reference frames can be distinguished by the absence or presence of fictitious forces, as explained shortly.

The effect of this being in the noninertial frame is to require the observer to introduce a fictitious force into his calculations….
— Sidney Borowitz and Lawrence A Bornstein in A Contemporary View of Elementary Physics, p. 138

The presence of fictitious forces indicates the physical laws are not the simplest laws available so, in terms of the special principle of relativity, a frame where fictitious forces are present is not an inertial frame:

The equations of motion in a non-inertial system differ from the equations in an inertial system by additional terms called inertial forces. This allows us to detect experimentally the non-inertial nature of a system.
— V. I. Arnol’d: Mathematical Methods of Classical Mechanics Second Edition, p. 129

Bodies in non-inertial reference frames are subject to so-called fictitious forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames…

The “fictitious” forces are essentially due to the inertia of the reference frame. The influence of inertia can be fully accounted for only in a frame of reference of zero inertia.


Inertial navigation systems use a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. Three orthogonal gyroscopes establish an inertial reference frame, and the accelerometers measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external or internal signal source…

The “inertial space” is set by the orientation of spinning gyroscopes.
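The dead reckoning described in the quoted passage is a double integration of acceleration over time. A minimal one-dimensional Python sketch with a fixed sampling interval (the function name is illustrative, not from any system):

```python
# Integrate accelerometer samples twice to track velocity and position,
# as an inertial navigation system does (here reduced to one dimension).

def dead_reckon(accels: list[float], dt: float, v0: float = 0.0, x0: float = 0.0):
    """Return (position, velocity) after integrating accelerometer samples."""
    x, v = x0, v0
    for a in accels:
        v += a * dt  # acceleration -> velocity
        x += v * dt  # velocity -> position
    return x, v

# A body accelerating at a constant 1 m/s^2, sampled 100 times at 0.1 s:
x, v = dead_reckon([1.0] * 100, 0.1)  # v approaches 10 m/s, x about 50 m
```

Because each step compounds on the last, small accelerometer errors grow quadratically in position, which is why real inertial systems drift without periodic correction.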


Newtonian mechanics

Classical mechanics, which includes relativity, assumes the equivalence of all inertial reference frames. Newtonian mechanics makes the additional assumptions of absolute space and absolute time. Given these two assumptions, the coordinates of the same event (a point in space and time) described in two inertial reference frames are related by a Galilean transformation…

Newtonian mechanics applies only to the material domain of very high inertia, where differences in the inertia of material bodies can be ignored.

Special relativity

Einstein’s theory of special relativity, like Newtonian mechanics, assumes the equivalence of all inertial reference frames, but makes an additional assumption, foreign to Newtonian mechanics, namely, that in free space light always is propagated with the speed of light c0, a defined value independent of its direction of propagation and its frequency, and also independent of the state of motion of the emitting body. This second assumption has been verified experimentally and leads to counter-intuitive deductions including:

  • time dilation (moving clocks tick more slowly)
  • length contraction (moving objects are shortened in the direction of motion)
  • relativity of simultaneity (simultaneous events in one reference frame are not simultaneous in almost all frames moving relative to the first).
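The first two effects listed above follow directly from the Lorentz factor γ = 1/√(1 − v²/c²). A minimal Python sketch (the function names are mine, for illustration):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v: float) -> float:
    """Lorentz factor for a frame moving at speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def time_dilation(proper_time: float, v: float) -> float:
    """Elapsed time a stationary observer assigns to a clock moving at v."""
    return proper_time * gamma(v)

def length_contraction(proper_length: float, v: float) -> float:
    """Length of an object moving at v, as measured by a stationary observer."""
    return proper_length / gamma(v)

# At 60% of the speed of light, gamma = 1.25: moving clocks run slow by a
# factor of 1.25, and moving rods are shortened to 80% of their rest length.
g = gamma(0.6 * C)
```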

These deductions are logical consequences of the stated assumptions, and are general properties of space-time, typically without regard to a consideration of properties pertaining to the structure of individual objects like atoms or stars, nor to the mechanisms of clocks…

From this perspective, the speed of light is only accidentally a property of light, and is rather a property of spacetime, a conversion factor between conventional time units (such as seconds) and length units (such as meters).

Incidentally, because of the limitations on speeds faster than the speed of light, notice that in a rotating frame of reference (which is a non-inertial frame, of course) stationarity is not possible at arbitrary distances because at large radius the object would move faster than the speed of light.

As described in the paper, The Electromagnetic Cycle, “The electromagnetic cycles collapse into a continuum of very high frequencies in our material domain, which provides the absolute and independent character to the space and time that we perceive.

“This is the Newtonian domain of space and time. Einsteinian length contraction and time dilation does not occur in this Newtonian domain. It occurs at much lower electromagnetic frequencies.”

Special relativity allows frames of reference that are outside the material domain. The error is to consider them equivalent to those in the material domain.


General relativity

General relativity is based upon the principle of equivalence:

There is no experiment observers can perform to distinguish whether an acceleration arises because of a gravitational force or because their reference frame is accelerating.
— Douglas C. Giancoli, Physics for Scientists and Engineers with Modern Physics, p. 155.

General relativity acknowledges inertia in the context of a field.

This idea was introduced in Einstein’s 1907 article “Principle of Relativity and Gravitation” and later developed in 1911. Support for this principle is found in the Eötvös experiment, which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in 10¹¹. For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin.

Inertial and gravitational mass are equivalent.

Einstein’s general theory modifies the distinction between nominally “inertial” and “noninertial” effects by replacing special relativity’s “flat” Minkowski Space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion, whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity.

The inertial frames of general relativity acknowledge the differences in their inertia and start to account for it.

However, the general theory reduces to the special theory over sufficiently small regions of spacetime, where curvature effects become less important and the earlier inertial frame arguments can come back into play. Consequently, modern special relativity is now sometimes described as only a “local theory”. The study of double-star systems provided significant insights into the shape of the space of the Milky Way galaxy. The astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the two orbits of the stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the solar system. Schwarzschild pointed out that that was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System. These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian.

Special relativity uses light as its reference frame. This is different from a reference frame of zero inertia. This introduces an error that is carried forward into General relativity.


Comments on Electric Charge


Reference: Disturbance Theory


Electric Charge – Wikipedia

Electric charge is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. There are two types of electric charges: positive and negative (commonly carried by protons and electrons respectively). Like charges repel and unlike attract. An absence of net charge is referred to as neutral. An object is negatively charged if it has an excess of electrons, and is otherwise positively charged or uncharged. The SI derived unit of electric charge is the coulomb (C). In electrical engineering, it is also common to use the ampere-hour (Ah), and, in chemistry, it is common to use the elementary charge (e) as a unit. The symbol Q often denotes charge. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that don’t require consideration of quantum effects.

Force is experienced when electrically charged matter is brought into the vicinity of other electrically charged matter. The “charges” seem to be part of fields. The fields interact as if to establish some kind of equilibrium. The interaction takes the form of attractive and repulsive forces. Force implies a change in momentum. In the case of a field, force implies a frequency gradient.

The frequency gradient seems to be established by an eddy-type formation in which frequency increases toward the center of the eddy. The higher frequency at the center represents negative charge; the lower frequency at the periphery represents positive charge. The center is denser in terms of lines of force and appears as a particle in contrast to the periphery. Therefore, electrons are more likely to be observed as particles than positrons.

In an atom the negatively charged “center of the electronic region” is aligned with the positively charged “periphery of the nucleus” (see the picture above). This is because the nucleus appears at the center of the electronic region. The “periphery of the electronic region” is positively charged as shown. The “center of the nucleus” is negatively charged as shown. It is incorrect to view the whole electron as negative and the whole nucleus as positive. The “attractive force” between electrons and the nucleus is better understood in terms of alignment of the frequency gradient.

An equilibrium is sought in terms of alignment of frequency gradients. Interaction takes place between fields when the frequency gradients are not aligned. So we see attractive and repulsive forces between the charges. We have a frequency gradient within the atom that is well aligned from the center of the nucleus to the outer periphery of the atom and balanced by the eddy-like rotations within the atom.

From Newton’s second law of motion, Newton’s law of gravity and Coulomb’s law we get the dimensions of mass and charge to be the same.

[M] = [Q] = [L³T⁻²]

Mass is the constant of proportionality between force and acceleration.
Force = mass x acceleration

Similarly, charge could be defined as the constant of proportionality between force and frequency.
Force = charge x [velocity of light] x frequency

This is dimensionally accurate.
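The dimensional bookkeeping above can be checked mechanically. A Python sketch in the convention used here, where a dimension is a pair of (length, time) exponents and mass and charge both carry [L³T⁻²] (the constants in Newton’s law of gravity and Coulomb’s law taken as dimensionless); the names are mine, for illustration:

```python
# Verify that F = m*a and F = q*c*f carry the same dimensions when mass
# and charge are both assigned [L^3 T^-2]. A dimension is (L_exp, T_exp).

def dim_mul(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    # Multiplying physical quantities adds their dimensional exponents.
    return (a[0] + b[0], a[1] + b[1])

MASS = CHARGE = (3, -2)   # [L^3 T^-2], per the derivation above
ACCELERATION = (1, -2)    # [L T^-2]
VELOCITY = (1, -1)        # [L T^-1], the dimensions of c
FREQUENCY = (0, -1)       # [T^-1]

force_from_mass = dim_mul(MASS, ACCELERATION)                      # F = m a
force_from_charge = dim_mul(dim_mul(CHARGE, VELOCITY), FREQUENCY)  # F = q c f
# Both come out as (4, -4), i.e. [L^4 T^-4], so the two definitions agree.
```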

The electric charge is a fundamental conserved property of some subatomic particles, which determines their electromagnetic interaction. Electrically charged matter is influenced by, and produces, electromagnetic fields. The interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces (See also: magnetic field).

The charge, like mass, is closely related to inertia. It is conserved like mass is conserved. It may appear that charge produces EM field, but charge is simply a part of the electromagnetic phenomenon. Movement of charge is the shifting of frequency gradient in the field, and this manifests as force. Normally this frequency gradient is balanced by the eddy-like motion within the field.

Twentieth-century experiments demonstrated that electric charge is quantized; that is, it comes in integer multiples of individual small units called the elementary charge, e, approximately equal to 1.602×10⁻¹⁹ coulombs (except for particles called quarks, which have charges that are integer multiples of ⅓ e). The proton has a charge of +e, and the electron has a charge of −e. The study of charged particles, and how their interactions are mediated by photons, is called quantum electrodynamics.

Quantization of electrical charge means that only multiples of a basic frequency gradient are permitted in the structure of the atom. The charge of an electron represents that basic frequency gradient. The electron may be modeled as a 3D vortex in the electromagnetic field. The mathematics of this model may reveal the fundamental frequency gradient. It may also provide a meaning to “conservation of charge” or “conservation of force” as hinted at by Michael Faraday. This may lead to an understanding of stable configurations of elementary particles and the quantization of properties at atomic dimensions.
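The integer-multiple rule quoted from Wikipedia can be sketched numerically. A minimal Python illustration (the helper name is mine), using the approximate value of e given in the text:

```python
# Net charge of a body is an integer multiple of the elementary charge e
# (quark charges aside), determined by its excess or deficit of electrons.

E = 1.602e-19  # elementary charge in coulombs (approximate value from the text)

def net_charge(protons: int, electrons: int) -> float:
    """Net charge in coulombs for the given particle counts."""
    return (protons - electrons) * E

# A body with one excess electron is negatively charged by exactly -e:
q = net_charge(0, 1)  # -1.602e-19 C
```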

It is this gradient of frequency that appears as the four fundamental forces – gravitational, electromagnetic, and strong and weak interactions. The details need to be worked out.
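The quantization quoted above can be illustrated numerically: any observed charge should be an integer multiple of the elementary charge e. A minimal sketch, assuming hypothetical "measured" values (as in a Millikan-style experiment); only the value of e itself is the real, exact 2019 SI figure:

```python
# Illustration of charge quantization: every observed charge should be an
# integer multiple of the elementary charge e.
E = 1.602176634e-19  # elementary charge in coulombs (exact 2019 SI value)

def multiple_of_e(q, tol=1e-3):
    """Return the nearest integer n with q close to n * e, or None if q is
    not close to any integer multiple of e."""
    n = round(q / E)
    return n if abs(q / E - n) < tol else None

# Hypothetical "measured" charges (made up for illustration):
print(multiple_of_e(3.204353268e-19))  # 2
print(multiple_of_e(8.010883170e-19))  # 5
print(multiple_of_e(1.0e-19))          # None (not a multiple of e)
```

In the author's terms, each integer n would count how many units of the basic frequency gradient a structure contains.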


Comments on Matter (Revised)


Reference: Disturbance Theory


Matter – Wikipedia

In the classical physics observed in everyday life, matter is any substance that has mass and takes up space by having volume. This includes atoms and anything made up of these, but not other energy phenomena or waves such as light or sound. More generally, however, in (modern) physics, matter is not a fundamental concept because a universal definition of it is elusive; for example, the elementary constituents of atoms may be point particles, each having no volume individually.

Matter represents substance. Substance is something that can be felt and experienced. It is the essential aspect of any interaction. Without substance there can be no interaction, feeling and experience. Matter is one aspect of substance. The other aspect is field. An interface occurs between field and matter within an atom. In the atom we observe the field increasing in frequency toward the center, where it ends up as matter with mass.

Space is a manifestation of the extension property of field and matter. Without field and matter there is no space. The gaps between material objects are filled with gaseous matter and field. A vacuum is not entirely empty even when there are no atoms and molecules of gaseous material in it. There is still field in that vacuum for space to appear.

The idea that the fundamental constituents of atoms may be point particles is a mathematical conjecture. In reality, matter in the atom reduces to field. The “volume” of matter reduces to cycles of the field.

All the everyday objects that we can bump into, touch or squeeze are ultimately composed of atoms. This ordinary atomic matter is in turn made up of interacting subatomic particles—usually a nucleus of protons and neutrons, and a cloud of orbiting electrons. Typically, science considers these composite particles matter because they have both rest mass and volume. By contrast, massless particles, such as photons, are not considered matter, because they have neither rest mass nor volume. However, not all particles with rest mass have a classical volume, since fundamental particles such as quarks and leptons (sometimes equated with matter) are considered “point particles” with no effective size or volume. Nevertheless, quarks and leptons together make up “ordinary matter”, and their interactions contribute to the effective volume of the composite particles that make up ordinary matter.

Matter has shaped science’s viewpoint of reality. Even after field was discovered to be a more basic substance, science still uses matter as its reference point. This has led to considerable confusion in theoretical physics, which is now dominated by the increasingly compartmentalized mathematical theories of Newton, Einstein, and quantum mechanics.

The atom is not made up of point particles, but of field that increases in frequency toward the center of the atom. The “point particles” are high-frequency regions of the field. The cycles of very high frequencies get compacted and appear as mass. Thus we have protons and neutrons as regions of very high frequency and compactness at the core of the atom. The electrons are regions of relatively lower frequency and compactness that surround the nucleus of the atom.

Rest mass is best understood as the inertia of a “particle”. Volume is best understood in terms of the cycles that make up the “particle”. Photons may be massless, but they are not inertia-less. They may not be matter, but they are made up of cycles, which is the substance of field. Science, with its fixation on matter, tries to evaluate field properties in terms of the classical material properties of mass and volume. It refuses to go for a deeper understanding in terms of inertia and cycles. “Particles” such as quarks and leptons are mathematical conjectures that have not been encountered in reality.

Matter exists in states (or phases): the classical solid, liquid, and gas; as well as the more exotic plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma.

These states of matter are essentially hybrids of field and matter.

For much of the history of the natural sciences people have contemplated the exact nature of matter. The idea that matter was built of discrete building blocks, the so-called particulate theory of matter, was first put forward by the Greek philosophers Leucippus (~490 BC) and Democritus (~470–380 BC).

Matter has been contemplated upon since the beginning of human consciousness.


Comparison with mass

Matter should not be confused with mass, as the two are not the same in modern physics. Matter is itself a physical substance of which systems may be composed, while mass is not a substance but rather a quantitative property of matter and other substances or systems. While there are different views on what should be considered matter, the mass of a substance or system is the same irrespective of any such definition of matter. Another difference is that matter has an “opposite” called antimatter, but mass has no opposite—there is no such thing as “anti-mass” or negative mass. Antimatter has the same (i.e. positive) mass property as its normal matter counterpart.

Mass is a quantitative property of matter. There is no such thing as “anti-mass” or negative mass.

Different fields of science use the term matter in different, and sometimes incompatible, ways. Some of these ways are based on loose historical meanings, from a time when there was no reason to distinguish mass from simply a quantity of matter. As such, there is no single universally agreed scientific meaning of the word “matter”. Scientifically, the term “mass” is well-defined, but “matter” can be defined in several ways. Sometimes in the field of physics “matter” is simply equated with particles that exhibit rest mass (i.e., that cannot travel at the speed of light), such as quarks and leptons. However, in both physics and chemistry, matter exhibits both wave-like and particle-like properties, the so-called wave–particle duality.

Matter is substance like field. Mass is a property like frequency.



Based on atoms

A definition of “matter” based on its physical and chemical structure is: matter is made up of atoms. Such atomic matter is also sometimes termed ordinary matter. As an example, deoxyribonucleic acid molecules (DNA) are matter under this definition because they are made of atoms. This definition can extend to include charged atoms and molecules, so as to include plasmas (gases of ions) and electrolytes (ionic solutions), which are not obviously included in the atoms definition. Alternatively, one can adopt the protons, neutrons, and electrons definition.

Matter and field are basic forms of substance. Matter is condensed field.


Based on protons, neutrons and electrons

A definition of “matter” more fine-scale than the atoms and molecules definition is: matter is made up of what atoms and molecules are made of, meaning anything made of positively charged protons, neutral neutrons, and negatively charged electrons. This definition goes beyond atoms and molecules, however, to include substances made from these building blocks that are not simply atoms or molecules, for example electron beams in an old cathode ray tube television, or white dwarf matter—typically, carbon and oxygen nuclei in a sea of degenerate electrons. At a microscopic level, the constituent “particles” of matter such as protons, neutrons, and electrons obey the laws of quantum mechanics and exhibit wave–particle duality. At an even deeper level, protons and neutrons are made up of quarks and the force fields (gluons) that bind them together, leading to the next definition.

Molecules, atoms, protons, neutrons, and electrons are all constituents of matter.


Based on quarks and leptons

As seen in the above discussion, many early definitions of what can be called ordinary matter were based upon its structure or building blocks. On the scale of elementary particles, a definition that follows this tradition can be stated as: ordinary matter is everything that is composed of quarks and leptons, or ordinary matter is everything that is composed of any elementary fermions except antiquarks and antileptons. The connection between these formulations follows.

Quarks and leptons as basis of matter can be better understood as durable condensations of the field.

Leptons (the most famous being the electron), and quarks (of which baryons, such as protons and neutrons, are made) combine to form atoms, which in turn form molecules. Because atoms and molecules are said to be matter, it is natural to phrase the definition as: ordinary matter is anything that is made of the same things that atoms and molecules are made of. (However, notice that one also can make from these building blocks matter that is not atoms or molecules.) Then, because electrons are leptons, and protons and neutrons are made of quarks, this definition in turn leads to the definition of matter as being quarks and leptons, which are two of the four types of elementary fermions (the other two being antiquarks and antileptons, which can be considered antimatter as described later). Carithers and Grannis state: Ordinary matter is composed entirely of first-generation particles, namely the [up] and [down] quarks, plus the electron and its neutrino. (Higher-generation particles quickly decay into first-generation particles, and thus are not commonly encountered.)

Quarks and leptons are mathematically postulated particles. It is interesting to note that the connection between field and matter is yet to be established by science.

This definition of ordinary matter is more subtle than it first appears. All the particles that make up ordinary matter (leptons and quarks) are elementary fermions, while all the force carriers are elementary bosons. The W and Z bosons that mediate the weak force are not made of quarks or leptons, and so are not ordinary matter, even if they have mass. In other words, mass is not something that is exclusive to ordinary matter.

Fermions are concentrated regions of the field. Bosons are gradients of frequency between these regions of the field.

The quark–lepton definition of ordinary matter, however, identifies not only the elementary building blocks of matter, but also includes composites made from the constituents (atoms and molecules, for example). Such composites contain an interaction energy that holds the constituents together, and may constitute the bulk of the mass of the composite. As an example, to a great extent, the mass of an atom is simply the sum of the masses of its constituent protons, neutrons and electrons. However, digging deeper, the protons and neutrons are made up of quarks bound together by gluon fields (see dynamics of quantum chromodynamics) and these gluon fields contribute significantly to the mass of hadrons. In other words, most of what composes the “mass” of ordinary matter is due to the binding energy of quarks within protons and neutrons. For example, the sum of the mass of the three quarks in a nucleon is approximately 12.5 MeV/c2, which is low compared to the mass of a nucleon (approximately 938 MeV/c2). The bottom line is that most of the mass of everyday objects comes from the interaction energy of its elementary components.

Field has substance and it gains mass as it condenses. Both condensed regions of the field and increasing gradients of frequency have mass.
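The figures quoted above are easy to check. Using the approximate values given (12.5 MeV/c2 for the three valence quarks, 938 MeV/c2 for the nucleon), the quark rest masses account for only a small fraction of the nucleon's mass:

```python
# Rough check of the figures quoted above: the rest masses of a nucleon's
# three valence quarks are a small fraction of the nucleon mass; the rest
# is interaction (binding) energy. Values in MeV/c^2, as quoted.
quark_mass_sum = 12.5   # approximate sum of the three valence-quark masses
nucleon_mass = 938.0    # approximate nucleon (proton/neutron) mass

binding_fraction = 1 - quark_mass_sum / nucleon_mass
print(f"{binding_fraction:.1%} of the nucleon mass is interaction energy")
```

Roughly 98.7 per cent of the mass of a nucleon is thus interaction energy rather than constituent rest mass.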

The Standard Model groups matter particles into three generations, where each generation consists of two quarks and two leptons. The first generation is the up and down quarks, the electron and the electron neutrino; the second includes the charm and strange quarks, the muon and the muon neutrino; the third generation consists of the top and bottom quarks and the tau and tau neutrino. The most natural explanation for this would be that quarks and leptons of higher generations are excited states of the first generations. If this turns out to be the case, it would imply that quarks and leptons are composite particles, rather than elementary particles.

The Standard Model is a classification that mathematically explains the interactions at particle level.

This quark-lepton definition of matter also leads to what can be described as “conservation of (net) matter” laws—discussed later below. Alternatively, one could return to the mass-volume-space concept of matter, leading to the next definition, in which antimatter becomes included as a subclass of matter.

The concept of mass-volume-space changes as substance shifts from matter to field.


Based on elementary fermions (mass, volume, and space)

A common or traditional definition of matter is anything that has mass and volume (occupies space). For example, a car would be said to be made of matter, as it has mass and volume (occupies space).

The confusion of space comes about because it is looked upon as independent of matter and not as the generalization of material dimensions.

The observation that matter occupies space goes back to antiquity. However, an explanation for why matter occupies space is recent, and is argued to be a result of the phenomenon described in the Pauli exclusion principle, which applies to fermions. Two particular examples where the exclusion principle clearly relates matter to the occupation of space are white dwarf stars and neutron stars, discussed further below.

The idea that matter occupies space is flawed. From matter to emptiness there is the gradient of field. Space is how we see the extensions of substance. The “empty space” that we see is actually the “invisible” field.

Pauli Exclusion Principle actually relates to the “extension” of condensed regions of fermions. Each fermion has its own extension defined by its quantum numbers.

Thus, matter can be defined as everything composed of elementary fermions. Although we don’t encounter them in everyday life, antiquarks (such as the antiproton) and antileptons (such as the positron) are the antiparticles of the quark and the lepton, are elementary fermions as well, and have essentially the same properties as quarks and leptons, including the applicability of the Pauli exclusion principle which can be said to prevent two particles from being in the same place at the same time (in the same state), i.e. makes each particle “take up space”. This particular definition leads to matter being defined to include anything made of these antimatter particles as well as the ordinary quark and lepton, and thus also anything made of mesons, which are unstable particles made up of a quark and an antiquark.

The antiparticle appears to be part of some interaction in which a particle reduces in condensation to field.


In general relativity and cosmology

In the context of relativity, mass is not an additive quantity, in the sense that one cannot add the rest masses of particles in a system to get the total rest mass of the system. Thus, in relativity usually a more general view is that it is not the sum of rest masses, but the energy–momentum tensor that quantifies the amount of matter. This tensor gives the rest mass for the entire system. “Matter” therefore is sometimes considered as anything that contributes to the energy–momentum of a system, that is, anything that is not purely gravity. This view is commonly held in fields that deal with general relativity such as cosmology. In this view, light and other massless particles and fields are all part of “matter”.

Mass is “high frequency cycles” that have become compacted. Both mass and frequency have energy as their common denominator.



In particle physics, fermions are particles that obey Fermi–Dirac statistics. Fermions can be elementary, like the electron—or composite, like the proton and neutron. In the Standard Model, there are two types of elementary fermions: quarks and leptons, which are discussed next.

Fermions are part of the substance, which is broadly represented by field and matter. Matter results from condensation of field; so we are basically looking at field as the spectrum of substance.



Quarks are particles of spin 1⁄2, implying that they are fermions. They carry an electric charge of −1⁄3 e (down-type quarks) or +2⁄3 e (up-type quarks). For comparison, an electron has a charge of −1 e. They also carry colour charge, which is the equivalent of the electric charge for the strong interaction. Quarks also undergo radioactive decay, meaning that they are subject to the weak interaction. Quarks are massive particles, and therefore are also subject to gravity.

Quarks are theoretical particles with postulated properties. They are simply dense regions of the field. Spin is an eddy-like property of a dense region. The charge represents one end of the frequency gradient.


Baryonic matter

Baryons are strongly interacting fermions, and so are subject to Fermi–Dirac statistics. Amongst the baryons are the protons and neutrons, which occur in atomic nuclei, but many other unstable baryons exist as well. The term baryon usually refers to triquarks—particles made of three quarks. “Exotic” baryons made of four quarks and one antiquark are known as the pentaquarks, but their existence is not generally accepted.

The strong interaction occurs between dense regions (of the field) of very high frequency gradient.

Baryonic matter is the part of the universe that is made of baryons (including all atoms). This part of the universe does not include dark energy, dark matter, black holes or various forms of degenerate matter, such as compose white dwarf stars and neutron stars. Microwave light seen by Wilkinson Microwave Anisotropy Probe (WMAP), suggests that only about 4.6% of that part of the universe within range of the best telescopes (that is, matter that may be visible because light could reach us from it), is made of baryonic matter. About 26.8% is dark matter, and about 68.3% is dark energy.

Dark matter and dark energy seem to relate to the field, which we see as “empty space”.

As a matter of fact, the great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10 per cent of the ordinary matter contribution to the mass-energy density of the universe.

The “empty space” is essentially made up of field that has a frequency less than that of baryonic matter.


Degenerate matter

In physics, degenerate matter refers to the ground state of a gas of fermions at a temperature near absolute zero. The Pauli exclusion principle requires that only two fermions can occupy a quantum state, one spin-up and the other spin-down. Hence, at zero temperature, the fermions fill up sufficient levels to accommodate all the available fermions—and in the case of many fermions, the maximum kinetic energy (called the Fermi energy) and the pressure of the gas becomes very large, and depends on the number of fermions rather than the temperature, unlike normal states of matter.

These are dense regions of the field clustering together.

Degenerate matter is thought to occur during the evolution of heavy stars. The demonstration by Subrahmanyan Chandrasekhar that white dwarf stars have a maximum allowed mass because of the exclusion principle caused a revolution in the theory of star evolution.

These clusters have size limitation.

Degenerate matter includes the part of the universe that is made up of neutron stars and white dwarfs.


Strange matter

Strange matter is a particular form of quark matter, usually thought of as a liquid of up, down, and strange quarks. It is contrasted with nuclear matter, which is a liquid of neutrons and protons (which themselves are built out of up and down quarks), and with non-strange quark matter, which is a quark liquid that contains only up and down quarks. At high enough density, strange matter is expected to be color superconducting. Strange matter is hypothesized to occur in the core of neutron stars, or, more speculatively, as isolated droplets that may vary in size from femtometers (strangelets) to kilometers (quark stars).

Strange matter is a particular form of quark matter. It is theoretical.

Two meanings of the term “strange matter”

In particle physics and astrophysics, the term is used in two ways, one broader and the other more specific.

  1. The broader meaning is just quark matter that contains three flavors of quarks: up, down, and strange. In this definition, there is a critical pressure and an associated critical density, and when nuclear matter (made of protons and neutrons) is compressed beyond this density, the protons and neutrons dissociate into quarks, yielding quark matter (probably strange matter).
  2. The narrower meaning is quark matter that is more stable than nuclear matter. The idea that this could happen is the “strange matter hypothesis” of Bodmer and Witten. In this definition, the critical pressure is zero: the true ground state of matter is always quark matter. The nuclei that we see in the matter around us, which are droplets of nuclear matter, are actually metastable, and given enough time (or the right external stimulus) would decay into droplets of strange matter, i.e. strangelets.

Strange matter is the theoretical projection of matter under very high pressure.



Leptons are particles of spin 1⁄2, meaning that they are fermions. They carry an electric charge of −1 e (charged leptons) or 0 e (neutrinos). Unlike quarks, leptons do not carry colour charge, meaning that they do not experience the strong interaction. Leptons also undergo radioactive decay, meaning that they are subject to the weak interaction. Leptons are massive particles, and therefore are subject to gravity.

The frequency gradient of these dense regions lies in lower gamma regions of the spectrum and it is less steep also.



In bulk, matter can exist in several different forms, or states of aggregation, known as phases, depending on ambient pressure, temperature and volume. A phase is a form of matter that has a relatively uniform chemical composition and physical properties (such as density, specific heat, refractive index, and so forth). These phases include the three familiar ones (solids, liquids, and gases), as well as more exotic states of matter (such as plasmas, superfluids, supersolids, Bose–Einstein condensates …). A fluid may be a liquid, gas or plasma. There are also paramagnetic and ferromagnetic phases of magnetic materials. As conditions change, matter may change from one phase into another. These phenomena are called phase transitions, and are studied in the field of thermodynamics. In nanomaterials, the vastly increased ratio of surface area to volume results in matter that can exhibit properties entirely different from those of bulk material, and not well described by any bulk phase (see nanomaterials for more details).

Phases are sometimes called states of matter, but this term can lead to confusion with thermodynamic states. For example, two gases maintained at different pressures are in different thermodynamic states (different pressures), but in the same phase (both are gases).

There exist more phases of matter than what we are normally familiar with.



In particle physics and quantum chemistry, antimatter is matter that is composed of the antiparticles of those that constitute ordinary matter. If a particle and its antiparticle come into contact with each other, the two annihilate; that is, they may both be converted into other particles with equal energy in accordance with Einstein’s equation E = mc2. These new particles may be high-energy photons (gamma rays) or other particle–antiparticle pairs. The resulting particles are endowed with an amount of kinetic energy equal to the difference between the rest mass of the products of the annihilation and the rest mass of the original particle–antiparticle pair, which is often quite large. Depending on which definition of “matter” is adopted, antimatter can be said to be a particular subclass of matter, or the opposite of matter.

The concept of annihilation of matter appears to be an interaction that reduces the dense regions of the field in frequency. The antimatter could be a region of the field that has a polarity opposite to the usual polarity.
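The energy bookkeeping of E = mc2 in the quoted passage is concrete for the simplest case, electron–positron annihilation: the pair's rest mass converts entirely into two gamma-ray photons. A quick calculation using the standard constants:

```python
# E = m c^2 applied to electron-positron annihilation: the rest mass of
# the pair converts into the energy of two gamma-ray photons.
m_e = 9.1093837015e-31   # electron rest mass, kg
c = 299_792_458          # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

energy_joules = 2 * m_e * c**2        # both particles annihilate
energy_MeV = energy_joules / eV / 1e6
print(f"total photon energy = {energy_MeV:.3f} MeV")  # about 1.022 MeV
```

Each photon carries about 0.511 MeV, which is why 511 keV gamma rays are the signature of positron annihilation.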

Antimatter is not found naturally on Earth, except very briefly and in vanishingly small quantities (as the result of radioactive decay, lightning or cosmic rays). This is because antimatter that came to exist on Earth outside the confines of a suitable physics laboratory would almost instantly meet the ordinary matter that Earth is made of, and be annihilated. Antiparticles and some stable antimatter (such as antihydrogen) can be made in tiny amounts, but not in enough quantity to do more than test a few of its theoretical properties.

Antimatter seems to act like a highly reactive chemical that quickly exhausts itself.

There is considerable speculation both in science and science fiction as to why the observable universe is apparently almost entirely matter (in the sense of quarks and leptons but not antiquarks or antileptons), and whether other places are almost entirely antimatter (antiquarks and antileptons) instead. In the early universe, it is thought that matter and antimatter were equally represented, and the disappearance of antimatter requires an asymmetry in physical laws called CP (charge-parity) symmetry violation, which can be obtained from the Standard Model, but at this time the apparent asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. Possible processes by which it came about are explored in more detail under baryogenesis.

There is no mystery. The situation is similar to the lack of highly reactive chemicals in our daily environment.

Formally, antimatter particles can be defined by their negative baryon number or lepton number, while “normal” (non-antimatter) matter particles have positive baryon or lepton number. These two classes of particles are the antiparticle partners of one another.

This “mystery” simply indicates some inconsistency in the mathematical theory at particle level.

In October 2017, scientists reported further evidence that matter and antimatter, equally produced at the Big Bang, are identical, should completely annihilate each other and, as a result, the universe should not exist. This implies that there must be something, as yet unknown to scientists, that either stopped the complete mutual destruction of matter and antimatter in the early forming universe, or that gave rise to an imbalance between the two forms.

This area is highly speculative. The mystery could be of our own making.


Conservation of matter

According to CP Symmetry, the two quantities that can define an amount of matter in the quark-lepton sense (and antimatter in an antiquark-antilepton sense), baryon number and lepton number, are conserved—or at least nearly so, considering CP violation. A baryon such as the proton or neutron has a baryon number of one, and a quark, because there are three in a baryon, is given a baryon number of 1/3. So the net amount of matter, as measured by the number of quarks (minus the number of antiquarks, which each have a baryon number of -1/3), which is proportional to baryon number, and number of leptons (minus antileptons), which is called the lepton number, is practically impossible to change in any process. Even in a nuclear bomb, none of the baryons (protons and neutrons of which the atomic nuclei are composed) are destroyed—there are as many baryons after as before the reaction, so none of these matter particles are actually destroyed and none are even converted to non-matter particles (like photons of light or radiation). Instead, nuclear (and perhaps chromodynamic) binding energy is released, as these baryons become bound into mid-size nuclei having less energy (and, equivalently, less mass) per nucleon compared to the original small (hydrogen) and large (plutonium etc.) nuclei. Even in electron–positron annihilation, there is actually no net matter being destroyed, because there was zero net matter (zero total lepton number and baryon number) to begin with before the annihilation—one lepton minus one antilepton equals zero net lepton number—and this net amount of matter does not change as it simply remains zero after the annihilation. So the only way to really “destroy” or “convert” ordinary matter is to pair it with the same amount of antimatter so that their “matterness” cancels out—but in practice there is almost no antimatter generally available in the universe (see baryon asymmetry and leptogenesis) with which to do so.

There is definitely a bias in this universe that has created the electromagnetic spectrum of gradually condensing field that ends up in matter. The expectation of complete symmetry is, therefore, inconsistent.

There seems to be a balance between high-density regions of the field converting to low-density regions and vice versa, with the net result existing as a spectrum.
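The bookkeeping in the quoted passage can be sketched directly. A minimal illustration, using only the conventions stated above (quark +1/3, antiquark −1/3, lepton +1, antilepton −1; the particle names and tables are illustrative, not a real physics library):

```python
# Sketch of baryon- and lepton-number conservation as described above.
# Assignments follow the quoted convention: quark +1/3, antiquark -1/3;
# lepton (electron) +1, antilepton (positron) -1.
from fractions import Fraction

BARYON = {"quark": Fraction(1, 3), "antiquark": Fraction(-1, 3)}
LEPTON = {"electron": 1, "positron": -1}

def baryon_number(particles):
    """Total baryon number of a list of particle names."""
    return sum(BARYON.get(p, 0) for p in particles)

def lepton_number(particles):
    """Total lepton number of a list of particle names."""
    return sum(LEPTON.get(p, 0) for p in particles)

# A proton (three quarks) has baryon number 1:
print(baryon_number(["quark"] * 3))             # 1
# Electron-positron annihilation starts at net lepton number 0,
# so nothing "net" is destroyed when it ends at 0:
print(lepton_number(["electron", "positron"]))  # 0
```

Exact fractions avoid floating-point drift in the 1/3 arithmetic, which is why `Fraction` is used for the quark entries.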


Other types

Ordinary matter, in the quarks and leptons definition, constitutes about 4% of the energy of the observable universe. The remaining energy is theorized to be due to exotic forms, of which 23% is dark matter and 73% is dark energy.

Dark energy and dark matter have to be part of the electromagnetic spectrum.


Dark matter

In astrophysics and cosmology, dark matter is matter of unknown composition that does not emit or reflect enough electromagnetic radiation to be observed directly, but whose presence can be inferred from gravitational effects on visible matter. Observational evidence of the early universe and the big bang theory require that this matter have energy and mass, but not be composed of ordinary baryons (protons and neutrons). The commonly accepted view is that most of the dark matter is non-baryonic in nature. As such, it is composed of particles as yet unobserved in the laboratory. Perhaps they are supersymmetric particles, which are not Standard Model particles, but relics formed at very high energies in the early phase of the universe and still floating about.

Dark matter could be condensed regions of the field that are not as dense as the known particles, but exist in the lower gamma region of electromagnetic frequency.


Dark energy

In cosmology, dark energy is the name given to the source of the repelling influence that is accelerating the rate of expansion of the universe. Its precise nature is currently a mystery, although its effects can reasonably be modeled by assigning matter-like properties such as energy density and pressure to the vacuum itself.

These could be regions of the electromagnetic field that are lower in frequency than the gamma region.

Fully 70% of the matter density in the universe appears to be in the form of dark energy. Twenty-six percent is dark matter. Only 4% is ordinary matter. So less than 1 part in 20 is made out of matter we have observed experimentally or described in the standard model of particle physics. Of the other 96%, apart from the properties just mentioned, we know absolutely nothing.

— Lee Smolin: The Trouble with Physics, p. 16

The confusion is coming from the fixed idea that space is independent of matter and field. That is not so. Space is always accompanied by presence of field.


Exotic matter

Exotic matter is a concept of particle physics, which may include dark matter and dark energy but goes further to include any hypothetical material that violates one or more of the properties of known forms of matter. Some such materials might possess hypothetical properties like negative mass.

This seems to be a conjecture projected from an inconsistent mathematical theory of particle physics.


Historical development

Antiquity (c. 610 BC–c. 322 BC)

The pre-Socratics were among the first recorded speculators about the underlying nature of the visible world. Thales (c. 624 BC–c. 546 BC) regarded water as the fundamental material of the world. Anaximander (c. 610 BC–c. 546 BC) posited that the basic material was wholly characterless or limitless: the Infinite (apeiron). Anaximenes (flourished 585 BC, d. 528 BC) posited that the basic stuff was pneuma or air. Heraclitus (c. 535–c. 475 BC) seems to say the basic element is fire, though perhaps he means that all is change. Empedocles (c. 490–430 BC) spoke of four elements of which everything was made: earth, water, air, and fire. Meanwhile, Parmenides argued that change does not exist, and Democritus argued that everything is composed of minuscule, inert bodies of all shapes called atoms, a philosophy called atomism. All of these notions had deep philosophical problems.

The first confusion seems to be between matter and emptiness. It is difficult to visualize emptiness.

Aristotle (384 BC–322 BC) was the first to put the conception on a sound philosophical basis, which he did in his natural philosophy, especially in Physics book I. He adopted as reasonable suppositions the four Empedoclean elements, but added a fifth, aether. Nevertheless, these elements are not basic in Aristotle’s mind. Rather they, like everything else in the visible world, are composed of the basic principles matter and form.

For my definition of matter is just this—the primary substratum of each thing, from which it comes to be without qualification, and which persists in the result.

— Aristotle, Physics I:9:192a32

The word Aristotle uses for matter, ὕλη (hyle or hule), can be literally translated as wood or timber, that is, “raw material” for building. Indeed, Aristotle’s conception of matter is intrinsically linked to something being made or composed. In other words, in contrast to the early modern conception of matter as simply occupying space, matter for Aristotle is definitionally linked to process or change: matter is what underlies a change of substance. For example, a horse eats grass: the horse changes the grass into itself; the grass as such does not persist in the horse, but some aspect of it—its matter—does. The matter is not specifically described (e.g., as atoms), but consists of whatever persists in the change of substance from grass to horse. Matter in this understanding does not exist independently (i.e., as a substance), but exists interdependently (i.e., as a “principle”) with form and only insofar as it underlies change. It can be helpful to conceive of the relationship of matter and form as very similar to that between parts and whole. For Aristotle, matter as such can only receive actuality from form; it has no activity or actuality in itself, similar to the way that parts as such only have their existence in a whole (otherwise they would be independent wholes).

Matter forms as emptiness is “disturbed”.


Seventeenth and eighteenth centuries

René Descartes (1596–1650) originated the modern conception of matter. He was primarily a geometer. Instead of, like Aristotle, deducing the existence of matter from the physical reality of change, Descartes arbitrarily postulated matter to be an abstract, mathematical substance that occupies space:

Descartes arbitrarily postulated matter to be an abstract, mathematical substance that occupies space.

So, extension in length, breadth, and depth, constitutes the nature of bodily substance; and thought constitutes the nature of thinking substance. And everything else attributable to body presupposes extension, and is only a mode of an extended thing.

— René Descartes, Principles of Philosophy

For Descartes, matter has only the property of extension, so its only activity aside from locomotion is to exclude other bodies: this is the mechanical philosophy. Descartes makes an absolute distinction between mind, which he defines as unextended, thinking substance, and matter, which he defines as unthinking, extended substance. They are independent things. In contrast, Aristotle defines matter and the formal/forming principle as complementary principles that together compose one independent thing (substance). In short, Aristotle defines matter (roughly speaking) as what things are actually made of (with a potential independent existence), but Descartes elevates matter to an actual independent thing in itself.

Descartes postulated that extension is the property of matter and thinking is the property of thought; and the two are independent of each other. But to Aristotle, matter and thought were complementary principles.

For Aristotle, things were made of matter. But for Descartes matter was a thing in itself.

The continuity and difference between Descartes’ and Aristotle’s conceptions is noteworthy. In both conceptions, matter is passive or inert. In the respective conceptions matter has different relationships to intelligence. For Aristotle, matter and intelligence (form) exist together in an interdependent relationship, whereas for Descartes, matter and intelligence (mind) are definitionally opposed, independent substances.

For Aristotle, matter and mind exist as one. For Descartes, matter and mind are separate and opposite in nature.

Descartes’ justification for restricting the inherent qualities of matter to extension is its permanence, but his real criterion is not permanence (which equally applied to color and resistance), but his desire to use geometry to explain all material properties. Like Descartes, Hobbes, Boyle, and Locke argued that the inherent properties of bodies were limited to extension, and that so-called secondary qualities, like color, were only products of human perception.

Descartes argued that the inherent properties of bodies were limited to extension, and the so-called secondary qualities, like color, were only products of human perception.

Isaac Newton (1643–1727) inherited Descartes’ mechanical conception of matter. In the third of his “Rules of Reasoning in Philosophy”, Newton lists the universal qualities of matter as “extension, hardness, impenetrability, mobility, and inertia”. Similarly in Opticks he conjectures that God created matter as “solid, massy, hard, impenetrable, movable particles”, which were “…even so very hard as never to wear or break in pieces”. The “primary” properties of matter were amenable to mathematical description, unlike “secondary” qualities such as color or taste. Like Descartes, Newton rejected the essential nature of secondary qualities.

To Newton the “primary” properties of matter were amenable to mathematical description, unlike “secondary” qualities such as color or taste.

Newton developed Descartes’ notion of matter by restoring to matter intrinsic properties in addition to extension (at least on a limited basis), such as mass. Newton’s use of gravitational force, which worked “at a distance”, effectively repudiated Descartes’ mechanics, in which interactions happened exclusively by contact.

Newton developed Descartes’ notion of matter by attributing to it the intrinsic properties of extension, hardness, impenetrability, mobility, and inertia. He was troubled by the notion of gravity as “action at a distance.”

Though Newton’s gravity would seem to be a power of bodies, Newton himself did not admit it to be an essential property of matter. Carrying the logic forward more consistently, Joseph Priestley (1733–1804) argued that corporeal properties transcend contact mechanics: chemical properties require the capacity for attraction. He argued matter has other inherent powers besides the so-called primary qualities of Descartes, et al.

Priestley argued that matter has other inherent powers besides the so-called primary qualities of Descartes.


Nineteenth and twentieth centuries

Since Priestley’s time, there has been a massive expansion in knowledge of the constituents of the material world (viz., molecules, atoms, subatomic particles), but there has been no further development in the definition of matter. Rather the question has been set aside. Noam Chomsky (born 1928) summarizes the situation that has prevailed since that time:

What is the concept of body that finally emerged?[…] The answer is that there is no clear and definite conception of body.[…] Rather, the material world is whatever we discover it to be, with whatever properties it must be assumed to have for the purposes of explanatory theory. Any intelligible theory that offers genuine explanations and that can be assimilated to the core notions of physics becomes part of the theory of the material world, part of our account of body. If we have such a theory in some domain, we seek to assimilate it to the core notions of physics, perhaps modifying these notions as we carry out this enterprise.

— Noam Chomsky, Language and problems of knowledge: the Managua lectures, p. 144

So matter is whatever physics studies and the object of study of physics is matter: there is no independent general definition of matter, apart from its fitting into the methodology of measurement and controlled experimentation. In sum, the boundaries between what constitutes matter and everything else remain as vague as the demarcation problem of delimiting science from everything else.

The primary boundary is between substance and emptiness. The substance starts out as a gradient of disturbance (change from emptiness) with increasing levels of disturbance as we see in the electromagnetic spectrum. The upper level of this spectrum solidifies into the atom, which forms the basis of matter.

In the 19th century, following the development of the periodic table, and of atomic theory, atoms were seen as being the fundamental constituents of matter; atoms formed molecules and compounds.

The common definition in terms of occupying space and having mass is in contrast with most physical and chemical definitions of matter, which rely instead upon its structure and upon attributes not necessarily related to volume and mass. At the turn of the nineteenth century, the knowledge of matter began a rapid evolution.

The extents of substance (field and matter) are generalized as space. The “empty space,” in reality, is the field.

Aspects of the Newtonian view still held sway. James Clerk Maxwell discussed matter in his work Matter and Motion. He carefully separates “matter” from space and time, and defines it in terms of the object referred to in Newton’s first law of motion.

Matter forms only the upper end of the electromagnetic spectrum. The extent of the field is perceived as the “empty space”. The duration of substance is perceived as time.

However, the Newtonian picture was not the whole story. In the 19th century, the term “matter” was actively discussed by a host of scientists and philosophers, and a brief outline can be found in Levere. A textbook discussion from 1870 suggests matter is what is made up of atoms:

  • Three divisions of matter are recognized in science: masses, molecules and atoms.
  • A Mass of matter is any portion of matter appreciable by the senses.
  • A Molecule is the smallest particle of matter into which a body can be divided without losing its identity.
  • An Atom is a still smaller particle produced by division of a molecule.

Matter is a specialized aspect of substance. The general aspect of substance is the field.

Rather than simply having the attributes of mass and occupying space, matter was held to have chemical and electrical properties. In 1909 the famous physicist J. J. Thomson (1856–1940) wrote about the “constitution of matter” and was concerned with the possible connection between matter and electrical charge.

All substance has inertia and frequency of cycles. For matter, inertia is large and the frequency of cycles is compacted into mass. All other properties can be explained from these two fundamental properties.

There is an entire literature concerning the “structure of matter”, ranging from the “electrical structure” in the early 20th century, to the more recent “quark structure of matter”, introduced today with the remark: Understanding the quark structure of matter has been one of the most important advances in contemporary physics. In this connection, physicists speak of matter fields, and speak of particles as “quantum excitations of a mode of the matter field”. And here is a quote from de Sabbata and Gasperini: “With the word “matter” we denote, in this context, the sources of the interactions, that is spinor fields (like quarks and leptons), which are believed to be the fundamental components of matter, or scalar fields, like the Higgs particles, which are used to introduce mass in a gauge theory (and that, however, could be composed of more fundamental fermion fields).”

The current description of matter is heavily theoretical.

In the late 19th century with the discovery of the electron, and in the early 20th century, with the discovery of the atomic nucleus, and the birth of particle physics, matter was seen as made up of electrons, protons and neutrons interacting to form atoms. Today, we know that even protons and neutrons are not indivisible; they can be divided into quarks, while electrons are part of a particle family called leptons. Both quarks and leptons are elementary particles, and are currently seen as being the fundamental constituents of matter.

Quarks and leptons are theoretical entities.

These quarks and leptons interact through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard Model of particle physics is currently the best explanation for all of physics, but despite decades of efforts, gravity cannot yet be accounted for at the quantum level; it is only described by classical physics (see quantum gravity and graviton). Interactions between quarks and leptons are the result of an exchange of force-carrying particles (such as photons) between quarks and leptons. The force-carrying particles are not themselves building blocks. As one consequence, mass and energy (which cannot be created or destroyed) cannot always be related to matter (which can be created out of non-matter particles such as photons, or even out of pure energy, such as kinetic energy). Force carriers are usually not considered matter: the carriers of the electric force (photons) possess energy (see Planck relation) and the carriers of the weak force (W and Z bosons) are massive, but neither is considered matter. However, while these particles are not considered matter, they do contribute to the total mass of atoms, subatomic particles, and all systems that contain them.

The Disturbance theory sees particles (fermions) as made up of the frequency cycles of the field, and force carriers (bosons) to be made up of frequency gradients in the field.
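The Planck relation mentioned in the quoted passage relates a photon’s energy to its frequency, E = h·f. A minimal Python sketch (the example frequency is an arbitrary illustrative value; the constant is the exact SI definition):

```python
# The Planck relation: a photon's energy is E = h * f.
PLANCK_H = 6.62607015e-34  # Planck constant, J*s (exact SI value)

def photon_energy(frequency_hz):
    """Energy in joules of a photon of the given frequency."""
    return PLANCK_H * frequency_hz

# A visible-light photon near 5.45e14 Hz (green light):
print(f"{photon_energy(5.45e14):.3e} J")  # 3.611e-19 J
```

The energy scale that comes out, a few times 10⁻¹⁹ J, is why photon energies are usually quoted in electronvolts rather than joules.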



The modern conception of matter has been refined many times in history, in light of the improvement in knowledge of just what the basic building blocks are, and in how they interact. The term “matter” is used throughout physics in a bewildering variety of contexts: for example, one refers to “condensed matter physics”, “elementary matter”, “partonic” matter, “dark” matter, “anti”-matter, “strange” matter, and “nuclear” matter. In discussions of matter and antimatter, normal matter has been referred to by Alfvén as koinomatter (Gk. common matter). It is fair to say that in physics, there is no broad consensus as to a general definition of matter, and the term “matter” usually is used in conjunction with a specifying modifier.

The basic building blocks of matter (substance) are the cycles of disturbance. Each cycle of disturbance is an oscillation between electrical and magnetic energies, just like a cycle of a pendulum is an oscillation between kinetic and potential energies. Each cycle has energy equal to Planck’s constant ‘h’.

Matter (substance) is basically a spectrum of cycles whose frequency is compacted into mass. The spectrum of substance goes from zero frequency (emptiness) to extremely high frequencies (mass). This spectrum may be looked upon as a field that is increasingly becoming dense. Solid matter lies at the upper end of this spectrum.

The history of the concept of matter is a history of the fundamental length scales used to define matter. Different building blocks apply depending upon whether one defines matter on an atomic or elementary particle level. One may use a definition that matter is atoms, or that matter is hadrons, or that matter is leptons and quarks depending upon the scale at which one wishes to define matter.

The wavelength and period of cycles shrink with increasing frequency. This is seen as the condensing of space (extents) and time (durations). Matter consists of highly condensed space and time that exists within extremely expanded space and time of field. The length scales of the two are not the same.

These quarks and leptons interact through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard Model of particle physics is currently the best explanation for all of physics, but despite decades of efforts, gravity cannot yet be accounted for at the quantum level; it is only described by classical physics (see quantum gravity and graviton).

These different forces are frequency gradients that lie at different sections of the spectrum. Their position on the spectrum determines their nature as gravity, electromagnetism, weak interactions, and strong interactions. These gradients have different degrees also.



Comments on Thermodynamic temperature

Reference: Disturbance Theory


Thermodynamic temperature

Thermodynamic temperature is the absolute measure of temperature and is one of the principal parameters of thermodynamics.

Thermodynamics is a branch of physics concerned with heat and temperature and their relation to energy and work. The absolute measure of temperature is one of the principal parameters of thermodynamics.

Thermodynamic temperature is defined by the third law of thermodynamics in which the theoretically lowest temperature is the null or zero point. At this point, absolute zero, the particle constituents of matter have minimal motion and can become no colder. In the quantum-mechanical description, matter at absolute zero is in its ground state, which is its state of lowest energy. Thermodynamic temperature is often also called absolute temperature, for two reasons: one, proposed by Kelvin, that it does not depend on the properties of a particular material; two that it refers to an absolute zero according to the properties of the ideal gas.

The concept of absolute zero provides a reference point of the ground state of matter, which is the state of lowest energy. It is the same for all matter.

The International System of Units specifies a particular scale for thermodynamic temperature. It uses the kelvin scale for measurement and selects the triple point of water at 273.16 K as the fundamental fixing point. Other scales have been in use historically. The Rankine scale, using the degree Fahrenheit as its unit interval, is still in use as part of the English Engineering Units in the United States in some engineering fields. ITS-90 gives a practical means of estimating the thermodynamic temperature to a very high degree of accuracy.

The thermodynamic temperature provides an absolute scale.
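The relationships among the scales named in the quoted passage can be sketched in a few lines of Python (a minimal sketch; the function names are mine, and the offsets and ratios are the standard defined values):

```python
# Conversions among the temperature scales mentioned above.

def kelvin_to_celsius(t_k):
    return t_k - 273.15

def kelvin_to_rankine(t_k):
    # The Rankine scale uses the Fahrenheit degree as its unit interval,
    # with zero at absolute zero.
    return t_k * 9.0 / 5.0

def rankine_to_fahrenheit(t_r):
    return t_r - 459.67

# Triple point of water, the kelvin scale's fundamental fixing point:
t_k = 273.16
print(f"{kelvin_to_celsius(t_k):.3f} degC")          # 0.010 degC
print(f"{kelvin_to_rankine(t_k):.3f} degR")          # 491.688 degR
print(f"{rankine_to_fahrenheit(491.688):.3f} degF")  # 32.018 degF
```

Note that the kelvin and Rankine scales are both absolute (zero at absolute zero); only the size of the degree differs.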

Roughly, the temperature of a body at rest is a measure of the mean of the energy of the translational, vibrational and rotational motions of matter’s particle constituents, such as molecules, atoms, and subatomic particles. The full variety of these kinetic motions, along with potential energies of particles, and also occasionally certain other types of particle energy in equilibrium with these, make up the total internal energy of a substance. Internal energy is loosely called the heat energy or thermal energy in conditions when no work is done upon the substance by its surroundings, or by the substance upon the surroundings. Internal energy may be stored in a number of ways within a substance, each way constituting a “degree of freedom”. At equilibrium, each degree of freedom will have on average the same energy: kBT/2 where kB is the Boltzmann constant, unless that degree of freedom is in the quantum regime. The internal degrees of freedom (rotation, vibration, etc.) may be in the quantum regime at room temperature, but the translational degrees of freedom will be in the classical regime except at extremely low temperatures (fractions of kelvins) and it may be said that, for most situations, the thermodynamic temperature is specified by the average translational kinetic energy of the particles.

TEMPERATURE = a measure of the mean of the energy of the kinetic motions (translational, vibrational and rotational) of matter’s particles. For most situations, the thermodynamic temperature is specified by the average translational kinetic energy of the particles. The Boltzmann constant (kB) is a physical constant relating the average kinetic energy of particles in a gas with the temperature of the gas.

INTERNAL ENERGY = the full variety of kinetic motions along with potential energies of particles, and also occasionally certain other types of particle energy in equilibrium with these.

DEGREES OF FREEDOM = the number of ways in which the internal energy may be stored within a substance. At equilibrium, each degree of freedom will have on average the same energy = kBT/2
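The equipartition value kBT/2 per degree of freedom, summarized above, can be sketched numerically (assuming the classical regime, i.e. ignoring the quantum effects the quoted passage mentions):

```python
# Equipartition sketch: at equilibrium each classical degree of freedom
# holds k_B * T / 2 of energy on average.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def mean_energy_per_dof(temperature_k):
    return K_B * temperature_k / 2.0

def mean_translational_energy(temperature_k):
    # Three translational degrees of freedom give (3/2) k_B T.
    return 3.0 * mean_energy_per_dof(temperature_k)

t = 300.0  # roughly room temperature, in kelvins
print(f"{mean_energy_per_dof(t):.3e} J")        # 2.071e-21 J
print(f"{mean_translational_energy(t):.3e} J")  # 6.213e-21 J
```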


Comments on Wave Function

Reference: Disturbance Theory


Wave function – Wikipedia

A wave function in quantum physics is a mathematical description of the quantum state of a system. The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. The most common symbols for a wave function are the Greek letters ψ or Ψ (lower-case and capital psi, respectively).

A wave function describes the configuration of high frequency, compacted regions of the electromagnetic field. The probability amplitude measures the density of disturbance in that region. The disturbance is the back and forth oscillation of electric and magnetic energies.

The wave function is a function of the degrees of freedom corresponding to some maximal set of commuting observables. Once such a representation is chosen, the wave function can be derived from the quantum state.

This is basically a Hamiltonian look at the interplay of forces and energies.

For a given system, the choice of which commuting degrees of freedom to use is not unique, and correspondingly the domain of the wave function is also not unique. For instance it may be taken to be a function of all the position coordinates of the particles over position space, or the momenta of all the particles over momentum space; the two are related by a Fourier transform. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom; other discrete variables can also be included, such as isospin. When a system has internal degrees of freedom, the wave function at each point in the continuous degrees of freedom (e.g., a point in space) assigns a complex number for each possible value of the discrete degrees of freedom (e.g., z-component of spin) — these values are often displayed in a column matrix (e.g., a 2 × 1 column vector for a non-relativistic electron with spin 1⁄2).

The high disturbance densities of the field appear as “particles”. They are not discrete “particles” as they are continuous with the surrounding field. There is a gradient of frequencies between the dense region and surrounding field. Spin is the eddy-like rotation of disturbance at high frequency. Only certain values of spin are stable.
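The quoted passage notes that the position-space and momentum-space descriptions are related by a Fourier transform. A small numerical sketch (the grid size and Gaussian profile are illustrative choices): a unitary discrete Fourier transform carries a sampled wave function between the two domains while preserving its norm (the Plancherel theorem).

```python
import cmath
import math

# A Gaussian wave function sampled on a position grid:
N = 64
xs = [-8.0 + 16.0 * i / N for i in range(N)]
psi_x = [cmath.exp(-x * x / 2.0) for x in xs]

def dft(values):
    """Unitary DFT: preserves the sum of squared magnitudes (Plancherel)."""
    n = len(values)
    return [
        sum(values[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
        / math.sqrt(n)
        for k in range(n)
    ]

# The same state in the conjugate ("momentum") representation:
psi_p = dft(psi_x)

norm_x = sum(abs(v) ** 2 for v in psi_x)
norm_p = sum(abs(v) ** 2 for v in psi_p)
print(abs(norm_x - norm_p) < 1e-9)  # True: both domains carry the same norm
```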

According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions and form a Hilbert space. The inner product between two wave functions is a measure of the overlap between the corresponding physical states, and is used in the foundational probabilistic interpretation of quantum mechanics, the Born rule, relating transition probabilities to inner products. The Schrödinger equation determines how wave functions evolve over time, and a wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name “wave function,” and gives rise to wave–particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, still open to different interpretations, which fundamentally differs from that of classic mechanical waves.

The quantum “particles” are high frequency, compact disturbances that have curved upon themselves like eddies. Only certain configurations of such disturbances are stable.

In Born’s statistical interpretation in non-relativistic quantum mechanics, the squared modulus of the wave function, |ψ|2, is a real number interpreted as the probability density of measuring a particle’s being detected at a given place – or having a given momentum – at a given time, and possibly having definite values for discrete degrees of freedom. The integral of this quantity, over all the system’s degrees of freedom, must be 1 in accordance with the probability interpretation. This general requirement that a wave function must satisfy is called the normalization condition. Since the wave function is complex valued, only its relative phase and relative magnitude can be measured—its value does not, in isolation, tell anything about the magnitudes or directions of measurable observables; one has to apply quantum operators, whose eigenvalues correspond to sets of possible results of measurements, to the wave function ψ and calculate the statistical distributions for measurable quantities.

There is no particle to be detected at any position. There are no probability densities. There are only disturbance densities and frequency gradients. They take care of relativistic considerations. Absolute values of these frequency gradients and disturbance densities in terms of inertia are measurable against the background of emptiness of zero inertia. This gives us a different interpretation of the quantum phenomena than the current one.
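The normalization condition described in the quoted passage (the integral of |ψ|² over all degrees of freedom must equal 1) can be sketched numerically; the Gaussian packet and its plane-wave phase below are illustrative choices:

```python
import cmath
import math

# Discretize a 1-D complex wave function on a grid and scale it so that
# the summed probability density is 1 (the normalization condition).
N = 2001
xs = [-10.0 + 20.0 * i / (N - 1) for i in range(N)]
dx = 20.0 / (N - 1)

# A complex-valued Gaussian wave packet with a plane-wave phase:
psi = [cmath.exp(-x * x / 2.0 + 1.5j * x) for x in xs]

# Normalize: divide by the square root of the integral of |psi|^2.
norm = math.sqrt(sum(abs(p) ** 2 for p in psi) * dx)
psi = [p / norm for p in psi]

total = sum(abs(p) ** 2 for p in psi) * dx
print(f"{total:.6f}")  # 1.000000
```

Multiplying every sample by a global phase factor would leave |ψ|² and hence the total probability unchanged, which is the sense in which only relative phase and magnitude are measurable.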