The Tom Bearden
Website


 

Date: Sat, 12 Apr 2003 18:13:15 -0500
 

Dear J. B.,

 

Glad to see you're struggling with the basic questions; none of them are really "solved" yet, but at best only "modeled".

 

Just one fundamental problem.

 

Mass is an observable, and thus does not persist in time as such.  Using "d" for the partial (can't make that correct "partial with respect to" symbol in this medium), any observable is the frozen 3-snapshot of an ongoing 4-space process, achieved by forcing a d/dt operation to be performed on that 4-process.  Of course this d/dt "observation" process is very rapidly iterated, but not a single observable in the universe persists in time or can persist in time, in that observable form, a priori.   The mechanism for the so-called "march of a mass through time" is the fundamental photon interaction, as we stated in the book and showed by using the neglected delta t component of the photon in its interaction with mass (in both absorption and emission).

 

So nothing actually "travels through" 3-space or "persists in" 3-space, even though we seem to "observe" it that way.  Unless it has an actual extension in the time domain, an entity does not persist, nor can it "travel" between successive points in 3-space.

 

It is the assumption of "persistence of observables" that is one of the fundamental problems of physics.  It leads to all sorts of substitutions of effect for cause; as an example, the notion in mechanics that a separate 3-space force "acts" on a separate (persisting) 3-space mass.  Nothing "acts" in 3-space alone, and 3-mass alone is not "persisting" (not changing its location on the time axis).  It is not even connected to the time axis!  Mass is actually a component of force; no mass component, no force.  At best, one has a massless field or massless potential, etc.

 

Further, no model is perfect now, nor will one ever be perfect --- so long as Gödel's theorem and its proof hold.  All models should be spoken of in terms of their usefulness, never even suggesting an "absoluteness".  That includes my own stuff!  Everything is a model, and not an 11th commandment that Moses brought down off the mountain on those stone tablets.

 

Finally, Aristotelian logic itself is flawed and incomplete; simply look at the Venn diagrams used to "prove" logic theorems, and insist on removing all the boundary lines since on that line both A and not-A are identical.  Or, I particularly like Morris Kline's book, Mathematics: The Loss of Certainty.  Really lets some of the cats out of the mathematics bag.

 

So I prefer to approach things as just "models", with the best model being the one that best fits (predicts) the observed results.

 

And two different models can be used successfully to describe the same "thing", particularly at different levels.  Witness the use of different fundamental units to make a model, including a very successful model built from a single fundamental variable, which is used in physics today.

 

Much of all this sort of stuff, I think, will wash out from some very fundamental new work by Michael Leyton.  In 1872 Klein formed his geometry and also his Erlanger program.  Much of physics since then has been driven by that geometry and program.  Leyton has formed a new object-oriented geometry, with rigorous group theoretic methods, of which Klein geometry is but a subset.  Leyton's work has already been successfully applied in robotics, pattern recognition, and in some other areas, where it works when the Klein geometry methods fail.  In Leyton's geometry, there emerges the hierarchy of symmetries, not as something that one just meets curiously happening in the universe for some unfathomable reason (as particle physics views it right now, per Weinberg and others).  Instead, when there is a broken symmetry at one level, it GENERATES a new higher level symmetry, but one which infolds all the geometric information that preceded it at the lower levels.

 

I have fitted Leyton's effect to my proposed source charge solution, and it generates all the symmetries and broken symmetries involved, in the exact order involved, while nothing else does.  That doesn't prove it, of course, but it gives powerful support by excellent group theoretic methods.  Note that the present classical Maxwell-Heaviside electrodynamics and electrical engineering assume that (1) all EM fields and potentials and their energy come from their associated source charges, and (2) the source charge freely creates all those fields and potentials and all that EM energy out of thin air, from nothing at all.  This "problem of the fields and their source charges" used to be acknowledged as the most formidable problem in electrodynamics, but it was not solved and became embarrassing, so it was simply scrubbed out of the texts and out of the literature.

 

Sen put it this way: "The connection between the field and its source has always been and still is the most difficult problem in classical and quantum electrodynamics."

 

Bunge put it even more strongly: "...it is not usually acknowledged that electrodynamics, both classical and quantal, are in a sad state." 

 

Feynman pointed out: "It is important to realize that in physics today, we have no knowledge of what energy is."  He was also well aware of the force problem, and stated: "One of the most important characteristics of force is that it has a material origin, and this is not just a definition. … If you insist upon a precise definition of force, you will never get it!"

 

If the Leyton effect holds, then he has already written a most profound revolution in physics, electrodynamics, and thermodynamics, and one that will equal the original revolution that arose from Lee and Yang's prediction of broken symmetry in 1956-57, and the experimental proof of it in 1957 by Wu and her colleagues.  So revolutionary was broken symmetry that, with unprecedented speed, Lee and Yang were awarded the Nobel Prize in 1957.

 

For about five months I've been looking into the ramifications of Leyton's work (and of some other things) in thermodynamics, and they are remarkable.  Much of the present formulation would appear to need serious reformulation to remove non sequiturs and errors.

 

Anyway, I think there is much encouragement to be had, since many scientists are still struggling with the nature of things and not just repeating the "status quo".  What I wish they would do more is accent the "it's still just a model" aspect, instead of turning it into dogma by proclaiming some model "absolute".  It isn't, and any good scientist is supposed to know that.  The struggle with scientific dogmatists is still one of the greatest problems in science, and it has been directly responsible for seriously delaying the progress of science in many fields.  It is for that very reason that the military will often go outside the scientific community and form a "skunk works" to get something done, instead of just watching the scientists passionately argue their favorite theories and interpretations.  If the Manhattan Project had been done by the "open" scientists, it would have fared no better than hot fusion.  Or more accurately, it would have been lumped in the "crackpot" category, as was cold fusion.

 

It is indeed odd that in July last year Evans et al. proved experimentally that little zones do occur in fluid electrolytes where "reactions run backwards" and negentropy occurs.  That has always been true for "one or a few" entities in statistical mechanics (used as the basis for much of modern thermodynamics).  But statistical fluctuation was thought to apply only to "just a few" entities and only for just a fragment of a moment at best.  What was shocking was that fluctuation occurs for up to two seconds, at the cubic micron level -- and in water, e.g., a cubic micron contains about 30 billion molecules and ions.  Well, a little group of 30 billion or so ions, where REACTIONS CAN AND DO RUN BACKWARDS, tears the guts right out of the Coulomb barrier in hot fusion, and the presence of that barrier is what necessitates the high temperature required for fusion to occur.  Present hot fusion assumes that one must always overcome that same Coulomb barrier -- and that is now revealed as a false assumption, or certainly not an absolute one.  In a little region where the law of attraction and repulsion of charges is momentarily reversed, two D+ ions can attract each other so closely that each enters the strong force region of the other, forming a quasi-nucleus.  Then (from some recent work), once the quasi-nucleus forms (beating that old Coulomb barrier bugaboo), there is still one more probability to work through: the probability of that quasi-nucleus then tightening just a bit into a fully conventional nucleus, and bingo!  One has a nucleus of He4, known as an alpha particle.  Many other similar fusion reactions exist, once the Coulomb barrier vanishes.
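For scale, the Coulomb barrier in question can be estimated from the electrostatic energy of two deuterons pushed to within strong-force range.  A minimal sketch (the 2 fm separation is an illustrative round figure of my own, not a number from the letter):

```python
# Rough estimate of why conventional fusion demands high temperature:
# the electrostatic (Coulomb) energy of two D+ ions brought to within
# strong-force range, and the temperature whose kT matches that energy.
k = 8.9875517873681764e9     # Coulomb constant, N*m^2/C^2
e = 1.602176634e-19          # elementary charge, C
kB = 1.380649e-23            # Boltzmann constant, J/K

r = 2.0e-15                  # separation, m (~strong-force range, illustrative)
U = k * e * e / r            # barrier height, joules
U_MeV = U / 1.602176634e-13  # same, in MeV
T = U / kB                   # equivalent temperature, K

print(f"barrier ~ {U_MeV:.2f} MeV, kT-equivalent ~ {T:.2e} K")
```

The result is on the order of 0.7 MeV, i.e. billions of kelvins for the kT equivalent, which is the sense in which the barrier "necessitates" high temperature in the conventional picture (tunneling and the thermal tail let real hot fusion proceed somewhat below this).
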

 

The only reason that transmutation does not usually occur chemically at low energies and low temperatures is the Coulomb barrier.  Since that barrier can now occasionally be changed into a "Coulomb attractor", the new work actually provides a solid experimental demonstration of why low temperature fusion is not only possible but does experimentally occur.  With more than 600 successful cold fusion experiments now, worldwide, it is just a matter of time before the iron dogma of "big nuclear science" gets forcibly changed and overhauled -- starting with its assumption that sheer kinetic energy of the particle is necessary, that such a high temperature is required before two like charges can be "forcibly driven together".  Now one thinks the exact opposite: at low temperature, in a momentary reversal region, the two like-charged ions or particles can and will attract together, forming that quasi-nucleus.  It still requires further work on the second probability (not yet too well understood), where the quasi-nucleus passes into the formal nucleus.

 

And thanks for the kind words and concern.  My physical condition will not get any better, but hopefully it will also not get any worse.  So my continued "persistence"  is a matter of whatever chances to happen, from the next hour to possibly the next 10 years.  Anyway, it gives one a different kind of perspective on life and what one should do.  One starts not sweating the small stuff so much, and concentrates mostly on the more important stuff.  For myself, I simply plan to continue along the lines of my present 3 projects, particularly concentrated on two projects:
 

(1)     To finish the energy project.  Working closely with Bedini, we will --- if we live --- get out the information on inverted circuits (how to use a circuit completely backwards from the textbook), and also on taking all the energy one wishes from a zero reference potential.  (The zero reference potential is another sadly misunderstood thing.)  This area turns out to be one that some very powerful folks have spent a great deal of money and effort in suppressing, since shortly before 1900.  They are still doing intensive suppression of it today.  The reason is that, if this area can be properly understood and a decent math model developed, then extracting from the vacuum and using all the electrical energy one wishes becomes almost absurdly simple.  But the "reasoning" is mind-wrenching, quite different from everything one has been taught.  So hopefully we'll just put out a small book with the information in it, containing a couple of working Bedini circuits that those interested can build.  I'll have to wait till John files his patents, of course, and I'll do everything I can to help him on that one.  The actual discovery is John's, not mine.  I'm just struggling to contribute a "reasonable" explanation in terms of physics and thermodynamics.

(2)     Thermodynamics of COP>1.0 and COP = infinity circuits and devices.  Oddly, most persons have a knee-jerk response to the phrase "perpetual motion", not realizing that Newton's first law is indeed the law of perpetual motion.  A thing initially placed in motion will remain perpetually in that state of motion, until or unless interrupted and changed by an external force (Newton's second law, essentially).  If a thing did NOT stay perpetually in its initially induced motion until forcibly changed, there would be no stability at all in the entire universe -- and the organized macro-universe as we experience it could not even exist, since all would be chaotically changing totally haphazardly, without stability. In other words, there would be no "persistence", no inertia, etc.
     So we start from there, and then add the source charge problem and Leyton's hierarchies of symmetry.  We already found a small but significant flaw in the present statement of the First Law of thermo, where it is assumed that change of an external parameter is identically work.  That is not necessarily true at all; e.g., it forbids gauge freedom, which is widely used in all physics and which falsifies that assumption in present thermodynamics.  Actually, one can freely change the potential of a system (if one does it by just adding more of that potential) without doing any work himself.  The potential is SEPARATE from the system, and what is normally calculated is NOT the potential per se, but the potential's point intensity as ascertained by interaction of a unit point static charge.  And since the actual potential is a set of bidirectional EM energy flows, and NOT that point intensity, then the simple equation W = (phi)q shows that, for any given finite potential phi, one can collect any amount of potential energy W that one desires, on charges q, if one has sufficient intercepting and collecting charges q.  So one has to be careful when dealing with potential energy of a system and with simply changing the potential being introduced to potentialize the waiting intercepting charges. 
     Just changing the potentializing potential alone is not work and does not require work, by the gauge freedom axiom.  In short, "the potential" has been confused with "the potential's point intensity", or with what is diverted from it at a point by a physical interception of one certain kind.  E.g., if one simply changes the intercepting charge so that it is in particle resonance, then the apparent "intensity" of the potential changes appreciably, for much more energy is diverted by that same charge in resonance than by that same charge in static condition.  This of course is the well-known and proven "negative resonance absorption of the medium".
     Where changing the potential of a system DOES require one to do work, is when the energy is introduced to the system in different form, so that its form must first be changed to the form of the energy in the external parameter such as the potential.  Work rigorously is defined as the change of form of energy, not the CHANGE of energy magnitude, as thermodynamics has defined it.  So the change of magnitude of the potential energy involves us "paying" something for it only when we input the energy in different form.  And so on.
     Anyhow, what comes out of all the work is, among other things, the missing negentropic interaction(s), which by being neglected in thermodynamics has led to the "asymmetry" of thermo, considered to be its greatest problem.  Actually Leyton's work takes care of that one, if one interprets correctly the fundamental generation of higher level symmetry by broken symmetry at a lower level.
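The W = (phi)q scaling invoked above can be sketched numerically.  This is only an illustration of the textbook relation between potential, charge, and potential energy; the voltage and charge values are made up for the example:

```python
# W = phi * q: potential energy collected from a potential of point
# intensity phi (volts) by a total intercepting charge q (coulombs).
# Holding phi fixed and increasing q increases the collected W linearly,
# which is the scaling described in the letter.  Values are illustrative.

phi = 100.0                      # point intensity of the potential, V
for n in range(1, 4):
    q = n * 1.0e-6               # total intercepting charge, C
    W = phi * q                  # collected potential energy, J
    print(f"q = {q:.1e} C  ->  W = {W:.1e} J")
```

Note the sketch says nothing about where the energy comes from; it only shows that, for fixed phi, the collected W in this relation is limited by the available intercepting charge q, which is the point being argued.
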

 

So we just plan to keep on working in those two main areas, and hopefully will get out (eventually) sufficiently definitive initial information on them that the young fellows can take over those two projects from there.  The third project I'm continuing to work on is the business of how the cellular regenerative system actually heals a damaged cell, and how to amplify that mechanism electromagnetically (requires higher group symmetry EM).  This is an extension of Priore's proven work in France.  Priore discovered how to do it, and his team's work was done by rigorous protocols and is fully documented in the French literature.  Eminent French scientists worked with him on the project, and in fact later (very privately) the French Government secretly weaponized part of the background basis.  The Priore work was suppressed in the early 1970s, because of its revolutionary cures of some dread diseases (such as terminal cancers in lab animals) under rigorous scientific protocols.   It was just that no one could explain the perplexing fundamental mechanism.  Now I think we can, and also I think we have been able to extend it.  Here the human need around the Earth is so great, that one simply must do whatever one can in this area, in the time one has left, and get it out so the young fellows can start from there.

 

So yes, we will continue so long as we can, as so long as there's any life left in the old carcass.  But we are trying to use the time remaining to set up a passage of the information, for whatever it is worth, to those who come after and can hopefully see these things through to the finish.  Then they can just start from where I am, correct any errors I may have inadvertently made (all my pencils still need their erasers), and go much further.

 

Best wishes,

 

Tom Bearden


 
 
Tom:
 
I bought your book "Energy from the Vacuum" in response to your "Anniversary Special" and wish, first, to express my sympathy for your physical sufferings.  I am closing in on 80 but, while afflicted with an assortment of ailments, have no intention of giving up until I'm stone cold dead.  You do the same.  The following is for your consideration as a different slant on things, one which may fuel some additional creative thought, or, at least, be entertaining reading during your convalescence.
 
Secondly, in the very first pages of the book (Chapter I) I was immediately irritated by your reference to Curved Space, the Expanding Universe, the notion of Time as being (in some way) a component of Energy, and the use of C in the E = mC^2 formula.
 
The E = mC^2 formula, as written, raises the question: Why should the velocity of an em wave through the vacuum have anything whatsoever to do with the exchange of m into E?  However, if we substitute the value of C, in terms of u and e (the permeability and permittivity of the vacuum, i.e., the characteristics of that vacuum) we get E = m/(ue), a formulation that has more substance.  Especially in that the "vacuum" is the very "seething vacuum", or aether, or (better still), the diffuse energy of space upon which we are drawing to give us our COP > 1.0 machines.  So, what is this energy, and where does it come from?
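Jim's substitution rests on the standard identity C = 1/sqrt(ue), so that C^2 = 1/(ue) and E = mC^2 = m/(ue).  A quick numeric check with the SI vacuum constants:

```python
import math

# Vacuum permeability u and permittivity e (SI values)
mu0 = 4 * math.pi * 1e-7         # H/m
eps0 = 8.8541878128e-12          # F/m

# The vacuum's characteristics fix the wave speed: C = 1/sqrt(u*e)
c = 1 / math.sqrt(mu0 * eps0)
print(f"C = {c:.4e} m/s")        # ~2.9979e8 m/s

# Hence E = m*C^2 and E = m/(u*e) are the same number, term for term
m = 1.0e-3                       # 1 gram of mass, in kg
E1 = m * c**2
E2 = m / (mu0 * eps0)
print(f"E1 = {E1:.4e} J, E2 = {E2:.4e} J")
```

So the rewriting changes no physics; it only re-expresses C^2 in terms of the vacuum's own parameters, which is the "more substance" Jim is after.
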
 
I started my studies some 40 years ago when I heard, once too often, a reference to an Expanding Universe.  No way!  And so I was faced with finding a logical (and common sense) way of getting a Cosmological Redshift in a Universe that is better behaved, one with galaxies that do their thing and float around in infinite numbers throughout the Infinity of Existence, and one wherein galaxies are still here, in all their splendor, after an eternity of burning up, i.e., of undergoing the E = m/(ue) transformation.
 
I begin with the obvious fact that galaxies do exist and the observed fact that all are converting their mass into energy at a prodigious but constant rate.  The energy so generated has mass, i.e., it is an existent.  Having mass it remains bound to its parent galaxy and, therefore, becomes an invisible "dark mass" enveloping that galaxy, an energy envelope which is increasing in size and density at a constant rate.  It becomes immediately apparent that any em wave passing through this energy of increasing density (and of increasing e) will travel at an ever slower speed and that, therefore, the number of wavefronts occupying a fixed increment of that space will constantly increase.  Hence, F_o = F_s - d(N)/sec. = F_s - d(F), and we have a redshift in frequency. (F_o and F_s are observed and source frequencies, respectively.)  Light from a distant source will pass through a succession of such spaces of constantly increasing density and therefore the effect will be compounded, becoming F_o = F_s (1-R)^t (R being the average rate of change in densities, and t being the number of one-second intervals of space traversed from Source to Observer).  (This can be re-written as F_o = F_s x e^-Rt, with e, in this instance, being 2.7..., the base of the natural log system, and the formulation being the Growth/Decay Formula.)  The bottom line is that we now have a Cosmological Redshift, a redshift which is a function of the distance traveled by the light.
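The compounding step above, F_o = F_s (1-R)^t passing into F_o = F_s e^(-Rt), is the standard small-R limit of compound decay and can be checked numerically.  (The letter's R of about 2.7 x 10^-18 per second is too small for double precision to resolve in the (1-R)^t form, so the sketch uses a scaled-up R; the small-R limit behaves identically.)

```python
import math

# Check that the compounded form F_s*(1-R)**t matches the decay form
# F_s*exp(-R*t) when R is small.  R and t here are scaled illustrative
# values, not the letter's cosmological ones.
F_s = 1.0e14                  # source frequency, Hz (illustrative)
R = 1.0e-9                    # fractional change per interval
t = 1.0e7                     # number of intervals traversed

compounded = F_s * (1.0 - R) ** t
exponential = F_s * math.exp(-R * t)
print(compounded, exponential)        # agree to ~10 significant digits

# Redshift from the frequency ratio: z = F_s/F_o - 1 ~ R*t for small R*t
z = F_s / exponential - 1.0
print(z)                              # ~0.01005, i.e. slightly above R*t
```
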
 
How can we have a continual increase in energy density without running into a problem?  What we are seeing is the average change in density for all galaxies.  Each galaxy starts out in life with lots of material mass and not much energy mass.  Initially, the burning is fierce, with the rate of consumption of material mass and the rate of production of energy mass both being high.  As it ages, both rates fall off until, 100 billion years later, the energy mass has become massive and the material mass relatively small.
The "constant" rate of these changes consequently is not constant but, instead, falls off to near zero (again, relatively).  Observe that the total mass of the galaxy, material and energy, does not appreciably change.  It is only the vibrations in the energy mass, the em waves, that travel on to infinity.
 
So, where do the new galaxies originate?  Halton Arp has convincingly shown that Quasars are not at the great distance implied in the Expanding Universe scenario, but, instead, are highly redshifted objects ejected periodically, and in pairs, from active galaxies, such as the Seyferts.  His studies indicate that as these new objects move away from their parent galaxy they begin to show signs of a "fuzziness" and eventually expand into regular galaxies with normal redshifts, not too different from that of their parent.  (The high redshift of the quasars is caused by the highly compressed energy within the atoms slowing the atomic activity and lowering its radiated frequencies.  As the compression decreases, after expulsion, the radiated frequencies rise.)
 
It seems that as average galaxies reach maturity (and even before that time), they gravitationally collide and merge with other galaxies so that, eventually, enough mass (material and energy) accrues for one to become an "active" galaxy, at which time it starts the birthing procedure.  The time between births is used in a continuing accrual of mass.  An infinite number of such Cosmic Cycles keeps our Universe a vibrantly alive place in which to live.
 
What are the characteristics of the Energy?  In addition to exhibiting u and e, it has mass and so is gravitationally responsive.  And so, as the Dark Mass, it forms an envelope around the galaxy with a density gradient that varies as the inverse square of the distance from the galactic core, being extremely dense at that core.  It fills all of space, from the interstices of the atoms of the galactic core to inter-galactic space.  And the existence of this density gradient implies a self-repulsiveness which prevents ultimate collapse, just as our earthly atmosphere, being charged, repulses gravity.  So, does the energy have a charge as well?
 
Our Sun ejects both energy and material masses, as well as radiant energy, as all proper stars in our galaxy do.  The material mass is in the form of the Solar Wind, an ionized gas which is positively charged, implying that the Sun itself has a high positive charge which is manifest in its Corona and which expels the positive gases.  Why is it thus charged?  Could it be that in its E = m/(ue) activity the negative charge is, in some way, consumed and flows away with the energy mass?  And, if so, could the energy mass be negative, perhaps be intrinsically negative, but not as a "charge", per se, wherein an actual flow of charge could occur? 
 
Returning to "Arp's Objects" and the active galaxies, it should be obvious that the material/energy mass conglomeration at the galactic center is under tremendous pressure and has reached a critical mass wherein some cataclysmic re-conversion of energy mass into material mass takes place.  The fact that equal sized pairs of objects are ejected in opposite directions suggests that electro-magnetic propulsion is the driving force and that ejection is from the galactic North and South Poles, which, in turn, suggests that the newly formed object has a magnetic pole opposite that of the parent galaxy.  Being centered in that galaxy the object is torn in half with the two halves going in opposite directions.
 
Coming down to Earth, we should expect the Energy to have the following characteristics:
 
        1)  A permeability and permittivity which, though seemingly constant, should be increasing at such a rate as to give us a decrease in C of 2.7 x 10^-18 part/part/sec. (a rate derived from the "speed of recession = 70 km/sec/mega-parsec" formulation of the Expanding Universe tribe).
 
        2)  Mass, and a responsiveness to gravitation.
 
        3)  An intrinsic negativity, and hence a self-repulsiveness.
 
        4)  Compressible, but fully elastic so as to respond appropriately to gravity and its self-repulsiveness.
 
        5)  Fully fills all of space from inter-galactic to intra-atomic, having a total mass some 10 to 100 times greater than that of the material mass of the Universe.
 
        6)  It supports em waves as vibrations which, near Earth, travel at 3 x 10^8 m/s.
 
To these we add:
 
        7)  Responds to electrical charges by being repelled by electrons and attracted by positrons, forming energy density "lows" and "highs", respectively.  (The Biefeld/Brown Effect is a good experimental demonstration of this Effect, where, when the plates of an air dielectric capacitor are charged to a high voltage, the capacitor as a whole moves in the direction that its positive plate is facing.)
 
        8)  Responds to an electro-magnet, or permanent magnet, by flowing into the South Pole of the magnet and through the magnet to exit at the North Pole, and thence back around to the South Pole.  This flow forms a density "low" at the South end of the magnet and a density "high" at its North end, and can be seen as a pumping action.
 
        These last two characteristics are, of course, the electro-static and electro-magnetic fields of the two components.
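The rate quoted in item 1 can be checked by direct unit conversion of the Expanding-Universe figure; a minimal sketch (the direct conversion with 70 km/sec/megaparsec lands near 2.3 x 10^-18 per second, so the 2.7 x 10^-18 figure presumably reflects a somewhat different Hubble value or rounding):

```python
# Convert the recession figure H0 = 70 km/sec per megaparsec into a
# fractional change per second (part/part/sec), the form used in item 1.
# 1 megaparsec = 3.0857e19 km.
H0_km_s_per_Mpc = 70.0
Mpc_in_km = 3.0857e19

rate_per_sec = H0_km_s_per_Mpc / Mpc_in_km
print(f"{rate_per_sec:.2e} part/part/sec")   # ~2.3e-18
```
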
 
These thoughts give a totally different picture of the energy with which we have to work, not the "vibrating grid" of H. Aspden, nor the "seething vacuum" of T. Bearden, nor those of others.  The Earth's gravity draws the energy to itself and, together with the gravity of the Sun and of the Galaxy, places us at a point in space where we are subject to an energy pressing in upon us from all directions.  All electric or magnetic fields (# 7, above) extend into this energy continuum a great distance, and any change that we make in those fields affects, and is affected by, the totality of that vast field.
 
We should recognize that this energy ocean is responsive in accord with its characteristics, one of which (# 6) indicates that the effects of such changes are transmitted at a nominal speed into the aether, and a second of which (# 4) suggests that the waves are compressive in nature, displaying a transverse behavior due to the energy's negativeness.  If those changes are reversed in sign periodically, then the energy will alternately be pushed and then pulled, and, in that it has mass, it should be expected to have both inertia and momentum and to be able to do work as with any machine.
 
One can imagine that a condition of resonance in the energy ocean could be induced and, given the perfect elasticity of the energy (# 4), that this resonance could be sustained by periodic nudgings and raised to such a level that we could extract a useful amount without damping the resonance, thus achieving a COP>1.  However, the field of your permanent magnet in the MEG is routed through the core and, presumably, does not escape into space.  Or is the resonance entirely within the core?
 
Your thoughts?
 
Jim