
Monday 31 August 2015

About the LYM

The LaRouche Youth Movement was started around the year 2000, when the economist and statesman Lyndon LaRouche was running a presidential campaign in the United States. The participation of a group of bright young students in campaigning for LaRouche's candidacy led to the formation of the most creative political force in the world today.
The activities of the LYM in the United States created an echo worldwide: several similar movements sprang up throughout the world, adopting the same curriculum and the same emphasis on universal principles as the only true and just authority in the Universe.
It is within this Dynamic of Change that the Committee for the Republic of Canada extended itself to support the development of the LaRouche Youth Movement in Canada.
The Movement studies Science, Art and Physical-Economics to better intervene politically in our Nation, with the intention and assumption that we can change the current dynamic presently destroying civilization as we speak. That is the job of every patriot: to take responsibility for all of mankind. With this in mind, it becomes absolutely necessary to eliminate the biggest threat to the survival of our civilization, namely the British Monetary System of Finance, based on tyrannical control over money. A creative system of National Credit, rooted in the historical tradition that led to the adoption by the United States of America of its unique Constitution, is needed to free mankind from the principle of Empire.

Our Mission

The LYM in Canada has a mission which is subsumed by the necessity for all of mankind to free itself from the tyranny of Empires.  As this economic system is now dead, we have a unique opportunity to win people over to LaRouche’s Solution.
Only LaRouche's solution aims high enough to give humanity a fighting chance at a decent future. We cannot allow ourselves to compromise, when the opportunity to eliminate the British Empire is now, in a very real way, within reach as it has never been before.
You want to live in a dark age? We don't think you do, but that's exactly what's in store for all of us if we shy away from the swiftness and boldness that are absolutely necessary today, now, to intervene and cut the fake debts out of the system, using the "Glass-Steagall" criterion which distinguishes a legitimate debt from a piece of worthless gambling paper. If we have the guts to do just that, then a new start, a new credit system as defined by LaRouche in sundry locations, will enable the whole world to rebuild and ameliorate the physical conditions of life for everyone.
We are going to Mars! That's the idea that can unite Mankind. With this idea in mind, ask yourself: what is needed to have humans living there within the next 75 years? That idea can and must serve to unite all of mankind to recognize our common aims, now and into the future.

Join Us

Don’t let us have all the fun!
You are encouraged to see yourself as a unique sovereign personality with the potential to intervene and help us change the world.  We are recruiting the population to fight for the future of Mankind.
So what can you do?
Well, you can start by reading or watching some of the material available on the website, so that you may understand the quality of the fight we are waging against the British Empire.
Start organizing your friends and family, come to a class, get educated.
Get your subscription today so that you may have a regular stream of some of the best intelligence available.
Join us, or at least, support us by making a contribution.

Mind Over Mathematics: How Gauss Determined the Date of His Birth

By Bruce Director
This afternoon, I will introduce you to the mind of Carl Friedrich Gauss, the great 19th-century German mathematical physicist, who, by all rights, would be revered by all Americans, if they knew him. Of course, in the short time allotted, we can only glimpse a corner of Gauss's great and productive mind, but even a small glimpse into a creative genius, by working through a discovery of principle, gives you the opportunity to gain an insight into your own creative potential.
I caution you in advance, some concentration will be required over the next few minutes, in order to capture the germ of Gauss’s creative genius. So stick with me, and you will be greatly rewarded.
I have chosen to look into a subject, which most of you, and even your children, think you know something about: arithmetic. Plato, in the Seventh Book of the Republic, says that all political leaders must study this science, because arithmetic is the science whose true use is “simply to draw the soul towards being.” Arithmetic, Plato says, is never rightly used, and mostly studied by amateurs, like merchants or retail traders for the purpose of buying and selling. Instead, political leaders, must study arithmetic, “until they see the nature of numbers with the mind only, for their military use, and the use of the soul itself; and because this will be the easiest way for the soul to pass from becoming to truth and being.”
By now, you may already get the hint, that what Plato and Gauss meant by arithmetic is not the buying-and-selling arithmetic you and your children were taught in school. Gauss called his the "higher arithmetic," and in 1801, he published the definitive study of higher arithmetic, called by its Latin name, Disquisitiones Arithmeticae. As you will see, if your teachers had been interested in training true citizens of a republic, higher arithmetic is what you would have learned; not the amateurish calculations needed by fast-food cashiers, stock-brokers, and derivatives-traders, or the calculating methods of the statisticians and actuaries who determine which lives are cost-effective for HMO's and insurance companies.
Unfortunately, the Enlightenment dominates the thinking of most people today, comprising an Empire of the Mind, where people instinctively stick to simple addition, even though a higher ordering principle is discoverable. They labor under the illusion, that adding the numbers one-by-one, is their only choice, simply because the creative powers of their minds are unknown to them. It is the underlying assumptions, of which people are not even aware, which determine their view of the world. Only by becoming conscious of these underlying assumptions, and then changing them, can any scientific discovery be made.
Now let’s catch a further glimpse of Gauss’s genius, (and a little of our own) by turning to another, more profound application of higher arithmetic.
Humble Beginnings
Carl Friedrich Gauss came from a very humble family. His father, Gebhard Dietrich Gauss, was a bricklayer; his mother, Dorothea Benz, the daughter of a stonemason. She had no formal schooling, could not write, and could scarcely read. They were married April 25, 1776, a few months before the signing of the American Declaration of Independence. Sometime in the Spring of the next year, Dorothea gave birth to Carl Friedrich.
Being barely literate, Gauss’s mother could not remember the date of her first son’s birth. All that she could remember, was that it was a Wednesday, eight days before Ascension Day, which occurs 40 days after Easter Sunday. This was not necessarily an unusual circumstance in those days, as most parents were preoccupied with keeping their infant children alive. Once the struggle for life was secured, the actual date of birth might have gone unrecorded.
Twenty-two years later, the mother’s lapse of memory, provoked the son to employ the principles of higher arithmetic, to measure astronomical phenomena, with a discovery grounded in the principle that the cognitive powers of the human mind are congruent with the ordering principles of the physical universe.
In 1799, Gauss determined the exact date of his birth to be April 30, 1777, by developing a method for calculating the date of Easter Sunday, for any year, past, present or future.
A lesser man, wanting to know such a bit of personal information, would have relied on an established authority, by looking it up in an old calendar, or some other table of astronomical events.
Not Gauss! He saw, in the riddle of his own birth-date, an opportunity, to bring into his mind, as a unified idea, the relationship of his own life, to the universe as a whole.
The date that Easter is celebrated, which changes from year to year, is related to three distinct astronomical events. Easter Sunday, falls on the first Sunday, following the first full moon (the Paschal Moon), following the first day of Spring, (the vernal equinox). Because of Easter’s spiritual significance, and its relationship to these astronomical phenomena, finding a general method for the precise calculation of the date of Easter, had long been a matter of scientific inquiry.
Let's look at what's involved in the problem of determining this date. You have the three astronomical events to account for. First is the year, the interval from one vernal equinox to the next, which reflects the orbit of the earth around the sun. Second, the phases of the moon, from new moon to full moon to new again, which reflect the orbit of the moon around the earth; and third is the calendar day, which reflects the rotation of the earth on its axis.
Of course, you can only "see" these astronomical events in your mind. No one has ever seen, with their eyes, the orbit of the earth around the sun, or the orbit of the moon around the earth. Not until modern space travel had anyone ever seen the rotation of the earth on its axis. We see with our eyes the changes in the phases of the moon, the changes in the position of the sun in the sky, and the change from day to night and back to day. We see "with our minds only" the cause of that change. This type of knowledge is outside the Enlightenment's straitjacket.
Each of these astronomical phenomena is an independent cycle of one rotation. The problem for calculation, is that, when compared with each other, these rotations do not form a perfect congruence (Fig. 1).
There are 365.2422 days in one year; 29.530 days in one cycle of the moon's phases, from new moon to new moon, called a synodic month; and 12.369 synodic months in one year. These figures are also only approximate, as the actual relations change from year to year, depending on other astronomical phenomena.
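As a quick check on these figures (a back-of-the-envelope sketch, nothing more), a few lines of Python show that the number of synodic months in a year is just the ratio of the two cycle lengths, and that neither cycle closes evenly against the other:

```python
# Cycle lengths as quoted in the text (approximate values).
days_per_year = 365.2422    # one tropical year
days_per_month = 29.530     # one synodic month, new moon to new moon

print(days_per_year / days_per_month)  # about 12.37 synodic months per year
print(days_per_year % days_per_month)  # roughly 10.9 days left over: the cycles never close evenly
```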
Here we confront something which was known to Plato, and specifically identified by Nicholas of Cusa in his On Learned Ignorance: There exists no perfect equality in the created world. Perfect equality exists only in God. But since man is created in the image of God, through his creative reason man can rise above this limitation, and see the world with his mind, ever less imperfectly, as God sees it.
As Leibniz says in the Monadology, man can reflect on God with his reason only; and "we recognize, what is limited in us, is limitless in Him."
So if there is no equality in the created world, we need a different concept. Our mathematics must be concerned with some other relationship than equality, if we are to successfully measure the created world.
A New Type of Mathematics
Gauss did this by inventing an entirely new type of mathematics. A mathematics, which reflected the creative process of his own mind. If the mathematics accurately reflects the workings of the mind, it will accurately reflect the workings of the created world, as any Christian Platonist would know.
This is real mathematics, not the Enlightenment’s dead mathematics of Leonhard Euler and today’s illiterate computer nerds, like Bill Gates, who think a computer is the same as the human mind. Their mathematics is no more than a system of rules to be obeyed. The Enlightenment imposes a false separation between the spiritual and physical realms. If the physical world doesn’t conform to the mathematics, the Enlightenment decrees, there is something wrong with the physical world, not the mathematics! And, if the creative mind rebels against the dead mathematics of Euler and Gates? The Enlightenment demands that the mind {submit} to the tyranny of mathematics.
Gauss's higher arithmetic begins with a concept different from simple equality: the concept of "congruence." Here again, you see how you and your children have been lied to by your teachers. Most of you have been taught that congruence is the same as equality, when applied to geometrical figures, such as equal triangles. Not true.
Gauss's concept of congruence follows the concept of congruence developed by Johannes Kepler, in the second book of the Harmonies of the World. The word congruence, Kepler says, means to Latin speakers what harmonia means to Greek speakers. In fact, the words harmonia and arithmetic both come from the same Greek root. Instead of equality, congruence means harmonic relations.
Here are some examples of what Kepler means by congruence (Fig. 2). As you can imagine, in the plane, I can increase the size and number of sides of each polygon without bound. But, when I try to fit polygons together with one another, I bump into a boundary. Triangles, squares, and hexagons are perfectly congruent. Pentagons, for example, are not (Fig. 3). In some cases, when I mix polygons together, such as octagons and squares, I can make a mixed congruence.
However, when I go from two to three dimensions, and try to form a solid angle, the boundary conditions for congruence change. For example, pentagons, which aren’t congruent in a plane, are congruent in a solid angle (Fig. 4). Hexagons, which are congruent in a plane, are not congruent in a solid angle.
So you see, the type of congruences which can be formed from polygons, is dependent on the domain, in which the action is taking place.
Gauss carried this concept of congruence over into arithmetic, using whole numbers alone. Two whole numbers are said to be congruent, relative to a third whole number, if the difference between them is divisible by that third number. The third number is called the modulus (Fig. 5). Gauss designated congruence by the symbol ≡, to distinguish it from equality (=).
You may recognize a similarity between the concept of congruence and the idea of musical intervals. In higher arithmetic, it is the interval between two numbers, and the relationship between those intervals, which concern us. Just as in music, it is the intervals, and the relationship between the intervals, which communicate the musical ideas, not the notes themselves.
Another property of congruent numbers, is that they leave the same remainder when divided by the modulus (Fig. 6). These remainders are called least positive residues. For example, 16 and 11 are both congruent to 1 modulo 5. In higher arithmetic, numbers are related, not by their equality, but by their similarity of difference, with respect to a given modulus.
There are other important relationships among congruent numbers. For example, if two numbers are congruent relative to a given modulus, they will also be congruent relative to any modulus which divides that modulus. For example, since 1,997 is congruent to 1,941 modulo 28, they are also congruent relative to modulus 4 and modulus 7, as 4 × 7 = 28 (Fig. 7).
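If you like, you can watch these relationships in action with a few lines of Python (a small sketch for illustration only; the examples are the ones given in the text):

```python
# Gauss's congruence relation: a and b are congruent modulo m when their
# difference is divisible by m -- equivalently, when they leave the same
# remainder (least positive residue) on division by m.

def congruent(a, b, modulus):
    return (a - b) % modulus == 0

# 16 and 11 are both congruent to 1 modulo 5: same least positive residue.
print(16 % 5, 11 % 5)              # 1 1
print(congruent(16, 11, 5))        # True

# 1997 is congruent to 1941 modulo 28, hence also modulo 4 and modulo 7,
# since 4 and 7 both divide 28.
for m in (28, 4, 7):
    print(m, congruent(1997, 1941, m))   # True in each case
```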
Here we are ordering the numbers, not according to their “natural” given order, but according to a mental concept of congruence. In this way, we make the numbers work for our mind, not enslave our minds, to the order of the numbers.
Calculating the date of Easter
For purposes of our present problem, calculating the date of Easter Sunday for any year, you can think of the astronomical cycle as the modulus. The day, the year, and the synodic month, are all different moduli. The scientific question to solve, is, how can these three moduli, be made congruent?
If this weren’t hard enough, we still have another problem: the imperfection of human knowledge. This reflects itself in the problem of the calendar.
In 45 B.C., Julius Caesar decreed the use of a calendar throughout the Roman Empire that approximated the length of the year as 365 and 1/4 days. The 1/4 day was accounted for by adding one day to the year every fourth year, the familiar "leap year." In the language of Gauss's higher arithmetic, the years are in a cycle of congruences relative to modulus 4. Those years which leave no remainder when divided by 4 are leap years; those that leave a remainder of 1 are 1 year after a leap year, and so forth (Fig. 8).
However, as we have seen, the length of the year is not exactly 365 1/4 days. It's a little bit shorter. This difference is not very significant in the span of one human life, but is significant over centuries, and millennia. In fact, the Julian calendar is off by one day every 128 years. Such a difference may not concern you, if your mind is narrowly focused on your own physical existence. It will concern you, if you're thinking of your own life with respect to posterity.
By 1582, the Julian calendar was off by ten days. The vernal equinox, the first day of Spring, was occurring on March 10th or 11th instead of March 21. Easter, therefore was also occurring earlier in the year. Both the material and spiritual world, had gotten out of whack.
So, in 1582, Pope Gregory XIII put a new calendar into effect; ten days were dropped out of that year. In addition, the leap year now skips three out of every four century years, and only every fourth century year is a leap year; for example, the year 2000 will be a leap year, but 1900, 1800, and 1700 were not.
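The two calendar rules are easy to state as a pair of little functions; here is a minimal sketch (the function names are my own, chosen for illustration):

```python
# The Julian rule keeps every fourth year as a leap year. The Gregorian
# reform drops the century years, except that every fourth century year
# (those divisible by 400) is kept.

def julian_leap(year):
    return year % 4 == 0

def gregorian_leap(year):
    if year % 100 == 0:              # century years are the exception
        return year % 400 == 0       # only every fourth century year is kept
    return year % 4 == 0

for y in (1700, 1800, 1900, 2000, 1996, 1997):
    print(y, julian_leap(y), gregorian_leap(y))
```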
Thus, in order to calculate Easter Sunday, and thus determine his own birthday, Gauss had to make congruent, three astronomical phenomena, and two imperfect states of human knowledge!
He accomplished this by reference to two other cycles, or moduli. Because the synodic month and the calendar year, are unequal, the phases of the moon occur on different calendar days, from year to year. But every 19 years, the cycle repeats. So, for example, if the Paschal Moon occurs on say, March 23, in one year, it will occur on March 23, 19 years later. If the Paschal Moon occurs on April 11, the next year, it will occur on April 11, again in 19 years.
If we call the first year in this cycle “year 0,” the next year, “year 1,” the last year will be “year 18.” In this way, the calendar years in which the phases of the moon coincide, will be congruent to each other relative to modulus 19. So, if you divide the year by 19, those years with the same remainder, will have the same dates for the phases of the moon.
The calendar days on which the days of the week occur, also change from year to year. Today is Sunday, February 16. Next year February 16, will be on a Monday. Since there are seven days of the week, this cycle would repeat every seven years, but because every four years is a leap year, this cycle repeats itself, only every four x seven, or 28 years.
However, in the Gregorian calendar, this cycle is thrown off, by the century years. This cycle is called the solar cycle.
Gauss’s Algorithm
Prior to Gauss’s discovery, a complicated series of tables, was compiled from these cycles, by which one could determine the date of a specific astronomical occurrence. Gauss’s genius was to find a simple algorithm, by means of higher arithmetic, which didn’t require any tables, but simply the number of the year. I will illustrate it for you by example (Fig. 9)
Take the number of the year, divide by 19, and call the remainder a. For 1997, a = 2. In the language of higher arithmetic, 1997 is congruent to two, modulo 19. This tells you where, in the 19-year cycle of the phases of the moon and the calendar day, the year 1997 falls.
Divide the year by four. Call the remainder b. For 1997, b = 1. 1997 is congruent to one, modulo four. This tells you the relationship with the leap-year cycle.
Divide the year by seven. Call the remainder c. For 1997, c = 2. 1997 is congruent to two, modulo seven. This tells you the relationship between the calendar day and the day of the week.
The next step is a little more complicated (Fig. 10): Divide (19a + M) by 30; call the remainder d. For 1997, d = 2. This gives you the number of days, after the vernal equinox, on which the Paschal Moon will appear. M changes from century to century, and is calculated from the cycle of dates on which the Paschal Moon occurs in that century. For the 18th and 19th centuries, M = 23. For the 20th century, M = 24.
Finally, divide (2b + 4c + 6d + N) by seven and call the remainder e. For 1997, e = 6. This gives you the number of days from the Paschal Moon to the next Sunday. This formula takes into account the relationship of the year to the solar cycle. N also changes from century to century, and is based on the cycle of the days of the week on which the Paschal Moon occurs in that century, Sunday being 0, Monday being 1, and so on up to Saturday being 6. For the twentieth century, N = 5.
Gauss calculated the values of M and N into the 25th century, and derived a general method for calculating these values for any century. Unlike some people today, Gauss, was not planning on the “end times.”
Therefore, Easter Sunday is March 22 + d + e. For 1997 that is March 22 + 2 + 6 or March 30, 1997 (Fig. 11).
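For readers who want to try the procedure themselves, here is a minimal Python sketch of the steps just described. It uses only the century constants quoted above (M = 24, N = 5 for the twentieth century; M = 23 for the eighteenth and nineteenth); the value N = 3 used below for Gauss's own year, 1777, comes from Gauss's century tables rather than from the text, and Gauss's complete method also handles a couple of rare exceptional dates not discussed here.

```python
# A simplified sketch of the computation described in the text:
#   a = year mod 19, b = year mod 4, c = year mod 7,
#   d = (19a + M) mod 30, e = (2b + 4c + 6d + N) mod 7,
#   Easter Sunday = March 22 + d + e.

def easter(year, M, N):
    a = year % 19                        # place in the 19-year lunar cycle
    b = year % 4                         # place in the leap-year cycle
    c = year % 7                         # place in the weekday cycle
    d = (19 * a + M) % 30                # days from the equinox to the Paschal Moon
    e = (2 * b + 4 * c + 6 * d + N) % 7  # days from the Paschal Moon to Sunday
    day = 22 + d + e                     # counted from March 22
    if day <= 31:
        return "March %d, %d" % (day, year)
    return "April %d, %d" % (day - 31, year)

print(easter(1997, M=24, N=5))   # March 30, 1997, as computed above
print(easter(1777, M=23, N=3))   # March 30, 1777; eight days before Ascension is April 30
```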
Gauss’s method, obviously has applications, far beyond the determination of his birthday, or the date of Easter Sunday, for any year. In his later work, Gauss brought even more complex astronomical observations into congruence, by use of these same powers of the mind. But, this little example gives you a sense of how a universal creative mind can take any problem, and see in it an opportunity to extend human knowledge beyond all previous bounds.
Of course, we too can learn a lesson from this. The next time a child asks you a question about how the world works, something like, “why does the moon change from day to day?” or, “why does the sun change its place during the day and over the course of the year?,” don’t tell that child to look up the answer in a book, or log onto the Internet. Help that child to discover how, as Plato says, to see the nature of numbers with the mind only.
Then, take that child, with this newly acquired discovery, outside and show him the night sky. Then, that child will be able to see, in that night sky, the image of the workings of his or her own mind, and to see also, reflected back, in that image, an imperfect, yet faithful, image of the Creator, Himself.

Plato's Meno Dialogue

Can You Solve This Paradox?
By Sylvia Brewda
Plato's dialogue the "Meno" has been one of the documents most cited by the anti-Aristotelian faction throughout history, and is thus a most appropriate benchmark for the offensive against Eulerian, linear thinking in ourselves. In a crucial section, a young, uneducated slave-boy is guided by Socrates, from wrong opinion, to awareness of his own ignorance, to the knowledge of how to construct a square with exactly twice the area of a given square.
First, Socrates shows the slave-boy that his immediate, naive presumption, that a square with area equal to 8 can be constructed from one of area equal to 4 by simply doubling the lengths of the sides, from 2 to 4, is wrong. Then, the boy is shown that his guess of sides equal to 3 will not work either.
Socrates comments to his chief interlocutor in the dialogue, “Observe, Meno, the stage he has reached on the path” of coming to know.
“At the beginning he did not know the side of the square of eight feet. Nor indeed does he know it now, but then he thought he knew it, and answered boldly, as was appropriate–he felt no perplexity. Now, however, he does feel perplexed. Not only does he not know the answer, he doesn’t even think he knows…. Isn’t he in a better position now in relation to what he didn’t know?
"… Now notice what, starting from this state of perplexity, he will discover by seeking the truth in company with me…."
Starting at that point, with nothing more than stick drawings in the sand, how can you show that this boy could knowingly succeed in constructing the required square? What are the Analysis Situs implications of this?
Figure 1. Here we see the original square of area equal to 4, and the method of producing a square double in area (square BUWD).
Figure 2. The construction undertaken here points to a simple geometric proof of the Pythagorean Theorem (which says that the square on the hypotenuse of any right triangle is equal to the sum of the squares on the other two sides, or legs).
Figure 3. Fig. 3(a) shows that in a right triangle in which both legs are equal to a, that is, an isosceles right triangle, the area of the right triangle equals one-half of the area of the square whose side is a.
Figure 3(b) shows that, in a right triangle with legs of different lengths (a and b), the area of the right triangle equals one-half the area of the rectangle whose sides are a and b.
How Socrates Resolves the Paradox
CAN YOU SOLVE THIS PARADOX?
by Sylvia Brewda
The first part of this column, which appears above, was published in the last issue of New Federalist (No. 14, April 14, p. 12); here is the second, concluding part, in which Socrates resolves the paradox.
To lead the boy to discover how to construct a square with an area twice that of the original one, Socrates draws three additional squares, each equal to the original one. He has labelled the corners of the original square ABCD, and draws the new squares: BCUT; CDXW; and, finally, "to fill up the corner here," CUYW (see Fig. 1). Thus, he has again drawn a square four times larger than the original one.
Then he draws one diagonal in each of the four small squares: BD, DW, WU, and UB. These diagonals form a new square inside the larger one, rotated forty-five degrees with respect to it. It is clear to the boy that these diagonals divide each of the small squares in half. Therefore, he can see that the area inside the new square made from the diagonals is half that of the large square, since it contains half of each of its component squares.
And thus, the new square, with the diagonal of the first as its side, is {known} to have twice the area of the first.
Clearly, this discovery has been made without the use of any asserted authority. Instead, it is the innate characteristic of the human mind, its own {Analysis Situs,} that allowed it to know the truth of what has been drawn out of it by the Socratic method employed.
This is the power of this brief section of the "Meno" dialogue; this is the reason why constructive geometry is of such importance in true education. Johannes Kepler, the great Renaissance scientist, wrote about this passage in his great work, "Harmonice Mundi" ("Harmonies of the World"):
"And, indeed, this was Plato's judgment concerning mathematical things: that the human mind is, from itself, fully informed about all species or figures, axioms, and conclusions about these things; truly, when the mind seems to be instructed, this [process] is nothing other than to be reminded by diagrams, which can be grasped by the senses, of those things which the mind must know through itself. This he represents with singular art in the dialogues, introducing a boy who, being questioned by a teacher, answers everything that is asked."
Another construction can be generated which solves the problem, and also points to a simple geometric proof of the famous Pythagorean Theorem, namely, that the square on the hypotenuse of any triangle containing a right angle is equal to the sum of the squares on the other two sides (the hypotenuse is the side of a right triangle opposite the right angle).
In this construction, a square of side a is first cut by diagonals, forming four equal triangles on the four sides. Then, an equal square is constructed on each of the sides of the first square, and each of these new squares is also divided by its diagonals into four equal parts [see Fig. 2(a)]. Next, erase from the sand the three triangular parts of each of these new squares whose bases do not correspond with those of the original square [see Fig. 2(b)]. Now, a new square is seen, one which is made up of the original square plus the four remaining triangles outside that original square, each of which equals one-quarter of the original. That is:
4 × (a²/4) + a² = 2a²
What is the length of the side of this new square? Each side is made up of two parts, each equal to half the diagonal of the original square. Let us call this part b. The sides of the outer square are equal to the diagonal of the original one, that is, 2b.
Consider the four right triangles which are made by the diagonals meeting inside the original square, a². Each has legs equal to b and hypotenuse equal to a [one is shown as the shaded area in Fig. 2(c)]. The construction has already generated the squares on the legs of each such triangle [a pair are shown marked by hatching in Fig. 2(c)]. For each pair of such squares, the triangular sections left outside the original square correspond exactly to the area within it left uncovered by those sections of the same small squares which are inside it. Thus, the area of the original square, whose side is the diagonal of the small square, is twice that of each small square, or the sum of the squares on the legs of the small right triangle.
However, this construction also leads us further. We can see from Fig. 3 that the area of any right triangle is equal to one-half the product of the two legs. Here, in our construction, this means that the areas of the small triangles are each equal to (b × b)/2, and the area of the original square is equal to four of these. That is:
a² = 4 × (b²/2) = 2b²
From this, if the symbol √(a²) indicates the side of a square of area a², clearly a, the side of the original square, can be denoted as:
a = √(2 × b²) = b × √2
In future articles, we will discover that this relationship means that the two magnitudes, a and b, are of two completely different types, such that neither can be measured by the other.
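As a quick numeric companion to this construction (a sketch only, with the side length chosen arbitrarily), you can check the relation a² = 2b², and get a first hint of the claim that a and b cannot measure one another: no pair of counting numbers p, q satisfies p² = 2q².

```python
import math

a = 1.0                        # side of the original square (arbitrary choice)
b = a * math.sqrt(2) / 2       # half of that square's diagonal, as in the construction
print(a**2, 2 * b**2)          # both 1.0: the square on the diagonal is double the original

# If a and b had a common measure, some whole numbers p, q would give p^2 = 2*q^2.
hits = [(p, q) for p in range(1, 200) for q in range(1, 200) if p * p == 2 * q * q]
print(hits)                    # [] -- no such pair in this range
```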
The last construction, plus the material presented up to this point, provides the basis for a geometric proof of the Pythagorean Theorem. The starting point is consideration of the effect of changing the lengths of the legs of the small triangles, both on the angles of such triangles, and on the geometry of their fitting together.
How {do} you know that the sum of the squares on the sides of a right triangle is equal to the square on the hypotenuse?
CAN YOU SOLVE THIS PARADOX?
by Sylvia Brewda
As we said in last week's column, a construction can be generated which solves the problem of doubling the square, and also points to a simple geometric proof of the famous Pythagorean Theorem, which says that the square on the hypotenuse of any triangle containing a right angle is equal to the sum of the squares on the other two sides. (The hypotenuse of a right triangle is the side opposite the right angle.)
Here, the square of side a is first cut by diagonals, forming four equal triangles on the four sides. Then, an equal square is constructed on each of the sides of the first square, and each of these new squares is also divided by its diagonals into four equal parts [see Fig. 1(a)]. Next, erase from the sand the three triangular parts of each of these new squares whose bases do not correspond with those of the original square [see Fig. 1(b)]. Now a new square is seen, which is made up of the original one plus the four remaining triangles outside it, each of which equals one-quarter of the original. That is:
4 × (a²/4) + a² = 2a²
What is the length of the side of this new square? Each side is made up of two parts, each equal to half the diagonal of the original square. Let us call this part b. The sides of the outer square are equal to the diagonal of the original one, that is, 2b.
Consider the four right triangles which are made by the diagonals meeting inside the original square, a². Each has legs equal to b and hypotenuse equal to a [one is shown as the shaded area in Fig. 1(c)]. The construction has already generated the squares on the legs of each such triangle [a pair are shown hatched in Fig. 1(c)].
This construction provides the basis for a geometric proof of the Pythagorean Theorem. The starting point is consideration of the effects of changing the lengths of the legs of the small triangles, both on the angles of such triangles, and on the geometry of their fitting together inside the square.
– * * * * –
There are many proofs of the famous Pythagorean Theorem, but the following requires no further information than the evident fact that any right triangle can be considered as half a rectangle, constructed with sides equal to the legs of the triangle and cut along one diagonal. From this construction, it is clear that the two angles in each such triangle which are not right angles must fit together to form one right angle, since they can be generated by cutting the right angles in the corners of the rectangle.
To start, consider the figure of the square we just constructed [see Fig. 1(b)]. Clearly, this is a square whose side is the hypotenuse of the right triangle created by the intersection of its diagonals, and the square contains four of these triangles. What changes if these four right triangles with equal legs are replaced by, again, four copies of the particular right triangle we are investigating, but now a triangle with legs of different lengths, a and b, and hypotenuse of length c? The square is still the square on the hypotenuse of this particular triangle, but now the copies have to be placed so that each corner of the square is filled by the meeting of the two different acute angles, to equal a right angle. That means that the triangles have to be placed with the short leg of one coinciding with part of the longer leg of the next [see Fig. 2(a)].
Each of these four right triangles is also half of a rectangle, and when the rectangles are drawn in, a new square is created outside the first, with each side equal to the sum of the sides of the rectangle, or the legs of the triangle, a + b [see Fig. 2(b)]. From this we can see that c², the square on the hypotenuse, is equal to the square on the sum of the legs, (a + b)², less the four triangular sections of the rectangles which are outside the square on c, each of which is equal to the triangle we started with. These four triangles are each half of a rectangle with sides a and b, and thus add up to two such rectangles. This can be denoted as:
c² = (a + b)² − 2 × (a × b)
Now, we must investigate the size of the square on (a + b), that is, with a side made by adding the two unequal legs of the triangle. It can be divided to include one square with side a and, in the diagonally opposite corner, another with side b (see Fig. 3). Since the side of the large square is (a + b), the area that remains is that of two rectangles, each with sides a and b. Therefore, the square on (a + b) is equal to the sum of the squares on a and on b, plus these two rectangles:
(a + b)² = a² + b² + 2 × (a × b)
Thus, these two rectangles are equal to both the difference between the square on (a + b) and the square on c, and the difference between that same square on (a + b) and the sum of the two squares, on a and on b:
(a + b)² = c² + 2 × (a × b)
(a + b)² = a² + b² + 2 × (a × b)
We now know that the square on the hypotenuse, c, of any right triangle is equal to the sum of the squares on the two legs, a and b:
c² = a² + b²
Further, the hypotenuse, c, will be equal to the side of the square which is the sum of the squares on the two legs; or, if we use the symbol √(c²) for the side of the square with area c², we can write:
c = √(a² + b²)
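For readers who like to see the chain of equalities with numbers in it, here is a small Python sketch; the leg lengths are arbitrary illustrative values, not anything from the text.

```python
import math

a, b = 2.0, 5.0                      # legs of a right triangle (illustrative values)
c = math.hypot(a, b)                 # hypotenuse, measured directly from coordinates

print((a + b)**2 - 2 * a * b)        # the square on (a+b) less two rectangles ...
print(c**2)                          # ... equals the square on the hypotenuse
print(a**2 + b**2 + 2 * a * b)       # the square on (a+b), built from the two squares and two rectangles
print(c**2, a**2 + b**2)             # hence c^2 = a^2 + b^2
print(c, math.sqrt(a**2 + b**2))     # and c = sqrt(a^2 + b^2)
```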
Simple? Yes, but necessary. Only when this basic theorem of the relation between two independent directions of action has been established on a solid basis, can we develop the experimental aspect of scientific progress, the process of measuring the effects of an additional dimension. Now, secure in the knowledge of this simple relationship, we can begin to measure the universe.

Measurement and Divisibility

By Bruce Director
In 1818, Carl Friedrich Gauss accepted the assignment to conduct a geodesic survey of a large part of the Kingdom of Hannover, or, in other words, to measure a section of the surface of the Earth. The project involved many difficulties, and it requires, first, that one reflect on the general concept of measurement.
Gauss's friend and collaborator, the astronomer Bessel, thought a man with Gauss's mathematical ability should not be involved in such a practical project, to which Gauss replied:
"All the measurements in the world are not worth one theorem by which the science of eternal truths is genuinely advanced. However, you are not to judge on the absolute, but rather on the relative value. Such a value is without doubt possessed by the measurements by which my triangle system is to be connected with that of Krayenhoff, and thereby with the French and the English. However low you estimate this work, in my eyes it is higher than those occupations which are interrupted by it. … you will agree with me, that, when one does without all real help in numerous petty affairs, the feeling of losing one's time can only be removed when one is conscious of pursuing a great, important purpose…
"What do I have for such work, on which I myself could place a higher value, except fleeting hours of leisure?…"
How can you measure the surface of the Earth? Don’t even think about using a yardstick. First think what it means to measure. You cannot measure one thing by another, unless you first can determine, if the two things are commensurable. If you worked through the last several weeks’ pedagogical discussions, you know it is not always self-evident, whether two magnitudes are commensurable with each other.
To get a sense of this, look at a similar problem, investigated by Euclid, Archimedes, Cusa and Kepler, about which much commentary has already been written: Measuring the circumference of the circle.
One can measure a circle by another circle, or a part of a circle, but not by a line, or any other curve. A whole circle can measure another whole circle, only with respect to size, i.e., one circle is either greater or less than the circle by which it is measured. But, to measure along the circumference of the circle, the circle must be divided. The circumference can then be measured by the divided parts.
The first and most obvious division is by half. This creates two semi-circles and a straight-line diameter. Archimedes thought that, by dividing the diameter into small parts, one could measure the circumference of the circle; but Cusa proved [and if you worked through last week's pedagogical, you would have proved to yourself] that the diameter and the circumference are incommensurable. One cannot measure the other. So in order to measure the circle, we must divide the circumference itself into smaller parts.
Well, if we continue folding the circle in half and in half again, we will divide the circumference into smaller and smaller parts. The number of parts, will be powers of 2. (That is, 2, 4, 8, 16, ….) But other types of divisions must be discovered, if we want to measure a part of the circumference which is not a power of two.
If we unfold the circle, after folding it into quarters, we will have constructed, two diameters, which meet at the center of the circle. Now fold the circle, so a point on the circumference touches the center. This will form a new line, shorter than the diameter, which intersects the circumference in two points. Once this fold is made, it is easy to find two other folds which will also meet at the center, forming two more lines, which will make a triangle. (It is easier for you to discover this by experiment, than for me to describe it without the use of diagrams.) This divides the circle in three parts.
By a more complicated process, the circumference of the circle can be divided into five parts, the description of which, would require a digression here, but will be discussed in future briefings.
It was long assumed, and Kepler proved, that it was impossible to divide the circle into seven parts. Until Gauss, it was believed that this was the ultimate boundary of the divisibility of the circumference of the circle. Gauss discovered the divisibility of the circle into 17 parts, and other divisions also. But for purposes of today's discussion, what is important is that the process of division has a boundary. Not all divisions are possible, and since division is necessary for measurement, to measure requires one to discover, and if possible, overcome these boundaries.
To conduct his geodesic survey, Gauss had to determine how to divide the surface of the Earth, which presented many problems similar to our example above, albeit more complex. For example, instead of measuring a curve, Gauss had to measure an area. This area was on a curved surface, which in first approximation is a sphere, but is actually closer to an ellipsoid. How are these surfaces divided? How are these divisions, once discovered, measured on the surface of the earth itself? These and other problems will be discussed in future pedagogicals.
But, while contemplating the above, it is not unhelpful to reflect on the following statement of Gauss, excerpted from his "Astronomical Inaugural Lecture," in which Gauss argues against the idea of separating so-called practical from so-called theoretical science:
"To judge in this way demonstrates not only how poor we are, but also how small, narrow, and indolent our minds are; it shows a disposition always to calculate the payoff before the work, a cold heart and a lack of feeling for everything that is great and honors man. One can unfortunately not deny that such a mode of thinking is not uncommon in our age, and I am convinced that this is closely connected with the catastrophes which have befallen many countries in recent times; do not mistake me, I do not talk of the general lack of concern for science, but of the source from which all this has come, of the tendency to everywhere look out for one's advantage and to relate everything to one's physical well-being, of the indifference towards great ideas, of the aversion to any effort which derives from pure enthusiasm: I believe that such attitudes, if they prevail, can be decisive in catastrophes of the kind we have experienced."
Measurement and Divisibility Part II
Last week, we investigated the measurement of the circumference of the circle. What was required was to divide the circumference into commensurable parts. It was demonstrated that division by 2, and powers of 2, was possible by repeated folding, and that division by 3 was possible by folding in a different way. Division by 5 was stated as possible, and left to the reader to accomplish, and division by 7 was stated to be impossible, the reader being referred to Kepler's proof (Harmony of the World, Book 1). To the eye, the circumference of the circle appears smooth, and everywhere the same, yet when one tries to divide the circle, one discovers boundaries, with each new type of division. Thus, the numbers 2, 3, 5, and 7 each signify a type of divisibility with respect to the circumference of the circle.
The word type here is used in the sense of Cantor and LaRouche. Each type of division is separated from the other by a discontinuity. One cannot divide the circle into 3 parts from the method of division by 2 or powers of 2. One can combine division by 2 and 3 to divide the circle into 6 parts, but a new type of division is required for 5 parts.
Let’s experiment with other types of divisions, with respect to other types of curves and surfaces.
Once the circle is divided, polygons can be formed by connecting the points on the circumference, with each other, and triangles can be formed, by connecting the vertices of the polygon, to the center of the circle. It is easily demonstrated, that these triangles are all equal. Thus, the relationship of all parts of the circumference to the center are the same.
Now look at an ellipse. The ellipse differs from the circle in that all parts of the circumference of the ellipse have a relationship to two points (called foci), not one, as in the case of the circle. Specifically, the distance from one focus to a point on the circumference of the ellipse, plus the distance from that point to the other focus, is always the same. In the case where these two foci come together, and become one, the ellipse becomes a circle.
Look further at the ellipse. One can fold the ellipse in half in only two ways (which for convenience we can call horizontal and vertical), whereas the circle can be folded in half in an infinite number of ways. When the ellipse is folded in half, one of the lines generated will be longer than the other; the intersection of these two lines (called axes) will be called the center of the ellipse. Two circles can be drawn, using this center, related to this ellipse. One will have the shorter line as its diameter, and the other will have the longer line as its diameter. The former will be smaller than the ellipse, the latter will be larger.
Now divide the larger circle into any possible number of parts, and form the triangles associated with the polygon which is formed by the division. The sides of the triangles, which correspond to radii of the circle, will intersect the circumference of the ellipse, dividing the circumference of the ellipse. Now connect the points of intersection with the circumference of the ellipse to one another, forming triangles in the ellipse. It is easily seen that, unlike in the circle, these triangles are not equal; consequently, the divisions of the circumference of the ellipse, formed by these divisions of the circle, are not equal. Hence, the ellipse cannot be divided, or measured, in the same way as the circle. A new discontinuity has been reached.
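You can verify this numerically with a short sketch (the semi-axes and the number of divisions below are arbitrary choices): divide the circumscribing circle into equal angles, cut the ellipse with those same radii, and the arcs of the ellipse between the cuts come out unequal.

```python
import math

A, B = 2.0, 1.0     # semi-major and semi-minor axes (illustrative values)
n = 8               # number of equal angular divisions of the larger circle

def ellipse_point(theta):
    # point where the ray at angle theta from the center meets the ellipse
    r = (A * B) / math.sqrt((B * math.cos(theta))**2 + (A * math.sin(theta))**2)
    return (r * math.cos(theta), r * math.sin(theta))

def arc_length(t0, t1, steps=2000):
    # approximate the ellipse arc between two rays by summing many small chords
    total, prev = 0.0, ellipse_point(t0)
    for i in range(1, steps + 1):
        cur = ellipse_point(t0 + (t1 - t0) * i / steps)
        total += math.dist(prev, cur)
        prev = cur
    return total

arcs = [arc_length(2 * math.pi * k / n, 2 * math.pi * (k + 1) / n) for k in range(n)]
print([round(s, 4) for s in arcs])   # unequal arcs, unlike the equal arcs of the circle
```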
This new discontinuity arises from the difference in the characteristic curvature, between the circle and the ellipse. The curvature of the circle is constant, while the curvature of the ellipse is always changing.
This problem, of measuring the circumference of the ellipse, a crucial problem for physics and astronomy, was investigated by Kepler, and further developed by Gauss, by applying his hypothesis of the complex domain. These issues will be investigated in future pedagogical discussions. But for now, take one more step. Now think of a sphere. By what method, can one divide the sphere in half, and what will this tell us about the underlying hypothesis concerning the divisions of the circle and the ellipse?
More next week.
MEASUREMENT AND DIVISIBILITY PART III
Last week’s discussion ended with the question: By what method can we divide a sphere in half? Let’s compare this problem, with the problem of dividing the circle in half. This was accomplished by folding the circle on itself, and, we discovered certain boundary conditions, with respect to that process. How can we apply this method to the problem of dividing a sphere?
First think about what we did when we folded the circle. We weren't simply dividing the circle. We were applying a rotation to the circle, in a direction different from the rotation which generated the circle itself. That is, a circle of 2 dimensions is rotated in 2 + 1 dimensions. Division in n dimensions was effected by a transformation in n + 1 dimensions.
Now apply this to the sphere. Obviously the sphere cannot be folded, but it can be spun. Or, in other words, if we consider the sphere as a surface of 2 dimensions, we must take action in 2 + 1 dimensions in order to divide it. So, if we pick a point on the surface of the sphere, and spin the sphere around that point, every point on the sphere, except that point and the one exactly opposite it, will move. These two points can be connected by the equivalent of the diameter of the circle, which on the sphere is a great circle, and that great circle divides the sphere in half.
Now apply this principle, of measuring n dimensions with respect to n+1 dimensions, to the initial discussion three weeks ago about Gauss's efforts at measuring the surface of the Earth. How do we locate our initial position? With respect to north and south, we can measure the angle at which we observe the North Star. The higher overhead the North Star is, the farther north our position on the Earth. To measure our position on the surface of the Earth, we must look up, to the stars. This measurement is, therefore, in n+1 dimensions, with respect to the n dimensions of the surface of the Earth. Now for our position with respect to east and west, we must refer to the rotation of the earth on its axis, which carries the sun and stars from east to west across the sky. We measure this, with respect not only to a change in position with respect to heavenly bodies, but with respect to a change in time. Another dimension, (n+1)+1.
Once this position is determined, we now measure other locations in a similar manner, and then measure the distance between those locations, using triangles. In order to meaningfully measure the surface of the Earth, these triangles must be large. Too large to measure with rulers, yardsticks, or chains. If we start with two relatively close points on the earth, and precisely mark off the distance between them, we can then measure the distance between these two points and a third point, by measuring the angles that form the triangle between these three points. This is done by placing an object at each point that can be seen, using a telescope, from the other points, and measuring the angle through which the telescope has to be turned to see each point.
Gauss invented a device, called the heliotrope, that used a small mirror to reflect sunlight, that could be seen, by a telescope, from many miles away. If three such devices are positioned at three different points on the Earth’s surface, a very large triangle can be formed, that can be measured precisely. In this way, the surface of the Earth, can be covered with a network of triangles, and measured.
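Here is a minimal sketch of that triangulation step in the flat-plane idealization, ignoring the curvature, refraction, and gravity corrections discussed next; the baseline length and the two sighted angles are made-up example values, and the law of sines supplies the distances that were never paced off.

```python
import math

baseline = 5.0                      # km, the one distance marked off directly (example value)
angle_at_P1 = math.radians(62.0)    # angle sighted at one end of the baseline (example value)
angle_at_P2 = math.radians(48.0)    # angle sighted at the other end (example value)

# The third angle follows because the angles of a plane triangle sum to 180 degrees,
# and the law of sines then gives the two unmeasured distances.
angle_at_P3 = math.pi - angle_at_P1 - angle_at_P2
dist_P2_to_P3 = baseline * math.sin(angle_at_P1) / math.sin(angle_at_P3)
dist_P1_to_P3 = baseline * math.sin(angle_at_P2) / math.sin(angle_at_P3)
print(round(dist_P2_to_P3, 3), round(dist_P1_to_P3, 3))
```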
But, when we look through these telescopes, to see each point, the light is refracted (bent), by the atmosphere, and the lens of the telescope. This makes what we see, different from the actual position of the point on the Earth. So this physical property, refraction of light, must be taken into account in our measurement–another dimension, [(n+1)+1]+1.
But since our measuring points are at different elevations, we use a level, which adjusts its position with respect to gravity. So we must measure variations of the gravitational field of the Earth, yet another dimension, {[(n+1)+1]+1}+1.
Likewise, when using a compass, which reflects changes in the magnetic field of the Earth, we must measure variations in the magnetic field of the Earth, yet another dimension, ({[(n+1)+1]+1}+1)+1. And so on, with each new physical principle discovered.
The inclusion of each new dimension is not a simple addition, but a transformation in the hypotheses underlying our conception of physical space-time. Just as the idea of dividing a circle, contained within it, an underlying assumption of a higher dimension, which wasn’t apparent, until thought of in terms of dividing the sphere, each new dimension, corresponding to a physical principle, uncovers previously “unseen” assumptions, with respect to the hypothesis of lower dimensions.
But, these assumptions, expressed in the form of anomalies and paradoxes, won’t be “seen,” unless you look for them, not in n dimensions, but in n+1 dimensions. You can’t measure where you are, except with respect to the horizon, which cannot be “seen”, except with respect to the higher dimensionality, which you are seeking to discover, but which you will not find, unless you have the passion to “look” for it.

The Importance of Good Maps

by Bruce Director
As the pedagogical series on spherical geometry has indicated, a profound discovery arises when you attempt to map spherical action onto a flat plane. Any such effort immediately presents to the mind the existence of two distinct types of action. Basic investigations of the physical universe, astronomy and geodesy, immediately confront us with the need to discover the conceptions that underlie this discontinuity.
Already we have presented several examples of this, which you can work through quickly in your mind before proceeding. Think of the various examples that demonstrated the spherical nature of the manifold of measurement of space. Think of the conception of the Platonic Solids from the standpoint of Kepler's re-discovery of the Pythagorean concept of congruence (harmonia). Think how we demonstrated that these solids arise as the characteristic perfect congruences on a surface of constant positive curvature, as distinct from the perfect congruences that arise on a surface of zero curvature. And also, think of the pentagramma mirificum, and the emergence of two distinct periodicities that arise from carrying out the same action on surfaces of two different curvatures. (All the above examples were elaborated in pedagogical discussions published over the first three months of 1999.)
Now let’s delve into this area once again. First, from the standpoint of mapping the stars, as represented on a surface of constant positive curvature, onto a surface of zero curvature, a most ancient investigation.
In our observation of the heavens, the stars are projected onto a spherical surface, as a function of our measuring their changing positions as changes in angle: the angle between the line of sight and the horizon, and the angle between the line of sight and some arbitrary reference direction along the horizon, such as north, or even "straight ahead." In this way, the changes in position of the stars, and their relationship to each other, are represented as arcs of circles and the angles between such arcs.
However, as we've seen before, when we try to project this spherical projection of the stars onto a flat surface, discontinuities arise. Furthermore, the nature of these discontinuities changes depending on how we effect that projection. In other words, not all projections from a sphere onto a plane are the same.
You can carry out a simple demonstration of this, by drawing a series of great circle arcs, intersecting at different angles, on a clear plastic hemisphere. (For purposes of this description, call the circular edge of the hemisphere the equator, and the pole of this equator the north pole.) Hold a flashlight or candle at the position equivalent to the south pole of the sphere so that the great circle arcs cast shadows onto a marker board. Trace the shadows. Now, move the flashlight toward the center of the sphere, stopping at various intervals, and tracing the shadows of the arcs at each interval. Make one of those intervals the center of the sphere. Trace the shadows.
You will notice a change in the curvature of the shadows, as the point of projection changes from the south pole to the center of the sphere. At the south pole of the sphere, the shadows are arcs of circles. As the flashlight moves toward the center, the shadows straighten out, until at the center, the shadows are straight lines.
Now make a more precise demonstration. Draw on the hemisphere an equilateral spherical triangle, such as the face of the octahedron, which has three 90 degree angles. Perform the above projections. When the flashlight is at the south pole, trace the shadows. Now move the flashlight to the center of the sphere, and trace the shadows.
The tracings of the shadows from the south pole projection are circular arcs. Measure the angle between the lines tangent to each arc at each vertex. Now measure the angles between the sides of the straight-line shadows projected from the center.
These are two specific projections, the first called the stereographic, the second called the central projection, that transform the great circle arcs on the sphere to the plane. As you can see, each transformation is different. In the central projection, the spherical equilateral triangle with three 90 degree angles is transformed into a flat equilateral triangle with three 60 degree angles. In the stereographic projection, the spherical triangle is transformed into three circular arcs that intersect each other at 90 degrees. So the angular relationships at the vertices of the triangle are invariant under the stereographic projection.
With a little bit of thought, you should be able to figure out why that is the case. Think of the point of projection as the apex of a cone of light. The projection on the flat surface is formed by the intersection of a line that starts at the point of projection, continues through a point on the sphere, and then intersects the marker board. If the point of projection is at the center of the sphere, then the lines connecting the point of projection to points on a great circle will all be in the same plane. Consequently, the projections of these great circle arcs will be straight lines. In this way, the center of the sphere can be thought of as the unique singularity from which great circles can be projected into straight lines!
Not so if the point of projection is other than the center of the sphere. However, if the point of projection is the south pole, the angles between the projected arcs are the same as the angles between the spherical arcs. This property has come to be called “conformal”.
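For readers who would like to check this with numbers as well as with a flashlight, here is a minimal Python sketch (not part of the original construction). It assumes a unit sphere, a stereographic projection from the south pole onto the equatorial plane, and a central (gnomonic) projection from the center onto the plane tangent at the north pole; it then compares the angle between two great-circle directions before and after each projection.

```python
import numpy as np

def stereographic(p):
    # Stereographic projection: from the south pole (0, 0, -1) onto the plane z = 0.
    x, y, z = p
    return np.array([x / (1 + z), y / (1 + z)])

def central(p):
    # Central (gnomonic) projection: from the center of the sphere onto the plane z = 1.
    x, y, z = p
    return np.array([x / z, y / z])

def image_tangent(project, point, direction, h=1e-6):
    # Tangent of the projected great circle gamma(s) = cos(s)*point + sin(s)*direction
    # at s = 0, estimated by a central difference.
    plus  = project(np.cos(h) * point + np.sin(h) * direction)
    minus = project(np.cos(-h) * point + np.sin(-h) * direction)
    return (plus - minus) / (2 * h)

def angle_deg(u, v):
    return np.degrees(np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))))

# A point P in the northern hemisphere and two perpendicular tangent directions at P.
P  = np.array([0.3, 0.2, np.sqrt(1 - 0.3**2 - 0.2**2)])
t1 = np.array([P[2], 0.0, -P[0]]); t1 /= np.linalg.norm(t1)
t2 = np.cross(P, t1)

print("angle on the sphere:      ", angle_deg(t1, t2))   # 90 degrees by construction
print("angle after stereographic:", angle_deg(image_tangent(stereographic, P, t1),
                                               image_tangent(stereographic, P, t2)))
print("angle after central:      ", angle_deg(image_tangent(central, P, t1),
                                               image_tangent(central, P, t2)))
```

Run as written, the stereographic angle agrees with the spherical angle to within rounding, while the central projection reports a visibly different angle.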
Because of this angle-preserving characteristic, this projection is particularly useful for mapping stars. The written discovery of the stereographic projection is attributed to the Greek astronomer Hipparchus, but its actual origins are most likely much older. Under this projection, the entirety of the celestial sphere can be mapped onto a flat surface.
To do this, think of a sphere with a plane, representing the horizon, going through the center of the sphere. (You can represent a cross section of this on a flat piece of paper as a circle with two perpendicular diameters. Call the endpoints of one of the diameters the north and south pole. Let the other diameter represent the horizon.) Now, draw a line that connects every point of the “northern” hemisphere with the south pole. Those lines will intersect the horizon plane, and those intersections will form a stereographic projection. The north pole will project onto the center of the sphere. All the points of the northern hemisphere will project onto the inside of the circle formed by the intersection of the sphere with the plane, and all the points of the southern hemisphere will project to points outside that circle. Where will the south pole project to? What other discontinuities or distortions emerge under this transformation?
YOU HAVE TO CARRY OUT THIS CONSTRUCTION IF THE ABOVE DESCRIPTION IS TO MAKE ANY SENSE TO YOU.
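As a cross-check on the construction just described, the following small Python sketch (an illustration only, with the sphere taken as a unit sphere and the horizon as the plane z = 0) sends a few sample points through the projection and reports how far from the center each lands.

```python
import numpy as np

def stereographic(p):
    # Project from the south pole (0, 0, -1) onto the horizon plane z = 0.
    x, y, z = p
    return np.array([x / (1 + z), y / (1 + z)])

samples = {
    "north pole":            np.array([0.0, 0.0, 1.0]),
    "northern hemisphere":   np.array([0.6, 0.0, 0.8]),
    "point on the horizon":  np.array([1.0, 0.0, 0.0]),
    "southern hemisphere":   np.array([0.6, 0.0, -0.8]),
    "near the south pole":   np.array([0.01, 0.0, -np.sqrt(1 - 0.01**2)]),
}

for name, p in samples.items():
    image = stereographic(p)
    print(f"{name:22s} -> distance from center = {np.linalg.norm(image):8.3f}")
```

The north pole lands at the center, points of the northern hemisphere land inside the unit circle, points of the southern hemisphere land outside it, and the images run off without bound as the sample point approaches the south pole; the south pole itself has no finite image, which is the one unavoidable discontinuity of this projection.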
Over the last two millennia, the stereographic projection has been used to map the celestial sphere onto a plane and is the basis of the construction of the astrolabe, one of the earliest astronomical measuring instruments. (Rick Sanders has produced an interesting unpublished paper on the astrolabe available to those who are interested from RSS.)
The stereographic projection, therefore, represents a unique way of projecting one surface onto another, such that a certain characteristic is invariant under the transformation. But, this projection is specific to the mapping of a sphere onto a plane. Can we find, for example in the case of a geodetic survey, where we are mapping the geoid onto an ellipsoid, onto a sphere, onto a flat plane, a way to perform such a series of transformations, in which a certain characteristic remains invariant under repeated arbitrary projections?
This formed the subject of Gauss’ famous 1822 paper, for which he won the Copenhagen prize. The paper was titled, “General Solution of the Problem to so Represent the Parts of One Given Surface upon another Given Surface that the Representation shall be Similar, in its Smallest Parts, to the Surface Represented.” In this investigation, Gauss delved even further into the nature of non-linear curvature in the infinitesimally small.
The Importance of Good Maps-Part II
Last week we undertook a preliminary investigation into the projection of a sphere onto a plane. Now the fun starts.
If you carried out the constructions, you would have re-discovered, in a formal sense, certain principles whose ancient discovery was crucial for the development of human civilization. That discovery can be thought of in two aspects: 1) that the elementary form of action in the physical universe is curved, and 2) that curved action is of a different “transcendental cardinality” than linear action. The nature of that difference is revealed in the investigation, not simply of each type of action, but by investigating transformations between each type, i.e., the “in betweenness.” In that sense, the study of these projections has a significance both for the development of the higher cognitive powers of the mind, and for the capacity of those powers to bring the physical universe increasingly under their dominion.
In general, there is no transformation of a sphere onto a plane that does not result in distortions and discontinuities, and it is by those distortions and discontinuities that the difference in “transcendental cardinalities” becomes apparent. But there are a myriad of such transformations, each of which produces different characteristic distortions and discontinuities. (Last week, we investigated, preliminarily, two such transformations, the gnomonic and the stereographic projection, but there are many others.) In order to more fully grasp the nature of the difference in “transcendental cardinalities” between the sphere and the plane, we cannot focus simply on specific types of transformations. We must investigate the general nature of transformations, and not just between two specific types of surfaces, such as a sphere and a plane, but between any series of arbitrarily curved surfaces. That is, we must jump from investigating a particular projection, to the investigation of the general principle of projection itself. That puts us in the domain of the hypergeometric. This is the domain unique to the contributions of Gauss and the subsequent discoveries of Riemann.
Today’s pedagogical discussion seeks to start down the road to the re-discovery of Gauss’ and Riemann’s contributions. There is nothing contained below that is beyond the scope of most of the readers, but, be prepared to concentrate on the train of thought. You will find in it an illustration, typical of Gauss, of taking a previously discovered principle of classical Greek science, and approaching it from a new higher standpoint, which establishes that classical principle, as a special case of a more general concept. It is congruent with Beethoven’s re-thinking of the significance of the Lydian interval, in his late quartets, to establish a new conceptualization of the domain of J.S. Bach’s well-tempered system of bel canto polyphony.
From last week’s discussion, you should have already demonstrated to yourself some of the characteristics of the gnomonic and stereographic projections of the sphere onto the plane. Specifically, the gnomonic (projection from the center of the sphere) transforms great circle arcs on the sphere into straight lines on the plane. Obviously, since the sum of the angles of all plane triangles is 180 degrees, and the sum of the angles of triangles on the sphere is always greater than 180 degrees, angular relationships are changed under the gnomonic projection. On the other hand, last week’s constructions provided the basis to demonstrate, at least initially, that under the stereographic projection, i.e., where the point of projection is a pole of the sphere instead of the center, the angular relationships are unchanged when projected from the sphere onto the plane. This characteristic is obviously crucial for geodesy and astronomy, as the relationships between stars projected onto the celestial sphere, and between positions on the surface of the Earth, are measured only as angular relationships. If a representation of these spherical relationships on a flat surface is to be of any use, the angular relationships must be invariant under the projection.
When thinking of possible projections from the sphere onto a plane, the gnomonic projection seems to suggest itself most easily. For example, in the case of the celestial sphere, the point of projection is the observer, who projects the stars of the celestial sphere along the lines of sight from the observer, through the stars, to a plane. This projection was apparently discovered by Thales, but it is quite possible that it was known much earlier. However, because it distorts angles, it has obvious failings for a useful map of the stars or the Earth.
The stereographic projection is much less obvious. Here, the point of projection, a pole, is nowhere in the manifold of the observer. But, when the projection plane is the plane of the observer (as in last week’s example), the point of the observer is the only point that is unchanged under projection! This, and the property that angular relationships are not changed under the projection, make the stereographic projection suitable for astronomical uses, such as a star chart, or astrolabe.
The experiment in last week’s discussion, for pedagogical purposes, indicated by demonstration, but did not prove, that angular relationships are invariant under the stereographic projection, a characteristic called “conformal.” One can, as Hipparchus did, prove by principles of Euclidean geometry, that this is the case.
(Such a proof is not very complicated. It relies on properties of similar triangles. But, to describe it in this cumbersome format would be, for the moment, distracting. So, we leave it to the reader to carry out.)
Gauss’ standpoint was to go beyond the principles of Euclidean geometry, by inverting the question. Instead of starting with the stereographic projection and asking, “Is it conformal?”, Gauss asked, “What is the nature of being conformal, and under what projections does it exist?” The former sets out to discover the existence of a general principle in a specific case. The latter question seeks the nature of the general principle, under which the special cases are ordered.
Gauss’ approach is best grasped pedagogically by a demonstration. Take the clear plastic hemisphere you used last week, preferably with the 270 degree equilateral spherical triangle still drawn on it. Cut four circles of different sizes out of cardboard. For my experiment, I made circles with diameters of 3 1/2, 1 1/2, 1, and 1/2 inches. (For the circle of 1/2 inch diameter I used a thumb tack.) With tape, attach these circles to the sphere, all at the same “latitude”, so that they are approximately tangent to the sphere at their centers.
Now, project this arrangement onto a plane. This is most easily done, by holding the hemisphere so that the plane of the equator is parallel to a wall or the ceiling, and use a flashlight to project the spherical images onto the wall or ceiling.
When you hold the flashlight so that the bulb is at the center of the hemisphere, the shadows of the spherical triangle will, as we saw last week, be straight lines. The shadows of the tangent disks will be ellipses. When you pull the flashlight back to the position where the south pole of the sphere would be, you will see that the shadows of the spherical triangle will be circular arcs, intersecting at 90 degree angles, and the shadows of the tangent disks will be almost circular.
The change in the projection of the tangent disks, from ellipses in the gnomonic projection, to circles in the stereographic, is a reflection of a crucial element of Gauss’ discovery.
Gauss’ first step was to abandon the idea of the sphere and plane being objects embedded in three dimensional Euclidean space; instead, he thought of each as a two dimensional surface of a different curvature. On any two dimensional surface, the angular relationship of 90 degrees is a singularity, consistent with Cusa’s notion of maximum and minimum. That is, geodetic arcs, or lines, that intersect at 90 degrees are at the maximum point of divergence. Or, in other words, any two such arcs, or lines, define two divergent directions. Any other angle at which geodetic arcs or lines intersect is merely a combination of these two directions. (Gauss goes to great lengths to point out that these two directions are arbitrary, but once one is chosen, the other is determined.)
Now look back to the difference in the transformation of the tangent disks in the two projections. In the gnomonic projection, the change of those disks from circles to ellipses, is a reflection that the gnomonic projection changes one direction in a different way than the other. The transformation of those disks into circles in the stereographic projection, is a reflection of how this projection changes both directions exactly the same.
But, there is another principle at work here that you can discover with some careful observation. If you look closely at the tangent disks, you should notice that in the gnomonic projection, the shadows of the disks become more elliptical, the smaller the disk. And, in the stereographic projection, the shadows of the disks become more circular the smaller they are.
Remember these disks are not on the sphere, but tangent to it. Therefore, the smaller the disk, the closer to the surface of the sphere it is. As the disks become infinitesimally small, the characteristic change in curvature, becomes even more pronounced. In other words, the characteristic curvature of these projections, or any other for that matter, is reflected in every infinitesimally small area of both surfaces. And, the smaller the area, the more true is the reflection! Just the opposite of linearity in the small.
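Here is a rough numerical sketch of that claim, again in Python and again only an illustration: it approximates each tangent disk by a small circle drawn on the unit sphere around the point of tangency, pushes it through the same two projections used above (stereographic from the south pole onto the equatorial plane, gnomonic from the center onto a tangent plane), and measures how far the image is from being a circle.

```python
import numpy as np

def stereographic(p):
    x, y, z = p
    return np.array([x / (1 + z), y / (1 + z)])   # from the south pole onto z = 0

def gnomonic(p):
    x, y, z = p
    return np.array([x / z, y / z])               # from the center onto z = 1

def axis_ratio(project, point, radius, samples=400):
    # Image of a small circle of angular radius `radius` drawn on the sphere around `point`
    # (standing in for a small disk tangent there): longest / shortest distance from the
    # mean of the image points, as a rough measure of how elliptical the image is.
    u = np.array([point[2], 0.0, -point[0]]); u /= np.linalg.norm(u)
    v = np.cross(point, u)
    ts = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    images = np.array([project(np.cos(radius) * point
                               + np.sin(radius) * (np.cos(t) * u + np.sin(t) * v)) for t in ts])
    d = np.linalg.norm(images - images.mean(axis=0), axis=1)
    return d.max() / d.min()

P = np.array([0.5, 0.3, np.sqrt(1 - 0.5**2 - 0.3**2)])
for radius in [0.5, 0.2, 0.05, 0.01]:
    print(f"angular radius {radius:5.2f}:  stereographic ratio = "
          f"{axis_ratio(stereographic, P, radius):.4f},  "
          f"gnomonic ratio = {axis_ratio(gnomonic, P, radius):.4f}")
```

As the circles shrink, the stereographic images approach perfect circles, while the gnomonic images settle toward a fixed ellipse whose axis ratio depends on the position on the sphere; this is the “truer in the small” behavior described above.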
Do this experiment and play with this idea a while. You are getting close to a very fundamental principle discovered by Gauss and Riemann, which we’ll take up in the final installment of this series next week.
The Importance of Good Maps–Part 3
I hope you had fun conducting the experiment described at the end of the last pedagogical discussion. This week, we will conclude this preliminary phase of pedagogical discussions on the early development of the Gauss-Riemann theory of manifolds, with a discussion of the general principles of Gauss’ theory of conformal mapping. In future weeks, we can extend these investigations, using this preliminary work as a starting point.
It is important to remember the context in which these investigations of Gauss and Riemann occurred. The thread begins with Cusa’s {Learned Ignorance}, and his insistence that action in the physical universe was elementarily non-uniform. The discoveries of Kepler on planetary orbits, and Leibniz and Huygens on dynamics and light, confirmed and validated what Cusa had anticipated. In each case, the general nature of the non-uniformity of physical action was discovered by the manifestation of that characteristic in an infinitesimally small interval of action.
Gauss’ geodesy is a good case in point. Between 1821 and 1827 Gauss supervised and conducted a geodetic triangulation of most of the Kingdom of Hannover. That undertaking confronted him with a myriad of scientific problems that sparked a series of fundamental discoveries about the nature of man and the physical universe.
A short review is necessary, from the standpoint of the last several months’ pedagogical discussions on spherical action. Think back to the question of the measurement of the positions of the stars with respect to a position on the Earth. Those positions will change over the course of the night, the course of the year, and the course of the longer equinoctial cycle. The geometrical form of the manifold of such changes is the inside of a sphere. The daily, yearly and equinoctial changes of the stars’ positions trace curves on the inside of the sphere. Those curves can be thought of as functions of the Earth’s motion.
Now, think of those same observations as taken from another position on the Earth’s surface. A new set of curves will be generated that are a function of the same motion of the Earth. But, the nature and position of those curves will be different from the curves traced by the observations from the first position.
These two sets of curves give rise to a new function, one that transforms the first set of curves into the second. That function reflects the effect of the curvature of the surface of the Earth. This function cannot be visualized in the same way, as a set of curves, as the first two functions can. This new type of function, a function of functions, is congruent with what Gauss and Riemann would refer to as a complex function.
In this example, a complex function is discovered that maps spherical functions into other spherical functions, which is another way of thinking about the concept of projection. The previous two discussions in this series, looked into types of complex functions that project spherical functions onto a surface of zero curvature (a plane), such as the gnomonic projection and the stereographic projection. These two complex functions transform the same curves from the sphere onto the plane, but in different ways.
The stereographic projection had the unique characteristic that the angles between great circle arcs on the sphere are not changed when projected onto the plane. This characteristic Gauss called conformal.
In his announcement to the first treatise on Higher Geodesy, Gauss points out that the curves conform in the infinitesimally small. However, in the large, the projections of the great circle arcs are magnified, and the degree of magnification changes depending on their position with respect to the point from which the projection is made. The experiment of projecting circles tangent to the sphere, suggested in the last pedagogical discussion, illustrated this point, at least intuitively.
In other words, if you think of the stereographic projection from Gauss’ standpoint, it is a special case of a complex function: a complex function that transforms curves on a sphere into curves on the plane, according to a law that conforms in the infinitesimally small.
In the course of his geodesic investigations, Gauss was confronted with the requirement of discovering other complex functions that transformed functions on one surface to another. Rather than tackle each case separately, Gauss went into the matter more deeply, discovering the general principles on which these complex functions rested. This was the subject of his 1822 paper referred to in previous weeks, “General Solution of the Problem to so Represent the Parts of One Given Surface upon another Given Surface that the Representation shall be Similar, in its Smallest Parts, to the Surface Represented”. These investigations formed the foundation for Riemann’s theory of complex functions.
In his paper Gauss gives an example of such a problem from Higher Geodesy. In his geodetic survey, Gauss measured the area of a portion of the Earth’s surface by laying out a series of triangles whose vertices were mutually visible. By measuring the angles between the lines of sight between these vertices, the area of each triangle could be computed. As this network of triangles was extended over the Kingdom of Hannover, the area of the entire region could be computed by adding up the areas of the smaller triangles in the network.
As discussed in previous weeks, the area of these triangles is a function of the shape of the surface on which they lie. If a spherical shape of the Earth is assumed, then the area of each triangle is determined by its spherical excess, the amount by which the sum of its angles exceeds 180 degrees, together with the radius of the Earth.
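For a concrete sense of this, here is a small Python sketch (not from Gauss’ paper) using Girard’s theorem for the area of a spherical triangle; the Earth radius used is an assumed round figure.

```python
import math

def spherical_triangle_area(angles_deg, radius):
    # Girard's theorem: area = R^2 * (spherical excess), the excess being the amount
    # by which the angle sum exceeds pi (180 degrees).
    excess = sum(math.radians(a) for a in angles_deg) - math.pi
    return radius ** 2 * excess

# The octahedral face used earlier: three 90-degree angles on a unit sphere.
print(spherical_triangle_area([90, 90, 90], 1.0))          # pi/2, one eighth of the sphere's 4*pi

# A survey-sized triangle on an assumed spherical Earth of mean radius 6371 km:
# the angle sum exceeds 180 degrees only slightly, and that tiny excess fixes the area.
print(spherical_triangle_area([60.001, 60.001, 60.001], 6371.0))   # roughly 2100 square km
```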
Look back on our first example above. Between two positions on the surface of the Earth, a complex function characterizes the difference between the observed positions of the stars at those two positions. (For purposes of this example, consider the two positions as lying on the same meridian. Then the measurement of that complex function can be expressed as simply the difference in the angle of observation of the pole star between the two positions.) Based on an assumption about the size and shape of the Earth, the distance between the two positions along the surface of the Earth can be calculated.
The distance between those two positions can also be calculated by a geodetic triangulation carried out over the area of the Earth’s surface between the two positions. That distance, when compared with the distance calculated from the astronomical observations, enables us to test the original assumption of a spherical shape for the Earth. That type of measurement determined the shape of the Earth to be closer to an ellipsoid than to a perfect sphere.
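A hypothetical numerical illustration of the test being described, assuming for the moment a perfectly spherical Earth of an assumed mean radius:

```python
import math

def meridian_distance_km(pole_star_altitude_1, pole_star_altitude_2, radius_km=6371.0):
    # On a perfect sphere the altitude of the pole star equals the observer's latitude,
    # so the arc between two stations on the same meridian is the radius times the
    # difference of the two altitudes (in radians).
    delta = math.radians(abs(pole_star_altitude_1 - pole_star_altitude_2))
    return radius_km * delta

# One degree of latitude difference on the assumed sphere:
print(meridian_distance_km(52.0, 53.0))   # about 111 km
```

If triangulation on the ground yields a noticeably different length for the same one-degree arc at different latitudes, the spherical assumption fails; that is the kind of discrepancy that established the ellipsoidal figure of the Earth.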
This confronted geodesists with the requirement of projecting those ellipsoidal triangles onto a sphere, conformally. Gauss was the first to be able to solve this, by applying his general method of conformal projection. The method employed is analogous to Kepler’s measurement of planetary motion in an elliptical orbit, by the eccentric and mean anomalies, but with the use of complex functions of the type described above.
In future weeks we will develop pedagogical exercises from Gauss’ examples, and then go on to a more thorough examination of Riemann’s revolutionary extension of Gauss’ discovery.

The Poetry of Logarithms

by Ted Andromidas
Note: For this pedagogical discussion, you will need Appendices I and X to {The Science of Christian Economy}, {So You Wish to Learn All About Economics}, and the April 12, 2002 issue of {Executive Intelligence Review}.
“You have no idea how much poetry there is in a table of logarithms.” — Karl Friedrich Gauss to His Students
Developing a function for the distribution of the prime numbers has been one of the great challenges of mathematics. An exact solution to this problem, of how many of the numbers between 1 and any given number, N, are actually prime, has not yet been discovered, though there is a general notion of a succession of manifolds as determining any solution.
One of the most stunning demonstrations of the generation of number by an orderable succession of multiply-connected manifolds is Karl Friedrich Gauss’ discovery of the “Prime Number Theorem.” The wonderfully paradoxical nature of Gauss’ approach, in contradistinction to that of Euler, is that we must move to geometries associated with the physics of higher-order forms of curvature, such as the non-constant curvature of catenary functions, and those forms of physical action associated with living processes, for a first-approximation solution.
To understand the importance, and the elegance, of this discovery, we must first investigate a class of numbers called logarithms. Hopefully, it will also demonstrate the inherent differences between a “constructive” approach to the question of the generation of such numbers as logarithms, over and against the formalisms of the textbook. I have included as an addendum at the end of this discussion a short rendering on the subject of logarithms, modeled on that of a typical textbook, so the reader might better appreciate the conceptual gulf separating the constructive approach from that of classroom formalisms.
“It is more or less known that the scientific work of Cusa, Pacioli, Leonardo, Kepler, Leibniz, Monge, Gauss, and Riemann, among others, is situated within the methods of what is called synthetic geometry, as opposed to the axiomatic-deductive methods commonly popular among professionals today. The method of Gauss and Riemann, in which elementary physical least action is represented by the conic form of self-similar-spiral action, is merely a further perfection of the synthetic method based upon circular least action, employed by Cusa, Leonardo, Kepler, and so forth.” [fn. 1]
It is in this domain, physical least action associated with the self-similar spiral characteristic of living processes, that we search for a solution to the ordering principle which, in fact, might generate the prime numbers. Gauss’ approach involves understanding the idea behind the notion of a logarithm.
Logarithms are numbers which are intimately involved in the algebraic representations of self-similar conic action. In previous discussions, we saw that number measures more than just position or quantity; number can also measure action. We discovered that numbers in one manifold measure distinctly different qualities, than numbers in another manifold, and that what and how you count can sometimes leave “footprints” of a succession of higher ordered manifolds.
All descriptions of logarithmic spiral action, and the rotational action associated with them, are of two types of projection:
1) The 3 dimensional spiral on the cone: we understand that each increase in the radial length of the 2 dimensional, self-similar spiral on the plane is a projection from the 3 dimensional manifold of the conic spiral. The projection of the line along the side of a cone, which intersects and divides the spiral, is called “the ray” of the cone. [See {The Science of Christian Economy}, APPENDIX I]
2) The 3 dimensional helical spiral action from the cylinder: the rotation of the three dimensional manifold of the cylindrical spiral (helix) projects onto the two dimensional plane as a circle. Nonetheless, some action is taking place, and that action is therefore represented by a “circle of rotation”, as simple cyclical action; i.e., we “count” each completed, or partially completed, cycle of rotation of the spiral.
Turn to the April 12, 2002 issue of EIR, page 16 (see figure), “The Principle of Squaring”; review the caption associated with that figure [“The general principle of ‘squaring’ can be carried out on a circle. z^2 is produced from z by doubling the angle x and squaring the distance from the center of the circle to z.”] and construct the relevant diagonal to a unit square. The side of the square is one; the diagonal of that square equals the square root of two. Use that diagonal, the square root of two, as the side of a new square; the diagonal of that square, whose area is 2, will be a length equal to two. We are generating a series of diagonals, each, in this case, a distinct power of the square root of two. The result is a spiral which increases from 1 to 16 after the first complete rotation, and from 16 to 256 after the second rotation, etc. As we will soon see, each of the successive diagonals, beginning with the first side of length 1, is also part of a set of “roots” of 16.
Each diagonal is 45 degrees of rotation from the previous diagonal; this should be obvious, since the diagonal divides the 90 degree right angle of the square in half. Therefore, each time we create a new diagonal and a new square, in turn generating another diagonal and another square, we generate a series of diagonals, each 45 degrees apart. It should also be obvious that 45 degrees is equivalent to 1/8 of 360 degrees of rotation, or 1/8 of a completed rotation of the spiral.
Let us now review a few fundamental elements of this action: we can now associate, in our spiral of squares, a distinct amount of rotation with a distinct diagonal value. In this case the diagonal values are powers of the square root of two or some geometric mean between these powers.
Table 1
Rotation    Diagonal Value
0           1 or √2^0
1/8         √2 or √2^1
2/8         2 or √2^2
3/8         √8 or √2^3
4/8         4 or √2^4
5/8         √32 or √2^5
6/8         8 or √2^6
7/8         √128 or √2^7
8/8         16 or √2^8
9/8         √512 or √2^9
10/8        32 or √2^10
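Table 1 can be reproduced in a few lines of Python, if you want to check the values (a simple illustration, not part of the original text):

```python
import math

# Each eighth of a rotation multiplies the diagonal by the square root of 2,
# so after k eighths of rotation the diagonal is (sqrt 2)^k.
for k in range(11):
    print(f"rotation {k}/8   diagonal = sqrt(2)^{k} = {math.sqrt(2) ** k:.4f}")
```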
The diagonals of this “spiral of squares” function much like the rays [fn2] (or radii) of a logarithmic, self-similar spiral. We can imagine an infinite number of self-similar spirals increasing from 1 to any number N after one complete rotation. Each successive complete, whole rotation will then function as a power of N (Table 2):
Table 2
Rotation    Power
0           N^0 or 1
1           N^1 or N
2           N^2
3           N^3
4           N^4

Each rotation of the logarithmic spiral increases the length of the ray (the growth of the spiral) by some factor that we can identify as the “base” of the spiral. In other words, the base of the spiral which increases from 1 to 2 in the first rotation (and doubles with each successive rotation) is identified as base 2; the base of the spiral which increases from 1 to 3, as base 3; from 1 to 4, as base 4; … from 1 to N, as base N, etc. The spiral, base N, will, after one complete rotation beginning with ray length 1, generate a ray whose length is N^1; after 2 rotations, the spiral will generate a ray whose length is N^2; after 3 complete rotations the ray length will equal N^3, etc.
To measure or count rotation, we now define a “unit circle of rotation”. We can map the point of intersection of the spiral and a ray whose length is equal to or greater than one, onto a point on a unit circle. In this way it seems that a point on our circle of rotation can map onto, potentially, an unlimited number of successive points of intersection of the spiral and any given ray. But, when we look at our circle of rotation, we are looking at the projection of a cylindrical spiral. We can therefore “count”, as cycles or partial cycles, the amount of rotation required to reach the point on the unit circle onto which a given intersection of ray and spiral maps.
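A short Python sketch of this way of counting (illustrative only; the function and variable names are my own): the rotation needed to reach a given ray length on a base-N spiral is just the logarithm of that length in base N, split into whole turns and a partial turn.

```python
import math

def rotation_for_ray(ray_length, base):
    # On a self-similar spiral of the given base (ray length 1 at zero rotation,
    # multiplied by the base on each full turn), the rotation needed to reach a
    # given ray length is the logarithm of that length in that base.
    return math.log(ray_length, base)

for r in [2, 4, 8, 16, 32]:
    turns = rotation_for_ray(r, 16)
    whole, part = divmod(turns, 1)
    print(f"ray {r:>2d} on the base-16 spiral: {turns:.3f} rotations "
          f"({int(whole)} full turn(s) plus {part:.3f} of a turn)")
```

For the base-16 spiral of the squares, a ray of length 8 corresponds to 0.75 of a rotation, i.e., 6/8, exactly as in Table 1.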
Look again at the musical spiral of the equal-tempered scale (see Figure 1, page 50, {So You Wish to Learn All About Economics}). Here, I am looking not at successive ROTATIONS of the spiral, but at DIVISIONS of one rotation of the octave, or base 2, spiral.
When I divide the rotation of the spiral in half (6/12ths), I get F#, or the square root of 2 (see Chart 2). When I divide the rotation of the spiral by 3 (4/12ths), the first division is G#, or the cube root of 2. So each successive rotation is a power of N, i.e., N^1, N^2, N^3, etc.; each successive DIVISION represents a root of N, i.e., the square root of N, the cube root of N, the fourth root of N, the fifth root of N, etc.

Chart 2
Division    Root of Two         Musical Note
0           1 (2^0)             C
1/12        12th root of 2      B
2/12        6th root of 2       A#
3/12        4th root of 2       A
4/12        cube root of 2      G#
5/12        2^(5/12)            G
6/12        square root of 2    F#
etc.
1           2                   C
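The ratios implied by Chart 2 can be listed with a few lines of Python; the note names below are simply copied from the chart as given, and the block is only an illustration:

```python
# Twelve equal divisions of one rotation of the base-2 (octave) spiral: each division
# corresponds to 2 raised to k/12, i.e., the matching root of 2.  The note names are
# copied from Chart 2 as given.
notes_from_chart = ["C", "B", "A#", "A", "G#", "G", "F#"]
for k, name in enumerate(notes_from_chart):
    print(f"division {k}/12   ratio = 2^({k}/12) = {2 ** (k / 12):.5f}   ({name})")
```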

As we have now discovered, given any spiral base N, we can associate a distinct amount of rotation with a distinct power or root of N. Each successive complete rotation can be associated with a power of N; each division or partial rotation can be associated with some root of N, or a mean between N and another number. This distinct amount of rotation to a point on the “circle of rotation”, which can then be associated with a distinct rotation of a self-similar cylindrical spiral, is the logarithm of the number generated as a ray intersecting the spiral at a particular point.
For example, take our spiral of the squares; that spiral is base 16. The logarithm of 16 is one, written as Logv16(16) = 1 [footnote 3]. Using our Table 1, we can create a short “Table of Logarithms” for base 16. Turn once again to the April 12, 2002 issue of EIR, pages 16 and 17; as Bruce indicates, if I double the rotation, I square the length. Let us try various operations with the table of logarithms below.

Table of Logarithms, Base 16
Logarithm   Unit value of diagonal or “ray”
0           1 or √2^0
1/8         √2 or √2^1
2/8         2 or √2^2
3/8         √8 or √2^3
4/8         4 or √2^4
5/8         √32 or √2^5
6/8         8 or √2^6
7/8         √128 or √2^7
8/8         16 or √2^8
9/8         √512 or √2^9
10/8        32 or √2^10
Add the logarithm of 2 to the logarithm of 4, base 16. What is the result? (2/8 + 4/8 = 6/8, or the logarithm of 8, base 16.) If I add the logarithm of 2, base 16, to the logarithm of 4, base 16, the two ADDED rotations give me the logarithm of 8, base 16, which is the product of 2 x 4.
Now subtract the logarithm of 4, base 16, i.e. 4/8, from the logarithm of 8, base 16, i.e. 6/8, and the remainder will be the logarithm of 2, base 16, or 2/8. Now take any of the logarithms from our table, base 16; add or subtract the logarithms of any numbers and see if the results correlate with the multiplication or division of those same numbers. In other words: adding or subtracting the logarithms of numbers (i.e. the amounts of rotation) correlates with the multiplication or division of those numbers.
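The same arithmetic can be checked numerically; the following Python fragment is only a sanity check, using the standard library logarithm with base 16:

```python
import math

def log16(x):
    return math.log(x, 16)

# Adding the rotations (logarithms) multiplies the ray lengths:
print(log16(2) + log16(4), log16(2 * 4))     # both about 0.75, the logarithm of 8
# Subtracting the rotations divides them:
print(log16(8) - log16(4), log16(8 / 4))     # both about 0.25, the logarithm of 2
```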
When I am looking at the number we call a logarithm, I am actually looking at the measure of two distinct forms of action in the complex domain of triply extended magnitudes, i.e., the cyclical nature of helical action combined with the continuous manifold of the logarithmic spiral. That is precisely why Gauss understood “…how much poetry there is in a table of logarithms.” We will look at this relationship in another way next time, when we investigate why: “It’s Really Primarily Work.”
Footnotes
1) NON-LINEAR ELECTROMAGNETIC EFFECTS WEAPONS: IN THE CONTEXT OF SCIENCE & ECONOMY, speech by Lyndon H. LaRouche, Jr., Milan, Dec. 1, 1987.
2) The ray of a cone is a line perpendicular to the axis of the cone, intersecting the spiral arm. [It can also be constructed as a straight line from the apex of the cone to an intersection with the spiral. Both project onto the plane as the same length.] When we project from the 3 dimensional cone to the two dimensions of the plane, we assume that the incidental angle of the cone is 45 degrees, so that the ray of the cone and the axis are of equal length.
3) LogvN(N) = 1 is the equivalent of saying “the logarithm (Log) in base N (vN) of N equals 1.” In the above case we’re saying the logarithm of 16 in base 16 is 1.
ADDENDUM I: “What is a logarithm?” according to the book.
“… a logarithm is a number associated with a positive number, being the power to which a third number, called the base, must be raised in order to obtain the given positive number.”
Presuming we understand the concept of “the power to which a number is raised”, a definition of “exponent” and of “base” might still be necessary at this time. An exponent “…is a symbol written above and to the right of a mathematical expression to indicate the operation of raising to a power.” In other words, in the simple function 2^2 = 4, ^2 is the exponent; in the function 2^3 = 8, ^3 is the exponent, etc. The definition of a “base” is a little more complicated.
When we write our numbers we use the digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Since we use these 10 digits and each digit in the number stands for that digit times a power of 10, this is called “base ten”. For example, 6325 means:
6 thousands + 3 hundreds + 2 tens + 5 ones.
Each place in the number represents a power of ten:
(6 x 10^3) + (3 x 10^2) + (2 x 10^1) + (5 x 10^0), or 6325
We could also use base 2, 3, 5, or any other that would seem most appropriate to our requirements.
Let us look at base 2, the mathematics of the computer. There are 2 digits in base 2, 0 and 1; as with base ten, each digit represents a power of the base number, in this case 2. For example the number 1101, base 2, is: (1 x 2^3) + (1 x 2^2) + (0 x 2^1) + (1 x 2^0) or 13, base 10.
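The base-2 example can be verified in a couple of lines of Python (purely illustrative):

```python
# The worked example from the text: 1101 in base 2 is 13 in base 10.
digits = [1, 1, 0, 1]
value = sum(d * 2 ** k for k, d in enumerate(reversed(digits)))
print(value)            # 13
print(int("1101", 2))   # Python's built-in base conversion agrees
```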
Base 10 is called “the common base” and was most widely used in developing the logarithmic tables. Let us take an example: the logarithm of 100 in base 10, which is 2. To say it another way, in base 10, 10^2 (^ denotes exponent or power) = 100, and the exponent, in this case, is 2. We will note this relationship in the following way: v denotes the subscript followed by the base number, such that, in mathematical shorthand, the logarithm of 100 in base 10 will be written Logv10(100) = 2.
The logarithm of 10 base 10 is Logv10(10) = 1, Logv10(100) = 2, Logv10(1000) = 3, etc. Therefore, if I add:
Logv10(10) + Logv10(100) = 3
I get a logarithm of 1000 in base 10, which is also the exponent of 10^3, or 1000.
If I subtract:
Logv10(10,000) – Logv10(100) = 2
I get 2, which is the logarithm of 100 base 10, which is also the exponent of 10^2, or 100.
In other words, adding the logarithm of any number, N, to the logarithm of any other number in that base system, N1, generates the logarithm of the product of those numbers:
Log(N) + Log(N1) = Log(N x N1)
Subtracting the logarithm of N from that of N1 generates the logarithm of the quotient of those numbers:
Log(N1) – Log(N) = Log(N1/N)
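Both identities, and the practical point that follows from them, can be checked numerically; this small Python fragment is only an illustration, and the two large factors in it are arbitrary:

```python
import math

log10 = math.log10

# The two identities from the text, checked in base 10:
print(log10(10) + log10(100), log10(10 * 100))            # both 3.0 (up to rounding)
print(log10(10_000) - log10(100), log10(10_000 / 100))    # both 2.0 (up to rounding)

# Multiplying two large numbers by adding their logarithms and then exponentiating:
a, b = 31_416, 27_183
print(10 ** (log10(a) + log10(b)), a * b)                 # agree up to rounding
```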
Consequently, tedious calculations, such as multiplication and division, especially of large numbers, can be replaced by the simpler processes of adding or subtracting the corresponding logarithms. Before the age of computers and rapid calculating machines, books of tables of logarithms were standard tools for engineers, astronomers, or anyone else who needed to calculate with large numbers.
I think the preceding discussion has been a relatively accurate one page “textbook” introduction to logarithms and their use. If it seems somewhat confusing, one solution is that described by a typical professor of mathematics identified as “Dr. Ken”, who, using the Pavlov/Thorndike approach to arithmetical learning, suggests that:
“The way you think about it is this: the log to the base x of y is the number you can raise x to get y. The log is the exponent. That’s how I remembered logs the first time I saw them. I just kept repeating ‘the log is the exponent, the log is the exponent, the log is the exponent, the log is the exponent,…’ “
A singular problem arises when we use the Pavlov/Thorndike approach, replacing the name of one number with that of another, “x is y” or “the log is the exponent”, and then simply memorizing it. If we don’t know the characteristic of action generating the exponent, then what the heck is the logarithm anyway? If this simple equivalency were all there was to the matter, then we would have no concept of the characteristic action that corresponds to this class of numbers.

Can There Be Any Linearity At All?

by Phil Rubinstein
It is often the case that mathematicians, scientists, and their followers are able to see anomalies, paradoxes, and singularities, but maintain appearances by limiting such incongruities to the moment, instant, or position of their occurrence, only to return immediately to whatever predisposition existed in their prior beliefs, mathematics, and assumptions. It is precisely this error that allows linearization in the small, in the typical case through reducing said singularities to an infinite series. In fact, in even the simplest cases, as we shall see, the singularity, anomaly, or paradox requires every term in the pre-existing system to change, never to return to its prior form.
There is nothing complex or difficult in this. Let us take the simplest example. Construct or imagine a circle with the two simple folds we have used before. Now, construct the diameter and its perpendicular bisector, giving us four quadrants. Now, take the upper left hand quadrant and connect the two perpendicular radii by a chord at their endpoints. If we consider the radius of the circle to be 1, we have a simple unit isosceles right triangle. Thus, from previous demonstrations, the chord connecting the two legs of the right triangle is the incommensurable square root of 2. Now, rotate the chord or hypotenuse until it lies flat on the diameter, or, alternatively, fold the circle to the same effect. The anomaly here is quite simple. Not only is the ratio of the chord to the diameter of the circle incommensurable, but the question arises: where does the end point of the chord touch the diameter? How do we identify it? From the standpoint of integral numbers and their ratios, this position cannot be located, nor can it be named within that system. This, despite the fact that if we take all the ratios of whole numbers between any two whole numbers, or ratios of whole numbers, we have a continuity; that is, between any two, there are an infinite number more. What, then, is the location? Is there a hole there, or a break? While this has often been the description, this is clearly no hole! By the simplest of constructions, we have the location, exactly. Our chord does not “fall through”; its end does not “fall into a hole”!
Now, we find the typical effort is to say, yes, there is a strangeness here, but we can make it as small as we like. By constructing a series of approximations, we get a series of ratios that get closer and closer. Fine, one might say, but still, what is the description or number by which we designate the location? Well, comes the answer, the infinite series description can be substituted for the place or number, and everything in this description is itself a number, or ratio of numbers. Thus, we have reduced the problem in fact and located the continuum on our diameter. One may reflect that, as simplified as this is, it is essentially the point made by Cauchy, etc., although in a different context.
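To make the “series of approximations” concrete, here is a short Python sketch (not part of the original argument) generating the classical side-and-diagonal approximations to the square root of 2: every term is a ratio of whole numbers, the terms close in on the position from both sides, and no term ever lands on it.

```python
from fractions import Fraction

# Successive "side and diagonal" approximations to the square root of 2: each is a
# ratio of whole numbers, each is closer than the last, and none has a square equal to 2.
p, q = 1, 1
for _ in range(8):
    approx = Fraction(p, q)
    print(f"{p}/{q} = {float(approx):.10f}   square = {float(approx * approx):.10f}")
    p, q = p + 2 * q, p + q     # next diagonal/side pair
```

The squares alternate above and below 2 and never equal it, which is the point: the series names the approach to the position, not the position itself.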
In the calculus of Leibniz, the differential or limit exists as the area of change which determines the path of physical action. Cauchy reduces that physical reality to a mere calculation, by substituting an infinite approximation, or series for the limit, or area of change. What is lost is simply that reality which determines the physical action, and thus the ability to generate the idea of lawful change as a matter of physics.
But, does the anomaly go away? Clearly, it does not. To identify the actual position, which exists by construction, with a series that is infinite, endless, and made up precisely of components proven NOT to be at that position, does not solve the anomaly. The position exists, is different, and remains singular.
In fact, much more follows. Label the left end of the diameter A and the location where the chord and diameter meet B. We will label the intersection of the diameters O. We can now ask what happens if we move back along the two lines, the chord and diameter. Let us say we move from B towards O, the center of the circle. Since the end point B of the chord is incommensurable with the diameter, if one subtracts any rational distance towards O, the position reached is still not commensurable, and this is so for ANY rational distance from B all the way to A. So, every position so attained is likewise incommensurable, as many as there are rational numbers. If I attempt to subtract an incommensurable amount (e.g., by constructing an hypotenuse and folding it), one has not solved the problem but merely used a position unlocatable by integral numbers or ratios of them. In fact, we now have a new infinity of these unlocatable positions back on the diameter.
This process can be looked at in the following manner. Is the position at the end of the chord greater than, less than, or equal to a given position back on the diameter? If we take also any position obtained by subtraction as above, do we attain a position greater than, less than, or equal to a rational number on the diameter? In fact, it is impossible to express the answer to these questions! One may attempt to say that an infinite series is as close to, but always less than, some arbitrary distance, but unless one knows beforehand the position, one can never know whether we have passed the position, or are not there yet. The concept of predecessors or successors or equivalence is inoperable, inclusive of whole number cases.
Since this occurs as has been shown, everywhere on the two lines, the only solution is to change the conception of number, measure, or position for every position on the diameter and chord. To simply add “irrationals” will not do, since this will leave us with inconsistency everywhere: in effect, a line made up of locations that cannot be compared.
The problem expands to a critical point with the addition of the relation of the diameter to the circumference. We must change the concept of number for every position. In this case, integers, rationals become a case of a changed number concept or metric. Properly understood, rather than attempting to linearize the discontinuity, we should say every position on the line has “curvature.” This becomes more transparent if we think of Cusa’s infinite circle as in fact the ontological reality of the so-called straight line. Only such a “straight” line could contain the positions cited above, could be everywhere curved, and yet a line.
How did this occur? An anomaly was shown to exist. To incorporate that anomaly’s existence requires a full shift in hypothesis. More especially, any linear construction is not an actual hypothesis, since it is unbounded and open-ended; its extension is always arbitrary. To exist, an hypothesis requires, conceptually, “curvature,” that is, change which identifies its non-arbitrary character. That is its hypothesis. That is, what exists in the anomaly in the small is a reflection of its characteristic actions, its hypothesis. There are no holes, no arbitrary leaps. Now, of course, this leaves open the question: what other changes, or hypotheses, may be reflected, requiring further hypothesis. It is no mystery that any line, or segment of a line, existing in a universe of such action will manifest those actions down to its smallest parts, and do so for each such action.

Transfinite Principle of Light, Part I: Prologue

by Jonathan Tennenbaum
Last week my esteemed colleague Bruce Director poked into a real hornets’ nest, when he asked: What makes people so susceptible to the kinds of frauds now perpetrated routinely by the mass media? Is there something {sinister} involved, a vulnerability inside the minds of our fellow-citizens, that leads them to desire a world {uncomplicated} by the primacy of {nonlinear curvature in the small}?
What, {sinister}? You surely don’t mean the ordinary, simple folk, do you? The poor innocent people who are being lied to, abused, ripped off, tormented, destroyed by the oligarchy? The ones who are “just trying to get along and raise their families”? The “noble savages” of modern times, those honest, unassuming folk who nobly desire nothing more than to eat and sleep and watch their favorite TV sports, undisturbed by the world’s problems — the which, after all, they did not create? Aren’t they so homely and nice? Don’t they have legitimate grievances? Their lives are dull, boring, oppressive, even unbearable. And yet if you try to organize them, if you try to {change} them, you find they can become {very unpleasant}, very nasty indeed! Beneath their anarchistic, individualist exteriors, they are often pathologically, fanatically attached to their identity as “simple-minded, ordinary folk.” Their minds seem to repel the effort at thinking outside the tight circles of so-called “practical life.”
“Explain it in terms I can understand.” “Give me the bottom line.” “Don’t make things complicated.” “Don’t bother me with history and all that other fancy stuff.” “I know what you are saying. But don’t you realize I have to make a living?”
And yet, after hundreds of millennia of human development, can there be any excuse to remain “simple folk”? To be ignorant of the work of past thinkers, to be indifferent to the great drama of history and the fate of entire civilizations, nations and cultures?
A beautiful thing is, that oligarchism is {doomed}. Why doomed? Because oligarchism is implicitly a type of {physics}; and as physics, oligarchism is {demonstrably false}. The demonstration is at the same time proof of the anti-entropic character of our Universe, a Universe which has no more place for inert “hard balls” of Newton’s fancy, than it could long tolerate such abominations as the “sleepy South” where “each person knows his place” and “it’s always been like this and always will be.”
The following series is designed quite literally to cast light on this problem. We shall focus on a celebrated experimental discovery by Ampere’s closest friend and collaborator, Augustin Fresnel, which overthrew once and for all the attempts by Laplace and others to impose Newtonianism on all of natural science. Fresnel demonstrated that the propagation of light, while strictly lawful, is not “simple” at all. Following Huygens and anticipating Ampere’s closely-related demonstration of the so-called “angular force” in electrodynamics, Fresnel showed conclusively that the notion of a straight-line propagation of light breaks down in the “very small” — at the level of definite, irreducible wavelengths of the order of thousandths of a millimeter. In fact, there is no smooth, “straight-line” action anywhere to be found in the propagation of light! Behind the gross appearance of (approximately) straight light-rays, is a multiply-connected, spherically-bounded rotational process which is everywhere dense in singularities. What a wealth of activity, concealed beneath a “simple” exterior!
Fresnel’s demonstrations at the same time became the basis for a revolution in machine-tool design. In anticipation of what we shall rediscover in the following couple of weeks, the reader should ponder the following question, for example: How is it possible, using instruments machined to a precision of, say, millimeters, to carry out precise measurements at scales more than a thousand times smaller? Not in a linear Universe!
By juxtaposing Fresnel’s work to the preceding optical discoveries of Leonardo, Kepler, Fermat and Huygens, we obtain a glimpse of the transfinite nature of physical action — a nature which is incomprehensible to the simple-minded, because it embodies not only already-discovered physical principles, but also those which are yet to be discovered and yet in a sense already “present”. Those principles are not predicates of light as an isolated, supposedly “objective” physical entity, but pertain to Man’s relationship with the Universe as a whole.
And so our study may illuminate some secrets of the human mind itself, and suggest joyful means by which “simple folk” might be uplifted from oligarchical darkness.
The Transfinite Principle of Light, Part II – The Saga of the “Poisson spot”
by Jonathan Tennenbaum
We are in Paris, at the highpoint of the oligarchical restoration in Europe, the period leading up to and following the infamous, mass-syphilitic Congress of Vienna. Under the control of Laplace, the educational curriculum of the famous Ecole Polytechnique is being turned upside-down, virtually eliminating the geometrical-experimental method cultivated by Gaspard Monge and Lazare Carnot and emphasizing mathematical formalism in its place. The political campaign to crush what remained of the republican faction at the Ecole Polytechnique reaches its highpoint with the appointment of the royalist Augustin Cauchy in 1816, but the methodological war had been raging since the early days of the Ecole.
With Napoleon’s rise to power and the ensuing militarization of the Ecole in 1799, Laplace’s power in the Ecole was greatly strengthened. At the same time, Laplace consolidated a system of patronage with which he and his friends could exercise increasing control over the scientific community. An important instrument was created with the Societe d’Arcueil, which was founded in 1803 by Laplace and his friend Berthollet and financed in significant part from the pair’s own private fortunes. Although the Societe d’Arcueil supported some useful scientific work, and its members included Chaptal, Arago, Humboldt and others in addition to Laplace and his immediate collaborators (such as Poisson and Biot), Laplace made it the center of an effort to perfect a neo-Newtonian form of mathematical physics in direct opposition to the tradition of Fermat, Huygens and Leibniz. In contrast to the British followers of Newton, whose efforts were crippled by their own stubborn rejection of Leibniz’ calculus, Laplace and his friends chose a more tricky, delphic tactic: use the superior mathematics developed from Leibniz and the Bernoullis to “make Newtonianism work.”
Poisson, whose appointment to the Ecole Polytechnique had been sponsored by Laplace and Lagrange, worked as a kind of mathematical lackey in support of this program. He was totally unfamiliar with experimental research, and had been judged incompetent as a draftsman in the Ecole Polytechnique. But he possessed considerable virtuosity in mathematics, and there is a famous quote attributed to him: “Life is good for only two things: doing mathematics and teaching it.” An 1840 eulogy of Poisson gives a relevant glimpse of his personality:
“Poisson never wished to occupy himself with two things at the same time; when, in the course of his labors, a research project crossed his mind that did not form any immediate connection with what he was doing at the time, he contented himself with writing a few words in his little wallet. The persons to whom he used to communicate his scientific ideas know that as soon as he had finished one memoir, he passed without interruption to another subject, and that he customarily selected from his wallet the questions with which he should occupy himself.”
In the context of Laplace’s program, Poisson was put to work to elaborate a comprehensive mathematical theory of electricity on the model of Newton’s Principia. Coulomb had already proposed to adapt Newton’s “inverse square law” to the interaction of hypothetical “electrical particles”, adding only the modification that like charges repel and opposite charges attract — the scheme which is preserved in today’s physics textbooks as “the Coulomb law of electrostatics”. Poisson’s 1812 Memoire on the distribution of electricity in conducting bodies was hailed as a great triumph for Laplace’s program and a model for related efforts in optics.
Indeed, between 1805 and 1815 Laplace, Biot and (in part) Malus created an elaborate mathematical theory of light, based on the notion that light rays are streams of particles that interact with the particles of matter by short-range forces. By suitably modifying Newton’s original “emission theory” of light and applying superior mathematical methods, they were able to “explain” most of the known optical phenomena, including the effect of double refraction which had been the focus of Huygens’ work. In 1817, expecting soon to celebrate the “final triumph” of their neo-Newtonian optics, Laplace and Biot arranged for the physics prize of the French Academy of Sciences to be proposed for the best work on the theme of <diffraction> — the apparent bending of light rays around the edges of obstacles and apertures.
In the meantime, however, Augustin Fresnel, supported by his close friend Ampere, had enriched Huygens’ conception of the propagation of light by the addition of a <new physical principle>. Guided by that principle — which we shall discover in due course — Fresnel reworked Huygens’ envelope construction for the self-propagation of light, taking account of distinct <phases> within each wavelength of propagational action, and the everywhere-dense interaction (“interference”) of different phases at each locus of the propagation process.
In 1818, on the occasion of Fresnel’s defense of his thesis submitted for the Academy prize, a celebrated “show-down” occurred between Fresnel and the Laplacians. Poisson got up to raise a seemingly devastating objection to Fresnel’s construction: If that construction were valid, a <bright spot> would have to appear in the middle of the shadow cast by a spherical or disk-shaped object, when illuminated by a suitable light source. But such a result is completely absurd and unimaginable. Therefore Fresnel’s theory must be wrong!
Soon after the tumultuous meeting, however, one of the judges, Francois Arago, actually did the experiment. And there it was — the “impossible” bright spot in the middle of the shadow! Much to the dismay of Laplace, Biot and Poisson, Fresnel was awarded the prize in the competition. The subsequent work of Fresnel and Ampere sealed the fate of Laplace’s neo-Newtonian program once and for all. The phenomenon confirmed by Arago goes down in history with the name “Poisson’s spot,” like a curse.
We shall work through the essentials of these matters in subsequent pedagogical discussions and demonstrations. But before proceeding further it is necessary to insist on some deeper points, which some may find uncomfortable or even shocking. Without attending to those deeper matters, most readers are bound to misunderstand everything we have said and intend to say.
It is difficult or even virtually impossible, in today’s dominant culture, to relive a scientific discovery, without first clearing away the cognitive obstacles reflected in the tendency to reject, or run away from, the essential <subjectivity> of science. Accordingly, as a “cognitive IQ test” in the spirit of Lyn’s recent provocations on economics, challenge yourself with the following interconnected questions:
1) Identify the devastating, fundamental fallacies behind the following, typical textbook account:
“There were two different opinions about the nature of light: the particle theory and wave theory. Fresnel and others carried out experiments which proved that the particle theory was wrong and the wave theory was right.”
2) Asked to explain the meaning of “hypothesis” a student responds:
“An hypothesis is a kind of guess we make in trying to explain something whose actual cause we do not know.”
Is this your concept? Is it right?
3) What is the difference between what we think of as a property of some object, and a physical principle? Why must a physical principle, insofar as it has any claim to validity, necessarily apply to all processes in the Universe, <without exception>?
If you encounter any difficulty in answering the above, reread Lyn’s “Project A.”
Next week: Leonardo and the paradox of the “camera oscura.”
Transfinite Principle of Light, Part III: The Phantom of Linearity
by Jonathan Tennenbaum
Look at Leonardo’s drawings of rays of light reflected in a curved mirror. Leonardo draws the incoming rays as parallel straight lines. Reflected off the mirror, the rays form an envelope — a curve that Leibniz’s friend Tschirnhaus later called a {caustic}. Looking at the drawing, we might think to ourselves: “Here Leonardo has shown how the complex is generated by the simple. See how this beautiful curve, the caustic, is created from the simple, straight-line rays, which are the natural, the elementary form of light propagation.”
But, stop to think: Did Leonardo really think that way? Did he believe that straight-line action is primary, and curved forms are secondary? Was Leonardo a Newtonian?
Or have we gotten it backwards? That Leonardo saw, in the production of the caustic, a characteristic manifestation of the {fundamentally non-linear, high-order process} underlying light, a process which generates the appearance of straight-line rays as a mere {effect}?
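Either way, the caustic itself is easy to generate numerically. Below is a minimal sketch (Python with NumPy; the mirror radius, the ray spacing and all the names are our own illustrative choices, not Leonardo’s construction): it reflects a family of parallel rays off a circular mirror and intersects each reflected ray with its neighbor, which approximates the envelope that Tschirnhaus called the caustic.

import numpy as np

R = 1.0                                   # mirror radius (illustrative)
thetas = np.linspace(-1.2, 1.2, 121)      # where the parallel rays strike the mirror

def reflected_ray(theta):
    # A ray travelling in the -x direction hits the circle at angle theta.
    p = R * np.array([np.cos(theta), np.sin(theta)])   # point of incidence
    n = p / R                                          # outward unit normal
    d = np.array([-1.0, 0.0])                          # incoming direction
    return p, d - 2.0 * np.dot(d, n) * n               # law of reflection

def line_intersection(p1, d1, p2, d2):
    # Intersection of the two lines p + t*d (rays treated as full lines).
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1

# Neighbouring reflected rays cross (approximately) on the caustic envelope.
caustic = []
for a, b in zip(thetas[:-1], thetas[1:]):
    caustic.append(line_intersection(*reflected_ray(a), *reflected_ray(b)))

for pt in caustic[::20]:
    print(f"caustic point:  x = {pt[0]:+.3f}   y = {pt[1]:+.3f}")

Plotting the full set of intersection points traces the cusped envelope of the reflected rays; the nearly axial rays are found to cross near x = R/2, the familiar focal distance of a concave mirror.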
Looking more carefully at Leonardo’s manuscripts with our mind’s eye wide open, the evidence jumps out at us. Indeed, Leonardo even states it explicitly: The propagation of water waves, sound and light alike are based on a {common principle of action}; that principle is not straight-line action, but curved, (to a first approximation) circular action!
Leonardo implies, in fact — as he demonstrated for the case of water waves — that the {action} which generates the outward propagation of light from a source, is {not} basically directed in the “forward” direction, i.e., outward from the source, but essentially perpendicularly, {transverse} to the apparent direction of propagation!
Now let’s turn to the contrary, so-called “emission theory” which is commonly attributed to Newton (although much older), and which he elaborated in Book III of his famous “Opticks”. Newton writes, for example: “Are not the rays of light (streams of) very small bodies emitted from luminous substances? For such bodies will pass through uniform media in straight lines without bending into the shadow, which is the nature of rays of light.” Newton adds many other arguments, which I shall not reproduce here.
Doesn’t this picture indeed seem very agreeable to our naive imagination? Indeed, someone might plausibly argue that: 1) since light evidently moves outward from the source in straight lines and 2) since no motion is possible without some material bodies which are moving, therefore 3) the light rays must consist either of material particles (photons?) or maybe a continuous fluid emitted from the source and moving outward from it.
And how to account for the {bending} or change of direction (refraction) of light rays, when they pass from one medium to another (e.g., from air to water) or through a medium of changing density? Simple! Since the “natural” or elementary motion is straight-line motion, the bending of the trajectories of the particles forming the rays must be due to some “forces”, which are pulling the rays (or the particles making up the rays) out of that straight motion, into curved trajectories. What could be more self-evident than that?
Newton actually provides a program for elaborating this emission theory more and more: By studying the laws of diffraction of light rays, and other aspects of their behavior in passing through various materials, we should {deduce}, by mathematics, the microscopic forces which must be acting upon the light particles in interaction with the medium. And then from those “force laws”, once established, we will in turn be able to calculate the behavior of light rays under arbitrary conditions.
Newton puts his own work on gravitation and planetary motion forward as the model for this, stating, in the famous “General Scholium” from the Philosophiae Naturalis Principia Mathematica:
“Hitherto we have explained the phenomena of the heavens and of our sea by the power of gravity, but we have not yet assigned the cause of this power…. I have not been able to discover the cause of those properties of gravity from phenomena, and I frame no hypotheses; for whatever is not deduced from the phenomena is to be called a hypothesis, and hypotheses, whether metaphysical or physical, whether occult qualities or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena and afterwards rendered general by induction. Thus it was that the impenetrability, the mobility and the impulsive force of bodies, and the laws of motion and of gravitation, were discovered. And to us it is enough that gravity does really exist and act according to the laws that we have explained…”
This same argument was repeated by the Marquis de Laplace, the self-proclaimed high priest of Napoleon’s “orthodox Newtonianism”, in an 1815 attack on the early work of Fresnel. Laplace said that in view of the “success” of Newton’s emission theory, he greatly regretted that anyone would presume
“to substitute for it another, purely hypothetical one, which, so to speak, can be manipulated at will: that of Huygens’ undulations. One must limit oneself to repeating and varying experiments and deducing laws from them, that is, coordinating facts, and avoid any undemonstrated hypothesis.”
But did you pick up the “big lie” which Newton told in the passage cited above? Don’t let him get away with it!
Newton claims, among other things, that his law of gravitation was “deduced from the phenomena”, without the use of hypothesis. That is a bald-faced lie. As even Laplace admits, Newton obtained his “force law” by inverting Kepler’s construction for the elliptical orbital motion of the planets. But Kepler’s construction was by no means deduced from the visible motion of the planets; indeed, what could anyone “deduce” from the wild, tangled mass of looping motions of the planets, as seen from the Earth? Rather, Kepler arrived at his results step-by-step through a series of {creative hypotheses} — by cognition! — as documented by Kepler himself in his works, from the Mysterium Cosmographicum through to the New Astronomy. Even Newton’s so-called force law is no deduction from Kepler’s work, but was obtained only by imposing a whole array of {arbitrary assumptions} which are neither in Kepler, nor “deduced from the phenomena”, nor otherwise demonstrated in any way. So, for example, the hypothesis that space has the form of a simple Cartesian manifold, and that straight-line action is elementary.
Now, step back from the specifics of this “big lie” and ask yourself: Why are so many people, even scientists, fooled so much of the time? Could it be, because the supposed elementarity of straight-line action is merely a lawfully-generated, externalized {image} or artifact of a defective form of mental processes?
Exclude {cognition} from mental processes. What is the typical form of action in the “mental vacuum” so created? The characteristic of deduction, as the “elementary” form of non-cognitive reasoning, is that no cognitive considerations are permitted to disturb the “perfect vacuum” in which the deductive chains of logical premises and conclusions are unfolded. No “leakage” of reality from outside the system, which could call its basic assumptions into question, is permitted to interfere with the growth of the theorem-lattice.
Now look, from this standpoint, at what Riemann had to say about Newton’s famous “First Law of Motion”:
“I find the distinction that Newton makes, between laws of motion, axioms and hypotheses, untenable. The law of inertia is an hypothesis: If a material point were all alone in the Universe, and if it were moving with a certain velocity, then it would keep moving with the same unchanged velocity”.
Now here comes a simple-minded fellow, and says to himself: “Well, isn’t that First Law self-evident? After all, {if there were nothing around in the Universe} to interfere with the particle’s motion, then nothing would change that motion, either in direction or in speed. Since there would be no reason for it to bend in one direction rather than another, or to slow down or speed up, the particle would keep moving at a constant velocity in a straight line.” So, in particular, straight-line motion is elementary!
What happened? With his logical premise of a Universe consisting of nothing but a single particle alone in an infinitely extended empty space, our simple-minded fellow has thrown cognition (and the real Universe!) out of the window. He has put himself into a wildly arbitrary phantasy-world; and now proposes, as Newton did, to make that phantasy-world into his yardstick for the real Universe!
If we dig a bit deeper, our fellow might come up with another logical idea: the simple precedes the complex, so to understand the complicated real Universe, we have to break it down into simple parts, into simple hypothetical situations. Then we can deduce the complex situations from the simpler ones. But what if the supposed “simple parts” don’t exist and could not exist in and of themselves? What if the only “simple” existence were the indivisible unity of the Universe as a whole, a Universe graspable only by cognition? But cognition is not simple in the way our vacuum-headed fellow imagines rational thinking to be.
From this it should be obvious, that the issue fought out by Fresnel and Ampere against Laplace, by Kepler against Galileo, by Leibniz against Newton and so forth, is not one of this or that theory or doctrine. It is emphatically not the so-called wave theory versus the particle theory. The issue, as emphasized in Plato’s Parmenides, is the human mind.
Ask yourself: what is the transverse nature of the action, upon which the physical growth of any economy is based?
Transfinite Principle of Light, Part IV: Least Time
by Jonathan Tennenbaum
In last week’s pedagogical discussion, Phil Rubenstein provoked us with a beautiful glimpse into Leibniz’s notion of physical space-time, observing that:
“[T]he totality of space is altered when an action introduces something incompatible to the previous ordering, and that is what introduces real time as changed space. Thus, all of the space-time is truly changed and the primacy of facts is altered.”
Most of us have been trained or otherwise induced to think of events in terms of an implicitly fixed ordering of the Universe. When an event occurs, we too often only ask ourselves: “Where does this event fit into the scheme of the world as I know it?” or “What category does it belong to?” Whereas Phil (following Leibniz) wanted to get us to look out for the anomalous characteristics of an event, and to ask ourselves, instead: “What is the change in ordering of the world, which this anomaly implies?” Or even better: “How does this event open up a potential flank, by which I might change the current ordering of the world into a better one?”
As Phil also pointed out, the two modes of thought are associated with two very different notions of causality. In the first, we put our noses close to the ground and follow events one at a time, in chains of “cause-and-effect.” So, A causes B, B causes C, C causes D and so on like a chain of dominoes, each falling over and pushing the next one in turn. If someone asks, “Why did event X occur?”, our answer will be: “Because W occurred, and W caused X.” And W occurred because of V, V because of U and so forth ad infinitum (or until we find the guy who pushed over the first domino, Aristotle’s “Prime Mover”!). But the platonic mind would rather ask: “Who arranged the dominoes that way, so that the trajectory of apparent cause-and-effect took that particular form?”
When we raise ourselves to the second, higher level, we look for those crucial actions and events, that define the {total geometry} (i.e. ordering) within which entire ranges of other events occur, take a certain form, and tend toward a pre-determinable array of outcomes. This latter standpoint is congruent with Kepler’s conception of a planetary orbit and brings us to Leibniz’ notion of {sufficient reason}. So, referring in his “Principles of Nature” to the higher (transfinite) ordering of the Universe as a whole, Leibniz said:
“The sufficient reason for the Universe cannot be found in the sequence of contingent events…. Since the present motion of matter comes from the preceding, and that one from an earlier still, one never comes closer to the answer, however far one goes, because the question always remains. Thus it is necessary that the sufficient reason, which does not require another reason, {lies outside this series of contingent events}, and this must be sought in a substance which is the cause, and is a necessary being … this last reason of things is God.”
A beautiful example of the two conflicting outlooks is provided by Pierre Fermat’s discovery of the Principle of Least Time on the basis of what he called “my method of maxima and minima.” [fn1] This example is all the more notable, as Leibniz himself used it repeatedly in his polemics against Descartes and the Cartesians.
To set the stage, I should report that around 1621 the Dutch astronomer Snell (who also made major contributions to geodesy) studied the bending of light rays when passing from one medium (for example, air) into another medium (say, water). In each of the two media, insofar as they are relatively homogeneous, the propagation of light appears to occur along straight-line pathways. But it had long been recognized that light entering from air into water at a certain angle propagates at a different, much steeper angle inside the water. Now Snell studied the functional relationship between the angle (call it X) which the ray makes to the vertical {before} entering the water, and the angle (Y) which is formed with the vertical by the direction of the ray {after} it has passed into the water. He discovered a very simple relationship, which holds quite precisely within certain limits: namely that the {sines} of the two angles are {proportional} to each other. To make these relationships clear, draw the following “classical” diagram, which Leibniz, Fermat et al. employed in their discussions of these matters.
Let a line segment AB represent the surface of the water and let point C represent the locus on AB where the ray of light enters the water. Draw a circle around C. Mark by “D” the point on the upper half of the circle (the part in the air), at which the light ray enters the circle on the way to C, and mark by “E” the locus at which the ray, now propagating in water, crosses the lower half of the circle. The line segments DC and CE represent the directions of the light ray before and after passing from air into water. Now draw the vertical line L through C. The angle between DC and L, is what we called X above, and the angle between CE and L is Y.
Finally, project D and E horizontally (i.e. perpendicularly) onto L, defining two points F and G which are the projections of D and E, respectively, onto the vertical L. (DF and EG are proportional to the {sines} of the angles X and Y.)
Now imagine we vary the angle at which the ray enters the water, while keeping the entry point C fixed. In other words, D moves along the upper part of the circle and the angle X changes correspondingly. What happens to angle Y and the position of E?
Snell found that in the course of these changes, {the ratio of DF to EG remains constant}. For the case of air and water, it turns out that DF:EG = (approximately) 1.33 : 1. From this, we can determine the angle Y corresponding to any given angle X, by a simple geometrical construction.
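To make the constant ratio of sines tangible, here is a minimal computational sketch (Python; the value 1.33 and the function name are illustrative assumptions, standing in for the DF:EG ratio of the construction above). Given the angle X which the incoming ray makes with the vertical, it returns the corresponding angle Y inside the water.

import math

RATIO = 1.33   # approximate ratio DF : EG for light passing from air into water

def refraction_angle(x_degrees, ratio=RATIO):
    # Snell's relationship: sin(X) = ratio * sin(Y), so Y follows by inverting the sine.
    sin_y = math.sin(math.radians(x_degrees)) / ratio
    return math.degrees(math.asin(sin_y))

for x in (10, 30, 50, 70):
    print(f"X = {x:2d} deg  ->  Y = {refraction_angle(x):5.2f} deg")

For X = 30 degrees, for instance, the sketch gives Y of roughly 22 degrees, visibly steeper, just as the construction predicts.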
But what is the explanation of this relationship, its “sufficient reason”? Leibniz himself was convinced that Snell did not find his law by mere empirical trial-and-error, but that he worked from an {hypothesis} derived from the work of the ancient Greek scientists who had discovered an analogous (but simpler) law for the {reflection} of light over 1500 years earlier. While Snell’s original train of thought seems to have been lost, Rene Descartes later (1637) restated the same law, which he claimed to have discovered by himself, and offered an explanation or “proof” based on his own special notion of physics and the nature of light.
Descartes’ argument, as published for example in his “Dioptrique,” is somewhat muddled and difficult to present in a few words. Essentially, Descartes likened the motion of light to that of a small ball or other object which encounters greater or lesser resistance along the path of its motion. The circumstance, that the light ray is bent toward the vertical direction on passing into the water — i.e. becomes “steeper” in its passage through the water — Descartes took as evidence that the {light moves more easily through the water} and is less retarded in its motion, than in the air. At the point of transition into the “easier” medium of the water, Descartes thought, it is as if the ball (the light) would pick up an extra “kick”, continuing at a steeper direction.
Now, disregarding the vagueness and confusing nature of Descartes’ argument, his thinking is clearly trapped in what we referred to above as the first mode: namely to follow a process from one step to the next within a fixed notion of ordering, which is (in Descartes’ case) essentially the naive housewives’ “common sense” notion of the motion of material bodies.
Now in closing, let us listen to what Fermat has to say, in his “Method for the Research of the Maximum and Minimum”:
“The learned Descartes proposed a law for refractions which is, as he says, in accordance with experience; but in order to demonstrate it he employed a postulate, absolutely indispensable to his reasoning, namely that the propagation of light takes place more easily and faster in more dense media than in more rarefied media; however, this postulate seems contrary to natural light.”
[“Natural light” was a common expression for “Reason”. Fermat is poking fun at Descartes. He continues:]
“While seeking to establish the true law of refraction on the basis of the contrary principle — namely that the movement of light is easier and faster in the less dense medium than in the more dense one — we arrived at exactly the law that Descartes had announced. Whether it is possible to arrive at the same truth by two absolutely opposing methods, that is a question we will leave to those geometers to consider, who are subtle enough to resolve it rigorously; for, without entering into vain discussions, it is enough for us to have certain possession of the truth, and we consider that preferable to a further continuation of useless and illusory quarrels.
“Our demonstration is based on the single postulate, that Nature operates by the most easy and convenient methods and pathways — as it is in this way that we think the postulate should be stated, and not, as usually is done, by saying that Nature always operates by the shortest lines … We do not look for the shortest spaces or lines, but rather those that can be traversed in the easiest way, most conveniently and in the shortest time.”
Next week we shall look more closely, through the eyes of Leibniz, at Fermat’s discovery and the error of Descartes.
— ————————————————————
1. Here is a deliberately challenging quote from a 1636 letter by Fermat to Roberval, in which he boasts about the scope of his method:
“On the subject of the method of maxima and minima … you have not seen the most beautiful applications; because I make it work by diversifying it a bit. Firstly, in order to invent propositions similar to that of the (parabolic) conoid which I told you about last; 2) In order to find the tangents of curved lines…; 3) To find the centers of gravity for all sorts of figures…; 4) To solve number theoretic problems … it is in this… that I found an infinity of numbers which do the same thing as 220 and 284, namely that the sum of the divisors of the first equals the second and the sum of the divisors of the second equals the first; and if you want another example to give you a taste of the question, take 17296 and 18416. I am sure you will admit that this question and those of the same sort are very difficult…. And so you see four kinds of questions which my method embraces, which you probably didn’t know about.”
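As an aside, Fermat’s boast about 17296 and 18416 is easy to verify; the following minimal sketch (Python, purely our own illustration) checks that each number of a pair equals the sum of the proper divisors of the other, exactly as with 220 and 284.

def sum_of_proper_divisors(n):
    # Sum of the divisors of n, excluding n itself.
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

for a, b in [(220, 284), (17296, 18416)]:
    amicable = sum_of_proper_divisors(a) == b and sum_of_proper_divisors(b) == a
    print(f"{a} and {b}  amicable: {amicable}")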
Transfinite Principle of Light, Part V: Time To See the Light
by Bruce Director
Last week, you were introduced to a paradigmatic case of a discovery of a universal principle, Fermat’s principle of “Least Time.” Contrary to textbook-educated commentators, Fermat’s Least Time principle, is not a property of light. Rather, it is a characteristic of the Universe, from which light’s properties unfold. The irony is, that this universal characteristic of Least Time, is discovered in its unfolded form, but only KNOWN as a universal principle. For that reason, it epitomizes the discovery of a principle that corresponds to a change in hypothesis from an n- to an n+1-fold manifold, connected with a corresponding change from an m- to an m+1-fold manifold. Consequently, it deserves your careful attention and study.
To summarize: the Classical Greeks had already discovered a special case of this principle, through the investigation of reflected light (catoptrics)/1. The Greeks found that the angle at which light is reflected from a shiny surface is equal to the angle at which the light strikes that surface. Simply stated, the angle of incidence equals the angle of reflection. The equality of these angles minimizes the length of the path from the source of the light, to the reflecting surface, to the eye. However, this principle is NOT a property of light. It is a manifestation of a universal characteristic: that nature always acts along the shortest path.
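That the equal-angle law and the shortest path are one and the same claim can be checked directly. The sketch below (Python; the source and eye coordinates are arbitrary illustrations) scans candidate reflection points along a flat mirror, keeps the one that minimizes the total path length, and then measures the two angles from the normal; they come out equal.

import math

SOURCE = (0.0, 3.0)   # light source above the mirror (the mirror lies along y = 0)
EYE    = (4.0, 2.0)   # the observer's eye, also above the mirror

def path_length(x):
    # Length of the path SOURCE -> (x, 0) -> EYE.
    return math.hypot(x - SOURCE[0], SOURCE[1]) + math.hypot(EYE[0] - x, EYE[1])

# Crude minimization: scan candidate reflection points along the mirror.
best_x = min((i * 0.001 for i in range(0, 4001)), key=path_length)

# Angles of incidence and reflection, measured from the normal to the mirror.
angle_in  = math.degrees(math.atan2(best_x - SOURCE[0], SOURCE[1]))
angle_out = math.degrees(math.atan2(EYE[0] - best_x, EYE[1]))
print(f"reflection point x = {best_x:.3f}, angles = {angle_in:.2f} and {angle_out:.2f} degrees")

(Reflecting the eye across the mirror and joining the reflected point to the source by a straight line gives the same reflection point directly.)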
The phenomenon of refraction (the change in direction of light when it travels from one medium to another, such as from air to water) appears, at first, to contradict this universal characteristic, since the change in direction at the boundary between the two media makes the path of the light longer than the straight line joining the same starting and ending points.
More than one and one-half millennia after the Classical Greek period, Willebrord Snell showed that when light is refracted, the change in direction is such that the sine of the angle of incidence and the sine of the angle of refraction are always in constant proportion. (See last week’s pedagogical.) The Greek principle of reflection (in which this proportion is one, as equal angles have equal sines) can thus be seen as a special case, or boundary, of Snell’s more universal principle. Yet the length of the path of the light under refraction is still not the shortest path, as it is in the case of reflection.
While the details of Snell’s reasoning are not entirely known to us, it had been conjectured that the observed refraction resulted from a change in the velocity when light travels through different media./2 Under this idea, it can be shown that the different velocities are in the same proportion as the sines of the angles of incidence and refraction. Or, in other words, Snell’s law of refraction is itself a reflection of a physical principle: that the velocity of light changes when traveling through different media. (In his “Treatise on Light,” Huygens has a simple and direct geometrical demonstration of this concept, to which the reader is referred.)
Descartes, believing that light was a stream of particles, adopted the conjecture that such particles would travel faster in denser media. From this, he reformulated Snell’s law and claimed it as his own, a fraud so blatant that even Descartes’ apologists no longer can defend it.
Pierre de Fermat adopted the opposite view, that light traveled slower in denser media. But, much more importantly, Fermat came to this idea not by conjecturing on the properties of light, as Descartes did, but from the standpoint of a new universal principle that he hypothesized: to wit, that nature always acts according to the least time. That is, the longer path the light travels when refracted is actually the path that takes the shortest time. From the standpoint of the earlier Greek discovery of reflection, the universal principle that nature seeks the shortest path in space has been transformed into the principle of the shortest path in space-time. A transformation from a universal hypothesis of n dimensions, to a universal hypothesis of n+1 dimensions. (Hypothesis is used here in the rigorous Socratic terms defined by LaRouche, not the banalized general usage concept more closely equated with the verb “to guess.”)
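The transformation can be checked numerically. In the sketch below (Python; the speeds, with light taken slower in water, and the endpoint coordinates are illustrative assumptions), we scan all candidate crossing points on the surface, keep the one of least travel time, and find that the sines of the resulting angles stand in the same proportion as the speeds, which is exactly Snell’s constant ratio.

import math

V_AIR   = 1.0
V_WATER = V_AIR / 1.33          # Fermat's assumption: light travels slower in the denser medium
A = (0.0, 1.0)                  # source in the air, one unit above the surface y = 0
B = (1.0, -1.0)                 # target in the water, one unit below the surface

def travel_time(x):
    # Time along the bent path A -> (x, 0) -> B at the two different speeds.
    return (math.hypot(x - A[0], A[1]) / V_AIR +
            math.hypot(B[0] - x, B[1]) / V_WATER)

# Scan candidate crossing points on the surface and keep the least-time one.
best_x = min((i * 0.0001 for i in range(0, 10001)), key=travel_time)

sin_x = (best_x - A[0]) / math.hypot(best_x - A[0], A[1])      # sine of the angle of incidence
sin_y = (B[0] - best_x) / math.hypot(B[0] - best_x, B[1])      # sine of the angle of refraction
print(f"sin(X)/sin(Y) = {sin_x / sin_y:.3f}   speed ratio = {V_AIR / V_WATER:.3f}")

The brute-force scan is only a crude stand-in for Fermat’s own method of maxima and minima, but it makes the point: impose least time, and Snell’s proportion of sines follows.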
Or, in the words of Fermat, quoted in last week’s pedagogical discussion:
“Our demonstration is based on the single postulate, that Nature operates by the most easy and convenient methods and pathways — as it is in this way that we think the postulate should be stated, and not, as usually is done, by saying that Nature always operates by the shortest lines…. We do not look for the shortest spaces or lines, but rather those that can be traversed in the easiest way, most conveniently and in the shortest time.”
Leibniz in his “Discourse on Metaphysics,” addresses this question this way:
“The method of final causes is more easy and can often be used to divine important and useful truths, which one would have to seek for a long time by a more physical approach, for which anatomy provides major examples. Thus I believe that Snell, who is the first discoverer of the laws of refraction, would have had to spend a long time finding them, if he had started by first trying to find out how light is formed. Rather, he apparently followed the method which the ancients used in catoptrics, which is based on final causes. By looking for the easiest pathway to pass light from a given point to another given point by reflection on a given plane (supposing this is the intention of Nature), the ancients found the equality of the angles of incidence and reflection, as one can see from a little treatise of Heliodorus of Larissa, and elsewhere. Which is what Snell, as I believe, first found, and after him (although without knowing anything of him) Fermat applied very ingeniously to refraction…. And the proof which Descartes wanted to give for the same theorem, by the method of efficient causes, would need much improvement to be as good. At least, there is reason to suppose that Descartes would never have discovered the law in that way, unless he had learned something in Holland about Snell’s discovery.”
“Descartes thought the opposite of what we think concerning the resistance of various media (to the propagation of light). That is why the very illustrious Spleissius, a man well versed in these matters, has no doubt that Descartes, when he was in Holland, saw Snell’s theorem; and in fact he remarks that Descartes had the habit of omitting mention of authors, and takes as an example the vortices in the Universe which Giordano Bruno and Johannes Kepler pointed to, in such a way that only the word itself was missing in their work. It happens that Descartes, in order to prove his theorem by his own efforts… From which Fermat correctly concluded that Descartes had not given the real reason for his theorem.”
The Cartesians, Galileans, and the whole plethora of Aristotelian-Manichean sects squealed with rage at Fermat’s principle of Least Time. How could Fermat say that light sought the shortest time? Why, that would mean that either light would have to have some “intelligence” by which to “decide” whether its choice of path was using up the shortest time, or there would have to be some pre-arranged “track,” like Ptolemy’s solid orbs, that guided the light along the shortest path.
These objections are identical to those raised against Kepler, who demonstrated that the elliptical planetary orbits, rather than uniform circular ones, are the pathways that correspond to the universal space-time characteristic of the solar system. Kepler dethroned Ptolemy’s demi-gods and solid orbs, along with the poly-copulating Olympians, from whom Ptolemy and his fellow Bogomils, drew their authority.
Taking up the defense of Fermat’s principle, Leibniz dealt the decisive blow to the Cartesians:
“…Thus we have reduced to pure Geometry all of the laws which confirm experimentally the behavior of light rays, and have established their calculus on the basis of a unique principle, that you can grasp following a specific causality, but providing you consider appropriately the case in point: indeed, neither can the ray coming from C make a decision [1] about how to arrive, by the easiest way possible, at points E, D, or G, nor is this ray self-moving towards them [2]; on the contrary, the Architect of all things created light in such a way that this most beautiful result is born from its very nature. That is the reason why those who, like Descartes, reject the existence of Final Causes in Physics, commit a very big mistake, to say the least; because aside from revealing the wonders of divine wisdom, such final causes make us discover a very beautiful principle, along with the properties of such things whose intimate nature is not yet that clearly perceived by us, that we can have the power to explain them, and make use of their efficient causes, along with their artifacts, such as the Creator employed them in order to produce their results, and to determine their ends. It must be further understood from this that the meditations of the ancients on such matters are not to be taken lightly, as certain people think nowadays.”
Reflect on that, until next week.
1/ The history of these Greek investigations deserves careful study by us, as its development in textbooks is vague and confusing. For pedagogical purposes, and for posterity’s sake, it needs to be pulled together by someone wanting to do a service to humanity.
2/ This is also an area of historical research which is necessary for us to fill out.
Transfinite Principle of Light, Part VI: Passion and Hypothesis
by Jonathan Tennenbaum
There is a tendency for people to misconstrue and banalize ad absurdum the polemic Lyn has developed about the need to change fundamental assumptions. Some think to themselves: “Lyn says that assumptions are bad. So I’ll play it safe. I won’t make any assumptions at all.”
This wimpy attitude, already strong among baby-boomers, is even more pronounced among Generations X and Y. These people have resolved never to commit themselves fully to anything, never to make a strong emotional investment, never to make a decision which might irreversibly change their lives: “No, no I don’t go there” is the motto. Their policy is to “keep all the doors open,” particularly the hind doors through which to escape when the going gets too tough.
Ironically, no behavior demonstrates the influence of hidden ontological assumptions more clearly than the obsessive, schlemiel-like behavior of people trying to “play it safe,” hiding behind an illusion of “objectivity,” “sticking to the facts,” and “playing according to the rules.” Whereas today the very survival of the world depends on {strong hypotheses} — hypotheses discovered, transmitted, and executed with the most impassioned quality of moral commitment.
So, Schiller said, he who would not give up his life, will not gain it. It is impossible to make or relive a scientific or equivalent quality of creative discovery without risk, without sacrificing some cherished thing inside oneself and even confronting something akin to the fear of death.
As an example, let us listen to Brahms’ student Gustav Jenner, as he describes how Brahms forced him through the agonizing process of knowing, as opposed to superficial learning. Jenner recounts his first encounter with Brahms. Personally, Brahms was very kind and friendly to the budding young composer. But when it came to criticizing the compositions Jenner had put in front of him — naturally the ones Jenner was most proud of — Brahms’ remarks were devastating:
“After it was all over, I felt like someone who, after wandering long on a false path, thinks his goal is near, but suddenly realizes his error and now sees his goal vanish into the distance…. Despite the mercilessly strict judgement which my labors elicited from him, not a single ironical or even an angry word fell from his lips…. He simply demonstrated to me, relentlessly and without brooking any contradiction, that I didn’t know how to do anything … After a stringent examination concerning what I had been doing with my life up to then, Brahms said: `You see, in music you have not yet learned anything in an orderly fashion; for, everything you’ve been telling me about the theory of harmony, your attempts to compose, instrumentation, and so forth, I count as nothing.'”
That was only the beginning. After Jenner had moved to Vienna to study under Brahms, the old master became still more strict and rigorous with him than before.
“I never again heard from Brahms an encouraging word — let alone praise — about my works…. It took a long time before I truly learned how to work … Only a full year later did Brahms say to me on one occasion, `You will never hear a word of praise from me; if you cannot tolerate that, then whatever is in you deserves to fail.’”
But what did Brahms teach Jenner? For that I advise everyone to read all of Jenner’s short book. Here I just want to quote from one passage, especially relevant to the point at hand:
“I learned the most not by him pointing out my mistakes per se, but by his revealing to me how they had come about in the first place…. From his experience he told me: `Whenever ideas come to you, go take a walk; then you’ll find that what you had thought was a finished idea, was only the beginnings of one.’ He would repeatedly seek to sharpen my distrust in my own ideas. I have often had the experience that precisely such thoughts which become lodged (in the mind) like an idee fixe, pose a natural barrier to creativity, because one has fallen in love with them and, instead of mastering them, has become their slave. `Pens exist not only to write, but also to cross things out,’ said Brahms, `but be careful, because once something has been set down, it is hard to take it away again. But once you realize that, good though it (a passage) may be in itself, it is not appropriate here (at a given place), don’t mull it over any longer, but simply cross it out.’ And how often do we not try to save a passage, only to ruin the whole!… When Brahms, with his impartial criticism, reproached me for precisely those passages, I felt surprised and hurt at the beginning, because these had been my favorite passages — until I saw that I hadn’t found the disrupting element because I had unconsciously proceeded from the idea, that this passage must stay in, no matter what. I have had to feel the bite of those pronouncements by Brahms in my own flesh; they are the result of his long experience and unbending self-criticism.”
Helped by Brahms to become aware of and correct his own weaknesses of thinking, Jenner wages a war against his own tendency toward superficiality, his frequent infatuation with his own “pet” ideas at the expense of truth, his tendency to be distracted by unimportant particularities instead of concentrating on what is really essential. Does that sound familiar to anyone?
But is the conclusion from this teaching, to avoid having ideas, to not risk putting forward hypotheses, for fear they might turn out to be wrong? Hardly! Nothing could be more boring, more totally useless, than a composer who writes “according to the rules,” and who is unwilling to “live dangerously” by making bold and daring (but true!) hypotheses.
The difficulty Jenner describes — to overcome one’s attachment to strongly-held ideas and habits of thought in a rigorous search for truth — arises in essentially identical fashion in science and every other field of creative endeavor.
But in this regard, unfortunately, people in our organization sometimes fall into a trap: Our ideas are (generally speaking) far superior to those predominating in society nowadays; and thus it appears very easy (or should be) to attack and ridicule the “obviously” silly ideas of ordinary people, without feeling the need to go through {in ourselves} the agony Jenner experienced. Yet, Brahms’ authority as a teacher came from {exactly that}: from Brahms’ own agonizing struggle for rigor and truth vis-a-vis his own mind, and not merely from his superior ideas, knowledge and experience as a composer.
Thus, the main points of reference for ridiculing and refuting wrong or “silly” ideas and habits in others, are the successes one has had in confronting and overcoming one’s own imperfections. That includes insight into the {lawful nature} of human imperfections and the powerful attachments people often form to them. Thereby, one can put one’s own past errors and imperfections to good use, demonstrating once more Leibniz’s profound principle of “the best of all possible worlds.”
Turning now to physical science proper, it is too cheap, and we cheat ourselves if we would do this, to merely ridicule as “obviously wrong” the theories and hypotheses which a given discovery refutes, overthrows, or supersedes. True, in history to date, science has hardly existed except in a constant state of war against oligarchism; and as we have repeatedly documented (as in the case of Fresnel and Ampere), the oligarchical faction (embodying a “{negative} higher hypothesis”) is commonly the active promoter of the inferior hypotheses against which significant discoveries were explicitly or implicitly directed, as means to overcome what had been transformed into the “prevailing public opinion” among scientists and others.
However, to the extent we might tend, too quickly and cheaply, to divide ideas and hypotheses into {self-evidently} good and true on the one hand, and {self-evidently} false and bad on the other, we trivialize the struggle inside the mind of the creative scientist and cheat ourselves out of the possibility of really reliving a discovery. For, the oligarchical element lies not in the inferior idea per se, but in the deliberate clinging to it, in the satanic {assertion} of backwardness and regression as a {principle} opposed to the principle of perfection. An animal is not an evil thing; but a man who behaves like an animal, is.
The immediate point I wish to stress, is this: the strength of belief in certain assumptions and hypotheses, which the creative scientist must confront in the process of discovery, is (in many if not most cases) not {simply} a product of oligarchical tampering. To a greater or lesser extent those assumptions and hypotheses arose as the product of earlier discoveries, and their relative adequacy was supported by vast arrays of corroborating evidence and by the positive economic impact (increase in Man’s per-capita power over Nature) of technological developments based upon them. In the light of such impressive, even overwhelming grounds to believe in the validity of the relevant assumptions and theories, the psychological difficulty facing the discoverer is qualitatively greater than that of merely refuting an “obviously wrong” idea.
Think of a classical tragedy where the final curtain falls on a stage littered with dead bodies. If the audience had developed no strong and justified engagement with, admiration for, or sympathy with the tragic hero or others among the characters whose lives thus ended, what would happen to the tragic effect of the play? So, in the course of scientific discovery, as in the composition of music and drama, some ideas must “die” in order that higher ideas might be expressed. The greater the apparent attractiveness, validity and comprehensiveness of the ideas successfully superseded, the greater the power embodied by the creative discovery.
– An Inferior, but Fruitful Hypothesis –
For these reasons, before proceeding further with the discoveries of Fermat, Bernoulli, Leibniz, Huygens, and Fresnel, we should look a bit closer at the notion which these discoveries, culminating with Fresnel, finally refuted: The notion that light propagates in the form of “rays” projected outward from the luminous or illuminated object; and that to a very high degree of precision these rays take, in a uniform medium, the form of straight lines.
Before rushing to reject this notion out-of-hand (i.e. simply because of the occurrence of straight lines), let us for a moment reflect on the theorems which flow from it. We shall find, in fact, that this descriptive notion of light rays is {extremely useful and fruitful}, as Leonardo himself and many others demonstrated in countless ways. Its eventual rejection by Huygens and Fresnel is by no means so easy and self-evident, as might appear after-the-fact.
Among other things, the principles of so-called “ray optics” were the basis of perspective, and (supplemented by Fermat’s principle) of the analysis and development of lenses. They are still employed on a large scale today in the design of optical instruments, even though the notion of “ray” itself — as something supposedly self-evident and elementary — was decisively refuted by Fresnel and superseded by an entirely different principle.
– Ray Optics and the Camera Oscura –
The idea of resolving light propagation into “rays” is not a self-evident idea simply drawn from sense-perception, but an {hypothesis}. True, Nature sometimes provides rare circumstances, such as sunlight shining through a break in clouds, where we seem to “see” straight-line rays. However, it is a big step to go from that mere spectacle to a general conception, and indeed the gateway to that conception is guarded by many paradoxes. For example: if every point of every illuminated object emits rays of light in all directions, so that the entire space is filled with an infinity of crisscrossing rays, then how can we ever see anything clearly? And won’t the rays constantly be colliding into each other?
Leonardo said every illuminated object “fills space with pictures of itself.” But if we stand in the middle of a room and hold up a piece of blank paper, we certainly don’t see any pictures projected on it! The reason is not hard to imagine: the light arriving at any given location on the paper arrives from all objects and comes from all directions at the same time; it is consequently mixed up and jumbled together, and no image can result.
How, then, are we able to see anything at all? How do our eyes manage to organize and untangle the light? Renaissance experiments with the so-called “camera oscura” provide a preliminary hypothesis. Build a closed chamber without windows (a closed box) whose walls and ceiling are completely opaque to light. Install a screen on one of the inside vertical walls of the room, and make a small hole in the middle of the opposite wall. An observer sitting inside the room will see, projected onto the screen, an image of the world outside the chamber! In fact, the image on the screen corresponds to what the observer would see, if he were to look outside directly through the hole — except that the image on the screen is upside-down!
Do the experiment, or an equivalent one. What is the difference between the two situations: A) holding up a piece of paper in the middle of a room, and finding no image at all; and B) putting up the same piece of paper on the wall of the “camera oscura” (or equivalently, imposing an opaque barrier with a small hole, between illuminated objects and a screen)?
Evidently, the hole in the wall fulfills the function of a {lens}, organizing the propagation of light in such a way, that the image appears on the screen. But note, that if we move the screen directly up to the hole, the images disappear, and we get nothing but an undifferentiated spot of light. Not the hole itself, but the total arrangement of hole and the screen held at a significant distance away, provides the relevant organizing function.
Now, account for the function of the “camera oscura” as a {theorem} based on the hypothesis, that light propagates in (approximately) straight-line rays. Account also for the circumstance, that the images on the screen are slightly blurred, depending on the size of the hole.
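One way to carry out that exercise is a toy calculation. In the sketch below (Python; the distances, the hole size and the sample heights are illustrative assumptions), each point of the scene is mapped through the center of the small hole onto the screen on the sole hypothesis of straight-line rays; the image comes out inverted and scaled, and the finite width of the hole smears every image point by a calculable amount, which is the blurring noted above.

D_SCENE  = 100.0    # distance from the scene to the hole
D_SCREEN = 20.0     # distance from the hole to the screen inside the chamber
HOLE     = 0.5      # diameter of the hole

def image_height(y_scene):
    # A straight ray from a scene point through the center of the hole
    # lands on the screen at a height scaled by D_SCREEN/D_SCENE and inverted.
    return -y_scene * D_SCREEN / D_SCENE

def blur_width():
    # The finite hole lets through a narrow pencil of rays from each scene point,
    # so every image point is smeared over roughly this width on the screen.
    return HOLE * (D_SCENE + D_SCREEN) / D_SCENE

for y in (10.0, 0.0, -5.0):
    print(f"scene point at y = {y:+6.1f}  ->  image at y = {image_height(y):+7.2f}")
print(f"each image point is blurred over about {blur_width():.2f} units")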
Related to this, derive as a theorem another, apparently anomalous phenomenon known to the Greeks and discussed at length by Leonardo: The shadow of any object placed in the rays of the Sun, and projected onto a screen at a suitable distance, is not simple and sharp, but consists of a dark interior region (the “core shadow”) outside of which the light gradually increases. Determine the geometrical law by which the relative sizes of the core shadow and the “blurred” partial shadow change, as the distance between object and screen is varied. The analysis is brilliantly confirmed by such phenomena as eclipses of the Sun.
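The same straight-line hypothesis accounts for the core shadow and the blurred fringe. Here is a minimal sketch (Python; the Sun’s angular diameter is the familiar half-degree figure, while the disk size and screen distances are illustrative assumptions): the core shadow shrinks, and the partial shadow widens, linearly with the distance to the screen, and beyond a certain distance the core shadow disappears altogether.

import math

SUN_ANGULAR_DIAMETER = math.radians(0.53)   # apparent angular size of the Sun
DISK_DIAMETER = 0.10                        # opaque disk, 0.10 m across (illustrative)

def shadow_widths(screen_distance):
    # Straight rays from opposite edges of the Sun spread the shadow's edge
    # by screen_distance * SUN_ANGULAR_DIAMETER across the disk's outline.
    spread = screen_distance * SUN_ANGULAR_DIAMETER
    core  = max(0.0, DISK_DIAMETER - spread)   # fully dark core shadow (umbra)
    outer = DISK_DIAMETER + spread             # outer edge of the partial shadow (penumbra)
    return core, outer

for z in (0.5, 2.0, 5.0, 12.0):
    core, outer = shadow_widths(z)
    print(f"screen at {z:4.1f} m: core shadow {core:.3f} m wide, partial shadow out to {outer:.3f} m")

The vanishing of the core shadow at sufficient distance is the same geometry by which the Moon’s shadow can fail to reach the Earth, giving an annular rather than total eclipse of the Sun.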
Examine, thus, these and other fruitful consequences of “ray optics” without the oligarchical admixture of Newton, et al.
Now begin to appreciate the shocking, jarring impact of Fresnel’s hypothesis, that shadows are produced “holographically”, i.e., by {interference} of active wave processes inside and around the shadow area itself, and not merely through the blocking-out of linear rays of light by the object.
Transfinite Principle of Light, Part VII: From Appearance to Knowledge
by Jonathan Tennenbaum
In the latter section of last week’s discussion, I gave arguments in support of the notion, that light propagates in straight-line rays. Indeed, by imagining to ourselves that light is a “something” which propagates outward from each point of a luminous object, in all directions along straight-line trajectories, we can account very well for the functioning of the “camera oscura,” for the main features of the shadows cast by objects, for the changes in apparent size of objects according to their distance from us (and other laws of perspective), and many other things. Furthermore, this idea seems to conform well to our sense experience. Cover a sunlit window by a black shade, and put some holes in the shade. In the darkened chamber we can “see” the straight-line rays of light coming through the hole, just as we can directly “see” the rays coming out of a movie projector, especially in a smoke or dust-filled room. Let yourself become so accustomed to this way of conceiving the propagation of light, that it seems perfectly self-evident.
Now take this notion as a model for {any} sort of {apparently successful} opinion or belief. What attitude should we take to it? A critical attitude, of course. But shall we simply reject the notion of straight-line rays of light out of hand, because it doesn’t fit with some ideological doctrine or metaphysical prejudice of ours? Shall we deny that Leonardo da Vinci, Brunelleschi, Kepler, and other great men drew rays of light as straight lines, or that thousands of practical activities, such as in surveying, in technical drawing, etc. seem based on this notion? Shall we simply deny or ignore the evidence just cited?
Or should we not rather admit that there {does} exist a very wide-spread phenomenon, an {effect}, which corresponds at least approximately to what we have described as “straight-line propagation of light?” If so, then so what? An effect or phenomenon is one thing; the axiomatic assumptions, in terms of which we interpret and judge the {significance} of a given array of phenomena, are something completely different.
We fall into a trap, when we jump from a mere description of appearances — or a limited, simple hypothesis — to imputing or superimposing upon the phenomena certain fundamental, axiomatic qualities of assumption, which are by no means called for by the phenomena themselves. Watch out when anybody points with his finger and says: “See this? It proves X,Y,Z.” The expression “evidence of the senses” is defective, because in reality a process of {judgment} based on certain assumptions is always implicit, albeit preconsciously, in any report of such “evidence”.
Indeed, it is common experience (we confront it daily!) that different people, put in front of one and the same array of phenomena, draw radically different, even completely opposite, conclusions. Sometimes we can even witness two or more individuals in such a debate, pointing to one and the same phenomenon as “definitive proof” for their mutually contradictory opinions!
These observations suggest a very big question. Somebody comes along and challenges us: “If you say your interpretation of evidence is determined by your axiomatic assumptions, then how could you ever {know} whether those basic assumptions are true? Aren’t you caught in a vicious circle? How can you reject self-evident assumptions on the one hand, and at the same time claim there is no purely `objective’ evidence which does not involve assumptions of some kind? You can’t have your cake and eat it, too. If you want to be consistent, you have to finally make up your mind: either 1) to reject all fundamental axioms and assumptions, and accept only empirical experience (sense perceptions) as real, `objective’ knowledge of fact; or 2) admit that your fundamental axioms and assumptions can never be scientifically tested or proved in terms of evidence — that they must therefore either be self-evident, or based on some sort of faith or belief, as in revealed religion. Or would you agree with my opinion, that fundamental assumptions are ultimately a matter of arbitrary choice, so that conflicts of opinion can ultimately only be resolved by people killing each other?”
Leaving the reader to ponder his or her answer to this paradox, let’s go back to our concrete case, the supposed straight-line propagation of light rays.
One person (Newton, for example) draws a light ray, and thinks of it as a self-evident, axiomatically linear entity, an entity obeying the formal axioms of “Euclidean geometry.” A second person (Leonardo Da Vinci, for example) sees the same ray as the trace of an intrinsically {nonlinear} process. The objective appearance of the phenomenon is the same. How can we decide between the two interpretations, the two ways of thinking? Here we get to the issue that Fresnel and Ampere were addressing, as Fermat and Huygens before them. A unique experiment signifies more than simply evoking a new “objective phenomenon” from the Universe. The problem is to evoke and communicate a true, validated change in how human beings {think} about the Universe.
Let us go back to the time of Fermat. We do not yet have the demonstrations of interference and diffraction, which Fresnel used to finally demolish Newton’s linear theory of light. But we do have an anomaly called {refraction} that was the focus of Fermat’s elaboration of the {principle of least time}.
Note, for example, that the size and appearance of the Sun and Moon, and the apparent angular motions of the stars, are changed when they get near the horizon — a phenomenon which is commonly explained by the notion, that the rays of light coming from these objects, are {bent} as they pass obliquely through atmospheric layers of changing density. Compare this with the bending of light rays in passing from air to water, or vice-versa, which we can demonstrate in any classroom. With the aid of a simple apparatus we can make the sharp change of angle of the rays at the surface of the water clearly visible. With a bit more effort, we can produce media of varying density and show clearly how the rays follow {curved} trajectories. Let’s try to take on a Newtonian with this:
“So you see, light does {not} travel in straight lines!”
“Yes it does, if you do not disturb it. But by interposing matter, an inhomogeneous medium, you deflected the rays from their natural, straight-line paths.”
“How do you know that straight-line paths are `natural’?”
“If a light ray were allowed to propagate unhindered, in a pure vacuum or perfectly homogeneous medium, then it would propagate precisely along a straight line. It is just like the motion of material bodies in space according to Newton’s first law: `a material body remains in its state of rest or uniform motion along a straight line, unless compelled by forces acting upon it to change its state.’ No one could deny that.”
“Does a `pure vacuum’ exist anywhere in nature? Does a `perfectly homogeneous medium’ exist in nature?”
“Well no, of course. There is always a bit of dirt around, or inhomogeneities that disturb the perfectly straight pathways.”
“So the presence of what you call `dirt’ is natural, right?”
“Yes.”
“So then it is natural that light never travels in straight-line paths.”
“Wait a minute. You are mixing everything up. I am talking about the natural propagation of light, quite apart from matter.”
“What do you mean, `quite apart from matter’? Do you assume that the existence of light is something that can be separated from the existence of matter?”
“Yes, certainly. The natural state of light is that of light propagating in a Universe that is completely empty of matter.”
“And a completely empty Universe is a natural thing? Do you claim such a thing could ever exist?”
“I could imagine one. Sometimes I get that feeling inside my head.”
“Maybe that is because you are not thinking in the real world.”
“Don’t blame me for that. I am a professional physicist.”
“Well then, fill the vacuum in your mind with the following thought: Light and matter do not exist as separate entities, nor does matter act to bend rays of light from what you imagine in your fantasy-universe to be perfectly straight-line rays. Rather, the existence of what we call matter, the existence of light and the fact that light never propagates in straight lines — except in mere appearance — are both interrelated manifestations of the fundamental curvature of physical space-time, which Fermat began to address with his principle of least time.”
Transfinite Principle of Light, Part VIII: When Long Is Short
by Bruce Director
It is a continuous source of happiness, for men and women who have cultivated a capacity for scientific thinking, that Nature acts along the shortest pathways, and those are always curved. Not so, however, for the petty and small minded. For them, such principles are a constant vexation. There is no better example of this, than Pierre de Fermat’s fight with Descartes.
In 1637, Fermat received a copy of Descartes’ Dioptrics. In that work, Descartes considered light to be an impulse of particles travelling instantaneously. From this conception, Descartes presented a mathematical construct of reflection and refraction, by treating these particles as if they were hard bodies moving in empty space. This was an obvious absurdity, since refraction is the phenomenon that occurs when light travels through two different media, not empty space. Into Galileo’s mathematics of moving bodies, Descartes fitted the observed phenomena of the refraction and reflection of light.
Fermat found the work deeply flawed, and said so to Descartes’ epigone Marin Mersenne. First, Fermat said, Descartes erred by relying solely on mathematical reasoning, which, according to Fermat, could not lead to the discovery of physical truths. Furthermore, Fermat attacked Descartes’ mathematics, “of all the infinite ways of dividing the determination to motion, the author (Descartes) has taken only that one which serves him for his conclusion; he has thereby accommodated his means to his end, and we know as little about the subject as we did before.”
Such insolence from an unknown upstart in Toulouse offended Descartes no end. He wrote to Mersenne, “… I would be happy to know what he will say, both about the letter attached to this one, where I respond to his paper on maxima and minima, and about the one preceding, where I replied to his demonstration against my Dioptrics. For I have written the one and the other for him to see, if you please; I did not even want to name him, so that he will feel less shame at the errors that I have found there and because my intention is not to insult anyone but merely to defend myself. And, because I feel that he will not have failed to vaunt himself to my prejudice in many of his writings, I think it is appropriate that many people also see my defense. That is why I ask you not to send them to him without retaining copies of them. And if, even after this he speaks of wanting to send you still more papers, I beg of you to ask him to think them out more carefully than those preceding, otherwise ask you not to accept the commission of forwarding them to me. For, between you and me, if when he wants to do me the honor of proposing objections, he does not want to take more trouble than he did the first time, I should be ashamed if it were necessary for me to take the trouble to reply to such a small thing, though I could not honestly avoid it if he knew that you had sent them to me.”
There the matter rested for 20 years, until, in 1658, one of Descartes’ zealots, Claude Clerselier, asked Fermat for copies of his earlier correspondence to include in a volume of Descartes’ letters. In the intervening period, Fermat had done his own original work on light, taking off from the work written by Marin Cureau de la Chambre. In August 1657, Fermat wrote Cureau, “you and I are largely of the same mind, and I venture to assure you in advance that if you will permit me to link a little of my mathematics to your physics, we will achieve by our common effort a work that will immediately put Mr. Descartes and all his friends on the defensive.”
Instead of Descartes’ resort to the mythical hard bodies travelling in empty space, Fermat conceived of light as travelling at a finite velocity, that changed depending on the density of the medium through which it travelled. (This was more than a decade before Ole Roemer conclusively determined the finite velocity of light, in his observations of the moons of Jupiter.) But, more importantly, Fermat proceeded from the standpoint of a universal physical principle, that nature always acts along the shortest paths. The path, in the case of refraction, was not the simple geometrical length of the path, but the path that covered the distance in the least time. “We must still find the point which accomplishes the process in less time than any other …” Fermat wrote to Cureau in January 1662.
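Fermat’s criterion can be illustrated numerically in a few lines. The sketch below is an addition to the text, with assumed speeds and positions chosen only for illustration: it scans candidate crossing points along the boundary between a fast and a slow medium, keeps the one of least total travel time, and confirms that at that point the ratio of the sine of the angle to the speed is the same on both sides of the surface.

    # Minimal sketch of Fermat's least-time criterion (illustrative values only).
    # Light goes from O in the fast medium (speed v1) to X in the slow medium
    # (speed v2), crossing the surface y = 0 at some point (x, 0).
    import math

    v1, v2 = 1.0, 0.75        # assumed speeds in the two media (arbitrary units)
    O = (0.0, 1.0)            # source, one unit above the surface
    X = (2.0, -1.0)           # target, one unit below the surface

    def travel_time(x):
        t1 = math.hypot(x - O[0], O[1]) / v1   # time spent in the upper medium
        t2 = math.hypot(X[0] - x, X[1]) / v2   # time spent in the lower medium
        return t1 + t2

    best_x = min((i * 0.0001 for i in range(20001)), key=travel_time)

    # At the least-time crossing point, sin(angle)/speed agrees on both sides.
    sin_upper = (best_x - O[0]) / math.hypot(best_x - O[0], O[1])
    sin_lower = (X[0] - best_x) / math.hypot(X[0] - best_x, X[1])
    print(sin_upper / v1, sin_lower / v2)      # equal to within the scan step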
Upon receiving Fermat’s letter, Clerselier responded. In a letter dated May 1662 (translated here by Irene Beaudry), Clerselier wrote:
“Do not think that I am answering you today because you think you have obtained the objective of troubling the peace of the Cartesians…Permit me just to tell you here the reasons that a zealous Cartesian could allege to preserve the honor and the right of his master, but not to give up his own advantage or to give you the initiative.
“1. The principle that you consider as the foundation of your demonstration, that is, that nature always acts along the shortest and simplest pathways, is nothing but a moral principle and not at all physical; that is, it is not and could not be the cause of any effect of nature.
“It is not, because it is not this principle that makes nature act, but rather, the secret force and the virtue that is in every thing, that is never determined by such or such an effect of this principle, but by the force that is in all causes that come together into one single action, and by the disposition that is actually found in all bodies upon which this force acts.
“And it could not be otherwise, or else, we would presume nature to have knowledge: and here, by nature, we mean only this order and this law established in the world as it is, which acts without foreknowledge, without choice and by a necessary determination…..”
Clerselier objects not to Fermat’s discovery that light travels the path of shortest time, but to the idea that such a universal principle exists at all. Without a universal principle, there is no shortest path, only the arbitrariness of empty space.
This is a matter that confronts all of us directly each day. If civilization’s survival depends on increasing the quality of human cognition, then the shortest path to that survival is the seemingly long and curved route of curing the population of their insanity through mass outreach. Let the petty Clerseliers take the short-cuts on that long road of destruction.
Transfinite Principle of Light, Epilogue
LEAST ACTION — PRINCIPLE OF NATURE OR PRINCIPLE OF DISCOVERY?
by Jonathan Tennenbaum
What was it about Fermat’s “principle of least time” and Leibniz’s generalized “principle of least action” that so upset the Cartesians and Newtonians, and continues to upset people up to this very day? In reaction to the beating Fermat and Leibniz administered to Descartes, in the 18th century a heated and very confused debate was whipped up concerning so-called “teleological principles in Nature” — a debate which reached its pinnacle of absurdity when Maupertuis claimed priority over the long-dead Leibniz in concocting his own, incompetent version of the least action principle! Behind the diversionary antics of the buffoon Maupertuis, Euler and Lagrange launched their more sophisticated attack on Leibniz. Euler and Lagrange worked to eliminate the self-conscious {principle of discovery} which Leibniz placed at the center of his conception of the physical universe, and thereby to drive a wedge between “Naturwissenschaft” and “Geisteswissenschaft.” We can find the trace of these events in our minds, in our own struggles to grasp the central conception of Leibniz’ Monadology, or even the seemingly simple “principle of least time” put forward by Fermat in the 1660s.
Build a simple apparatus to demonstrate how a beam of light changes its direction when passing from air into water. Note how the rate of change of direction itself changes as you change the angle at which the light beam strikes the surface of the water. When the beam enters the water perpendicularly to the surface, no change is apparent: the beam continues onward in the same, perpendicular direction. But as we gradually tilt the beam away from the perpendicular direction, we find that the beam is “bent” more and more at the water surface; the direction of the beam inside the water is steeper, i.e., its angle to the vertical is smaller than that of the original beam in the air. (Readers must perform the experiment!). How can we account for the shape of the pathway, and in particular for the lawful relationship of the angles which describe the deflection of the beam at the surface of the water?
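Before (or after) building the apparatus, readers may find it useful to see the expected numbers. The following sketch assumes the commonly cited air-to-water refraction ratio of about 1.33 (a textbook figure, not a measurement from the apparatus described here) and shows that the angle to the vertical inside the water is always smaller than the angle in the air, with the deflection growing as the beam is tilted away from the perpendicular.

    # Rough numerical companion to the air-to-water demonstration.
    # Assumes the standard textbook ratio n = 1.33 for air -> water.
    import math

    n = 1.33
    for angle_air in (0, 15, 30, 45, 60, 75):
        # sine relation at the surface: sin(angle_air) = n * sin(angle_water)
        angle_water = math.degrees(math.asin(math.sin(math.radians(angle_air)) / n))
        print(f"air {angle_air:2d} deg -> water {angle_water:5.1f} deg "
              f"(deflection {angle_air - angle_water:4.1f} deg)")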
Now, the Newtonian-Cartesian way of thinking about this problem will appear natural and even self-evident to most people, compared with Fermat’s and Leibniz’s, because the former corresponds to axioms which have become deeply embedded in our culture. Let’s look at it for a moment. What, indeed, could be more self-evident, than the idea that the pathway of the light beam is created by the light itself in propagating out from the source?
Just so, in Newton’s mechanics, the orbit of a planet exists only as an imaginary trace of its successive positions; those positions being created by the planet’s motion. To Newton, the orbit itself doesn’t exist as an efficient physical entity; what {exists}, at any given time, is only the planet, its momentary position, its state of motion and the momentary gravitational force acting upon it from the Sun. So according to Newton, the fact that a planet traces an elliptical pathway in the course of its motion is just a mathematical accident, a derived theorem of the Newtonian theorem-lattice. So, today, the student is taught to say: “When you solve the equations for motion of the planet under the force of gravity, it just happens to come out to an ellipse.”
Imagine the precocious child who, caught with his hand in the cookie jar, explains: “I couldn’t help it. My body was just obeying the laws of motion.”
Similarly, according to this way of thinking, the pathway of the light beam is just the trace of a “something” or large number of tiny “somethings”, which travel through space from one moment to the next and from one point to another. They would “naturally” travel in straight lines, except insofar as some “external forces” deflect them from a straight-line path. Analyzing the bending of a beam of light going from air to water in this manner, we divide the process into three phases: A) the light propagates undisturbed in a straight line through the air, until B) the beam suddenly “collides” with the water surface, where the light particles are acted upon by some unknown force causing them to change their direction of motion, and from that point on C) they continue travelling in the water in a straight line in the new direction. This is exactly the thinking of Descartes, Newton, Laplace, Biot et al.
Not so Fermat! To follow in his footsteps, let us start from the well-grounded assumption, that Fermat followed Kepler in these matters. Kepler, as we know, regarded the system of planetary orbits and the orbits themselves as real and their determination as {primary} relative to the motions of the planets. An orbit is determined by a characteristic “curvature in the infinitesimally small”, such that any however-small interval of planetary motion already expresses the efficient principle which predetermines the future course of the planet in that orbit.
Could we say, then, that the light follows a predetermined {orbit}? Or should we be more cautious and merely propose, that the pathway of the light beam is merely a visible expression or characteristic of an {underlying physical process}, whose course is {predetermined} in the same sense that a planet’s motion is predetermined by its Keplerian orbit? Either way, we cannot avoid the implication, that all {three} phases A, B, C defined above, and the sequence of all three taken together, embody {one and the same} characteristic infinitesimal curvature!
At this point the formalist-minded will freak out:
“A and C are straight lines, not curved at all; whereas B is where the beam is “bent”! So how can you talk about the same curvature?”
Well, maybe you ought to conclude that the straight-line propagation in A and C is only an {apparently} linear envelope of a nonlinear process.
“Don’t make things so complicated. After all, so long as the light is travelling in phase A through the air, before it comes to the surface of the water, there is no force to divert it; the light doesn’t yet “know” it is going to hit the water, so it will travel in a perfect straight line. Or do you suggest, that the light can look ahead to see the approaching surface of the water?”
Our interlocutor here is trapped in the Newtonian-Cartesian assumption, that time is a self-evident, linearly ordered succession of “moments,” where only the preceding moment can influence the “next” one; just as if space were a triply-linear ordering of “places.” This insistence on a trivial, linear ordering of a supposedly empty space-time, rejecting the idea of “nonlinearity in the small”, is key to the freak-out which Fermat caused by his principle of least time.
To shed more light on this question, let us modify our experiment slightly: Install a small light source shining in all directions (e.g., a light bulb) at some position O in the air above the surface of the water. Now take an arbitrary position X in the water, which is illuminated by the light. {What is the pathway by which that result was accomplished?}
We might investigate as follows: Find the positions Y, both in air and water, at which an opaque object, placed at Y, causes the illumination of X to be interrupted. (Do the experiment!) We find, in fact, that those positions lie along a clearly-defined pathway going from O to X. That pathway in fact runs in an apparent straight-line from O to a certain location, L, at the surface of the water; and there, abruptly changing its direction, it continues on in an apparently straight trajectory to X. We can also verify, that if we now replace the light bulb at O by a device which produces a directed beam, and point the beam in the direction toward L, then it will continue along the entire pathway we just determined, and illuminate X. If we point the beam in a different direction, then (leaving aside extraneous reflections and so forth), it does {not} arrive at X. Our conclusion: this is the {unique} trajectory, by which light, emitted at O, can and does arrive at X.
Now what do we do, striving to follow Kepler in this matter? Instead of trying to concoct some Newtonian-like “law of motion” by which the light supposedly proceeds blindly, step-by-step from one moment to the next, consider instead the {space-time process as a whole}. How is it that a unique trajectory (or “tube of trajectories” appearing to our senses as a single one) is determined, among all other conceivable paths running from O to X, as the one which is actually {realized} by light? What is the sufficient and necessary reason? Evidently, not some property of light in and of itself. Ah! Don’t forget the rest of the Universe! Don’t forget that our experiment is part of the ongoing {history of the Universe}, and what we call “light” is just a localized manifestation of the {entire Universe} acting upon itself in that specific historical interval. If so, then shall we not regard the observed pathway of light as a {projection} of the Universe’s ongoing historical orbit, its “world line”?
Now, perhaps, we can begin to appreciate the significance of the Fermat-Leibniz principle and the freak-out it evoked among the followers of Aristotle.

Don’t Vote for Anyone Who Doesn’t Know Kepler

by Bruce Director
The foolishness of relying on pure mathematical models for the design and production of automobiles, nuclear weapons, or any other physical device, would be obvious to anyone with a minimal level of knowledge of the discoveries of Cusa, Kepler, Leibniz, Gauss, Riemann, et al. Unfortunately, such knowledge is virtually non-existent among the leaders of governments and businesses, today, as the frauds of the Mercedes A-class and the Cox report amply demonstrate. Fortunately, those who study the writings of Lyndon LaRouche need not suffer the afflictions of the aforementioned Lilliputians.
Take the case of Kepler’s discovery of the physical characteristics of planetary motion enunciated in his New Astronomy. As we demonstrate below, through their own words, Kepler demolished, nearly 400 years ago, the mathematical modelers of his day.
In the introduction of that work Kepler states:
“The reader should be aware that there are two schools of thought among astronomers, one distinguished by its chief, Ptolemy and the assent of the large majority of the ancients, and the other attributed to more recent proponents, although it is the most ancient. The former treats the individual planets separately and assigns cause to the motions of each in its own orb, while the latter relates the planets to one another, and deduces from a single common cause those characteristics which are found to be common to their motions. The latter school is again subdivided. Copernicus, with Aristarchus or remotest antiquity, ascribes to the translational motion of our home, the earth, the cause of the planets appearing stationary and retrograde. Tycho Brahe, on the other hand, ascribes this cause to the sun, in whose vicinity he says the eccentric circles of all five planets are connected as if by a kind of knot (not physical, of course, but only quantitative). Further, he says that this knot, as it were, revolves about the motionless earth, along with the solar body.
For each of these three opinions concerning the world there are several other peculiarities which themselves also serve to distinguish these schools, but these peculiarities can each be easily altered and amended in such a way that, so far as astronomy, or the celestial appearances, are concerned, THE THREE OPINIONS ARE FOR PRACTICAL PURPOSES EQUIVALENT TO A HAIR’S BREADTH, AND PRODUCE THE SAME RESULT.”
What Kepler is referring to is the fact that the observed motions of the stars, planets, sun, and moon, can be calculated equally by the three radically different mathematical models of Ptolemy, Copernicus, and Tycho Brahe.
The most elementary observations of the motions of heavenly bodies reveal two distinct motions. The so-called first motion, is the uniform daily movement across the sky of the sun, moon, stars, and planets from east to west. (Don’t take my word for it though. Go out and look for yourself!) The so-called second motion, is movement from west to east of the planets, sun, and moon, with respect to the fixed stars, over longer periods of time. Upon careful observation, this second motion is seen to be non-uniform. The planets, moon, and sun move slower and faster at different stages in the second motion, and, the planets, at times appear to stop and move backward with respect to the stars, at different stages in the course of the second motion.
The observation of these two motions is not the stuff of casual sense experience, but a characteristic of human reason. In the first chapter of the New Astronomy, Kepler says:
“The testimony of the ages confirms that the motions of the planets are orbicular. It is an immediate presumption of reason, reflected in experience, that their gyrations are perfect circles. For among figures it is circles, and among bodies the heavens, that are considered the most perfect. However, when experience is seen to teach something different to those who pay careful attention, namely, that the planets deviate from a simple circular pattern, it gives rise to a powerful sense of wonder, which at length drives men to look into causes.”
Neither Ptolemy, Copernicus, nor Tycho Brahe, however, ever laid claim to that “powerful sense of wonder,” of which Kepler speaks.
In the opening of the Almagest, Ptolemy says, “Those who have been true philosophers, Syrus, seem to me to have very wisely separated the theoretical part of philosophy from the practical…. For indeed Aristotle quite properly divides also the theoretical into three immediate genera; the physical, the mathematical, and the theological.”
Ptolemy goes on to say that man can know nothing certain of the theological nor physical:
“The theological because it is in no way phenomenal and attainable, but the physical because its matter is unstable and obscure, so that for this reason philosophers could never hope to agree on them; and meditating that only the mathematical, if approached enquiringly, would give its practitioners certain and trustworthy knowledge with demonstration both arithmetic and geometric resulting from indisputable procedures, we were led to cultivate most particularly as far as lay in our power this theoretical discipline.”
Having dispensed with any pretense that his theory had any physical reality, Ptolemy developed his now infamous system of intricate earth-centered cycles, eccentrics, and epicycles to mathematically calculate the positions of the planets, stars, moon, and sun, over time. While Ptolemy’s system can truthfully be called a fraud, the bigger frauds are those who, until this day, propound this mathematical system as physical hypothesis.
Copernicus replaced Ptolemy’s complicated system, with the simpler and more beautiful sun-centered system, where the earth and the planets move in perfect circles about a stationary sun. Nevertheless, this was a purely mathematical model. In the Introduction to his “On the Revolutions of the Heavenly Spheres,” Copernicus says:
“For it is the job of the astronomer to use painstaking and skilled observation in gathering together the history of the celestial movements, and then — since he cannot by any line of reasoning reach the true causes of these movements — to think up or construct whatever causes or hypotheses he pleases such that, by the assumption of these causes, those same movements can be calculated from the principles of geometry for the past and for the future. This artist is markedly outstanding in both of these respects; for it is not necessary that these hypotheses should be true, or even probable; but it is enough if they provide a calculus which fits the observations….”
As Kepler describes above, Tycho Brahe’s mathematical model had all the planets revolving around the sun, and this knot moving around a stationary Earth. But as Kepler says, Brahe’s system is not physical, but merely quantitative.
Since the systems of Ptolemy, Copernicus, and Brahe are all mathematically equivalent, and none lay claim to any physical reality, how can one distinguish which one is true? Only in the domain of physical measurement. This is precisely the revolutionary discovery that Kepler makes, following the path laid out by his mentor, Nicholas of Cusa.
Again, in the Introduction of the New Astronomy Kepler continues:
“My aim in the present work is chiefly to reform astronomical theory (especially of the motion of Mars) in all three forms of hypotheses, so that our computations from the tables correspond to the celestial phenomena. Hitherto, it has not been possible to do this with sufficient certainty. In fact, in August 1608, Mars was a little less than four degrees beyond the position given by calculation from the Prutenic tables. In August and September of 1593 this error was a little less than five degrees, while in my new calculation the error is entirely suppressed.
“… The eventual result of this consideration is the formulation of very clear arguments showing that only Copernicus’s opinion concerning the world (with a few small changes) is true, that the other two accounts are false, and so on.
“Indeed, all things are so interconnected, involved, and intertwined with one another that after trying many different approaches to the reform of astronomical calculations, some well trodden by the ancients and others constructed in emulation of them and by their example, none other could succeed than the one founded upon motions’ physical causes themselves, which I establish in this work.”
Readers of previous pedagogical discussions, and of the Fidelio article on Gauss’ determination of the orbit of Ceres, will know something of Kepler’s discoveries. Isn’t it time we raised the level of thinking of the citizenry, so that they would demand such knowledge of their elected officials and designers of automobiles?
Newton’s World: No Love, Just Copulation
by Bruce Director
Several weeks ago we presented, in their own words, a demonstration that Kepler’s determination of the principles of planetary motion, demolished the Aristotelian methods of “mathematical modeling,” adhered to by Ptolemy, Brahe, and Copernicus. This week, we follow up with a further consequence of that demonstration: that all subsequent scientific inquiry that did not follow Kepler’s method was not just wrong, but fraudulent.
As presented in the previous discussion, Kepler, in the “New Astronomy,” set out to completely revolutionize astronomy (and all science) by putting it on a foundation of physical principles. As they testified themselves, Ptolemy, Copernicus, and Brahe were concerned only with developing formal descriptions of the observed motions of the planets. Truthfulness was limited to the logical-deductive consistency of those descriptions, and the consistency of those descriptions with observations. As Kepler stated, all three descriptions were equivalent “within a hair’s breadth,” but all three deviated from the observations by an amount greater than the margin of error associated with the capacity of the measuring instruments used for those observations.
The specific observed phenomena that concerned Kepler, Ptolemy, Brahe, and Copernicus, were the two unequal motions of the planets, observed by humankind since ancient times.
The first “inequality” was the observed non-uniform motion of the planets, in a cycle, from west to east, through the constellations of the zodiac. Each planet made this circuit in different lengths of time, and, as each travelled through its cycle, it appeared to move faster through certain constellations than others, that is, traversing a greater angular arc in the sky for a given time interval, depending on which constellation of the zodiac it was moving through.
The second “inequality” was the so-called “retrograde” motion, when the planet appeared to move from east to west through the zodiac. This was observed when the planet was rising in the east just as the sun set in the west. This configuration was known as “opposition.”
Ptolemy, Copernicus, and Brahe all described these phenomena with radically different geometrical constructions, but all held firm to the belief that these apparent non-uniform motions, were just that; “apparent,” not real. All three believed that the “true” motion of the planet had to be uniform circular motion. The two “inequalities” were simply optical illusions, owing to the complicated concoction of circles, that each had created.
Kepler took an entirely different approach:
“The testimony of the ages confirms that the motions of the planets are orbicular. It is an immediate presumption of reason, reflected in experience, that their gyrations are perfect circles. For among figures it is circles, and among bodies the heavens, that are considered the most perfect. However, when experience is seen to teach something different to those who pay careful attention, namely, that the planets deviate from a simple circular path, it gives rise to a powerful sense of wonder, which at length drives men to look into causes.”
Driven by this “powerful sense of wonder,” Kepler looked into the causes. First he established the equivalence of the Ptolemaic, Brahean, and Copernican models. Then Kepler abandoned the false belief, embedded in all three models, that the “true” motion was uniform circular motion, and the non-uniform motion was simply apparent. Instead, Kepler took the apparent motion as the true one: the planets actually did move non-uniformly. Once this conceptual bridge had been crossed, the geometrical construction of the planets moving on an orbit, about an eccentric and sweeping out equal areas in equal times, proceeded from the physical measurements themselves. The power that moved the planet, according to Kepler, had to be located at that eccentric.
Under this conception, the planet’s distance from the eccentric about which it was moving, varied continuously. That is, as the planet moved about its orbit, the distance from the planet to the eccentric was always getting longer or shorter, and consequently, the effect of the moving power was increasing as the distance decreased and diminishing as the distance increased. Then Kepler demonstrated that the moving power resided in the Sun, which was located at the eccentric point. When this conception was again tested against the physical measurements, Kepler refined his construction to an elliptical orbit with the Sun located at one of the foci. Later, Kepler demonstrated a third principle of planetary motion, relating the periodic times to the size of the orbit, mischaracterized today as his “Third Law.” (The reader can consult chapters 5-8 in the Summer 1998 Fidelio article on how Gauss Determined the Orbit of Ceres.)
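As a small numerical aside (not part of Director’s text), the harmonic relationship Kepler found between the periodic times and the sizes of the orbits can be checked against commonly cited mean distances and periods of the planets; the ratio of the square of the period to the cube of the mean distance comes out essentially the same for every planet.

    # Quick check of Kepler's harmonic relation T^2 / a^3, using commonly
    # cited mean distances a (astronomical units) and periods T (years).
    planets = {
        "Mercury": (0.387, 0.241),
        "Venus":   (0.723, 0.615),
        "Earth":   (1.000, 1.000),
        "Mars":    (1.524, 1.881),
        "Jupiter": (5.203, 11.862),
        "Saturn":  (9.537, 29.457),
    }

    for name, (a, T) in planets.items():
        print(f"{name:8s} T^2 / a^3 = {T**2 / a**3:.3f}")  # close to 1.000 throughout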
How absolutely banal, sterile, and fraudulent, is therefore, Newton’s resort to action at a distance according to the inverse square law. This is ass backwards. For Newton, the planetary motion is reduced to a copulation along the straight line connecting the planet to the Sun. The physical space time curvature of Kepler is eliminated. Only straight-line copulation remains.
So fraudulent is Newton’s view, that according to Riemann:
“Newton says: `That gravity should be innate, inherent, and essential to matter, so that one body can act upon another at a distance through a vacuum, without the mediation of anything else, by and through which their action and force may be conveyed from one to another, is to me so great an absurdity, that I believe no man who has in philosophical matters a competent faculty of thinking can ever fall into it.’ See the third letter to Bentley.”
Yet people continue to adhere to the false beliefs that underlie Ptolemy and Newton. With their asses facing the students, professors throughout the world present Newton’s straight-line copulation as the basis for planetary motion, despite the final burial of Newton by Gauss with his determination of the orbit of Ceres. In his {Theoria Motus} Gauss says:
“The laws above stated differ from those discovered by our own Kepler in no other respect than this, that they are given in a form applicable to all kinds of conic sections … If we regard these laws as phenomena derived from innumerable and indubitable observations, geometry shows what action ought in consequence to be exerted upon bodies moving about the sun in order that these phenomena may be continually produced. In this way it is found that the action of the sun upon the bodies moving about it is exerted {as if} an attractive force, the intensity of which is reciprocally proportional to the square of the distance should urge the bodies towards the center of the sun.” (Emphasis supplied.)
Turn again to Kepler from the introduction of the {Mysterium Cosmographicum}:
“Though why is it necessary to reckon the value of divine things in cash like victuals? Or what use, I ask, is knowledge of the things of Nature to a hungry belly, what use is the whole of the rest of astronomy? Yet men of sense do not listen to the barbarism which clamors for these studies to be abandoned on that account. We accept painters, who delight our eyes, musicians, who delight our ears, though they bring no profit to our business. And the pleasure which is drawn from the work of each of these is considered not only civilized, but even honorable. Then how uncivilized, how foolish, to grudge the mind its own honorable pleasure, and not the eyes and ears. It is a denial of the nature of things to deny these recreations. For would that excellent Creator, who has introduced nothing into Nature without thoroughly foreseeing not only its necessity but its beauty and power to delight, have left only the mind of Man, the lord of all Nature made in his own image, without any delight? Rather, as we do not ask what hope of gain makes a little bird warble, since we know that it takes delight in singing because it is for that very singing that the bird was made, so there is no need to ask why the human mind undertakes such toil in seeking out these secrets of the heavens. For the reason why the mind was joined to the senses by our Maker is not only so that man should maintain himself, which many species of living things can do far more cleverly with the aid of even an irrational mind, but also so that from those things which we perceive with our eyes to exist we should strive towards the causes of their being and becoming, although we should get nothing else useful from them. And just as other animals, and the human body, are sustained by food and drink, so the very spirit of Man, which is something distinct from Man, is nourished, is increased, and in a sense grows up on this diet for these things. Therefore as by the providence of nature nourishment is never lacking for living things, so we can say with justice that the reason why there is such great variety in things and treasuries so well concealed in the fabric of the heavens, is so that fresh nourishment should never be lacking for the human mind and it should never disdain it as stale, nor be inactive, but should have in this universe an inexhaustible workshop in which to busy itself.”
Newton’s Gore
by Bruce Director
After reading the past two pedagogical discussions on this subject, there should be no doubt in your mind that Newton was a fraud. The question remains: why does Newton work? Not, why do Newton’s theories work — they don’t — but why does the fraud work?
The populist conspiracy theorist, or anyone else prone to superficial thinking, might conclude that the fraud works through the suppression of Kepler. True, many of Kepler’s writings have been obscured over the ages, not widely published or translated, nor taught as original sources in secondary schools or universities. Nevertheless, they are available for any thinking person to obtain and study. Furthermore, the physical anomalies from which the principles of Kepler’s discoveries were derived can be observed any night by anybody from anywhere on Earth.
No! It is not a lack of information that keeps the fraud of Newton alive. Nor is the fraud perpetrated by controlling the purse strings of professors and scientists, or the raw political power of the British Royal Society, although that certainly is an element. None of that explains why, generation after generation, Newton’s fraud is accepted willingly, to the point where victims of this fraud will hysterically defend it when challenged.
There is something more sinister involved, a vulnerability inside the mind of these wretched creatures that leads them to prefer the straight-line copulative world of Newton; to desire a world uncomplicated by the primacy of curvilinear action; and to yearn for a universe free of disturbing discontinuities.
To find this flaw, start with the report published in the May 31, 1999 briefing, quoting St. Augustine’s report from his Confessions of how his friend was drawn, against his better judgement, into lusting for the savagery of the Roman Circus. This begins to approximate the mindset that draws the unsuspecting dupe into Newton’s world.
Or, turn to the insightful allegory “Mellonta Tauta” of Edgar Allan Poe, whose protagonist reports:
“Do you know that it is not more than a thousand years ago, since the metaphysicians consented to relieve the people of the singular fancy that there existed but {two possible roads for the attainment of Truth!} Believe it if you can! It appears that long, long ago, in the night of Time, there lived a Turkish philosopher (or Hindoo possibly) called Aries Tottle. This person introduced, or at all events propagated what was termed the deductive or {a priori} mode of investigation. He started with what he maintained to be axioms or `self-evident truths,’ and thence proceeded `logically’ to results. His greatest disciples were one Neuclid and one Cant. Well, Aries Tottle flourished supreme until the advent of one Hog, surnamed the `Ettrick Shepherd,’ who preached an entirely different system, which he called the {a posteriori} or {inductive}. His plan referred altogether to Sensation. He proceeded by observing, analyzing and classifying facts — {instantiae naturae}, as they were affectedly called — into general laws. Aries Tottle’s mode, in a word, was based on {noumena}; Hog’s on {phenomena}. Well, so great was the admiration excited by this latter system that, at its first introduction, Aries Tottle fell into disrepute; but finally he recovered ground, and was permitted to divide the realm of Truth with his more modern rival. The savants now maintained that the Aristotelian and Baconian roads were the sole possible avenues to knowledge. `Baconian,’ you must know, was an adjective invented as equivalent to Hog-ian and more euphonious and dignified.
“Now, my dear friend, I do assure you, most positively, that I represent this matter fairly, on the soundest authority; and you can easily understand how a notion so absurd on its very face must have operated to retard the progress of all true knowledge — which makes its advances almost invariably by intuitive bounds. The ancient idea confined investigation to {crawling} and for hundreds of years so great was the infatuation about Hog especially, that a virtual end was put to all thinking properly so called. No man dared utter a truth for which he felt himself indebted to his Soul alone. It mattered not whether the truth was even {demonstrably} a truth, for the bullet-headed {savants} of the time regarded only {the road} by which he had attained it. They would not even look at the end. `Let us see the means,’ they cried, `the means!’ If, upon investigation of the means, it was found to come neither under the category Aries (that is to say Ram) or under the category Hog, why then the {savants} went no farther, but pronounced the `theorist’ a fool, and would have nothing to do with him or his truth….
“Now I do not complain of these ancients so much because their logic is, by their own showing, utterly baseless, worthless and fantastic altogether, as because of their pompous and imbecile proscription of all {other} roads of Truth, of all {other} means for its attainment than the two preposterous paths — the one of creeping and the one of crawling — to {which} they have dared to confine the Soul that loves nothing so well as to {soar}.
“By the by, my dear friend, do you not think it would have puzzled these ancient dogmaticians to have determined by {which} of their two roads it was that the most important and most sublime of {all} their truths was, in effect, attained? I mean the truth of Gravitation. Newton owed it to Kepler. Kepler admitted that his three laws were {guessed at} — these three laws of all laws which led the great Inglitch mathematician to his principle, the basis of all physical principle — to go behind which we must enter the Kingdom of Metaphysics. Kepler guessed — that is to say, {imagined}. He was essentially a `theorist’ — that word now of so much sanctity, formerly an epithet of contempt. Would it not have puzzled these old moles, too, to have explained by which of the two `roads’ a cryptographist unriddles a cryptograph of more than usual secrecy, or by which of the two roads Champollion directed mankind to those enduring and almost innumerable truths which resulted from his deciphering the Hieroglyphics?”
For the moment, no more need be said.

Incommensurability and {Analysis Situs}, Part I

by Jonathan Tennenbaum
The issue of analysis situs becomes unavoidable, when we are confronted with a relationship of two or more entities A and B (for example, two historical events or principles of experimental physics), which do not admit of any simple consistency or comparability, i.e., such that the concepts and assumptions, underlying our notion of “A,” are formally incompatible with those underlying “B.” In the case where the relationship between A and B is undeniably a causally efficient one, we have no rational choice, but to admit the existence of a higher principle of lawful relationship (a “One”) situated beyond the framework provided by A and B as originally understood “in and of themselves.”
Exactly the stubborn, “dumbed down” refusal to accept the existence of such higher principles of analysis situs, lies at the heart of the chronic mental disease of our age. That includes, not least of all, the Baby Boomers’ typical penchant for “least common denominator” approaches to so-called “practical politics.” Antidotes are urgently required.
An elementary access to this problem, as well as a hint at analysis situs itself, is provided by the ancient discovery–attributed to the school of Pythagoras–of the relative incommensurability of the diagonal and side of a square. This discovery, a precursor to Nicolaus of Cusa’s “Docta Ignorantia,” could with good reason be characterized as a fundamental pillar of civilization, which ought to be in the possession of every citizen; indeed, the rudiments thereof could readily be taught to school children. Yet, NOWADAYS there are probably only a HANDFUL of people in the whole world, who approach having an adequate understanding of it.
In order to appreciate the Pythagorean discovery, it were better to first elaborate a lower-order hypothesis concerning measurement and proportion, and then see why it is necessary to abandon that hypothesis at a certain well-defined point, in favor of a higher-order conception. The hypothesis in question is connected with the origin of what might be called “lower arithmetic”–as contrasted to Gauss’ “higher (geometrical) arithmetic”–which however is not to deny the eminent usefulness and even indispensability of the lower form within a certain, strictly delimited domain. On the other hand, the discoveries of the Pythagorean school put an end to what might otherwise have become a debilitating intoxication with simple, linear arithmetic, one not dissimilar to the present-day obsession with formal algebra and “information theory.”
Linear measure
Already in ancient times, it became traditional to distinguish between three species (or degrees of extension) of geometry within Euclidean geometry itself: so-called linear, plane, and solid geometry. The phenomenon of “incommensurability” bursts most clearly into view, when we attempt to carry over certain notions of measurement and proportion, apparently reasonable and adequate for the comparison of lengths along a line, into the doubly- and triply-extended domains of plane and solid geometry. Actually, the problem is already present in the lower domain; but it takes the transition to the higher domains to “smoke it out” and render it fully intelligible.
The commonplace notion of measurement and proportion, is based on the hypothesis that there exists some basic element or “unit,” common to the entities compared, out of which each of the entities can be derived by some formally describable procedure. In the linear domain of Euclidean geometry–which, incidentally, presupposes the hypothesis, that length is independent of position–this approach to measurement unfolds on the basis of three principles:
First, given two line segments, we preliminarily examine their relations of position, i.e., whether they are disjoint, overlap, or one is contained in the other. Secondly, we superimpose them, by means of so-called “rigid motion” (again, an hypothesis!), to ascertain their relation in terms of “equal length,” “shorter,” or “longer.” And thirdly, we extend or multiply a given line segment, by adjoining to it reproductions of itself, i.e., segments of equal length.
By combining these principles, we arrive at such propositions as “segment B is equal in length to (or shorter or longer than) two times segment A,” or such more complicated cases as “three times segment B is equal to (or shorter or longer than) five times segment A,” and so forth. [Figure 1.] In the case, where a segment B is determined to be equivalent (in length) to a multiple of segment A, it became customary to say, that “A exactly divides (or measures) B,” and to express the relationship by supplying the exact number of times that A must be replicated, in order to fill out a length equivalent to B. Where such a simple relationship does not obtain between A and B, it would be natural to direct our efforts toward finding a smaller segment C, which would exactly divide A and exactly divide B at the same time (commensurability!). In case we succeed, the ratio of the corresponding multiples of C, required to produce the lengths of A and B respectively, would seem to perfectly express the relationship between A and B in terms of length. So, the proposition “A is three-fifths of B” or “A is to B as three is to five” would express the case, where we had determined, that A = 3C and B = 5C for some common “unit” C. [Figure 2.]
The paradox of `Euclid’s algorithm’
HOW, a practically-minded person would probably ask, might we discover a suitable common divisor C for any given segments A and B? It were natural to first try the shorter of the two lengths, say A, and to seek the largest multiple of A which is not larger than B. If that multiple happens to exactly equal B, we are finished, and can take C = A. Otherwise, we shall have to deal with the occurrence of a “remainder” in the form of a segment R, shorter than A, by which the indicated multiple of A falls short of B’s length. One possible reaction to this would be, to divide A in half, and then if necessary once again in half, and so on, in the hope that one of the resulting series of sub-segments might be found to exactly divide B. Those skillful in these matters will see, however, why such an approach must often lead to a dead end–as for example when the lengths of A and B happen to stand in the ratio 3 to 5, in which case successive halving of A or B could never produce a common divisor. [Figure 3.]
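The dead end can be checked directly with exact arithmetic. The little sketch below (an illustration added here, not part of the original text) divides a segment of length 3 in half repeatedly and asks whether any of the resulting pieces measures a segment of length 5 a whole number of times; the answer is never yes, since 5 times a power of 2 is never divisible by 3.

    # Why successive halving fails when A : B = 3 : 5.
    # None of the halves 3/2, 3/4, 3/8, ... divides 5 a whole number of times.
    from fractions import Fraction

    A, B = Fraction(3), Fraction(5)
    for k in range(1, 11):
        piece = A / 2**k
        quotient = B / piece                            # equals 5 * 2**k / 3
        print(k, quotient, quotient.denominator == 1)   # always False: never exact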
A much more successful approach, which (at this stage of the problem) represents a “least action” solution, became known in later times as “Euclid’s algorithm”: In case the shorter segment, A, does not divide B exactly, we take as next “candidate” the remainder R itself. If R divides A exactly, then R is evidently a common divisor of both A and B. Otherwise, take the remainder of A upon division by R–call it R’–as the next “candidate.” Again, if R’ exactly divides R, then (by working the series of steps backwards) R’ will also divide A and B. If not, we carry the process another step further, producing a new, even smaller remainder R”, and so forth. This approach has the great advantage that, ASSUMING A COMMON DIVISOR of A and B ACTUALLY EXISTS, we shall certainly find one. In such a case, in fact, as the reader can confirm by direct experiments, the indicated process leads with rather extraordinary rapidity, to the greatest common divisor of the segments A and B. [Figure 4.]
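A minimal sketch of the procedure, with the two segments represented simply by their lengths (illustrative numbers chosen so that a common measure does exist), shows how quickly it closes when commensurability holds:

    # Euclid's algorithm on two segment lengths that do have a common measure.
    # Illustrative lengths: both are whole multiples of a unit of length 2.5.
    def euclid(a, b):
        steps = 0
        while b > 1e-12:           # stop once the remainder vanishes
            a, b = b, a % b        # replace the pair by (divisor, remainder)
            steps += 1
        return a, steps

    measure, steps = euclid(40.0, 17.5)
    print(measure, steps)          # greatest common measure 2.5, found in 3 steps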
The discussion so far, however, leaves us with a rather considerable paradox. For the case, that there exists a segment dividing A and B exactly, the indicated approach to measurement and proportion, provides us with an efficient means to find the largest such common divisor, as well as to derive an EXACT characterization of the relationship of A to B in terms of a ratio of whole numbers. At the same time, however, some of us might have caught a glimpse of a potential “disaster” looming on the horizon: What if the “Euclid algorithm,” sketched above, fails to come to an end? It were at least conceivable, that for some pairs A, B, the successive remainders R, R’, R”…, while rapidly becoming smaller and smaller, might each differ sensibly from zero.
Within the limits of the ideas we have developed up to this point, we find the means neither to rule out such a “disaster” (“bad infinity”), nor to devise a unique experiment which might demonstrate the failure of “Euclid’s algorithm,” while at the same time providing a superior approach.
Evidently, it were folly to search for an answer within the “virtual reality” of linear Euclidean geometry per se. We need a flanking maneuver, to catapult the whole matter into a higher domain. [To be continued.]
EXCERPTS FROM A REPLY BY JONATHAN TENNENBAUM TO QUESTIONS ON HIS PEDAGOGICAL DISCUSSIONS
Dear Reader,
Pardon my delay in responding to your queries concerning the pedagogical discussions.
Let me first address the last point in your letter, which is the most significant. I mean the following passage:
“On the notion that the rate of change, or change in the rate of change is alien to Euclid, needing to be imported from our higher vantage point: A number of us just do not see the revolutionary ‘axiom-busting’ nature of this concept…”
Judging from your report, the problem which came to the surface during your discussions, is fundamental. I am very happy that the problem surfaced, although it tells me that my pedagogical tactic failed, at least in some cases. No matter. We often learn more from our failures, than from our successes!
What I think is going wrong, in part, is that many (probably most) people haven’t broken through yet, or are still resisting, to grasp in a really SENSUOUS way, what Lyn is trying to get at with his discussions of theorem-lattices and changes of axioms. People have a kind of abstract understanding of these matters, which they can present formally, can cite examples and so on, and even apply the concept in a certain way; but it’s still skin-deep, somewhat superficial learning. Above all, there is an emotional problem, a problem of INDIFFERENTISM or “decoupling” of mental activity from passion, which was induced from very early on in school, in university studies, and actually by our whole cultural environment. All of us of our generation — I would not exclude myself — have to struggle with this problem to one extent or another.
In order to function properly, the pedagogical discussions must be composed and read, not like sections of a textbook, but rather as miniature DRAMAS of the most rigorous sort. A drama involves powerful emotion. It is not just an “intellectual exercise.” In a well-composed and well-acted tragedy, the achievement of the desired effect on the audience, requires, that the individuals in the audience actually TAKE INTO THEIR OWN MINDS, by a powerful sort of “resonance” (empathy) the thought-processes projected by the dramatist with the aid of the characters. Under such conditions, the dramatist can operate DIRECTLY on the inner mental processes of the audience.
The simplest form of pedagogical discussion presents a TYPE of physically-demonstrable, valid transition from a hypothesis “A,” to a superior hypothesis “B,” such that the theorem-lattices, corresponding to “A” and “B” respectively, are separated from each other by an absolute mathematical discontinuity. In other words, although “B” subsumes (albeit in reworked form) that aspect of “A” which has not been invalidated by the experimental discovery, there is no way to get from “A” to “B” by deductive methods.
In some cases, an experimental demonstration directly refutes an explicit prediction of “A.” Thus, we demonstrate, that an event, which a theorem of “A” says must occur in a certain way, does NOT occur in that way. But very often, the most prominent characteristic of an experimental demonstration, is that it reveals an implicit LIMITATION in the original hypothesis “A,” rather than, so to speak, an explicit error. Something is demonstrated to occur in the real universe, which COULD NOT EXIST in the “mental world” circumscribed by hypothesis “A.” It is not necessary, that the event AS SUCH be EXPLICITLY FORBIDDEN by “A.” In fact, “A” will generally have NO CONCEPT for the event: “A” cannot account for its existence; it presents an insoluble paradox; it is “unimaginable.” And yet, the human mind (though perhaps not the mind of a radical positivist) is forced to acknowledge its existence as experimentally demonstrated.
Actually, the two cases are not so different, as might appear at first glance, if we understand the concept of “hypothesis” to mean, not just an assumption about this or that specialized area, but (at least, implicitly) a WAY OF THINKING about the ENTIRETY OF THE UNIVERSE. For, THE MIND IS ONE. In fact, our mind tends to extrapolate or “project” the underlying limitations of a given hypothesis, upon the entirety of the universe, in such a way that those limitations become “invisible” to us. So, the fish considers the fishbowl to be the entire universe, until something is demonstrated to exist outside the fishbowl. Only then, do the limits of the fishbowl become apparent.
I suspect that people miss the Earth-shaking implications of the pedagogical demonstration in question, because they are holding the hypotheses involved safely at arm’s length, rather than letting them really sink in. In other words, not really getting involved. You really have to become accustomed to the mental world of hypothesis “A” for a certain time, internalizing the corresponding mode of thinking, in order then to experience FROM “INSIDE,” so-to-speak, a crucial moment of physically demonstrable FAILURE of the mode. This requires a kind of mental dexterity and playfulness, to “forget” or “unlearn” the existence of the superior hypothesis “B” (in this case, connected with the necessary introduction of notions of “rate of change”), even though that has long become a part of our general culture. We have to use our imagination in order to place ourselves mentally, in a sense, back into the period BEFORE the discovery in question was made. In the same way, we should be able to imagine, on the basis of higher hypothesis, a future world embodying experimental refutations of hypotheses which we today regard as self-evident.
Were the Greeks and others, who developed their physical science in terms of “Euclidean geometry,” all stupid or evil? Certainly not! Although an adequate history has yet to be assembled, it is certain, that what we now call “Euclidean geometry” BEGAN as a series of REVOLUTIONARY BREAKTHROUGHS in physics, associated with the discovery and elaboration of certain general principles of CONSTRUCTION. The highest point of this development, as stressed by Kepler, was embodied in the treatment of the five regular solids, formally summarized by Euclid in the famous Tenth Book of Euclid’s {Elements}. The Greek constructive geometry, reworked by Euclid as a prototype of a formal theorem-lattice, embodied a kind of technology of thinking, far superior to what had existed prior to that (for example in the Egyptian or ancient Chinese science, as far as we know).
Thus, it were useful, before proceeding to my pedagogical discussion of the circle, to first get back into the mode of Euclidean geometry. For example, by doing constructions such as: constructing perpendiculars and parallels, constructing divisions of the circle (equilateral triangle, square, pentagon, hexagon), constructing the golden section, bisecting any given angle, dividing a line segment into any given number of equal segments, constructing the tangent to a circle at any point, constructing a demonstration of Pythagoras’ theorem, etc. Allow yourselves to get into the “mind set” of this type of approach to problems. This is the same thing I tried to do in the earlier discussion of incommensurability, where I introduced “Euclid’s algorithm” in one-dimensional geometry, not so much for its own sake, but as characteristic of a kind of approach to the problem of measurement.
Of course, the concept of CHANGE is central to every positive development of human civilization. The constructive geometry of the Greeks itself represents an attempt to deal with that. Of course, the notion of change and rate of change is “always there,” in a certain way, within higher hypothesis (see Plato’s {Timaeus}, for example). But the elaboration of a constructive geometry based explicitly on the notion of variable rate of change, came much later. Just compare the physics of Archimedes, with the physics launched in Nicolaus of Cusa’s {Docta Ignorantia} and brought to full development through the non-algebraic function theory of Huygens, Leibniz and Bernoulli. The turning-point, as far as we can see, came with the revolutionary shift in conception, embodied in Nicolaus of Cusa’s treatment of the circle and related topics, relative to the Euclidean approach of Archimedes.
Thus, you will not find the notion of “variable rate of change,” as that is understood by Leibniz, in Euclidean geometry. It’s not there. It is certainly implicit in the higher hypothesis guiding the development of Greek geometry, in Plato and so forth; but it was not yet actualized as an elaborated hypothesis. Thus, there is a constant TENSION between hypothesis and higher hypothesis, which constantly drives knowledge forward, employing a succession of unique experiments.
I hope these remarks will be helpful to you and your colleagues….
Concerning your reference to “solving” equations for the ratio of diagonal to side of an isosceles triangle, I would caution as follows: When an algebraicist says “the square root of two,” he is usually only slapping a label onto an UNFILLED GAP in his knowledge. He has not thereby developed a CONCEPT. Whereas by contrast, the paradoxical result of the geometrical construction evokes — in the mode of metaphor, and not merely pasting formal labels on things — an actual concept of a precisely-characterized, yet linearly inexpressible magnitude.
Concerning your query on light, I intend to develop some pedagogical discussions on exactly this subject, which requires a certain amount of elaboration. But from the way you expressed your question, I suspect that people have been boxing themselves a bit into a too constricted, literal, “mathematical” way of thinking about these matters. What is worthwhile to reflect about in a broad way — without necessarily expecting to come up with a “final answer” — is the question: What kind of Universe are we living in, in which such phenomena as refraction and diffraction of light can take place? Then, compare that with the “mental world” associated with the Euclidean approach to geometry.
Keep up the good work. I will be happy to help if you have any further queries.
Best wishes,
Jonathan Tennenbaum
Incommensurability and Analysis Situs Pedagogical Discussion Part II: Experimental demonstration of incommensurability
CAN YOU SOLVE THIS PARADOX?
by Jonathan Tennenbaum
Moving from singly-extended, linear geometry, to doubly-extended (plane) geometry, provides us with a relatively unique experiment for the solution of the paradox presented above.
Synthetic plane geometry excels over singly-extended linear geometry in virtue of the principle of angular extension (rotation), as embodied by the generation of the circle and its lawful divisions. Among the latter, the square (via the array of its four vertices) is most simply constructed, after the straight line itself, by twice folding or reflecting the circle onto itself.
Having constructed a square by these or related means, designate its corners (running around counterclockwise) P, Q, R, and S. {(Figure 1)} Our experiment consists in “unfolding” the relationship between the two characteristic lengths associated with the square: side PQ and diagonal PR. These two shall play the role of the segments “A” and “B” in our previous discussion. (Note: the following constructions are much easier to actually carry out, than to describe in words. The reader should actually cut out a square and do the indicated constructions.)
For our purposes it is convenient to focus, not on the whole square, but on the right triangle PQR obtained by cutting the square in half along the diagonal PR. {(Figure 2)} Note, that the sides PQ and QR have equal length (PQR is a so-called isosceles right triangle); furthermore, the angle at Q is a right angle and the angles at P and R are each half a right angle.
To compare A (= PQ) with B (= PR), fold the triangle in such a way, that PQ is folded exactly onto (part of) the line PR. Since PQ is shorter than PR, the point Q will not fold to R, but will fold to a point T, located between P and R. {(Figure 3)} By the construction, PQ and PT are equal in length. Next, note that the axis of folding, which divides the angle at P in half, intersects the side QR at some point V, between Q and R. Observe, that the indicated operation of folding brings the segment QV exactly onto the segment TV.
Observe also, that through the indicated folding of the triangle, the triangular region PVT is exactly “covered” by the region PVQ, while the smaller triangle portion VTR is left “uncovered,” as a kind of higher-order “remainder.”
Focus on the significance of that smaller triangle. Note, that in virtue of the construction itself, VTR has the same angles and shape as (i.e., is similar to) the original triangle PQR.
Euclid’s Algorithm Again
Comparing the original triangle to the smaller “remainder” triangle VTR, we can easily see that the latter’s sides are derived from the former’s by relationships very similar to, though slightly different from, the steps of the so-called Euclid algorithm! (See Part I, in our issue dated June 2, 1997.)
First, in fact, the side RT results from subtracting the segment PT, equal in length to the original triangle’s side PQ, from the original triangle’s hypotenuse PR. Second, the hypotenuse VR of the small triangle derives from the side QR of the original triangle, by subtracting the segment QV, while the latter (in virtue of the folding operation and the similarity of triangles) is in turn equal to TV, which again is equal to RT. In summary: if the side and hypotenuse of the original triangle are A and B, respectively, then the corresponding values for the smaller triangle will be A' = B - A and B' = A - A'. {(Figure 4)}
Lurking Paradox
The reader might already notice an extraordinary paradox lurking behind these relationships: Were A and B to have a common divisor C, then that same C would, in virtue of the just-mentioned relationships, also have to divide A' and B'. What is paradoxical about that? Well, the smaller triangle is similar to the larger one, so we could carry out the same construction upon it, as we did to derive it from the original triangle. The result would be a third, much smaller triangle of the same proportions, whose leg and hypotenuse, A'' and B'', would thereby also have to be divisible by the same unit C. And yet, continuing the process, we would rapidly arrive at a triangle whose dimensions would be smaller than C itself!
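(Again as an illustration only, and not part of the original discussion: the descent just described can be tried numerically. The starting pair 408 and 577, a good whole-number stand-in for side and diagonal, is my own choice.)

def descent(A, B, steps=6):
    """Repeat the construction's relations A' = B - A, B' = A - A'.  Any common
    divisor of A and B would also divide every later pair, yet the numbers
    shrink rapidly, which is the paradox stated above."""
    chain = [(A, B)]
    for _ in range(steps):
        A, B = B - A, A - (B - A)
        chain.append((A, B))
    return chain

# Whole-number guess for side and diagonal of a square: A = 408, B = 577.
for A, B in descent(408, 577):
    print(A, B, round(B / A, 4))
# The printed ratios drift away from the square's true proportion: no pair of
# whole numbers can sustain it, while the pairs themselves dwindle away.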
We are thus faced with the inescapable conclusion, that A and B cannot have a common divisor in the sense of linear Euclidean geometry. The relationship between A and B cannot be expressed as a simple ratio of whole numbers. As Kepler puts it in his “World Harmony,” the ratio of A to B is Unaussprechbar: it cannot be “spoken”; by which Kepler means, it is not communicable in the literal, linear domain. But Kepler emphasizes at the same time, that it is {knowable} ({wissbar}), and is precisely communicable {by other means.}
Evidently, the cognition of such linearly incommensurable relationships, requires that we abandon the notion, that simple linear magnitudes (so-called scalar magnitudes) are ontologically primary. Our experiment demonstrates, that such magnitudes as the ratio of the diagonal to the side of a square (commonly referred to algebraically as the square root of two) are not really linear magnitudes at all, but are “multiply extended,” geometrical magnitudes. They call for a different kind of mathematics. What we lay out on the textbook “number line” are only shadows of the real process, occurring in a “curved” universe. This coheres, of course, with Johannes Kepler’s reading of the significance of Golden Mean-centered spherical harmonics in the ordering of the solar system, and in microphysics as well.
Analysis Situs Relationship
The relevant relationship for analysis situs, in the preceding discussion, is not between the diagonal and side of a square; but rather that between the hypotheses underlying the linear domain, sketched in Part I of our discussion, and the superior standpoint implied in Part II.
A final note: Observe the rotation and change of scale of the smaller triangle relative to the larger. Our experimental {transformation} of the larger triangle into the smaller, similar triangle, as an {inherent feature} of the relationship of A to B, already points in the direction of Gauss’ complex domain, and the preliminary conclusion, that the complex numbers are ontologically primary, more real, than the so-called “real numbers.”
(Anticipating what might be developed in other locations: The transformation constructed above, belongs to the so-called “modular group” of complex transformations, which are key to Gauss’ theory of elliptic functions, quadratic forms, and related topics. Gauss, in effect, reworks the central motifs of Greek geometry, from the higher standpoint of the complex domain.)

Hypergeometric Curvature

by Bruce Director
Let us turn our investigations to the domain of manifolds of a Gauss-Riemann hypergeometrical form. There is no need, as too often happens, for your mind to glaze over as you read the above-mentioned words. Lyn has given us ample guidance for this effort, most recently in his memo on non-linear organizing methods.
Over the next few weeks, let us set a course, by way of several preliminary exercises, that will shift our investigations from the manifolds of constant curvature that we’ve been looking at for the last couple of months, to investigations of manifolds of non-constant curvature.
A WARNING: these exercises should not be taken as some type of definition of the concepts involved, any more than bel canto vocalization should be taken as a substitute for singing classical compositions. However, without the former, the latter is unattainable.
As a first step, conduct the following experiment, which was alluded to in the previous pedagogical discussion on the pentagramma mirificum:
Think of a surface of zero curvature, represented as a flat piece of paper. This manifold is characterized by the assumption of infinite extension in two directions. The intersection of these two infinitely extended directions produces a singularity: a right angle, to which all geodetic action is referred.
Now, draw a right triangle, labelling the vertices BAC, with the right angle at A. Extend the hypotenuse BC to some arbitrary point D. (B, C, and D will all lie on the same line.) At D, draw a line perpendicular to line BCD, and extend line AC until it intersects the perpendicular from D. Label that point of intersection E. (You will now have produced two right triangles, with a common vertex at C. The extension of leg AC of the first triangle will form the hypotenuse CE of the second triangle CDE.) Continue this action by extending line ACE to some point F. At F, produce a perpendicular line, and extend leg DE of triangle CDE until it meets the new perpendicular at some point G.
Now you will have three right triangles, BAC, CDE, and EFG, forming a kind of chain. Continue to produce this chain of right triangles, by extending the hypotenuse EG of triangle EFG to some arbitrary point H. Draw a perpendicular at H and extend leg FG until the two meet at some point I. Now the chain has four triangles in it.
Keep adding to the chain of triangles in the same manner. You will notice that after every three triangles, the chain “turns” a corner. After the chain has eight triangles, if the appropriate lengths were chosen, the chain will close. The closed chain of triangles will resemble two intersecting rectangles. (We leave it to the reader to discover what the appropriate lengths are for the chain to exactly close. As you will discover, the fundamental point is not lost, even if arbitrary lengths are used. In that case, the orientation of the 9th triangle will be identical to the 1st.)
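(For readers who would rather check the closure claim by machine than by ruler and paper: the following sketch is mine, not part of the original discussion, and carries out the plane construction with coordinates. The starting triangle and the list of arbitrary extension lengths are illustrative choices; the check at the end confirms that the 9th triangle points the same way as the 1st.)

import math

def next_triangle(P, Q, R, t):
    """One link of the chain: extend the hypotenuse P->R beyond R by the length t
    to get the new right-angle vertex D, erect the perpendicular to PR at D, and
    extend the leg Q->R until it meets that perpendicular at E.  The new triangle
    is (R, D, E), with its right angle at D."""
    dx, dy = R[0] - P[0], R[1] - P[1]
    h = math.hypot(dx, dy)
    dx, dy = dx / h, dy / h                      # unit vector along the old hypotenuse
    D = (R[0] + t * dx, R[1] + t * dy)           # new right-angle vertex
    nx, ny = -dy, dx                             # unit vector perpendicular to PR
    ux, uy = R[0] - Q[0], R[1] - Q[1]            # leg direction, to be extended
    # Intersect the line R + s*(ux,uy) with the line D + w*(nx,ny) (Cramer's rule).
    det = ux * (-ny) - uy * (-nx)
    s = ((D[0] - R[0]) * (-ny) - (D[1] - R[1]) * (-nx)) / det
    E = (R[0] + s * ux, R[1] + s * uy)
    return R, D, E

def hypotenuse_direction(P, R):
    return round(math.degrees(math.atan2(R[1] - P[1], R[0] - P[0])) % 360.0, 6)

# An arbitrary (non-isosceles) starting triangle BAC with the right angle at A.
B, A, C = (0.0, 2.0), (0.0, 0.0), (1.0, 0.0)
P, Q, R = B, A, C
print("1st triangle:", hypotenuse_direction(P, R))
for t in [0.7, 1.3, 0.5, 1.1, 0.9, 1.4, 0.6, 1.2]:    # eight arbitrary extensions
    P, Q, R = next_triangle(P, Q, R, t)
print("9th triangle:", hypotenuse_direction(P, R))
# The two printed directions agree: each link turns the hypotenuse through one of
# the two acute angles of the (similar) triangles, alternately, so eight links add
# up to one full revolution of 360 degrees.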
Now produce the same action on a sphere, i.e., a surface of constant positive curvature. Begin with a right spherical triangle BAC. Extend its hypotenuse to some point D. At D, draw an orthogonal great circle arc. Extend the side AC until it intersects the orthogonal arc you just drew from D. Continue producing this chain of spherical triangles. You will discover, that the chain of right triangles on the sphere, closes after five “links” have been produced. In other words, the pentagramma mirificum!
(If each hypotenuse is extended to an arc length of 90 degrees, the chain will perfectly close after 5 links. If an arbitrary arc length is used, as in the plane, the chain will not perfectly close, but the orientation of the 6th and 1st triangle will be the same. On a sphere, the lengths need not be arbitrary, as a 90 degree arc length is determined by the characteristic curvature of the sphere. On a plane, no such ability to determine length exists.)
Now, think about the results of this experiment. The same action was performed on a manifold of constant zero curvature and a manifold of constant positive curvature. The same action, on two different manifolds, produces two distinctly different periodicities. What in the naive imagination’s conception of the plane and sphere, accounts for two completely different periodicities arising from exactly the same process?
Now try a second experiment:
Stand in a room fairly close to two walls. Mark a dot on the ceiling directly above your head. Point to that dot and rotate your arm down 90 degrees so that you’re pointing to a place on the wall directly in front of you. Mark a dot on that wall. Point to that dot, and rotate your arm 90 degrees horizontally to a point on the wall directly to your right (or left). Mark a dot on that wall at that point.
As presented in previous pedagogicals, the manifold of action, that generated the positions of these three dots, is characteristic of a surface of constant positive curvature, i.e. a sphere. The three dots are vertices of a spherical equilateral triangle.
Now, take some string and masking tape and connect the dots to one another with the string. Since the strings form the shapes of catenaries, those same dots are now the vertices of a negatively curved triangle.
Finally, in your mind, connect the dots with straight lines, and those same dots represent vertices of a Euclidean triangle.
From this construction, the same three positions lie on three different surfaces.
But, there is also another type of “surface” represented in this experiment: a hypergeometric manifold characterized by the change in curvature from negative, to zero, to positive curvature.
This is not simply a trivial classroom experiment. In our previous discussions, we generated the concept of a sphere, as a manifold of measurement of astronomical observations. Instead of being in a room, the three dots can be thought of as stars, whose positions on the celestial sphere are 90 degrees apart.
But, couldn’t the relationship of these three stars also be conceived to lie on a surface of constant negative curvature? In 1819, Gauss’ collaborator Gerling forwarded to Gauss the work of a friend of his named Schweikart, a professor of law whose avocation was mathematics and astronomy. Schweikart had developed a conception, which he called “Astralgeometrie,” that conceived of the spatial relationship among astronomical phenomena as a negatively curved manifold. Gauss replied that Schweikart’s ideas gave him “uncommonly great pleasure” to read, and that he agreed with almost all of them. In his reply, Gauss added a few additional ideas to Schweikart’s hypothesis.
It should come as no surprise, that Gauss would receive Schweikart’s work so warmly. Three years earlier, Gauss had expressed an even more advanced notion, in his April 1816 letter to Gerling, that we have cited several times before, most recently two weeks ago:
“It is easy to prove, that if Euclid’s geometry is not true, there are no similar figures. The angles of an equal-sided triangle, vary according to the magnitude of the sides, which I do not at all find absurd. It is thus, that angles are a function of the sides and the sides are functions of the angles, and at the same time, a constant line occurs naturally in such a function. It appears something of a paradox, that a constant line could possibly exist, so to speak, a priori; but, I find in it nothing contradictory. It were even desirable, that Euclid’s Geometry were not true, because then we would have, a priori, a universal measurement, for example, one could use for a unit of space (Raumeinheit), the side of an equilateral triangle, whose angle is 59 degrees, 59 minutes, 59.99999… seconds.”
I’m sure you found Gauss’ choice of a triangle whose angle is 59 degrees, 59 minutes, 59.99999… seconds curious. But, think about it in the context of the above reference to a hypergeometric manifold characterized by a change from negative to zero, to positive curvature. The surface of zero curvature, is nothing more than a singularity, in that hypergeometric manifold. The sum of the angles of a triangle in a manifold of negative curvature will be less than 180 degrees. The 60 degree equilateral triangle is the maximum. On a surface of positive curvature, the sum of the angles of a triangle is always greater than 180 degrees. The 60 degree equilateral triangle in this manifold, is the absolute minimum.
The triangle Gauss proposes for an absolute length, does not exist in a manifold of negative curvature, nor in a manifold of positive curvature. And, on a surface of zero curvature, it can no longer define an absolute length. On the other hand, in a hypergeometric manifold, that characterizes the change from negative, to zero, to positive curvature, such a triangle represents a unique singularity, a maximum and a minimum, existing in the infinitesimally small interval in between two mutually distinct curvatures.
Enjoy the exercises. We’ll be back next week.
The Case For Knowing It All
by Bruce Director
A common mistake, one with wider general implications, can occur when replicating Gauss’ method for determining the Keplerian orbit of a heavenly body from a small number of observations within a small interval of the orbit. The error often takes the form of asking the rhetorical question, “What did Gauss do, exactly?” and answering that question with a rhetorical step-by-step summary of a procedure for calculating the desired orbit. In fact, Gauss himself never published, or even wrote down, any such procedure. Gauss determined the orbit of Ceres in the latter part of 1801, and communicated only the result of that determination, so that astronomers watching the sky could re-discover the previously observed asteroid. It wasn’t until 8 years later that Gauss, after repeated requests, published his “Summary Overview,” and a year after that, his “Theory of the Motion of the Heavenly Bodies Moving About the Sun in Conic Sections.”
Both these works refrain completely from presenting any step-by-step procedure — because no such procedure existed. Instead, Gauss presented, first in summary form, then in a more expansive way, the totality of interconnected principles that underlay the motion of bodies in the solar system. These principles are not a collection of independent functions that are mutually interdependent. Rather, that mutual connectedness is itself a function, a representation of a higher principle that governs planetary motion.
To illustrate this point, think of Kepler’s principles of planetary motion, maliciously mis-characterized as Kepler’s three laws. The elliptical nature of the orbit, the constant of proportionality for each orbit (the “equal area” principle), and the constant of proportionality between the squares of the periodic times and the cubes of the semi-major axes of the elliptical orbits, were each demonstrated by Kepler as a valid principle governing planetary motion. But (as those who’ve worked through the Fidelio article will recognize), all three principles are inseparably linked in each small interval of every planetary orbit. It is the functional relationship among these principles, the “hypergeometric” relationship, that is the essence of Kepler’s discovery.
It is the “disassembly” of this hypergeometric relationship, into separate independent functions, that has been the hysterical obsession of the oligarchy and its lackeys, from Newton, to Euler, to today’s academics.
Leibniz, in a letter to Huygens, exposed this hoax from the get-go:
“For although Newton is satisfactory when one considers only a single planet or satellite, nevertheless, he cannot account for why all the planets of the same system move over approximately the same path, and why they move in the same direction….”
Or, from another angle: Nearly 20 years after his discovery of the orbit of Ceres, Gauss took on the task of measuring the Kingdom of Hannover, by means of a geodetic triangulation. In the course of this investigation, which had many practical implications, Gauss demonstrated a similar “hypergeometric” relationship. Each triangle he measured was “infinitesimally” small with respect to the entire Earth’s surface, and the deviation of those triangles from flat ones was also small. As the network of triangles was extended, however, the small deviation in each individual triangle became an increasingly significant factor in the measurement of the larger area covered by the connected network of these triangles. Not only did the area measured deviate from flat, but it also deviated from a spherical surface, and more closely resembled an ellipsoidal surface. Furthermore, Gauss discovered an “infinitesimally” small deviation between the astronomical determination of his position on the Earth’s surface and the position determined by his triangulation. This led Gauss to the discovery of the deviation of the Earth’s surface from one of regular non-constant curvature, such as an ellipsoid, to a surface of irregular, non-constant curvature, that today is called the Geoid.
This defines a functional relationship between the measurement of the relatively “infinitesimally” small triangles and the multiple surfaces on which these measurements were performed. That is, each triangle measured had to be thought of simultaneously as being on a surface of zero curvature (flat), constant curvature (spherical), regular non-constant curvature (ellipsoidal), and irregular non-constant curvature (the Geoid). The characteristics of each triangle change from surface to surface. But, in the real world, these surfaces are not independent surfaces, simply overlaid on top of each other. There is a functional relationship among them. Gauss’ genius was to recognize, not only the interaction between the characteristic of curvature of the surface, and the characteristic of the triangles measured, but also the functional relationship that transformed one surface into another.
Or, from an even different angle: In 1832, after nearing the completion of his geodetic survey, Gauss published the results of the work he had been doing along the way. In his second treatise on bi-quadratic residues, Gauss extended the concept of prime numbers into the complex domain, transforming Eratosthenes’ Sieve. Gauss showed that the characteristics of prime numbers were also a function of the nature of the surface, such that, for example, 5 is transformed from a prime to a composite number. The number 5 exists in both domains, but its nature changes, as the domain changes. The number 5 is not two separate independent numbers. Again there is a functional relationship between these two domains, the transformation, that provokes our minds to a higher mode of cognition.
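(The fact just cited can be checked in a line or two; the following is not part of the original discussion, it simply uses Python’s built-in complex numbers, and the helper function is an illustrative name of my own.)

# In the complex domain 5 ceases to be prime: it splits as (2 + i)(2 - i).
print((2 + 1j) * (2 - 1j))        # (5+0j)

# An odd prime splits in this way exactly when it is a sum of two squares.
def two_square_split(p):
    """Return (a, b) with p = a*a + b*b, hence p = (a + bi)(a - bi), or None."""
    for a in range(1, int(p ** 0.5) + 1):
        b = round((p - a * a) ** 0.5)
        if b * b == p - a * a:
            return a, b
    return None

print(two_square_split(5))        # (1, 2): 5 = 1 + 4, so 5 is composite here
print(two_square_split(7))        # None: 7 remains prime in the complex domain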
The above three examples, presented in summary form, have been elaborated in previous pedagogical discussions, and will be further elaborated in future ones. The intent in presenting this summary juxtaposition, is to provoke some thought on the functional relationship among these three. They are not three independent concepts. There is a connection, whose active contemplation, gives rise to a conception of functional relationship, that governs the generation of each concept.
As Lyndon LaRouche has wisely advised us, “If you want to know anything, you have to know everything.”

Higher Arithmetic as a Machine Tool

by Bruce Director
Last week's pedagogical discussion ended with the provocative question: "If there exists no grand mathematical system which can combine and account for the various cycles, then how can we conceptualize the 'One' which subsumes the successive emergence of new astronomical cycles as apparent new degrees of freedom of action in our Universe? How do we master the paradoxical principle of Heraclitus, that 'nothing is constant except change'?"
This problem was attacked in a very simple and beautiful way by C.F. Gauss, using purely the principles of higher arithmetic, in his determination of the Easter date. Since the last conference presentation, I have received several requests to elaborate more completely the derivation of Gauss’ algorithm. While the development of Gauss’ program requires no special mathematical skills other than simple arithmetic, it does require the conceptual skills of higher arithmetic, i.e., the ability for the mind to unify an increasingly complex Many into a One. This is a subjective question. We are not looking for one mathematical formula, but a series of actions, which, when undertaken, enable our minds to wrestle a seemingly unwieldy collection of incommensurable cycles into our conceptual grasp. In a certain sense, we are designing and building a machine tool to do the job, but only the entire machine can accomplish the task. No single part, or collection of parts, will be sufficient. The whole machine includes not only the “moving parts,” but the concepts behind those moving parts. All this, the parts and the concepts, must be thought of as a “One,” or else, the machine, i.e., your own mind, comes to a screeching halt, while the earth, the moon, the sun, and the stars, continue their motion, in complete defiance of your blocking.
Over the next few weeks, we will re-discover Gauss’ construction. But, in order to build this machine, you must be willing to get your hands dirty and break a sweat, make careful designs, cut the parts to precision, lift heavy components into place, and finally apply the energy (agape) necessary to get the machine moving and keep it moving.
In the beginning of his essay, “Calculation of Easter,” published in the August 1800 edition of Freiherrn v. Zach’s “Monthly Correspondence for the Promotion of News of the Earth and Heavens,” Gauss states:
“The purpose of this essay, is not to discuss the usual procedure to determine the Easter date, that one finds in every course on mathematical chronology, and as such, is easy enough to satisfy, if one knows the meaning and use of the customary terms of art, such as Golden Number, Epact, Easter Moon, Solar Cycle, and Sunday Letter, and has the necessary helping tables; but this task is to give, independently from those helping conceptions, a purely analytical solution based on merely the simplest calculation-operations. I hope, this will not be disagreeable, not only to the mere enthusiast who is not familiar with those methods, or for the case where one wishes to determine the Easter date, under conditions in which the necessary helping devices are not at hand, or for a year which cannot be looked up in a calendar; but it also recommends itself to the expert by its simplicity and flexibility.”
This article was published after Gauss had completed, and was awaiting publication of, the “Disquisitiones Arithmeticae.” Of the principles we will develop here, Gauss says:
“The analysis, by means of which the above formulas are founded, is based properly on the foundations of {Higher Arithmetic}, in consideration of which I can refer presently to nothing written, and for that reason it cannot be freely presented here in its complete simplicity: in the mean time, the following will be sufficient, in order to lay the foundation of the direction of the concept and to convince you of its correctness.”
Gauss’ choice of the problem of determining the Easter date, to demonstrate the validity of the principles of his Higher Arithmetic, is not without a healthy amount of irony, but the resulting calculation was by no means Gauss’ only goal. As with LaRouche’s current program of pedagogical exercises, Gauss recognized the effectiveness for increasing the conceptual powers of the human mind, of working through specific examples, which demonstrate matters of principle. Gauss continued this approach in all his work, demonstrating new principles as he conquered one problem after another. Gauss repeatedly found that in these matters of principle, connections were discovered between areas of knowledge which were previously thought to be unrelated.
From the earliest cultures, the various cycles described last week were accounted for separately, and their juxtaposition was studied with the aid of the different tables and calculations Gauss mentioned above. These methods were adequate for determining the date of Easter from year to year. Gauss’ calculation is purely a demonstration of the power of the human mind, to create a new mathematics, capable of bringing into a “One” that which the previous state of knowledge considered unintelligible. For that reason, it suits our present purpose.
To begin, we should think about the problem we intend to work through: To determine the date of Easter for any year. Easter occurs on the first Sunday, after the first full Moon (called the Paschal Moon) after the Vernal Equinox. This entails three incommensurable astronomical cycles: the day, the solar year, and the lunar month; and one socially-determined cycle, the seven-day week.
Now look more closely at what this “machine-tool” must do:
1. It must determine the number of days after the vernal equinox, on which the Paschal Moon occurs. This changes from year to year. So the machine must have a function, which modulates the solar year (365.24 days) with the lunar month (29.53 days).
2. Once this is determined, the machine must also determine the number of days remaining until the next Sunday.
The incommensurability of the solar year and the lunar month is an ancient conceptual problem, upon whose resolution man’s potential for economic progress rested. If one relied solely on the easier-to-see lunar month, the seasons (which result from changes of the position of the earth with respect to the sun) would occur at different times of the year, from one year to the next. On the other hand, if one relies on the solar year, some intermediate division between the day and year is necessary, to measure smaller intervals of time. Efforts to combine both the lunar cycle and the solar cycle linearly into one calendar create a complicated mess. The Babylonian-influenced Hebrew calendar is an example, requiring a special priestly knowledge just to read the calendar. Shortly after the publication of the Easter formula, Gauss applied the same method to a much more complex chronological problem, the determination of the first day of Passover, and in so doing subjugated the Babylonian lunisolar calendar to the powers of Higher Arithmetic.
In 432 B.C., the Greek astronomer Meton reportedly discovered that 19 solar years contained 235 lunar months. This is the smallest number of solar years that contains very nearly an integral number of lunar months. There is evidence that other cultures, including the Chinese, discovered this same congruence earlier. By the following simple calculation, we can re-discover Meton’s discovery. One solar year is 365.2425 days. Twelve lunar months are 354.36 days (12 x 29.53), or about 11 days less than the solar year. This means that each phase of the moon will occur 11 days earlier than the year before, when compared to the solar calendar.
(For example, if the new moon falls on January 1, then after 12 lunar months, a new moon will fall on December 20 — 11 days before the next January 1. The next new moon will occur on January 19, 19 days after the next January 1.)
One solar year contains 12.368 lunar months (365.2425 / 29.530). In 19 years, there are 6939.6075 days (365.2425 x 19). In 19 years of 12.368 lunar months, there are 6939.3137 days (19 x 12.368 x 29.530). That is, if you take a cycle of 6939 days, or 19 solar years, the phases of the moon and the days of the solar year become congruent.
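(As a quick check of the arithmetic above, using the same approximate cycle lengths; the variable names are mine, and the 235 months are taken directly rather than through the rounded figure of 12.368.)

SOLAR_YEAR = 365.2425        # days, the value used in the text
LUNAR_MONTH = 29.530         # days, the value used in the text

print(SOLAR_YEAR / LUNAR_MONTH)              # about 12.368 lunar months per year
print(19 * SOLAR_YEAR, 235 * LUNAR_MONTH)    # about 6939.61 and 6939.55 days
print(19 * SOLAR_YEAR - 235 * LUNAR_MONTH)   # the two cycles differ by mere hours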
Despite Meton’s discovery, the Greek calendar was still encumbered by a failed effort to combine the lunar months and solar year into a single linear calendar cycle. Since 12 lunar months are 11 days short of the solar year, the Metonic calendar, like the Babylonian-influenced Hebrew calendar, required the intercalation (insertion) of leap months in years 3, 5, 8, 11, 13, and 16 of the 19-year cycle.
In his “History,” Herodotus remarks on the inferiority of the Greek method to that of the Egyptians, whose calendar was based only on the harder-to-measure solar year. “But as to human affairs, this was the account in which they all agreed: the Egyptians, they said, were the first men who reckoned by years and made the year consist of twelve divisions of the seasons. They discovered this from the stars (so they said). And their reckoning is, to my mind, a juster one than that of the Greeks; for the Greeks add an intercalary month every other year, so that the seasons agree; but the Egyptians, reckoning thirty days to each of the twelve months, add five days in every year over and above the total, and thus the completed circle of seasons is made to agree with the calendar.”
The oligarchical view of this matter is expressed by the Chorus-Leader in Aristophanes’ “The Clouds”:
“As we prepared to set off on our journey here, 
The Moon by chance ran into us and said she wanted 
To say hello to all the Athenians and their allies, 
but she’s most annoyed at your treating her so shamefully 
despite her many evident and actual benefactions. 
First off, she saves you at least ten drachmas a month in torches: 
that’s why you all can say, when you go out in the evening, 
No need to buy a torch, my boy, the moonlight’s fine! 
She says she helps in other ways too. But you don’t keep 
your calendar correct; it’s totally out of sync. 
As a result, the gods are always getting mad at her, 
whenever they miss a dinner and hungrily go home 
because you’re celebrating their festival on the wrong day, 
or hearing legal cases or torturing slaves instead of sacrificing. 
And often, when we gods are mourning Memnon or Sarpedon,
you’re pouring wine and laughing. That’s why Hyperbolus, 
this year’s sacred ambassador, had his wreath of office 
blown off his head by us gods, so that he’ll remember well 
that the days of your lives should be reckoned by the Moon.”
In 46 B.C., with the adoption of the Julian calendar, all attempts to incorporate the lunar cycle into the calendar were abandoned. But, it wasn’t until Gauss’ development of higher arithmetic, ironically based on a re-working and non-linear extension of classical Greek astronomy and geometry, that man had the ability to encompass the seemingly incommensurable lunar month and solar year into a One.
With these discoveries in mind, we can begin to construct the first components of the machine, which will determine the number of days from the vernal equinox to the Paschal Moon. If we fix the vernal equinox at March 21, our first component must determine some number D, which, when added to March 21, will be the date of the Paschal Moon. (March 21 was the date set at the Council of Nicea. The actual Vernal Equinox can sometimes occur in the late hours of March 20, or the early hours of March 22.) The Paschal Moon will occur on one of 30 days, the earliest being March 21, the latest being April 19. The variation from year to year, among these 30 possible days, is a reflection of the 19-year Metonic cycle. So, our machine must make two cycles, the 19-year Metonic cycle and this 30-day cycle, into a One.
This requires some thinking. Since 12 lunar months are 11 days less than the solar year, any particular full moon will occur 11 days earlier than the year before. Naive imagination tells us that if we set our machine on any given year, all it need do is subtract 11 days to find the Paschal Moon on the next year. But we have a boundary condition to contend with. The Paschal Moon can never occur before March 21. So, when the Paschal Moon occurs in March, and our machine subtracts 11 days, to get the date of the Paschal Moon the following year, the new date will be before March 21. That will do us no good at all.
To determine the date of the Paschal Moon from one year to the next, our machine must do something different when the Paschal Moon occurs in March, than when it occurs in April. When the Paschal Moon occurs in April, the machine must subtract 11 days, to determine the date for the following year. But when it occurs in March, the machine must add 19 days to determine the date for the following year.
To construct this component of the algorithm, Gauss began with a known date, and abstracted the year-to-year changes, with respect to that date. In reference to the 19-year Metonic cycle, he chose to begin the calculation with the date of the Paschal Moon in the first year of that cycle (i.e., those years which, when divided by 19, leave 0 as a remainder, or are congruent to 0 relative to modulus 19). In the 18th and 19th centuries, that date was April 13, or March 21 + 23 days.
For clarity, we can make the following chart:
Year    Residue (Mod 19)    Paschal Moon    # Days After Equinox (D)
1710        0               April 13        23 days
1711        1               April 2         23 - 11 days
1712        2               March 22        23 - (2 x 11)
1713        3               April 10        23 - (2 x 11) + 19
1714        4               March 30        23 - (3 x 11) + 19
(The reader is encouraged to complete this entire chart. When you do this, notice the interplay between the 19-year and 30-day cycles.)
From the chart, you should be able to see the relevant oscillation. For example, for year 1713, were we to have subtracted another 11 days from the year before, we’d arrive at the date of March 11. A full moon certainly occurred on that day, but it wasn’t the Paschal Moon, because March 11 is before the Vernal Equinox. The Paschal Moon, in the year 1713, occurred 30 days later than March 11, on April 10. (March 22 – 11 + 30; or March 22 + 19)
The number of days added or subtracted changes from year to year, in a seemingly non-regular way. What is constant is change. But this step-by-step process, is really no different than if we had a series of tables.
Gauss’ next step, is to transform the two actions, subtracting 11 days or adding 19 days, into one action. There are many ways this can be done. The determination of the appropriate one, is a matter of analysis situs, and involves one of the most important methods of scientific inquiry: {inversion}. The principle of inversion is characteristic of all Gauss’ work. It is one thing to be given a function, and then calculate the result. The inverse question is much more difficult. Given a result, what are the conditions which brought about that result? In the latter case, there are many possible such conditions, which cannot be ordered without consideration of higher dimensionalities. (This subject will be treated more in future pedagogical discussions.)
Our immediate problem can be solved, if we think about it from the standpoint of inversion. All the year-to-year differences between the dates of the Paschal Moon, are either congruent relative to modulus 11 or modulus 19. But neither of these moduli are relevant for the task at hand. A different modulus must be discovered, which is not self-evident from the chart, but is evident from the higher dimensionality of the complete process. As discovered earlier, the Paschal Moon occurs on one of 30 days between March 21, and April 19. We need to discover a means, under which the oscillation of the dates of the Paschal Moon, can be ordered with respect to modulus 30. If we number these days 0-29, the numbers 0 to 29 each represent different days, and are all non-congruent relative to modulus 30.
Gauss chose to combine the two actions into one, by adding 19 days to {every} year, and subtracting 30 days from those years in which the Paschal Moon occurs in April. (For example, in our chart above, the year 1711 would be calculated: 23 + 19 - 30; the year 1712 would be calculated: 23 + (2 x 19) - (2 x 30).)
Since numbers whose differences are divisible by 30 are congruent relative to modulus 30, adding or subtracting 30 days from any interval will not change the result. Gauss has transformed this problem into a congruence relative to a single modulus: 30. So the first component of our machine takes the year, finds the residue relative to modulus 19, multiplies that by 19, adds 23, divides by 30, and the remainder is the number of days from the Vernal Equinox to the Paschal Moon.
Or in Gauss’ more condensed language: Divide the year by 19 and call the remainder a. Then divide (23 + 19a) by 30 and call the remainder D. Add D to March 21 to get the date of the Paschal Moon.
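(The condensed rule can be put straight into a few lines of code; this sketch is mine, and it simply uses Python’s standard date arithmetic to add D days to March 21. The constant 23 is the one given above for the 18th and 19th centuries.)

from datetime import date, timedelta

def paschal_moon(year):
    """Divide the year by 19 and call the remainder a; divide 23 + 19a by 30 and
    call the remainder D; the Paschal Moon falls D days after March 21."""
    a = year % 19
    D = (23 + 19 * a) % 30
    return date(year, 3, 21) + timedelta(days=D)

# Reproduce the chart above.
for year in range(1710, 1715):
    print(year, year % 19, paschal_moon(year))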
No mountain was ever climbed that didn’t require some sweat. Or, put another way, in order to build the Landbridge, you have to move some dirt.
Next week: From the Paschal Moon to Easter.
Higher Arithmetic as a Machine Tool–Part II
by Bruce Director
Last week we completed the first step of the development of Gauss’ algorithm for calculating the Easter date, using the principles of Higher Arithmetic. This week we continue the climb. Those experienced in climbing mountains are aware, that as one approaches the peak, the climb often steepens, requiring the climber to find a second burst of energy. Even though last week’s climb might have required some exertion, you’ve had a week’s rest, and a national conference in the intervening period. Armed with the higher conceptions of man expressed by Lyn and Helga at the conference, everyone is well-equipped to complete this climb.
Again it is important to keep in mind, that the determination of the date of Easter was not a goal in itself for Gauss. Rather, Gauss understood that working through problems, which required the discovery of new principles, was the only way to advance human knowledge.
Last week, we worked through the first part of the task of determining the date of Easter. Since Easter is the first Sunday after the first full moon, after the vernal equinox, the first job of our machine tool, is to determine the date of the first full moon. This requires bringing into a One, three astronomical cycles: the day, the lunar month, and the solar year. The second part of the job, to determine the number of days from the Paschal Moon until the next Sunday, requires bringing into a One, various imperfect states of human knowledge.
It was a major step forward, for society to abandon all attempts to reconcile the lunar and solar years into one linear calendar, and adopt the solar year, as the primary cycle on which the calendar was based. The conceptual leap involved was to base the calendar on the more difficult to determine solar year, instead of the easier to see lunar months. The implications of this conceptual leap for physical economy are obvious. What is worth emphasizing here, is, that this is a purely subjective matter, whose resolution determines physical processes. This development, however, was not without its own problems.
While the disaster of trying to reconcile the lunar and solar cycles becomes evident within the span of several years, the problems of the solar calendar don’t become significant within the span of a single human life.
As discussed last week, the solar year is approximately 365.24 days. In 46 B.C., the calendar reform under Julius Caesar set the solar year at 365.25 days, which was reflected in the calendar by three years of 365 days, followed by a leap year of 366 days. Under this arrangement, the calendar and the assumed solar year coincide every four years, and man has imposed on the astronomical cycles a new four-year cycle. From the standpoint of Gauss’ Higher Arithmetic, leap years are congruent to 0 relative to modulus 4, followed in succession by non-leap years congruent to 1, 2, or 3 relative to modulus 4.
Like all oligarchs who delude themselves that their rule will last forever, Julius Caesar’s arrogance in ignoring the approximately .01 discrepancy between his year and the actual astronomical cycle became evident long after his Empire had been destroyed. This .01 discrepancy, while infinitesimal with respect to a single human life, becomes significant with respect to generations, causing the calendar to fall one day behind roughly every 130 Julian years. By the late 16th century, this discrepancy had grown to about ten days, so the astronomical event known as the vernal equinox was occurring on March 11th instead of March 21st. The economic implications of such a discrepancy are obvious.
This led to the calendar reform of Pope Gregory XIII in 1582. In the Gregorian calendar, the leap year is dropped every century year, except those century years divisible by 400. This decreases the discrepancy of the .01 day, but doesn’t eliminate it altogether. In order to get the years back into synch with the seasons, Pope Gregory dropped ten days from the year 1582. Other countries reformed their calendars much later, having to drop more days the longer they waited. The Protestant states of Germany, where Gauss lived, didn’t adopt the calendar reform until the early 1700s. The English didn’t change their calendar until 1752. The Russians waited until the Bolshevik revolution.
The other human cycle involved in this next step of the problem is the seven-day week. There is no astronomical cycle which corresponds to the seven-day week. While the Old Testament’s Exodus attributes the seven-day week to God’s creation of the universe, Philo of Alexandria, in his commentaries on the Creation, cautions that this cannot be taken literally. Philo says the Creation story in Genesis 1 must be thought of as an ordering principle, not a literal time-table. Here is another example of what Lyn has discussed about the unreliability of a literal reading of the Old Testament. The idea that creation took seven days shows up in Exodus, contradicting the conception of an ordering principle of Creation in Genesis 1.
Of importance for our present problem is that the seven-day weekly cycle runs continuously, and independently from the cycles of the months (either calendar or lunar) and the years. What emerges is a new cycle which has to be accounted for. Each year, the days of the week occur on different dates. For example, if today is Saturday, September 6, then next year September 6 will be on a Sunday. However, when a leap year intervenes, the calendar dates move up two days. This interplay between the seven-day week and the leap year creates a 28-year cycle, before the days of the week and the calendar dates coincide again. This cycle also has to be accounted for in Gauss’ algorithm.
So, to climb that last step, from the Paschal Moon to Easter, we have to bring into a One, these two human cycles, the leap year, and the seven-day week.
Before going any further, one must first remember a principle of Higher Arithmetic. Under Gauss’ conception of congruence, it is the {interval} between the numbers, on which the congruence is based, not the numbers themselves. We are relating numbers by their intervals. Consequently, when we add or subtract multiples of the modulus to any given number, the congruence relative to that modulus doesn’t change. For example, 15 is congruent to 1,926 relative to modulus 7. The interval between 15 and 1,926 (1,911) is divisible by 7. If, for example, we subtract 371 (7×53) from 1,926, the result will still be congruent to 15. The reader should do several experiments with this concept, in preparation for what follows.
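(The suggested experiments take only a few lines; these checks are mine, not Gauss’.)

# Congruence depends only on the interval between the numbers:
print((1926 - 15) % 7)            # 0: the interval 1,911 is divisible by 7
print(1926 % 7, 15 % 7)           # both leave remainder 1, so 1,926 = 15 (mod 7)
print((1926 - 371) % 7)           # still 1: subtracting 371 = 7 x 53 changes nothing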
It were useful to restate here Gauss’ entire algorithm:
Divide the year by 19 and call the remainder a
Divide the year by 4 and call the remainder b
Divide the year by 7 and call the remainder c
Divide 19a+23 by 30 and call the remainder d
(This was discovered last week)
Divide 2b+4c+6d+3 by 7 and call the remainder e
(This is today’s task.)
The number of days from the Paschal Moon until Easter Sunday can be at least 1 and at most 7 days. Because Easter is the first Sunday {after} the first full moon, which follows the Vernal Equinox, the earliest possible date for Easter is March 22. Therefore, Easter will fall on March 22 + d (the number of days to the Paschal Moon) + E (the number of days until Sunday.) E, therefore, will be one of the numbers 0-6, or the least positive residues of modulus 7.
Keeping in mind the exercise we discussed above, the number of days between any two Sundays is always divisible by 7, no matter how many weeks intervene. Consequently, the interval of time between March 22 + d + E (Easter Sunday of the year we’re trying to determine) and any given Sunday in any previous year, will be divisible by 7. So if we begin with a definite Sunday, we can discover a general relationship for determining the date of Easter.
Gauss chose Sunday, March 21, 1700 as his Sunday reference date. Next, he determined a relationship for how many total days elapsed between March 21, 1700 and any subsequent Easter Sunday. That total would be 365 days times the number of elapsed years, plus the number of leap days in those elapsed years. (Remember, every four years has one leap day in it.) Again, this number will be divisible by 7, no matter how many years intervene.
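(Two small checks, again mine and not Gauss’: Python’s date type follows the Gregorian calendar, so the reference date can be verified directly, and the weekday shifts mentioned earlier fall out of simple remainders.)

from datetime import date

print(date(1700, 3, 21).strftime("%A"))   # Sunday, Gauss' reference date
print(365 % 7, 366 % 7)                   # 1 2: an ordinary year shifts the weekday
                                          # by one day, a leap year by two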
If A is the year for which we want to determine the date of Easter, A-1700 will be the number of elapsed years. (For example, if we want to find Easter in the year 1787, then there were 87 elapsed years (1787-1700).)
If we call i the total number of leap days, then the total number of days between Sunday March 21, 1700 and March 22 + d + E, for the year we’re investigating will be:
1 + d + E + i + 365(A-1700)
This number is divisible by 7, (because it is the number of intervening days from one Sunday to another).
At this point, the main conceptual problem has been solved. The date of Easter can be determined as March 22 + d + E, with d being determined by the calculation discussed last week, and E determined by the calculation which will be developed below.
Gauss was never content, unless he found the absolute simplest way to accomplish his task. All that remains is to simplify the above calculation so that E will be the residue which arises when the above number is divided by 7. Gauss accomplished this by repeatedly employing the principle, cited above, that adding or subtracting multiples of the modulus, doesn’t change the congruence. I include the following applications of this principle, even though they are expressed through some algebraic manipulations. The reader should focus on the addition and subtraction of multiples of the modulus 7.
To determine the number of leap days, i, we must first determine what relationship the year in question has to the leap year. Or, in the language of Gauss’ Higher Arithmetic, what is the least positive residue relative to modulus 4 of the year in question? This is the remainder b in Gauss’ algorithm. (For example, if the year is 1787, the least positive residue relative to modulus 4 is 3. That is, 1787 is three years after a leap year. So the total number of leap days between 1787 and 1700 is (87-3)/4 = 21, or (1787-1700-3)/4.)
In Gauss’ formula, the total number of leap days i will be:
1/4(A-b-1700)
This holds if A is between 1700 and 1799. If A is between 1800 and 1899, then we have to subtract 1, because 1800 is not a leap year. For now, we will stick to the 18th century.
So the total number of days between March 21, 1700 and Easter Sunday in year A, will be:
1 + d + E + 365(A-1700) + 1/4(A-b-1700).
And this number must be divisible by 7.
This is pretty complicated and cumbersome. But as we know from Gauss’ Higher Arithmetic, if we add or subtract multiples of 7, the result will also be divisible by 7. So Gauss, through the following steps, adds or subtracts multiples of 7, in order to bring this unwieldy formula into a simple calculation.
First he adds the fraction 7/4(A-b-1700) to the above, making: 1 + d + E + 365(A-1700) + 8/4(A-b-1700)
Multiplying all this out gives us: 1 + d + E + 367(A-1700) - 2b, which equals: 1 + d + E + 367A - 623,900 - 2b
Then Gauss subtracts 364(A-1700) (which is divisible by 7), which gives: d + E + 3A - 5099 - 2b
Then Gauss adds 5096 (which is divisible by 7) to get: d + E + 3A - 3 - 2b
Now Gauss eliminates any need for the reference date by replacing A in the following way. First, we divide the year by 7 and call the remainder c. That means, if we subtract c from the year, the result will be divisible by 7. Or, A-c will also be divisible by 7. In the next step, Gauss subtracts 3 times (A-c), or 3A-3c, which gives: d + E + 3c - 3 - 2b
Finally, Gauss subtracts this from 7c + 7d, which gives: 3 + 2b + 4c + 6d - E.
This means that E is the remainder when we divide 3 + 2b + 4c + 6d by 7. So the determination of the Easter date is March 22 + d + E.
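(Assembled into one small machine, exactly as restated above; the sketch is mine, and the constants 23 and 3 are the 18th-century values derived here, so it should only be trusted for the years 1700-1799, pending the century adjustments discussed just below.)

from datetime import date, timedelta

def easter_18th_century(year):
    """Gauss' rule as restated above, with the constants 23 and 3 valid for 1700-1799."""
    a = year % 19                        # place in the 19-year Metonic cycle
    b = year % 4                         # place in the leap-year cycle
    c = year % 7                         # place in the weekly cycle
    d = (19 * a + 23) % 30               # days from March 21 to the Paschal Moon
    e = (2 * b + 4 * c + 6 * d + 3) % 7  # days from the Paschal Moon to Sunday
    return date(year, 3, 22) + timedelta(days=d + e)

print(easter_18th_century(1787))         # 1787-04-08, i.e., March 22 + 12 + 5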
Unfortunately our work is not completely done. Because in the Gregorian calendar, not every century year is a leap year, the algorithm must change from century to century. Gauss also solved this problem using principles of Higher Arithmetic. We will take this up in future pedagogical discussions.

Heraclides of Pontus Was No Baby Boomer

By Robert Trout
It is, today, a commonly believed myth that before the time of Columbus, everyone thought that the earth was flat, and located in the center of the universe, with the rest of the universe orbiting around it. In fact, over 2000 years ago, Greek scientists, using only the simplest instruments, developed an advanced conception of the universe that could have explained the ordering of the solar system, and how this ordering determined the seasonal cycles on the earth. They had even discovered the precession of the equinoxes, and so had begun to comprehend the longer astronomical cycles. Today, we will examine Greek discoveries in astronomy through Heraclides of Pontus, who refuted the world view of the baby boomer generation more than 2000 years before the first boomer was born.
Greek astronomy was based on a scientific method which was in opposition to the methods used in ancient Babylon. The astronomy of the ancient Babylonians is an excellent example of how an oligarchical society does not develop science. The Babylonian oligarchy used a pantheon of cults to control the population. The priest caste studied the heavens for the purposes of omen astrology and for the improvement of their calendar, which was a lunar one unlike the superior Egyptian solar calendar.
The Babylonians left behind thousands of cuneiform tablets pertaining to astronomy. However, in the Babylonian approach to astronomy, not even a trace of a geometrical model is visible. Instead, they developed numerical methods using arithmetic progressions, in a fashion that would remind one of Euler. Using these methods, they were able to predict certain phenomena with the moon, within an accuracy of a few minutes. Although they compiled almost complete lists of eclipses going all the way back to 747 B.C., the Babylonians collected almost no reliable data on the motion of the planets. They never developed accurate methods for measuring the location of celestial objects, and never showed any interest in developing a unified conception of the cosmos.
Greek science developed as part of a cultural current which rejected the domination of an oligarchy. In the Homeric epics, man was presented as matching his wits against the oligarchical Greek gods. In Aeschylus’s play, “Prometheus Bound,” the character Prometheus, whose name is the Greek word for forethought, gives science to mankind, to free them from the pagan gods.
Greek culture, was, itself, split between a pro-republican and an oligarchical view, which is brought into sharpest relief by the opposing outlooks of Plato and Aristotle. Plato supplied the scientific method which has guided science ever since. He launched a research project to find “what are the uniform and ordered movements by the assumption of which the apparent movements of the planets can be accounted for.”
Around 150 A.D., under the Roman Empire, the fraudulent astronomy of Ptolemy was imposed, which was based on the ideology of Aristotle. The writings of the Greeks, with few exceptions, were not preserved, so the only records that exist are usually descriptions by later commentators. Therefore, we must reconstruct these discoveries, based on knowing how the mind functions.
Unlike the Babylonians, the ancient Greek astronomers sought a geometrical ordering principle behind the phenomena which are visible in the heavens. An early Greek astronomer would have seen that the motion of the objects in the sky appeared to follow regular cycles. As well, the cycles of the sun, moon, stars, and planets did not exactly correspond, giving rise to longer subsuming cycles.
Each day he would see the sun appear to rise in the east, cross the sky, and set in the west. The moon also rose in the east, crossed the sky and set. However, the moon seemed to travel slower than the sun, with the sun going through a complete extra rotation in approximately 29 1/2 days. The appearance of the moon also changed, going through a complete cycle of phases approximately every 29 1/2 days.
At night, he would see stars, most of which appeared to maintain a fixed relationship to each other. The Greeks developed a conception of a celestial sphere to explain the fixed relationships of these stars. The “fixed stars” rotated as a group throughout the night, around a point in the northern sky which appeared to not move. Also, the position of the “fixed stars” appeared to shift slightly, from day to day, with the same east to west rotation. This slight shift, from day to day, in the fixed stars appeared to go through a complete cycle each year, corresponding to the cycle of the seasons. A number of other cycles corresponded to the year. The sun’s path across the sky changed each day following a yearly cycle.
In addition to the “fixed stars” of the celestial sphere there were a few objects, which they named planets or wanderers, because, although they appeared very similar to stars, they did not remain in the same position in relation to the celestial sphere, but were constantly moving with respect to the rest of the stars.
One of the first known Greek astronomers, Thales (ca 624-547 B.C.), is reported to have measured the angular size of the sun and the moon at approximately 1/2 degree. Thales developed basic relations of similar triangles, such as demonstrating that the ratio of corresponding sides is the same in similar triangles, and used this principle to measure relations in the cosmos.
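To make the similar-triangle idea concrete, here is a minimal sketch in Python (not Thales’s own procedure; the distances in it are purely illustrative). It shows that an angular size of about 1/2 degree fixes the ratio of an object’s diameter to its distance, so the same proportion holds for the sun and the moon alike, whatever their actual distances may be.

```python
import math

ANGULAR_SIZE_DEG = 0.5   # the angular size reported for both the sun and the moon

# The object's diameter and its distance are two sides of similar triangles,
# so their ratio is fixed by the angle alone.
ratio = 2 * math.tan(math.radians(ANGULAR_SIZE_DEG / 2))
print(f"diameter : distance is about 1 : {1 / ratio:.0f}")   # roughly 1 : 115

# Whatever distance one assumes (these numbers are purely illustrative),
# similar triangles give the diameter in the same proportion.
for assumed_distance in (100.0, 1000.0, 10000.0):
    print(f"distance {assumed_distance:8.0f}  ->  diameter {assumed_distance * ratio:.1f}")
```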
Pythagoras (ca 572-? B.C.) is credited with discovering that the earth is approximately a sphere, and that the “morning star” and the “evening star” are the same object, what we today call the planet Venus. He is also credited with discovering that the musical intervals are determined by number, and with recognizing that the universe is governed by the same laws of harmony as those which govern music.
Since no writings from Pythagoras or his followers have survived, we can only speculate how he discovered that the earth is spherical. He might have concluded this based on conceptualizing the cause of eclipses. The discovery of the cause of eclipses is attributed to Anaxagoras (500-428 B.C.), who hypothesized that the sun was a red-hot stone and that the moon was made of earth, for which he was accused of impiety. He recognized that the source of the moon’s light is the reflection of sunlight. He is credited with discovering that an eclipse of the moon is caused by the earth blocking the sun’s light from shining on the moon, and that an eclipse of the sun is caused by the moon blocking the sun’s light from reaching the earth.
Eclipses of the moon give evidence that the earth is spherical. The shadow that the earth makes on the moon during an eclipse is always circular, regardless of the direction from which the sun is shining. This is only true of a sphere, in the geometry that the Greeks were then developing.
Pythagoras could also have discovered that the earth is spherical because he conceptualized the idea of curvature that later enabled Eratosthenes to design his famous experiment to measure the circumference of the earth. Finally, Pythagoras could have concluded that this must be true because he recognized that the universe is ordered by geometry, and he thought that “the sphere is the most beautiful of solid figures.”
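As an aside on how Eratosthenes’s famous experiment worked, here is a minimal sketch using the commonly cited figures, which do not appear in the text above and are assumed here only to illustrate the reasoning: at the summer solstice the noon sun cast no shadow at Syene, while at Alexandria a vertical rod’s shadow showed the sun about 7.2 degrees from overhead, and the two cities were reckoned to lie about 5,000 stadia apart along a north-south line.

```python
# Because the sun's rays are effectively parallel, the 7.2-degree difference in
# the sun's noon altitude between Alexandria and Syene is also the arc of the
# earth's circumference separating the two cities.
shadow_angle_deg = 7.2      # assumed difference in the sun's altitude between the two cities
distance_stadia = 5000      # assumed north-south distance between them

fraction_of_circle = shadow_angle_deg / 360.0        # 7.2 degrees is 1/50 of a full circle
circumference_stadia = distance_stadia / fraction_of_circle
print(f"estimated circumference: {circumference_stadia:,.0f} stadia")   # 250,000 stadia
```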
The “morning star” and the “evening star” are the two brightest objects in the night sky after the moon. Each goes through a regular, visible cycle, and Pythagoras was able to see that these two visible cycles were reflections of a single subsuming cycle which ordered them both.
The “evening star” first appears slightly above the western horizon shortly after the sun sets. Over a period of months, it will appear each evening, when the sun sets, in a slightly higher position above the western horizon, travelling westward each night apparently in tandem with the rotation of the celestial sphere. Eventually it will appear, when the sun sets, at a position approximately 1/2 of a right angle above the western horizon. It will then start to appear, each night, at a slightly lower position above the western horizon, until it does not appear at all in the evening sky. However, shortly thereafter, the “morning star” becomes visible.
The “morning star” first appears on the eastern horizon immediately before the sun rises. Each night, it rises slightly earlier and travels westward, apparently in tandem with the rotation of the celestial sphere. Its height above the eastern horizon, when the sun rises, increases each night, reaching a maximum of slightly more than 1/2 of a right angle. It then begins rising later each night, until it rises so late that its appearance is hidden by the daylight. However, shortly after the “morning star” disappears, the “evening star” reappears on the western horizon.
Conceptualize how Pythagoras could have approached this problem, without all the knowledge of the solar system that you think you know. For Pythagoras to have hypothesized that these two stars were the same required that he approach the universe with the understanding that it was lawfully ordered, and that its lawfulness was comprehensible by human reason. Only then could he discover that the appearances of the two visible phenomena could be lawfully explained as the result of a process which could be comprehended by the mind but not seen by the senses. His hypothesis could have been that the morning and evening stars were the visible evidence of a single object which accompanied the sun in the sun’s apparent daily rotation around the earth, while oscillating back and forth over a period of approximately 20 months, half the time preceding the sun and half the time following it.
Pythagoras’s discovery, that these two visible phenomena in the night sky were the same, may seem trivial. However, his discovery set the stage for Heraclides of Pontus, approximately 200 years later, to overthrow the baby boomer conception of the universe, as we shall see below.
Philolaus, (second half of 5th century B.C.), a member of the Pythagorean school, introduced conceptions of motion to an earth, which had previously been thought of as largely static. Philolaus is credited with removing the earth from the center of the universe, and replacing it with a central fire, around which the rest of the universe, including the earth, rotated. This hypothesis was gradually rejected, because the existence of a central fire was never verified.
Plato (ca 427-347 B.C.) developed the scientific method, which was inherent in the work of the Greek scientists who preceded him, and was mastered by all scientists who followed him. In the Republic, Plato described how, when the senses give the mind contrary perceptions, the mind is forced to conceptualize an idea which is intelligible rather than visible. Astronomy compels the soul to look upward, not in a physical sense, but towards the realm of ideas. The study of astronomy required that man discover the true motions of the heavens, rather than merely their motion, as it appeared. “These sparks that paint the sky, since they are decorations on a visible surface, we must regard, to be sure, as the fairest and most exact of material things, but we must recognize that they fall far short of the truth, the movements, namely, of real speed and real slowness in true number and in all true figures both in relation to one another and as vehicles of the things they carry and contain. These can be apprehended only by reason and thought, but not by sight, or do you think otherwise?” Further on Plato adds, “It is by means of problems, then, said I, as in the study of geometry, that we will pursue astronomy too, and we will let be the things in the heavens, if we are to have a part in the true science of astronomy and so convert to right use from uselessness that natural indwelling intelligence of the soul.”
Plato rejected the world view of the oligarchy, who projected their own evil caprice onto God and asserted that the universe was “controlled by a power that is irrational and blind and by mere chance.” On the contrary, Plato stated that he followed “our predecessors in saying that it (the universe) is governed by reason and a wondrous regulating intelligence.” The creator made a universe which is ordered harmonically, by a mind that produces order and arranges each individual thing in the way that achieves what is best for each and what is the universal good. Therefore, man can comprehend the universe through reason.
Plutarch wrote of Plato ” … that Plato in his later years regretted that he had given the earth the middle place in the universe which was not appropriate.” Plato laid out a research project for his students to find “what are the uniform and ordered movements by the assumption of which the apparent movements of the planets can be accounted for.”
Heraclides of Pontus (ca 388-315 B.C.) was a student of Plato at the Academy in Athens. Born more than 2000 years before the advent of today’s baby boomer culture, he made a crucial discovery which all too few baby boomers today have replicated. He discovered that the entire universe was not rotating around the earth (and around him, standing upon it), as would appear to be the case to one who believes in sense certainty. Rather, the cause of the rest of the universe appearing to revolve around the earth was that the earth is, itself, rotating around its axis. He also discovered that the cause of the apparently erratic motion of Venus and Mercury is that they are revolving around the sun. While Heraclides still believed that the sun revolved around the earth, his discovery that Venus and Mercury revolved around the sun set the stage for the later discovery that the earth and all the other planets also revolve around the sun.
Although he wrote numerous dialogues, including two discussing astronomy, they were not preserved through the dark age initiated by the Roman Empire, and only a few remarks by later commentators survive concerning how he made this remarkable discovery. We must reconstruct how he could have done it. What he must have done is conceptualize an idea of the nature of the universe, and comprehend that his idea was more real than sense certainty.
The commentator Aetius reports that Heraclides thought that each of the innumerable stars in the sky was also a world surrounded by an atmosphere and an aether. Others, at the time, thought that the stars were attached to some sort of dome or rings. For example, Aristotle argued that the stars and sun were objects carried on rings around the earth at such a high rate of speed that the friction between the stars and the air caused the sun and stars to give off heat and light.
Obviously, Heraclides could not have arrived at his hypothesis based on his senses. (Even today, astronomers searching for planets around other stars can rarely “see” them directly. Instead, they design experiments to measure indirect effects, such as tiny periodic shifts in a star’s motion or dips in its brightness, and then interpret the results of those experiments as evidence of a planet.) Heraclides must have asked himself: if all the innumerable stars are each a world like our own, and they are at so immense a distance that these worlds appear only as small specks of light in the night sky, why should all of them, and the immense universe in which they are located, orbit around the one world where he happened to be located? Instead, he recognized that the impression which he received from his senses, that the heavens were rotating around the earth, could be explained by conceptualizing that the earth was, instead, rotating on an axis.
One significant anomaly that led Heraclides to the discovery that Mercury and Venus revolve around the sun was that the brightness of the planet Venus, and the rate of its change in location from night to night, vary dramatically throughout its cycle. During the “evening star” part of its cycle, Venus takes approximately 7 months to rise to its highest position above the western horizon, and only about 2 months for its descent. At the beginning of this cycle, it is dim. It becomes progressively brighter, until near the end of its cycle it is, by far, the brightest object in the night sky besides the moon. During the “morning star” part of the cycle, Venus rises rapidly to its highest position above the eastern horizon in about 2 months, and then its position at sunrise decreases very gradually each night, taking about 7 months until it disappears into the glare of the rising sun. During the “morning star” part of its cycle, Venus starts out very bright and becomes progressively dimmer.
Heraclides hypothesized that his observations were a reflection of how an object revolving around the sun would appear to an observer located on the earth, which is rotating on its axis. This is more easily understood from the following diagram: Draw a circle with a radius of 3 inches to represent the orbit of Venus. The center of this circle represents the sun. Then draw a point to represent the earth, approximately 4 1/8 inches from the center of the circle. (For purposes of the diagram, make this dot below the circle.) Heraclides also placed the planet Mercury revolving around the sun in a much smaller circle. The cycle of Mercury appears similar to that of Venus to an observer on earth. However, Mercury is usually much fainter than Venus, and reaches a maximum altitude in the sky only around 1/3 that of Venus.
In the diagram, the motion of Venus is represented counterclockwise around the circle. (Remember that Kepler’s discovery of elliptical orbits comes almost 2000 years later.) The earth is rotating daily on its axis (counterclockwise in our diagram). The clearly visible differences in Venus’s brightness are explained by the dramatic differences in its distance from the earth at different places in its orbit.
Draw 2 lines from the earth that are tangent to the orbit of Venus. At the points of tangency with the circle, the angle between Venus and the sun is greatest, and Venus will appear highest in the night sky to an observer on earth. Draw a line through the sun and the earth, which bisects the orbit of Venus.
Now, conceptualize what an observer standing on the earth, which is rotating counterclockwise, will see. In the left half of the orbit, Venus appears as the “evening star,” and in the right half it appears as the “morning star.” Venus travels a far longer distance in rising to its highest position in the evening sky than in descending, making its ascent take a far longer time than its descent. The opposite is true for Venus’s appearance in the morning sky.
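To check that this construction really does account for the appearances described above, here is a minimal sketch in Python using only the diagram’s own proportions (Venus on a circle of radius 3 around the sun, the earth fixed 4 1/8 units away), plus one assumed figure: a period of roughly 584 days for one full cycle, consistent with the “approximately 20 months” mentioned earlier and used here only to convert arc into months.

```python
import math

R_VENUS = 3.0       # radius of Venus's circle in the diagram (inches)
D_EARTH = 4.125     # the earth's distance from the sun in the diagram (4 1/8 inches)
SYNODIC = 584       # assumed number of days for one full cycle of the diagram

# Greatest angle between Venus and the sun, reached at the tangent points.
print(f"greatest elongation: {math.degrees(math.asin(R_VENUS / D_EARTH)):.1f} degrees")

def elongation(day):
    """Angle between the sun and Venus as seen from the earth on a given day."""
    # Start Venus at superior conjunction (directly behind the sun) and move it
    # uniformly counterclockwise; the earth sits below the circle, as in the diagram.
    theta = math.pi / 2 + 2 * math.pi * day / SYNODIC
    venus = (R_VENUS * math.cos(theta), R_VENUS * math.sin(theta))
    to_sun = (0.0, D_EARTH)                      # vector from the earth to the sun
    to_venus = (venus[0], venus[1] + D_EARTH)    # vector from the earth to Venus
    cos_angle = (to_sun[0] * to_venus[0] + to_sun[1] * to_venus[1]) / (
        D_EARTH * math.hypot(to_venus[0], to_venus[1]))
    return math.degrees(math.acos(min(1.0, cos_angle)))

# Follow half a cycle: superior conjunction -> greatest elongation -> inferior conjunction.
half = [elongation(d) for d in range(SYNODIC // 2 + 1)]
peak = max(range(len(half)), key=half.__getitem__)
print(f"ascent:  {peak} days (~{peak / 30:.1f} months)")
print(f"descent: {len(half) - 1 - peak} days (~{(len(half) - 1 - peak) / 30:.1f} months)")

# Farthest and nearest distances from the earth, which account for the change in brightness.
print(f"farthest-to-nearest distance ratio: {(D_EARTH + R_VENUS) / (D_EARTH - R_VENUS):.1f} to 1")
```

The output shows a greatest angle of about 46.7 degrees, slightly more than half a right angle; an ascent of roughly 7 months against a descent of roughly 2 months; and a farthest-to-nearest distance ratio of better than 6 to 1, which is what accounts for the dramatic changes in Venus’s brightness.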
Heraclides of Pontus’s discovery advanced Plato’s research project of discovering “what are the uniform and ordered movements by the assumption of which the apparent movements of the planets can be accounted for.” He set the stage for Aristarchus of Samos, who would go on to hypothesize that the earth itself, along with the other planets, revolves around the sun.