Monday, October 26, 2015

Ben Carson: don't stomp on Jesus or else.

So this one was a bit of a rabbit hole. It started when I saw the following posted on Facebook.


I read the text, which didn't really seem to explain the headline. So then I listened to all the audio (the video and the radio recordings). There was missing context about what he was asked. They (the reporters, or whatever you call them now) had to lead him by the hand all the way to the finish line. He kept going on and on about extremism in colleges, but never said liberal or conservative. But what in the hell is he talking about? What extremist university is he thinking about that would justify transforming the department of education into some kind of 1984 monitoring system to keep tabs on all the lessons in the country? Whether or not he meant it for liberals in his own head, the 'reporter' had to throw it in there for him by making sure he only wanted to censure 'liberal' biases, and that 'conservative' biases wouldn't be touched.

Ben made sure to point out he is only concerned with stopping 'extreme political bias'. But what example is he talking about? The only one he gives is from a story from several years ago, where a professor told students to 'stomp on Jesus', and where a student claims he was then suspended for not participating. I mean, yeah, that sounds pretty extreme. It sounds like a professor wanted to piss off his christian students: yet another example of the liberal bias in universities that every conservative knows about.

So, I was curious where this happened. It wasn't difficult to find the story, which happened back in 2013. I had never heard of it before, probably because it is mainly a conservative talking point. The first few pages of google are filled with conservative news organizations covering the story, all essentially the same. Here is even a follow-up story from several months later, in the summer, just in case the conservatives forgot about it:


This only reinforces the story of christian persecution, and worse, it appears the university doesn't even care that it's happening right in the classroom.

But at least now I knew the details. I finally found the official statement from the university:


Better yet, more information on what happened:

The exercise was based on an example presented in a study guide to the textbook Intercultural Communication: A Contextual Approach, 5th Edition, written by a college professor who is unaffiliated with FAU.
So, why would an intercultural communications textbook be promoting christian persecution? Isn't that a bit ironic?

So I tried to track down the exact lesson. I had the name of the book, but the exercise is supposedly in the study guide to the textbook, which I couldn't find online. However, I don't think I need to read it. I did find a report produced by the university because of this incident, which describes what the lesson was supposed to be and why they disciplined the student (which, of course, was not mentioned at all in the follow-up article on fox news):

http://fau.edu/ufsgov/Final%20AFDPC%20Report%206-24-2013.pdf

Here is an excerpt of the intended lesson:

The exercise asked students to write the letters “J-E-S-U-S” on a sheet of paper, to place the paper on the floor, to think about it for a short time, and then Dr. Poole asked the students to step on their papers. The stated purpose of this exercise is to start a discussion on the importance of symbolism and its cultural context. The exercise followed by Dr. Poole is included among the instructors’ resources that accompany the course textbook.

It also goes on to state it is optional, of course, because the purpose of the exercise is not to insult anyone. The purpose is to make the point that people take symbols seriously. It didn't have to be jesus on the piece of paper; it could be whatever the person would feel negatively about stepping on. If you can't step on it, then maybe you should feel empathy when other people's symbols get stepped on.

It seems like a completely logical lesson in context. However, it seems the student in question didn't get the point of it. So much so that he decided to stay after class to threaten the professor:

The agitated student allegedly approached Dr. Poole in a threatening manner saying, “I want to hit you,” while punching his fist into his open palm. Dr. Poole also said that the student told Dr. Poole never to use this exercise again, and pounded on Dr. Poole’s desk with his fist several times yelling, “Don’t you ever do that again! Do you hear me?” Dr. Poole insisted that both students leave immediately, which they did. 

A witness from the class also corroborates this in a later email:

I am at a loss for words regarding what happened tonight. I just wanted to make it clear that I do not share the same views as my colleague and have the utmost respect for you as a professor

That is when the university notified the student he was being charged under the university student code of conduct. Not because he didn't want to participate in the lesson, which was already in the past by this point.

After an initial determination by this office that the student conduct process should proceed, you are being charged with violating FAU’s Student Code of Conduct, Regulation 4.007, specifically: (N) Acts of verbal, written (including electronic communications) or physical abuse, threats, intimidation, harassment, coercion or other conduct which threaten the health, safety or welfare of any person.

I think it is safe to say Ben Carson is completely unaware of any of this, or of the fact that it may well have been partially racially motivated. The following is clearly from after anyone was going to step on the papers, and so after the student had supposedly already been disciplined for not participating in the 'stomping'.

Dr. Poole then asked the students in the class to discuss their personal reactions to the idea of stepping on their papers. Dr. Poole said that one student vociferously objected to stepping on the paper. The offended student remained disruptive, repeatedly calling out, “hey brother!” to reengage Dr. Poole in a one-on one dialogue during class. Dr. Poole told the AFDPC that he instructed the student to stop calling him “brother,” but ultimately dismissed the class early 

Not that it should matter, but Dr. Poole is black. The reason he was removed from work was because he received death threats after the media coverage, not because of the lesson.

https://www.insidehighered.com/news/2013/04/01/interview-professor-center-jesus-debate-florida-atlantic

One of the threats said that I might find myself hanging from a tree

The conservative media took some college student who clearly has anger problems, who lost his temper because he couldn't think outside of his own culture (in an intercultural class) and threatened his professor, and then just encouraged and stirred up more anger and threats. The entire thing was built to make people angry.

Now, here we are over 2 years later, and this same half-story is being trotted out. Not just for the conservatives, but also now for the liberals. Ben Carson is just the tip of the iceberg, and probably doesn't even know what he's referring to half of the time. He just knows what seems to get conservatives' attention, and they know how to play him like a flute. It's such a shame it's not even funny anymore. Now he's being used to get this story to make liberals angry too. Does he really believe the dept. of education should censure colleges, or did someone just tell him to say it? It seems like he had trouble remembering what to say, and even needed a second interview to clear things up with some help.

No one seems to even know or care about what's happened anymore.





Sunday, October 11, 2015

Generating a large number of pseudo-random numbers in WebGL/OpenGL

I needed to generate a large number of random numbers, but OpenGL doesn't really provide a function that does this. I found a few quick and dirty tricks on the internet to get random-like textures, but I really wasn't satisfied with the results. So I went a different path to try to get good results and performance.

Here is a demo of the generator: http://kcdodd.github.io/random-webgl/

My current implementation starts with a large random texture being pre-computed and loaded to the gpu, call it S. Each pixel is a random rgba value with uniform distribution between 0 and 1. This texture is constant and serves as a source of entropy for random values.

My random texture, call it H, is also initialized with its own random values, and is smaller than S. H.b and H.a (the blue and alpha channels) represent a texture position in S. At each iteration, H is updated by computing a new position and looking up new values in S. S.b and S.a are mixed into H.b and H.a in a non-linear way to provide additional entropy to the position from which S itself is being sampled. However, the way I have done it, this causes H.b and H.a to no longer be uniformly distributed, which is why I don't simply use that method directly for random values.

S.r and S.g are sampled and combined with H.r and H.g (red and green) in a way that preserves the uniform distribution; these are the values I will actually use as random numbers, ignoring the blue and alpha. They are simply added together, then taken mod 1 to stay on the interval [0, 1].

To step the position in S, I am using the logistic map function with r = 4.

\[x^{i+1} = 4 x^i (1 - x^i) \]

I have tried to use this before as a random number generator, since it is chaotic by itself, but I have never been able to get a very desirable distribution from it. However, I don't really care about the distribution of positions, since it is the distribution of the values it looks up that matters. As long as the two are independent of one another, I should get a uniform distribution, with long non-repeating sequences due to the logistic map.

The random texture (red and green channels) can be thought of as a random permutation of a random subset of S, both of which change in a chaotic way.

Just to mix the position a little further, I mix a small amount of S into x as well at each step (currently 0.1% worth). This means that the next value it generates is a function of the entire history of that pixel.
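
As a rough illustration, here is what one pixel's update looks like written out in plain JavaScript. This is only a sketch of the idea described above; the real version is a fragment shader, and sampleS and the exact mixing weights here are illustrative, not the actual code.

// One iteration of the update for a single pixel of H.
// h = { r, g, b, a }; h.b and h.a hold the current sampling position in S.
// sampleS(x, y) returns the { r, g, b, a } value of the entropy texture S at (x, y).
function stepPixel(h, sampleS) {
  // step the position with the logistic map, x_{i+1} = 4 x_i (1 - x_i)
  let px = 4 * h.b * (1 - h.b);
  let py = 4 * h.a * (1 - h.a);

  // look up the entropy texture at the new position
  const s = sampleS(px, py);

  // mix a small amount of S back into the position (~0.1%), so the next value
  // depends on the entire history of this pixel
  px = (0.999 * px + 0.001 * s.b) % 1;
  py = (0.999 * py + 0.001 * s.a) % 1;

  return {
    // red and green are the actual random outputs: add and wrap to stay uniform
    r: (h.r + s.r) % 1,
    g: (h.g + s.g) % 1,
    // blue and alpha carry the (non-uniform) sampling position forward
    b: px,
    a: py
  };
}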

Here is the fragment shader for stepping H (which is called u_rand). S is u_entropy.


Thursday, October 1, 2015

Particle Pushing with Boris Method Matrix-Vector

This is going to be a fairly technical post. I'm using the Boris method of charged particle pushing for my current project and just thought I'd share my notes. I first saw the Boris method described here: https://www.particleincell.com/2011/vxb-rotation/, which provides a good explanation of why it was developed and compares it with other methods.
However, the last time I did this I desired a simple form of computation based on a matrix/vector multiplication and addition. Something like

\[v^{n+1/2} = R v^{n-1/2} + A \]

Where R is the result of rotation in a magnetic field (not the R mentioned earlier in the linked article), and A is due to electric acceleration and interaction between electric and magnetic accelerations. And \(v^{n}\) is the velocity of the particle being stepped in time at the nth time marker. The position of the particle would be stepped in a leap-frog method as described in those documents.

I don't know if it has been put in this form before; it could have been, so I'm not trying to take credit for anything. I'm just following my notes. I start with the final equations listed in the link.

\[v^{n+1/2} = v^{n-1/2} + \frac{q \delta t}{m} \left (E + \frac{v^{n-1/2} + v^{n+1/2}}{2} \times  B\right ) \]
\[v^{n-1/2} = v^{-} - \frac{q \delta t}{2m} E \] 
\[v^{n+1/2} = v^{+} + \frac{q \delta t}{2m} E \]
\[v^{+} = v^{-} + \frac{q \delta t}{2m} \left (v^{+} + v^{-} \right ) \times B \]

They give a series of steps for the velocity calculation. But I wanted a single multiply and add. I prefer renaming \(t\) to \(\phi \) for the next two steps. I think of \(\phi\) as an angular rotation vector due to the magnetic field.

\[\phi \equiv h B \]
\[h \equiv \frac{q \delta t}{2m} \]
\[v' = v^{-} + v^{-} \times \phi \]
\[v^{+} = v^{-} + v' \times \phi \frac{2}{1+ \phi^2}\]

Ok, just plugging step 3 into step 4.

\[ v^{+} = v^{-} + \left (v^{-} + v^{-} \times \phi \right ) \times \phi \frac{2}{1+ \phi^2} \]

Then start expanding the terms.

\[ v^{+} = v^{-}  + \frac{2}{1+ \phi^2} \left ( v^{-} \times \phi + (v^{-} \times \phi ) \times \phi \right )\]

\[ v^{+} = v^{-}  + \frac{2}{1+ \phi^2} \left ( v^{-} \times \phi + \phi \times (\phi \times v^{-}) \right )\]

Then use the vector identity \(a \times (b \times c) = (a \cdot c) b - (a \cdot b) c \), and \(\phi \cdot \phi = \phi^2 \)

\[ v^{+} = v^{-}  + \frac{2}{1+ \phi^2} \left ( v^{-} \times \phi + \phi (\phi \cdot v^{-} ) - \phi^2 v^{-} \right )\]

Now, pull out the \(v^{-}\). This will look really strange because of the 'dangling' vector operations. However, those can be converted into a matrix, which when a vector is multiplied through will have the desired result. But I will wait on computing that matrix.

\[ v^{+} = v^{-} \left (1  + \frac{2}{1+ \phi^2} \left ( (\times \phi)  + \phi (\phi \cdot  ) - \phi^2  \right ) \right ) \]

But this has to be put in terms of the actual current and next velocity. Basically, you can plug into the equations above to get it in terms of \(v^{n-1/2}\) and \(v^{n+1/2}\). I also re-arranged the terms a bit.

\[ v^{n+1/2} = \left ( v^{n-1/2} + \frac{q \delta t}{2m} E \right ) \left (\left (1 - \frac{2 \phi^2 }{1+ \phi^2} \right )  + \frac{2}{1+ \phi^2} \left ( (\times \phi )  + \phi (\phi \cdot  )  \right ) \right )  + \frac{q \delta t}{2m} E\]

Now expand the first term

\[ v^{n+1/2} = v^{n-1/2} \left (\left (1 - \frac{2 \phi^2 }{1+ \phi^2} \right )  + \frac{2}{1+ \phi^2} \left ( (\times \phi )  + \phi (\phi \cdot  )  \right ) \right )  + \frac{q \delta t}{2m} E \left (\left (1 - \frac{2 \phi^2 }{1+ \phi^2} \right )  + \frac{2}{1+ \phi^2} \left ( (\times \phi )  + \phi (\phi \cdot  )  \right ) \right ) + \frac{q \delta t}{2m} E\]

Ok, I won't torture you any further with this. I'm going to skip to the end and just tell you what the matrices are in terms of the fields. 

\[R = \left ( 1 - \frac{2 h^2 B^2}{1 + h^2 B^2} \right ) I + \frac{2}{1 + h^2B^2}( h (\times B) + h^2 (BB)) \]

\[ I = \begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\  0 & 0 & 1 \\ \end{bmatrix}\]

\[ (\times B) = \begin{bmatrix}0 & B_3 & -B_2 \\ -B_3 & 0 & B_1 \\  B_2 & -B_1 & 0 \\ \end{bmatrix}\]

\[ (BB) = \begin{bmatrix} B_1 B_1 & B_1 B_2 & B_1 B_3 \\  B_2 B_1 & B_2 B_2 & B_2 B_3 \\  B_3 B_1 & B_3 B_2 & B_3 B_3 \\ \end{bmatrix}\]

\[A = h \left ( 2 - \frac{2 h^2 B^2}{1 + h^2 B^2} \right ) E + \frac{2}{1 + h^2B^2}\left ( h^2 (E \times B) + h^3 (E \cdot B) B \right ) \]

The impressiveness of this method can maybe be seen now: it is 3rd order in the time step (through the constant h), even taking into account the \(E\times B\) drift and the energy gained or lost to the fields through \(E\cdot B\), while at the same time conserving energy in the pure rotation part.

The idea is that R and A would be pre-calculated once per step (assuming the fields are time-dependent) as a function of position given the fields E and B, with a total of 12 parameters. Then, when particles are pushed, the matrix at position \(x^{n}\) is looked up and a multiply and add is done to update the velocity, which is a total of 9 multiply and 9 add operations per particle.
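
As a rough sketch of what that looks like in code (plain JavaScript here, with illustrative names rather than anything from an actual project), the pre-computation and the per-particle push might be:

// Build R (3x3) and A (3-vector) from the fields E, B and h = q*dt/(2m),
// following the expressions above. E and B are arrays [x, y, z].
function borisMatrices(E, B, h) {
  const [B1, B2, B3] = B;
  const [E1, E2, E3] = E;
  const Bsq = B1 * B1 + B2 * B2 + B3 * B3;
  const f = 2 / (1 + h * h * Bsq);   // common factor 2 / (1 + h^2 B^2)
  const d = 1 - h * h * Bsq * f;     // diagonal term 1 - 2 h^2 B^2 / (1 + h^2 B^2)

  // R = d*I + f*( h*(x B) + h^2*(B B) )
  const R = [
    [d + f * h * h * B1 * B1,         f * (h * B3 + h * h * B1 * B2),   f * (-h * B2 + h * h * B1 * B3)],
    [f * (-h * B3 + h * h * B2 * B1), d + f * h * h * B2 * B2,          f * (h * B1 + h * h * B2 * B3)],
    [f * (h * B2 + h * h * B3 * B1),  f * (-h * B1 + h * h * B3 * B2),  d + f * h * h * B3 * B3]
  ];

  const EdotB = E1 * B1 + E2 * B2 + E3 * B3;
  const ExB = [E2 * B3 - E3 * B2, E3 * B1 - E1 * B3, E1 * B2 - E2 * B1];

  // A = h*(2 - 2h^2B^2/(1+h^2B^2))*E + f*( h^2 (E x B) + h^3 (E.B) B ),
  // and note that 2 - 2h^2B^2/(1+h^2B^2) = 1 + d
  const A = [0, 1, 2].map(i =>
    h * (1 + d) * E[i] + f * (h * h * ExB[i] + h * h * h * EdotB * B[i])
  );

  return { R, A };
}

// One velocity push: v_new = R v + A (9 multiplies and 9 adds).
function pushVelocity(R, A, v) {
  return [
    R[0][0] * v[0] + R[0][1] * v[1] + R[0][2] * v[2] + A[0],
    R[1][0] * v[0] + R[1][1] * v[1] + R[1][2] * v[2] + A[1],
    R[2][0] * v[0] + R[2][1] * v[1] + R[2][2] * v[2] + A[2]
  ];
}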


Tuesday, September 22, 2015

Space Warps + Black-holes

I realized I had a misunderstanding about how space is warped by gravity. I am trying to gain intuition from the Schwarzschild solution for a black hole. This can be used to see how space behaves far away from a kind of idealized massive object. I am using these wikipedia pages as references:

https://en.wikipedia.org/wiki/Schwarzschild_metric
https://en.wikipedia.org/wiki/Deriving_the_Schwarzschild_solution

The solution presented first, however, is not in isotropic coordinates, which makes it hard to understand how space is really behaving from a measurable point of view. If these equations don't make any sense to you, don't freak out; I will try to explain. I will copy it here for reference, and explain why this form is deceiving:

\[ds^2 = \left (1-\frac{2Gm}{c^2 r} \right)^{-1} dr^2 + r^2 \left ( d\theta^2 + sin^2(\theta)d\phi^2 \right) - \left (1-\frac{2Gm}{c^2 r} \right )c^2 dt^2 \]

The solution (or metric) on the second page, in isotropic coordinates, I found more revealing, and I will copy it here:

\[ds^2 = \left (1+\frac{Gm}{2c^2 r_1} \right)^4 \left (dr_1^2 + r_1^2 \left ( d\theta^2 + sin^2(\theta)d\phi^2 \right)  \right) - \frac{\left (1-\frac{Gm}{2c^2 r_1} \right )^2}{\left (1+\frac{Gm}{2c^2 r_1} \right )^2}c^2 dt^2 \]


The \(dt\) represents a tick of a reference clock, and \(dr_1\), \(r_1 d\theta\), and \(r_1 sin(\theta) d\phi\) represent reference lengths. The factors multiplying them represent how much an actual clock ticks, or how long an actual length is, at that radius relative to the reference.

The factor on the time coordinate gives the time dilation.
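
Reading it off the \(dt\) term of the isotropic metric above, the rate of a clock at radius \(r_1\) relative to the reference clock is

\[\frac{d\tau}{dt} = \frac{1-\frac{Gm}{2c^2 r_1}}{1+\frac{Gm}{2c^2 r_1}} \]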

Time dilation as a function of radius: \(\frac{Gm}{2c^2}=1\)

The factors are a function of radius, \(r\), but the radius does not mean the same thing that it means in flat space. The solution is defined from imagining space being built up from a series of concentric spherical shells. The coordinate \(r\) tells which shell you are sitting on by equating the surface area of the shell to \(4 \pi r^2 \). It does not, however, necessarily tell you how far from the center you are. And actually it doesn't even tell you what the actual surface area is, since you also have to look at the scaling factor for those dimensions.

In flat space, each shell has to be bigger than the one inside of it, and smaller than the one outside, by a fixed amount. This is the limit where \(m = 0\), and so the scaling factor is a constant. As m increases, the scaling factor becomes a function of radius.

Length expansion as a function of radius: \(\frac{Gm}{2c^2}=1\)


There is a limit to this solution at \(r_1 = \frac{Gm}{2c^2}\). The metric at that radius is

\[ds^2 = 4^2 \left (dr_1^2 + r_1^2 \left ( d\theta^2 + sin^2(\theta)d\phi^2 \right) \right)\]

This is called the event horizon because the time component vanishes, which means nothing can ever cross this boundary from the point of view of someone outside. Now, from what I have read, other coordinate systems allow the solution to progress past the event horizon. It's not important right now whether this is physically real since I only care about events far away from this limit. I think what may be more important is to see that nothing too crazy is happening to the spatial coordinates here.

However, in the first solution it looks like the length factor in the radial coordinate blows up as the event horizon is approached. This is because of the choice of radial coordinate. The problem is that the surface area of each shell as measured by a distant observer starts to approach a constant value as the event horizon is approached. That is, each concentric shell has about the same surface area as the one just outside, and the one just inside. This means space is not flat.

The problem with this solution is that in order to get to a spherical shell of smaller surface area, one must drop much further toward the event horizon. And at the event horizon itself the areas become constant, which makes it look like the radial coordinate blows up. However, the distance to the event horizon is actually a finite distance through space.


Radial factor accounting for length expansion near horizon


The second solution scales the surface areas as one gets close to the event horizon, which is what one would actually experience. We can see that the radial length factor is actually only 4x that of a reference length far away. But also, every dimension is 4x bigger there, not just the radial lengths.

For things that are not blackholes, what this means is that essentially there is slightly more space inside and around a planet than one would expect from far away, in addition to time running slightly slower.

Saturday, September 19, 2015

Gravitational Field Energy

I meant to talk about how I am thinking about the gravitational field energy. My first thought experiment is to imagine a single photon of sufficient energy (and frequency) is converted to matter. That matter falls into a gravitational well, adding to the mass that is already there. As it fell it also gained kinetic energy.

The total amount of energy gained by matter falling into a gravity well is the energy of assembly. But since it gained energy by assembling into one big mass, the energy of assembly for gravity should be negative.

But I realized that time-dilation and gravitational acceleration are proportional. Now imagine that instead of the matter colliding with the planet surface, it is converted back into a single photon and reflected back into space. The new photon starts with a frequency higher than the original photon, corresponding to the gain in kinetic energy. But as it travels back out of the potential well it is red-shifted back to the frequency of the original photon: total conservation of energy.

This means that the kinetic energy gained through gravity only has a relevant meaning within the gravitational well. For outside observers, there is no change of energy at all. Gains of kinetic energy through gravitational collapse are exactly matched by a time-dilation factor, so that for observers away from the well there is no net change of frequency (or energy) at all!

A second thought experiment: imagine two massive bodies, each of mass M, well separated. The two bodies then collapse gravitationally. In the gravity well, kinetic energy is gained, making the energy of the single new body higher than that of the original bodies. Since the new body is 2M in mass, it also experiences a higher time dilation. However, for outside observers the total energy of the system is still only 2Mc^2. The extra thermal energy is folded into the total energy. If the thermal energy is radiated away (out of the gravity well), observers will actually see a total mass less than 2M, even though observers inside the well still see 2M worth of mass.

This now brings me to black holes. If a massive body simply collapses without any interruptions, and ignoring radiation taking away thermal energy, the total mass of a body will remain the same for outside observers even though the internal thermal energy increases. As the radius of the body approaches the Schwarzschild radius (although the definition of that radius seems a bit wonky), the time dilation goes to zero, and the internal energy approaches infinity. But because of the near infinite time-dilation, outside observers still see the same total mass, energy, and temperature no matter how much the body collapses under gravity.

So what does this mean for field energy? In a way, the energy is taken from the time dimension. An object gains energy falling, but then causes time dilation, which makes it appear as if the total energy of the matter is unchanged. If the total energy is unchanged, then there is no need to invoke the idea of a field energy.

What is really wonky is the perspective of the rest of the universe from inside a gravity well.

Thursday, September 17, 2015

General Relativity and Quantum Mechanics

Over the past few days I have made some realizations about how to think about gravity, and what it is exactly. General relativity describes the effect of gravity as objects simply following their natural trajectories of shortest path, but that space and time itself are warped such that the shortest path is actually a curved line, and so we see acceleration due to gravity.

However, this does not seem to immediately give us any intuition about what this represents in real life. There are several erroneous visualizations about this warped space-time that have caused me great confusion when trying to understand this concept.

The simplest false visual is the stretched piece of rubber with something heavy sitting on it. The rubber bends down, and since it represents space-time, we see what warping of space-time might look like. If we place other objects on the rubber sheet, they even appear to be pulled toward each other.

A more complicated but more physically satisfying false visual is that of space itself 'falling' in a gravitational field. Then, from a relativistic point of view, it makes sense that anything on the falling space would fall as well, while also following its own shortest path within that patch of space.

The reason both of these visualizations are incorrect is that neither has any physical meaning. Nothing about the two scenarios could be tested. The rubber sheet only works because gravity is already around to cause the rubber to warp and the balls to fall toward each other. It doesn't add any explanatory power as to why the balls should move at all.

While falling space does seem to add explanatory power, or an intuition pump at least, it doesn't predict anything that can actually be measured. There is no way to detect movement of space. Mathematically it would also introduce an arbitrary preferred frame of reference; that space has a particular configuration, and that it can change and accelerate etc just like matter. None of that is a part of GR.

So, what exactly CAN we measure? There are classically only two things we have. The stick to measure length, and the clock to measure time. This is how Einstein would have visualized things. We just fill up space with little sticks and clocks, which take the place of a coordinate system.

The simplest case is the elevator thought experiment. The equivalence principle holds that an accelerating frame is indistinguishable from a gravitational field. This means that if we imagine being closed in an elevator, we cannot tell if we are on Earth under the influence of gravity, or in deep space being accelerated by a cable pulling us in some direction. Inside the elevator the two situations are indistinguishable for the purposes of GR.

This also means that any warping of space-time that is measurable must be exactly the same inside the elevator in the two cases as well. It doesn't matter what is causing space-time to appear warped; it looks the same either way.

Now, we have to make a control. If the elevator is in deep space, and not being accelerated, then we can study how space-time looks inside. Then put it under acceleration and see how space-time looks. In both cases all of the sticks appear to be pretty much unchanged. We still measure the height of the elevator cabin to be the same, as well as the width and the depth. The only thing that has changed is that the clocks at the roof of the cabin tick slightly faster than the clocks at the floor when it's being accelerated.

This means that the effect of gravity is due solely, and completely, to the fact that time moves at a faster rate higher in the cabin, and slower lower in the cabin. Ok, you might ask, but how does this explain why things fall? How does this add explanatory power?

The fact that time moves slower as you go deeper in a gravity well is not just a side-effect of gravity, but the primary cause of acceleration. I do not want it to seem like space is not warped as well, because it is. However, if you drop a ball from a stand-still on the surface of Earth, there is no effect from the warping of space that can cause it to start moving. Warped space can only affect things that are already moving through space. Warping of space can change a trajectory, like that of light, but it can't cause acceleration without the object already possessing a velocity.

That is, warped space causes velocity-dependent forces on objects. Warped time is what causes acceleration from nothing. Mathematically in GR, we fall because the shortest path goes through slower time, kind of like how light is bent in a lens because it goes slower in the material. But it is hard for us to actually visualize this path in a space-time diagram.

For me the epiphany came when I coupled this with quantum mechanics. The probability of finding a particle at a particular position is inferred from the evolution of what is called its wave function. From a certain point of view, the wave function evolves in time over space just like a wave does. We normally don't use this interpretation for predictions because we can't actually measure this evolution; we can only place detectors. But, ignoring that for a second, the evolution of the wave function is essentially a local function of time, which can make the probability become higher or lower in different locations over time.

In flat space-time, all the clocks everywhere tick at the same rate, and so the wave function evolves as we expect without a gravitational field. But, with time progressing faster in one direction, the part of the wave function in that direction also evolves faster. The effect is that the relative phases along that direction begin to change. This phase-shift then causes the wave function to move downward; gradients in phase are equivalent to momentum.

This can all be easily seen from simulating the Schrodinger wave equation by simply adding a scaling factor to the time evolution which depends on position.
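
Here is a minimal sketch of that idea in JavaScript (this is not the code behind the demo linked below; the grid size, time step, and the linear timeRate profile are just illustrative choices). It is an explicit finite-difference step of the free 1D Schrodinger equation, \(i \partial_t \psi = -\frac{1}{2}\partial_x^2 \psi\) with \(\hbar = m = 1\), where each grid point advances by its own local rate of time.

const N = 256, dx = 1.0, dt = 0.05;
const re = new Float64Array(N);   // real part of the wave function
const im = new Float64Array(N);   // imaginary part of the wave function

// local rate of time: 1.0 at the left wall, up to 2.0 at the right wall,
// i.e. time runs faster toward the right, which should push the packet left
const timeRate = i => 1 + i / (N - 1);

// initial state: a gaussian bump in the middle of the box (walls at i = 0 and N - 1)
for (let i = 1; i < N - 1; i++) {
  const x = (i - N / 2) / 16;
  re[i] = Math.exp(-x * x);
}

function step() {
  // update the real part from the imaginary part, then the imaginary part from
  // the new real part (a simple staggered scheme); each point uses dt scaled by
  // its local rate of time
  for (let i = 1; i < N - 1; i++) {
    const lapIm = (im[i - 1] - 2 * im[i] + im[i + 1]) / (dx * dx);
    re[i] -= 0.5 * dt * timeRate(i) * lapIm;
  }
  for (let i = 1; i < N - 1; i++) {
    const lapRe = (re[i - 1] - 2 * re[i] + re[i + 1]) / (dx * dx);
    im[i] += 0.5 * dt * timeRate(i) * lapRe;
  }
}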

The following simulation starts with 1D particle in box. Gravity is implemented by causing time to progress faster towards the right, which represents a gravitational acceleration to the left.

http://kcdodd.github.io/qmgrav/

When gravity is turned on, it begins to accelerate left. Because the box limits its motion, it bounces back up due to a quantum 'tension' against the two walls. I can 'push' the particle by turning gravity on when the particle is more toward the right side, and off when it is more toward the left side, increasing the total energy of the particle. Otherwise the energy is conserved. The actual time-rate difference between the two sides is hard to notice, even though it maxes out at double speed at the right edge.



So, in quantum mechanics a gradient in the rate of time produces something like a force. Since we can all be described by wave functions, the reason we accelerate downward at the surface of Earth is because the rate of time is slower at our feet than at our heads.

Since the factor is on the time (not the mass, or potential, etc), every wave will experience the same acceleration, because every time factor would be the same. This is why all things fall at the same rate regardless of the mass: the mass isn't what is causing the acceleration at all. A light wave traveling upward will become red shifted also due to time running faster higher up, which relates to a change in energy. In fact, the frequency of all waves are lower at higher points, and higher at the lower points in the gravity well, which relates directly to changes in energy.

Frequency and energy are equivalent in quantum mechanics. This is what gives rise to 'gravitational potential' energy being converted to 'kinetic' energy. The potential was created by the fact that time is running slower deeper in the well. When an object falls, it gains kinetic energy, which has a certain frequency, and the further it falls, the higher that frequency is, directly proportional to the blue-shifting caused by time-rate changes. I will want to revisit the issue of what the gravitational field energy represents, which by the way must be negative to account for the increase in energy of things falling into the gravity well.

For me this is a fairly complete picture of why things fall in warped space-time. But it doesn't answer why matter warps space-time to begin with. The elevator thought experiment explains why warped space-time causes acceleration, but remember that doesn't depend on a gravitating mass; it applies to any accelerating frame.

The effect of matter on space-time is described by the Einstein field equations. But, like most equations, they don't provide much of an intuition pump. It basically says curvature of space-time is constrained, and energy and stress can alter those constraints.

This is very difficult because even with no matter or energy, it does not mean space-time is flat! It is not that rigid. This is where the idea of a rubber sheet might be helpful; in visualizing how space-time reacts in the absence of any matter. It is completely determined by the boundary conditions. If the boundary is flat (the hoop holding the rubber), and there is no matter anywhere of course, then it will be flat. If the hoop were deformed, the rubber would not be flat anymore, but would find a kind of smooth transition between the edges of the boundary.

This is like how the universe is. We usually assume a boundary out at infinity that is flat, which makes space-time flat when there is no matter in the universe. But that does not mean space-time is flat everywhere there is no matter. And it doesn't even mean this is how the universe is shaped, when it probably is not.

When matter is introduced, it interrupts the nature of space-time. Suppose the introduction of a ball specified that time progressed at half the rate at the surface of the ball as at the boundary at infinity. Well, the space in between the surface of the ball and infinity will warp to transition between the two values. Close to the ball the time rate is x0.5, further away it is x0.75, further still it is x0.99, etc., and at infinity it is back to x1. (The surface of the ball can also specify 'rates' for the 3 spatial dimensions as well, which would cause space warps in addition to time warps.)

This gradient in time around the ball then causes things to accelerate toward the ball. Suppose there are two balls: they will accelerate toward each other. A problem with this model is that the rate at the surface of each ball is fixed to a x0.5 time rate, when really it should be more like x0.25 since there is twice the mass now. Each ball is looking at the boundary and saying x0.5 that, while ignoring where the other ball is. That is because I'm treating them as boundaries, instead of sources.

As a source it doesn't specify a fixed rate of time, but specifies how much the rate of time is decreased relatively, which then propagate out according to the field equations. Simply put, for some reason the presence of matter and energy causes time to slow down.

The effects of matter on space are a little more complicated. [edit: I need to deal with this separately because I missed some things]

Wednesday, May 27, 2015

"New" vs. "Old" Math

This post is in response to this repudiation of old math. This has been building for a while, and I've been thinking about what exactly it is I disagree with. So, this isn't just in response to that article, but with a more general view of math and what it is we are trying to teach children.

So, to start off with the article at hand, the author claims that "the top doesn’t make sense, the bottom does, and the connection to Common Core is completely misunderstood. (Says this math teacher.)" I'm just going to call the top the 'old' way, and the bottom the 'new', but I hope by the end of this post you'll get why that doesn't matter. They further explain that the old way is "... just an algorithm. You can do it without thinking".

Ok, here lies the essence of my entire issue with this view. To be blunt, and to the probable chagrin of many other teachers: math is nothing but an algorithm. The problem for many is in not understanding this, because there isn't just one algorithm. There are many, in fact infinitely many, possible algorithms to subtract two numbers (just as an example). The 'new' way is also just an algorithm.

Ok, so what is the rationale for the 'new' way that makes it better? The author claims that students can do the old way just fine, but don't know why it works. And that somehow the new math teaches what they call "number sense", which is useful for "other math concepts". Also, the new way is easier for making change out of $20.

I will start with a story from my childhood. I remember when I first learned subtraction of simple numbers, say 13 - 7, the way I did it was to start at the lower number and count up, adding 1 each time, until I got to the top number. I kept count on my fingers, so however many fingers I had up was the answer. This is the essence of the new math system, although generalized to work with differences greater than 10. However, I remember the other students didn't get how I could be adding to compute a subtraction: that's crazy! So I get that this concept is totally understandable to that age group, because it was obvious to me at the time, at least.

However, this method is a terrible general algorithm for subtraction. It's fine for 20 - 4.30. But what if you had 154442132 - 498484, as an extreme example. One argument is that, well, students would never need to do that because we have calculators. Ok, that's fine. Now to my next point: the new way is even more arbitrary than the old way, and doesn't add any new intuition about what is happening.

The argument made was that the new way is useful for other concepts: what, exactly? Let's first define what the new algorithm actually is:

Subtract: 32 - 12
Do you know the answer already?
    yes: give the answer.
    no: pick a number between 32 and 12, call it x, such that you know the answer to x - 12
    Do you know the answer to 32 - x?
        yes: give the answer as (32 - x) + (x-12).
        no: pick a number between 32 and x, call it y, such that you know the answer to y - x.
        Do you know the answer to 32 - y?
            yes: give the answer as (32 - y) + (y - x) + (x - 12)
            no: pick a number between 32 and y, call it z, such that you know the answer to z - y.
            Do you know the answer to 32 - z?
                yes: give the answer as (32 - z) + (z - y) + (y - x) + (x - 12)
                no: keep repeating etc, etc, etc

You will see this is exactly the algorithm being taught if I plug in z=30, y=20, x=15:

32 - 12 = (32 - 30) + (30 - 20) + (20 - 15) + (15 - 12) = 2 + 10 + 5 + 3 = 20
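
Written out as a small function, the whole procedure amounts to this sketch (plain JavaScript), where the list of waypoints is exactly the arbitrary part:

// "Counting up" subtraction: pick waypoints between the subtrahend and the
// minuend, then add up the easy differences. Any chain of waypoints gives the
// same answer; the choice is completely arbitrary.
function countUpSubtract(minuend, subtrahend, waypoints) {
  const stops = [subtrahend, ...waypoints, minuend];
  let total = 0;
  for (let i = 1; i < stops.length; i++) {
    total += stops[i] - stops[i - 1]; // each "easy" difference
  }
  return total;
}

countUpSubtract(32, 12, [15, 20, 30]); // 3 + 5 + 10 + 2 = 20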

Ok, I made this look even more complicated because I put in all those variables. Why did I do that? Because they are totally and completely arbitrary. Why pick 15, 20, and 30? The argument is that they are "easier". But what that really means is that we have already memorized the results of those subtractions. We aren't actually learning anything new at all that we didn't know before. If instead I only knew how to subtract 1, like I did when I first learned subtraction, then the algorithm would be x = 13, y = 14, z = 15, and I would have to add a bunch more. Other students might see other combinations, such as x = 22, or just see the answer right away.

So, why does the new way seem better to some teachers? I have some speculations. It very well may be building some intuition. And it is true, it is easy to understand why it works, but actually harder to understand how it works, which are separate issues. We can teach why very easily without pushing one particular algorithm. And the algorithm we push should be chosen based on utility, generality, and understanding of the algorithm itself. Math is not just intuition.

Now comes a more recent story about my experience trying to teach computer science to a group of early-level high school students.

I gave an assignment to write a program that would take a starting time for a clock, and after a certain number of ticks of the second hand it would report the new time on the clock. And it had to work for any number of ticks and any starting time. So if I said it started at 10:01:56 and ticked 8 times, it should read 10:02:04. Now, if I asked a student to do this in their head, they had absolutely no problem with the additions and carry-overs needed to do that. But ask them to write a program? They couldn't decide exactly what it was they were asking the computer to do. They had the intuition, or number sense, to solve the problem, but they couldn't describe the algorithm they were using to accomplish it. And without that, they couldn't tell the computer how to do it.
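
For reference, here is one sketch of the kind of algorithm the assignment was after (the function name and the 24-hour wrap-around are my own choices here, not part of the assignment):

// Advance a clock reading hours:minutes:seconds by n one-second ticks.
function tickClock(hours, minutes, seconds, ticks) {
  // convert to total seconds, add the ticks, and wrap around a full day
  const total = ((hours * 60 + minutes) * 60 + seconds + ticks) % (24 * 3600);
  return {
    hours: Math.floor(total / 3600),
    minutes: Math.floor((total % 3600) / 60),
    seconds: total % 60
  };
}

tickClock(10, 1, 56, 8); // { hours: 10, minutes: 2, seconds: 4 }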

The reason this method is so good at making change is because you are almost always subtracting from a round number. That's because we don't print bills in odd amounts. The quantities and differences are also usually small. Every example I see about how easy this new method is to use is based on these problems that are easy no matter how you go about it. Again, completely arbitrary and a special case.

I think what we should be doing for this type of problem is exactly teaching algorithms. If you don't think they understand the algorithm, then work on that. But exploring the algorithms is the important part, not just the answer, which I think is the end point here anyway.

Saturday, April 11, 2015

JavaScript "classes" my way

I group information about an object into a "unit" object, which holds information about the inputs needed to make the object, the interface of the object created, the functional constructor for the object, and the unit tests for the object. My motivation was to build as much documentation and testing about the object into the code itself. Validation of the inputs and created objects can also be done easily with some helper functions, as long as the same convention is followed. More complicated hierarchies can be created easily with this model, without really introducing any new syntax or dealing with prototypes. I'm assuming something like QUnit is used for unit testing.
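
Here is a minimal sketch of what one of these "unit" objects might look like. The names are purely illustrative, and QUnit is assumed to be loaded for the test part:

// A "unit" bundles the inputs, the interface, the functional constructor,
// and the unit tests for one kind of object.
const counterUnit = {
  inputs: {
    start: "number"            // expected input and its type
  },
  interface: {
    increment: "function",     // what the created object must expose
    value: "function"
  },
  create: function (spec) {
    // functional constructor: no prototypes, no "new" keyword
    let count = spec.start;
    return {
      increment: function () { count += 1; },
      value: function () { return count; }
    };
  },
  test: function () {
    QUnit.test("counter increments", function (assert) {
      const c = counterUnit.create({ start: 3 });
      c.increment();
      assert.equal(c.value(), 4);
    });
  }
};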

Tuesday, March 10, 2015

Productivity Unemployment Model

Assume the demand curve is given by:

\begin{equation}
s = s_0 \left ( 1 - \frac{p}{p_0} \right )
\end{equation}

Where \(p_0\) is the price at which no one will buy the product, and \(s_0\) is the quantity that can be given away for free.

My thought is that the production curve is proportional to the number of workers in the industry and their productivity.

\begin{equation}
s = \frac{w}{h}
\end{equation}

where \(w\) is the number of workers, and \(h\) is the hours per worker per unit.

The only missing element is the wage of the worker, \(c\). But assuming the price covers all labor costs (in the abstract sense of the word), then the price must be roughly equal to the number of hours of labor per unit times the cost of that labor time.

\begin{equation}
p = hc
\end{equation}

Combining this with the above, and solving for the number of workers in the industry (assuming wages don't change significantly):

\begin{equation}
w = h s_0 \left ( 1 - \frac{hc}{p_0} \right )
\end{equation}

This is a quadratic function of the labor time, with a maximum value at

\begin{equation}
h = \frac{p_0}{2 c}
\end{equation}
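
This maximum follows from setting the derivative with respect to \(h\) to zero:

\begin{equation}
\frac{dw}{dh} = s_0 \left ( 1 - \frac{2hc}{p_0} \right ) = 0 \quad \Rightarrow \quad h = \frac{p_0}{2 c}
\end{equation}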

My interpretation of this equation and the story it tells is as follows. If a particular product requires too much labor relative to how much people want it, then it will not be produced. But once the labor time falls to a sufficiently low level by increases in potential productivity, then it will begin to be produced and sold. As productivity continues to increase, the price falls and more workers are employed to bring ever larger quantities of the product to market.

However, once the labor time falls below a certain level, fewer workers are actually required to bring the larger quantity to market, even as production increases. This leads to layoffs in that particular industry after every advance in productivity.

Monday, February 23, 2015

Employment and Technology

I want to make a toy economy to illustrate my thinking about this subject. Suppose we start off with 100% employment in a static economy. People's preferences, incomes, etc. are all stagnant, and so supply and demand have reached some equilibrium.

At the risk of being cliche, say there is an industry, Widgetry, that makes widgets. Now suppose that someone invents a technology that makes workers twice as productive (twice as many widgets per day per worker). Now, how does that industry take advantage of this? To save some time, I am simply going to assert that this requires at least some of the workers to be fired. The dynamic of exactly how many depends on whether the business wants to sell more widgets by reducing the price, or just keep the extra profits, but either way the result is more or less the same.

Now, if that company simply keeps the extra profit, market forces might lead to other companies being formed that sell a similar item for less, taking advantage of the new technology, and thus hiring some of those same workers. However, again, I will assert that at least some percentage of that workforce must never work in that industry again, because otherwise there is no economic gain from the increased productivity.

There is a very serious problem here, because the rest of the economy is already balanced. There is in principle nowhere for these workers to go. Worse even is that they no longer demand as much due to their reduced income, which may lead to additional layoffs or wage reductions in other industries, making the employment opportunities even tighter. Those workers are simply forgotten by the economy, because they are obsolete for all practical purposes.

What is their only hope? I contend that the only thing that would prevent permanent unemployment is the development of an entirely new type of product that does not directly compete with the product they used to make. From a consumer's point of view, buying two of essentially the same thing is less useful than buying two completely different things. However, new product development is never guaranteed to follow advances in productivity.