I'll admit when I first saw the words "Dimensional Analysis," I felt my skin flush; my heart started beating faster; my mind began racing; and I scanned the exits of the room I was in. Classic fight or flight, with an emphasis on the latter.
But as I read, I realized my first reaction was silly. That said, I couldn't stop thinking that a lot of this was hand-waving. Can we legitimately summarize this discussion (or at least summarize the justification) by observing that "in physics, almost everything is continuous" so arguments like this just work?
More precisely, what exactly is "length scale" or "characteristic length" supposed to represent? Is this along the lines of the length of the box everything is contained in, or is this the length of the smallest phenomenon observable/significant? What about in problems with a large container and small phenomena of global significance?
Also, why do we put a bar on the velocity scale U?
Finally, how is Reynolds number in any way well-defined? Can't I just say the scales are approximately this or that and get entirely different values?
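To convince myself the arbitrariness mostly washes out, I tried a rough sketch in Python. The numbers are made-up, order-of-magnitude guesses (water-ish viscosity, hand-sized object), purely for illustration:

# Rough check: how much does Re care about sloppy scale choices?
# Illustrative values only -- water-like kinematic viscosity, hand-waved scales.
nu = 1e-6                        # kinematic viscosity of water, m^2/s (approx.)
for L in (0.05, 0.1, 0.2):       # plausible "characteristic lengths", m
    for U in (0.5, 1.0, 2.0):    # plausible velocity scales, m/s
        Re = U * L / nu
        print(f"L={L:g} m, U={U:g} m/s -> Re = {Re:.1e}")
# Every choice lands around 1e4-1e5: the factor-of-a-few ambiguity never moves
# us across the Re ~ 1 threshold that the scaling argument actually cares about.

So the value isn't well-defined to better than a factor of a few, but the conclusion it feeds into apparently is.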
On to the next section: when can we legitimately make the lubrication assumption and get realistic results? I want to say for "slippery" fluids, but what does that even mean?
When we get a time estimate for the length of time needed to remove an adhering object, what assumptions are we making about the way it's pulled off? I feel like this should be clear, but wasn't really for me.
Overall, really cool stuff. I'm amazed that despite the sophistication of the equations, we can get tangible and useful numerical results.
Sunday, April 13, 2008
Reading 4/14/2008: Lecture 7
First derivation is very cool. Energy minimization leads to the fluid cylinder instability. Makes sense, and the derivation is simple enough.
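Just to see the surface-energy argument in numbers, here's a toy check I cooked up (mine, not from the notes): perturb a cylinder of radius R as r(z) = R0(1 + ε cos kz), pick R0 so the volume per wavelength matches the unperturbed cylinder, and compare lateral surface areas. The area (hence surface energy) should drop exactly for kR < 1, i.e. wavelengths longer than the circumference 2πR, which is the instability threshold.

import numpy as np

# Toy check of the surface-energy argument for a perturbed fluid cylinder:
# r(z) = R0*(1 + eps*cos(k z)), with R0 chosen so one wavelength of the
# perturbed column has the same volume as the unperturbed cylinder (R = 1).
def area_change(kR, eps=1e-3, n=20000):
    R, k = 1.0, kR
    lam = 2 * np.pi / k
    z = np.linspace(0.0, lam, n, endpoint=False)
    dz = lam / n
    shape = 1 + eps * np.cos(k * z)
    R0 = R * np.sqrt(lam / np.sum(shape**2 * dz))          # volume conservation
    r = R0 * shape
    drdz = -R0 * eps * k * np.sin(k * z)                    # analytic dr/dz
    A = np.sum(2 * np.pi * r * np.sqrt(1 + drdz**2) * dz)   # lateral surface area
    return A - 2 * np.pi * R * lam                          # vs. unperturbed area

for kR in (0.5, 0.9, 1.1, 2.0):
    print(kR, area_change(kR))   # negative (area drops) iff kR < 1, i.e. lambda > 2*pi*R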
Very cool to see Bessel functions popping up, though given the type of equations and the space on which we're solving them, this doesn't seem particularly surprising, judging from my experience in Math 180.
Overall, it all makes sense to me. It's very interesting to see that what is fundamentally a stability analysis can be performed by linearizing and solving the system and then looking at solutions for which the waves grow. I'm a little unclear as to where the "asymmetric modes" part of the final paragraph comes from, but the fact that wavelengths greater than a threshold value grow to infinity actually makes sense to me.
NB: Sorry I've missed so many blog entries. If I can find time, I'm going to go back and write them, but these past two weeks have been absolutely vicious.
Wednesday, April 2, 2008
Reading 4/2/2008: Lecture 3 Notes
I wish I'd had more time to blog recently, but life has been a little too crazy.
At any rate, this is cool stuff. It's nice to see how the free-surface boundary conditions play out in the mathematical PDEs framework, and it's even cooler to see a fairly rigorous proof of Bernoulli's theorem. Obviously the same concepts are there, but, well, I'm a mathematician, so it's better now.
Definitely cool to see the Fourier transform appear in the end, too. I'd be curious to hear about the general applicability of the FT in fluids - it's certainly a big hammer and great for making things smooth. (No pun intended.)
The series expansion strangely reminded me of perturbation theory from big quantum; I'm guessing this is a fairly standard approach - I think it's more the notation. That said, I wonder how applicable the linearized equations are and/or what their drawbacks are?
Wednesday, March 12, 2008
Reading 3/10/2008: Section 3.6
Late, I know, but better than never.
This stuff honestly is straightforward. Having seen complex variables before, the approach is a little weird (partial derivatives of a complex function and chain rule usage are a little suspect, but it works).
Laplace's equation is solved pretty thoroughly in a bunch of classes, so that's pretty much par for the course. It is really cool to see the stuff on p. 197 about using the real vs. imaginary part as the potential. Didn't know you could look at it that way.
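To see the trick in action, here's a little sketch I tried (standard potential-flow fare, not from the book): for an analytic complex potential w(z), the velocity comes out of the derivative via u - iv = dw/dz, the real part is the potential, and the imaginary part is the streamfunction (or vice versa if you swap roles). The particular w below - uniform flow plus a point source - is just an arbitrary pick.

import numpy as np

U, m = 1.0, 0.5                  # free-stream speed and source strength (arbitrary)
def w(z):                        # complex potential: w = phi + i*psi
    return U * z + (m / (2 * np.pi)) * np.log(z)

def velocity(z, h=1e-6):
    dwdz = (w(z + h) - w(z - h)) / (2 * h)   # numerical dw/dz
    return dwdz.conjugate()                  # u + i*v, since dw/dz = u - i*v

z = 1.0 + 0.5j
print("phi =", w(z).real, " psi =", w(z).imag)
print("u + i v =", velocity(z))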
What exactly happens physically at the interface where, mathematically, the pressure becomes negative? That was really my only major question from the chapter.
Wednesday, March 5, 2008
Reading 3/5/2008: Sections 3.4-3.5
Section 3.4: Very cool; we have a number that can tell us whether we make the assumption of diffusion-dominance or vorticity-dominance. Makes sense to me, though where do we get the characteristic length scale from? And why does it shrink with turbulence? Otherwise, everything is clear.
Section 3.5: The derivation of small Bernoulli is very straightforward, though the result is quite cool. For big Bernoulli, we demand that a flow is irrotational - where are some examples where this really breaks down in a big way? Also, how might one measure φ, the velocity potential function, much less its time rate of change?
The connection with the Laplacian here makes sense given our assumptions, but is a nice touch. The dipole/etc. thing is worth discussing in class. I've never understood physicists' fascination with dipoles, but maybe I will with a little more detail.
Monday, March 3, 2008
Reading 3/3/2008: Sections 3.2-3.3
Section 3.2: Okay, cool. Our equations reduce with the acoustic approximation to something much more tractable. Very nice. I'm a little curious why we can assume (3.58) can only be satisfied in the two ways mentioned in the book. If I had a little more time, I would sit down and just prove this, but I wonder if there's a quick answer?
What are the real-world implications of S-waves decaying so rapidly? If the waves are only significant very, very close to the source, where do they arise/where are they important in practice?
How is the scattering effect of particles on p. 163 accounted for in fluid models?
Overall, this section seemed pleasantly simple. We get some nasty dispersion relations, but they're easy enough to use, and reduce to forms that are fairly easy to work with. Cool stuff.
Section 3.3: A section with "Theorem" in the title. Yay, math. Honestly, everything here made good sense. I wish I could see a more rigorous proof of the theorem, but for our purposes, this seems pretty good to me.
The bit at the end about vortex tubes is awesome. So THAT'S what a tornado is...
Reading 2/27/2008: Section 3.1
Woo, Fluids.
So we can immediately dispense with μ, simplifying things quite nicely. Very cool derivation, and seemingly quite rigorous. I'm not entirely clear why we can assume ξ_n,n is zero, but I'm assuming it's because S_mm is Φ.
The derivation of the new equation of state is very cool. It's remarkable to see that dp can be characterized completely and uniquely in terms of ρ. I'm not quite sure where the book is going with the "exact differential" comment, but I'm assuming that means something to physicists that it lacks in meaning to me.
Why do we assume viscous stresses are linearly proportional to velocity gradients? What is the origin of this postulate? That was one of the major aspects unclear in the section. Otherwise, the derivation of Newtonian viscosities was clear enough.
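For what it's worth, here's what the postulate buys you in the simplest case. Take the usual textbook form of the Newtonian constitutive law (the book's sign/second-viscosity conventions may differ slightly) and feed it a simple shear flow u = (γ̇ y, 0, 0); the only viscous stress that survives is σ_xy = μ γ̇. A quick sympy sketch of that claim:

import sympy as sp

x, y, z_, mu, lam, gdot = sp.symbols('x y z mu lamda gammadot')
u = sp.Matrix([gdot * y, 0, 0])          # simple shear flow
X = (x, y, z_)

grad_u = sp.Matrix(3, 3, lambda i, j: sp.diff(u[i], X[j]))
strain_rate = (grad_u + grad_u.T) / 2
div_u = sum(sp.diff(u[i], X[i]) for i in range(3))

# Newtonian viscous stress: 2*mu*e_ij + lambda*delta_ij*div(u)  (usual textbook form)
visc_stress = 2 * mu * strain_rate + lam * div_u * sp.eye(3)
sp.pprint(visc_stress)   # only the xy (and yx) entry survives: mu*gammadot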
And holy cow, we have Navier-Stokes! If only we could solve them generally...
I'm a little unclear what is meant by a volume force in (3.27) - is this just to emphasize that this is a force separate from the external (e.g. gravitational) force?
On p. 151 I just want to point out that the word "magma" is bloody awesome. Everyone should incorporate it into their daily speech immediately. No, seriously. I mean it.
Overall a fantastic section.
Saturday, February 23, 2008
Reading 2/25/2008: Section 2.12
This was a pretty straightforward extension of the material from the previous chapter. Allow the wavevector to be complex, and - boom - you've got new solutions. It's definitely cool to see that they must decay exponentially as you move deeper in the material, but the rest was pretty intuitive and straightforward. No real questions that I can think of, as the rest was really algebra.
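Spelled out, the decay really is just the algebra of a complex wavevector: writing k = k_r + i·k_i gives

e^(ikx) = e^(i(k_r + i k_i)x) = e^(i k_r x) · e^(-k_i x),

so the usual oscillation simply picks up an envelope that dies off exponentially with depth (for k_i > 0).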
Reading 2/20/2008: Section 2.13
Okay, so this is rather ashamedly late, but I've been rather wickedly sick the last few days so please do forgive me.
The lead up to (2.254) is simply gorgeous. We start with the general equation of motion, use our previous construction of Φ, put stuff in terms of kinetic energy, and - Bam! - all of a sudden we have a classical equation of kinetic and potential energy combined with energy flux. As suggested in the book, this is definitely Maxwell-style beauty.
I'm a little puzzled by the comment on p. 124 - why can't we do these calculations in exponential notation? Mathematically, the two are fundamentally equivalent, so it doesn't make sense to me where the difference arises.
My last thought is to wonder if there is some kind of fundamental equivalence (or at least similarity) between EM and P waves. The formulas on the bottom of p. 124 do seem to suggest that if we say
ρ₀ α ω² -> Sqrt[ε₀/μ₀]
and
a² -> E₀²
we get some kind of relationship?
Tuesday, February 19, 2008
Reading 2/18/2008: Section 2.11
Sorry this is late, I've been rather sick. Anyway, this section is a bit of a beast, but whew - good stuff.
So we can use the symmetry of ξ to plug one equation in the acoustic approximation into the other, and we get something nice (2.137). Fantastic.
What exactly does Helmholtz's theorem state? Wikipedia gives a good enough answer, and I assume this was discussed in class, but I'll let the question stand, as it seems to be an important, if intuitive, result.
Equation (2.144) is awesome. With some fairly basic steps, we've jumped from our general equation for isotropic acoustic-approximated media to the wave equation. BAM. There are huge assumptions on smoothness throughout all the derivations here, but the mathematician in me is throwing up the white flag - it ain't worth the battle.
As far as the derivation of necessity goes (2.147-148), my only question is: is there a cute proof that zero curl and divergence imply constancy? I feel like there must be, but it's not coming to me offhand and I'm too busy to try and find it.
The derivation of the dispersion relations is a little unclear, but I think it's mostly a notational issue. We have a wave propagating in direction k_i. For one, φ and A have fixed values on the plane normal to k_i only at a fixed time, as I gather, though this isn't said. If we denote the speed by c and let r be a position coordinate in the direction of k_i (with k_i presumably having magnitude k), then by definition dr/dt = c. If we follow a plane of constant phase, we end up with the dispersion relation. It makes sense after several readings, but the statement feels unclear.
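One way I found to make that step feel less slippery is to just shove a plane wave into the wave equation symbolically and see what has to hold. A quick sympy sketch (my own check, 1-D for simplicity):

import sympy as sp

x, t, k, w, c = sp.symbols('x t k omega c', positive=True)
phi = sp.exp(sp.I * (k * x - w * t))      # plane wave: phi = exp(i(kx - omega*t))

wave_eq = sp.diff(phi, t, 2) - c**2 * sp.diff(phi, x, 2)
print(sp.simplify(wave_eq / phi))         # -> c**2*k**2 - omega**2
# The plane wave solves the wave equation exactly when omega = c*k, i.e. the
# surfaces of constant phase kx - omega*t move at speed omega/k = c.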
The notion that P- and S-waves represent the two modes of propagation in an isotropic medium is new to me, and actually very cool, though it makes perfect sense now.
The stuff on pages 99-101 is standard stuff, having seen Physics 52 and 116, but the whole tossing in of an incident and a reflected wave out of thin air has always seemed a little hand-wavy to me. That said, I'll accept it. (Not like I have a choice ;-)) All that said, why does symmetry of the medium imply an incident and a reflected wave must lie in the same plane? Intuitively this is clear, but mathematically...?
That was mostly it for the chapter. The connection to earthquakes was a nice touch, too.
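Since earthquakes came up: the two modes come with concrete speeds, α = sqrt((λ + 2μ)/ρ) for P and β = sqrt(μ/ρ) for S, which is why the P-wave always arrives first on a seismogram. A quick plug-in with rock-ish numbers (illustrative values I picked for a generic crustal rock, not from the book):

import math

rho = 2700.0        # density, kg/m^3 (generic crustal rock -- illustrative)
lam = 3.0e10        # Lame's first parameter, Pa (illustrative)
mu  = 3.0e10        # shear modulus, Pa (illustrative)

alpha = math.sqrt((lam + 2 * mu) / rho)   # P-wave speed
beta  = math.sqrt(mu / rho)               # S-wave speed
print(f"P: {alpha/1000:.1f} km/s, S: {beta/1000:.1f} km/s, ratio {alpha/beta:.2f}")
# With lam = mu this gives the classic Poisson-solid ratio alpha/beta = sqrt(3).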
Until next time...
Wednesday, February 13, 2008
Reading 2/13/2008: Section 2.10
Section 2.10: Fantastic, absolutely fantastic stuff. As always, I'm a little skeptical of (and amused by) the physicist's tendency to pull a sine or cosine wave out of thin air, but it sure works nicely. The Taylor series in (2.105) serves no apparent purpose to me, as this is just the chain rule, but it's a minor point. The rest is actually pretty intuitive after the introduction of the propagating plane wave and the fairly natural acoustic approximation.
Not a whole lot else here. Sorry this is late, but it's been a long, long day.
Sunday, February 10, 2008
Reading 2/10/2008
Section 2.8: This section was very cool. I'm guessing that the energy function defined herein is going to be the key to many later results. That said, I'm still trying to get used to physicists' differential notation. Usually I need to convert it in my mind to something that is actually mathematically well-defined, but the only way I see to do that here is to change d[whatever] to Δ[whatever] and then take limits.
I'm a little confused on the implication of (2.70) leading to (2.71). The notion of exact differential is not quite rigorously defined; is this perhaps equivalent to the existence of a gradient function in a conservative field? That's the explanation that makes sense. Then (2.72) would become a dot product of the gradient with dr, which would make considerably more sense to my mathematical sensibilities.
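My working picture of "exact differential" is precisely the conservative-field test from vector calculus: M dx + N dy is exact iff the mixed partials agree, in which case there is a potential whose gradient dotted with dr reproduces the differential. A two-line sympy sanity check of that criterion (a toy example of mine, nothing to do with the elastic energy specifically):

import sympy as sp

x, y = sp.symbols('x y')
M, N = 2*x*y + 1, x**2                        # candidate differential M dx + N dy
print(sp.diff(M, y) == sp.diff(N, x))         # True -> exact
f = sp.integrate(M, x)                        # potential: f = x**2*y + x (+ g(y))
print(sp.simplify(sp.diff(f, y) - N))         # 0, so grad(f) . dr = M dx + N dy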
Equation (2.75) is very cool and intuitive. (Although I feel like there's got to be a more elegant, if less physical, way to get here.) We've got a function whose gradient, when the strain is considered as a vector in R^9, is literally the stress tensor.
I don't necessarily see why (2.78) needs explanation - the derivative of one component with respect to another component is very clearly a Kronecker delta under any circumstances.
We get a beautifully simple equation in the end, but my only question is how are we assured that the components of the tensor c are space invariant? Is this an assumption or is this universally true?
Section 2.9: This is very intuitive stuff. In essence, if we can find rotations that leave the stress tensor (and therefore the potential energy) unchanged, then these rotations are symmetries, and through equation (2.90), force symmetries into the elastic constants.
In the text, we've essentially noted that without knowing the tensor itself, we can argue physically that a certain structure must have certain elastic symmetries. But it seems we could actually reverse this argument. If we construct the elastic tensor of a structure, we can find all of its symmetries, even if they are not obvious due to coordinate choice.
So let's denote the elastic 4-tensor by T. If I remember correctly, we denote the action of a rotation A on T (which is really a pullback, but...) by writing
A*T(u,v,w,z) = T(A^T u, A^T v, A^T w, A^T z)
where u,v,w,z are vectors. This agrees with (2.87). Note that we are using here that A^T = A^(-1). Writing it this way, we're actually looking for rotations A such that
A*T = T
In essence, we can just find all rotations that pull back under * to a transformation fixing T. I still feel like this could be formulated as an eigenvalue problem, which was my initial motivation for this tangent, making such a task even simpler, but I can't think of how at the moment.
On another note, I'd prefer a slightly better justification for why (2.97) is a sufficiently general isotropic tensor.
The derivation at the end is slick, though.
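On the (2.97) question, one thing that at least convinced me numerically: build the isotropic form c_ijkl = λ δ_ij δ_kl + μ(δ_ik δ_jl + δ_il δ_jk), transform it by random rotations in all four slots (which is what I take (2.87) to be doing), and it never changes. That's only the "it really is isotropic" direction, not a proof that it's the most general such tensor, but it's reassuring. A numpy sketch:

import numpy as np

d = np.eye(3)
lam, mu = 1.3, 0.7                                  # arbitrary elastic constants
c = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

def random_rotation(rng):
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return q if np.linalg.det(q) > 0 else -q        # force det = +1

rng = np.random.default_rng(0)
for _ in range(5):
    A = random_rotation(rng)
    c_rot = np.einsum('ip,jq,kr,ls,pqrs->ijkl', A, A, A, A, c)   # rotate all four slots
    print(np.max(np.abs(c_rot - c)))                 # ~1e-15 every time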
Tuesday, February 5, 2008
Reading 2/6/2008: 2.7
Section 2.7: Not much to question here. I might ask how we calculate/find/arrive at the components of the 4-tensor, but other than that, it's all pretty straightforward.
Monday, February 4, 2008
Reading 2/4/2008: 2.4-2.6 (Continued)
Two points.
1. A lot of people have asked why there exists a set of coordinates wherein the stress tensor is diagonal. This is actually just a fact from linear algebra. Remember that S_ij can be thought of as a matrix S. Since the stress tensor is symmetric, the matrix is symmetric in the sense that S^T = S. This is because S_ij is just the component in the ith row and jth column. Check it if you don't believe it.
Now from linear algebra, there's a theorem that says every symmetric matrix admits an orthogonal diagonalization. That is, there exists an orthogonal matrix P and a diagonal matrix D such that
S = P^(-1) D P
Note that the diagonal elements of D are just the eigenvalues of S. Also, P is invertible because every orthogonal matrix is (det P = ±1).
Now why does this answer the question? Because if we let the columns of P become our new basis for R^3, then S "looks like" D. More precisely, the linear transformation S has representation D with respect to the basis formed by the columns of P. Thus those columns represent the coordinate system in which S is diagonal.
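If anyone wants to see the theorem in action rather than take my word for it, numpy will happily produce the orthogonal P (np.linalg.eigh is built for symmetric matrices; it hands back S = P D P^T, which is the same statement since P is orthogonal). A toy example:

import numpy as np

S = np.array([[2.0, 1.0, 0.0],       # a made-up symmetric "stress tensor"
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.0]])

evals, P = np.linalg.eigh(S)          # columns of P = orthonormal eigenvectors
D = np.diag(evals)
print(np.allclose(P @ D @ P.T, S))    # True: S is diagonal in the basis of P's columns
print(np.allclose(P.T @ P, np.eye(3)))   # True: P is orthogonal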
2. With regard to my little tangent before, maybe to help clarify a bit.
SO(3) is the set of rotations in three-space. Its elements are just 3x3 matrices that are not only orthogonal (as all rotations are) but actually have determinant +1 to rule out reflections. It's the Special Orthogonal Group. We call it a group because it's actually a more sophisticated mathematical object, but that's not important. The key point is that multiplication and inverses are defined on this set by matrix multiplication and matrix inversion. It might help to think of matrices as vectors in R^9.
What's interesting is that SO(3) is actually a Lie group. This basically means that the group operations are smooth and that, locally, the group "looks like some version of R^n". This is obvious to us, because we can think of a matrix easily as just a vector in R^9. But this gives it lots of special properties, because it means everything is sufficiently nice.
We can define a "path" in SO(3) to be a set of matrices parameterized by a real variable t. Say &gamma(t) might be a bunch of matrices that vary smoothly at t varies from 0 to 1. What do we mean by vary smoothly? Well that's simple - each entry in the matrix is just a real function of t, and all of those functions are smooth.
So now we have this structure on our set of rotations. How do we think about an infinitesimal rotation? Well, it should be close to the identity - because no rotation at all is just multiplying by the identity matrix. And it should have some notion of "direction", because if we integrated a bunch of infinitesimal rotations (in some sense), we ought to be able to get to a real rotation.
What we do is we let &gamma be an arbitrary path starting at the identity (so &gamma(0) = I) and defined from t = 0 to some later value of t (it doesn't matter). We can then define an infinitesimal rotation by taking the derivative of this path at 0. Why does this make sense? Well intuitively, say &gamma(1) is the rotation we want to move "towards", then we should be able to get there by integrating &gamma'(t) from 0 to 1 - integrating the infinitesimal rotations. This is just the fundamental theorem of calculus in disguise.
A Lie algebra is just the set of all objects of the form γ'(0) where γ is, again, a path starting at the identity matrix and going somewhere else. The calculation I made in my previous post was investigating what the Lie algebra of SO(3) (denoted by so(3), to be confusing!) actually looks like. All I did was let γ be an arbitrary path; I then wrote γ in terms of an arbitrary rotation matrix (every matrix in SO(3) can be written in terms of three angles, which we can then think of as functions of t), and then differentiated with respect to t and set t = 0. That's where the matrix in my post comes from. Since γ was arbitrary, dθ/dt, dφ/dt, and dψ/dt are all arbitrary as well. That's why we calculate that so(3) is just the set of antisymmetric matrices.
Hopefully that makes slightly more clear what I was talking about.
Sunday, February 3, 2008
Reading 2/4/2008: 2.4-2.6
Section 2.4: In deriving the continuity equation, we're relying heavily on the continuity and differentiability of the density and velocity. Does this mean we'll need entirely new techniques when analyzing systems containing the interface between two different materials (i.e. discontinuous density)? Are there situations when continuous velocity breaks down?
I thought I'd mention with respect to equation (2.19) that one easy way to understand why a total derivative becomes a partial derivative is to write it as
d/dt ∫_V ρ dV = ∫_V (∂ρ/∂t) dV.
Written this way, the dV = dx dy dz variables can be thought of as dummy variables, much like dummy indices, since they are being integrated over. Consequently, t is the only free variable, and so passage from total differential to partial differential is purely notational, since we do not use total differentials for functions of more than one variable. (Unless, of course, they are "secretly," as in classical/quantum mechanics, a function of just one variable - time. But we can ignore that here.)
As a side note, I'm a little surprised none of this has been adapted to be used in Math 13/61 courses. It's a fantastic showing of the power of the divergence theorem.
Section 2.5: I kind of wish the explanation had been done strictly in 2D, because it took a little while to figure out where the extra components were going. That said, Figures 2.6/2.7 are very cool - it's quite clear that with symmetry we have no torque and without symmetry there must be one.
I wish a little more could be said about the stress ellipsoid. If my understanding is correct, once we've transformed to principal axes, directions x_i in which Π_ii is large have small extent. Thus the greater the stress in one direction, the smaller the extent of the ellipsoid in that direction. Similarly, in the case of negative stress, how exactly are compressive and extensional forces differentiated? And where do the new arrows come from in Figure 2.11?
Section 2.6: This is the physics-math barrier coming up again, but for the life of me I can't remember how physicists get away with saying that if D is an "infinitesimal rotation" about P, then δ = D × dx. I know infinitesimal rotations are elements of the Lie algebra of SO(3), but there's got to be a better intuitive way of understanding where the cross product comes from.
WARNING: TANGENT
So, I'm just thinking out loud here, sorry if this is useless. But I want to see if I can explore the answer to the question I just posed. Here I'm going to assume we've rigorously defined an infinitesimal rotation as an element of the Lie algebra of SO(3), the group of rotations in three dimensions, but that we don't know anything about so(3). The reason is that I can't remember offhand what the Lie algebra looks like.
Okay, so let's suppose we have the identity I of SO(3) and have a path γ with γ(0) = I. We would like to calculate γ'(0) for an arbitrary path. First, we express γ as a matrix by multiplying the matrices for an xy rotation of angle θ, a yz rotation of angle φ, and an xz rotation of angle ψ. Notice that if we then write each of the angles as a function of time with all angles zero at time zero, then we have an arbitrary path in SO(3) beginning at the identity. We finally differentiate the matrix and plug in t = 0. This series of calculations is extremely involved, but the result is the matrix
γ'(0) =
[ 0       -dθ/dt   -dψ/dt ]
[ dθ/dt    0       -dφ/dt ]
[ dψ/dt    dφ/dt    0     ]
which represents an arbitrary element of so(3). Note that each derivative above is evaluated at t = 0. Okay, cool, so it looks like so(3) is just the set of antisymmetric matrices, since those derivatives at zero can be anything we want them to be. We've established that any infinitesimal rotation is just an antisymmetric matrix. Set a = dφ/dt, b = dψ/dt, and c = dθ/dt. Then we can construct a vector D = (-a, b, -c), where the negatives will be clear momentarily. We can finally recover the antisymmetric matrix B = γ'(0) using a trick from the book and writing
B_ij = ε_ijk D_k
So in sum, it looks like we can legitimately make a "nice" bijection between the set of infinitesimal rotations, so(3), and individual vectors. In retrospect, I suppose the cross product comes in since if we multiply an element B of so(3) by a vector x = x_j, we get
(Bx)_i = B_ij x_j = ε_ijk D_k x_j = -(D × x)_i,
which is essentially (2.40). So if we express an infinitesimal rotation as a vector D, we can represent the result of applying the rotation to a vector x (up to a factor of -1) by taking the cross product of D and x. That is VERY cool.
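And since I didn't entirely trust my own index gymnastics, a short numerical check that the antisymmetric matrix built from D via B_ij = ε_ijk D_k really does act as (minus) the cross product:

import numpy as np

rng = np.random.default_rng(1)
D, x = rng.standard_normal(3), rng.standard_normal(3)

eps = np.zeros((3, 3, 3))                     # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

B = np.einsum('ijk,k->ij', eps, D)            # B_ij = eps_ijk D_k  (antisymmetric)
print(np.allclose(B @ x, -np.cross(D, x)))    # True: B x = -(D x x)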
Okay, sorry if that made sense/helped absolutely no one. It was really cool to me, so I left it in. When I wrote all that I hadn't even read the next line, but now I realize that what we're doing in (2.42) is recovering the ACTUAL infinitesimal rotation, which is of course the antisymmetric matrix Aij in the book. Even the negative sign resurfaces!
END TANGENT
I just have to say the way in which we derive δS is gorgeous. Eliminate translations by defining δ and brilliantly get rid of infinitesimal rotations by removing the antisymmetric part. Not intuitive, but both practically simple and mathematically incredible.
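The decomposition itself is easy to play with numerically: any displacement-gradient-like matrix splits uniquely into a symmetric part (the strain) plus an antisymmetric part (the infinitesimal rotation we just threw away). A tiny sketch, with a random matrix standing in for the actual gradient:

import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((3, 3))      # stand-in for a displacement gradient

strain   = (G + G.T) / 2             # symmetric part: what survives in the strain tensor
rotation = (G - G.T) / 2             # antisymmetric part: the infinitesimal rotation
print(np.allclose(strain + rotation, G))    # True: the split is exact
print(np.allclose(rotation, -rotation.T))   # True: the rotation part is antisymmetric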
Tuesday, January 29, 2008
Reading 1/30/2008: 1.6-2.3
First off, sorry this is past the midnight deadline. Monday and Tuesday are absurdly busy for me on my schedule. I've forgone proper symbols to get this out quicker; my apologies for the messiness.
Section 1.6: Cool stuff - derivatives are straightforward, and the easy proofs with the Laplacian, curl, and divergence are impressive. It gets a little unclear towards the end. (1.89) is not manifestly covariant since it is not yet expressed in terms of tensors and we do not know how it changes under rotation. Is covariant here used in the sense that it transforms nicely under rotations, or in the sense of covariant tensors/pseudovectors? But is B_ij a tensor or a pseudotensor? It is not obtained by tensor product or by Levi-Civita contraction, so we have not yet established a rule for determination.
Section 2.1-2.2: One question that immediately stands out is equation (2.1). We have a volumetric force F_i multiplied by mass (rho*dV) equal (in units) to the net force. That is, the units seem not to work out. How is this resolved? Another point lies in equation (2.6) - why are we computing surface integrals when we are told to integrate over volume? The rest pretty much made sense, though I'll be interested to see how this is presented in class. My mathematical instincts are a little frustrated by the "wishy-washiness" of the argument.
Section 2.3: Okay so F_i, though not mentioned before, is force per unit mass. That makes more sense now. This stuff is very cool and intuitive. Everything else pretty much makes sense.
Good stuff.
Sunday, January 27, 2008
Reading 1/28/2008: 1.1-1.5
Sections 1.1,1.2: Okay, so these are basically what we did in class. Fairly simple stuff to those who have seen tensors. The mathematician in me is still freaked out by the index stuff, and the physicist is wondering what happened to superscript/subscript Einstein notation used in GR. But otherwise, it's pretty intuitive. That said, it does seem the notation is incredibly cumbersome in certain cases. For example, equation 1.31
||a_ij||² = ||a_ij|| ||a_ij|| = ||a_ik a_jk|| = ||δ_ij|| = 1
seems vastly more clear when written as
|A|² = |A| |A| = |A| |A^T| = |A| |A^(-1)| = 1
The middle step in index notation does not seem as easily justified, whereas we know that orthogonal matrices have their transposes as inverses. It's a minor point, though.
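Just to close the loop on the matrix version numerically (again a toy of my own): generate a random orthogonal matrix and check that its transpose is its inverse and that |A|² = 1.

import numpy as np

rng = np.random.default_rng(3)
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix

print(np.allclose(A.T, np.linalg.inv(A)))          # True: transpose = inverse
print(np.linalg.det(A)**2)                         # = 1 (the determinant itself is +1 or -1)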
Section 1.3: The subscript issue was nicely resolved here, though I wish there were a more rigorous definition of contravariant vs. covariant. I'll need to go dig that up on Wikipedia. It seems to be an issue that doesn't come up as prominently (if at all) in the mathematical notation. Contravariant vectors transform 'like' coordinates and covariant transform 'like' the gradient, but what exactly does that mean? I realize this won't be necessary for the class, but it's intriguing. Another point (definition-wise) I'd be curious about here is exactly what it means to be an isotropic tensor - I'm presuming this means that it's an eigenvector of the rotation matrix with eigenvalue one.
Section 1.4: Pseudovectors are intriguing. I haven't seen them in a mathematical context and am wondering if they have a common analogue, perhaps under a different name. Mathematicians, of course, aren't looking at things from the same perspective - cross products are not as prominent and the central focus isn't how tensors transform under transformations. That said, what is the fundamental difference? And where does the name tensor density come from - in particular, why density? I expected this to be something like a tensor field, but obviously it is not.
Section 1.5: This is pretty much the standard tensor stuff. It's actually a little strange to see contraction without the need to raise or lower indices, but it makes sense given the Cartesian focus. The transition between tensors and tensor densities is definitely new, however, though I'd still like to know what is fundamentally going on - at least in the mathematical realm - with densities. Somehow, that usually makes the physics make more sense.
So we're still doing fundamentals. Tensors are cool little creatures, and it should be good to see how they work in this context, as opposed to GR or Diff. Geo.
Tuesday, January 22, 2008
First Post
Well, here I am posting on yet another blog. Hello world! f1r2t p0st! Digg me! And with that out of my system, I move on.
So in case it wasn't extraordinarily obvious from my oh-so-creative blog title, I'm Ben Preskill. And I'm a junior at Harvey Mudd College with majors in Math and Physics and a concentration in Economics.
Truth is, I like math. All of it. And I especially like applications of math to cool problems. But I have no idea which applications excite me the most. My solution? Try as many as I can. Fluid mechanics has to be one of the most prominent areas of applied math out there. It's used as a paradigm for applications of calculus, differential geometry, and PDEs, and so I figured I damn well ought to see what this stuff is all about. That's pretty much why I'm taking Continuum and Fluids. Oh, and tensors rock.
Really, I don't know what to expect. I know there are tensors involved and I know we'll be modeling...well, fluids. But that's sort of the point - I don't know anything about fluid mechanics, really, and I want to know what the subject is all about. Moreover, I've seen a lot of math from a physics perspective (being a physics major), but haven't seen as much physics from a math perspective. Although I don't expect the whole course to be the latter, I'm guessing I'll get a taste of it. Finally, I imagine continuum mechanics makes modeling almost any type of macroscopic physical situation a lot more feasible. That's just a guess, though. For all I know, you might mostly end up with some god-awful system of PDEs that requires ode15s because it's too damn stiff, and then MATLAB takes an hour to spit out a vector containing a bunch of numbers to the negative fiftieth power. But I hope not.
Now, I've been asked about a favorite equation, and I've not put a lot of thought into this. The physicists would love me if I said Schroedinger or a Hamiltonian System. But then, I'm so tempted to pick something powerful and abstract from math, like some kind of one-line statement of the Riesz Representation Theorem. I could also be cute and talk about how sexy Navier-Stokes is. Instead, how about something that underlies it all. If f ∈ C¹ is periodic, where f : ℝ → ℝ^m, we can write
f(t) = ∑_k c_k e^(ikt)
Now that's sexy. But Fourier series are pretty passe, I'll admit, so I'll also throw in the geodesic equation
d²x^k/dt² + ∑_ij Γ^k_ij (dx^i/dt)(dx^j/dt) = 0
Wow, not exactly super pretty. I guess I'll just do that LaTeX thing in the future.
Anyway, that's just about it. For a fun fact about myself I'll throw in that I'm learning guitar now. For even more exciting detail, I'll confess that my inspiration is an unyielding desire to play early '90s britpop such as Oasis. No, I'm not kidding.
Expect more fun to follow. Cheers.