Saturday, February 23, 2008
Reading 2/25/2008: Section 2.12
This was a pretty straightforward extension of the material from the previous chapter. Allow the wavevector to be complex, and - boom - you've got new solutions. It's definitely cool to see that they must decay exponentially as you move deeper in the material, but the rest was pretty intuitive and straightforward. No real questions that I can think of, as the rest was really algebra.
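In symbols, for anyone skimming (my notation, not the book's): write the wavevector as k = kᵣ + ikᵢ. Then a plane wave becomes

exp(i(kᵣ + ikᵢ)x − iωt) = exp(−kᵢx) · exp(i(kᵣx − ωt)),

so the imaginary part of the wavevector turns directly into a real exponential-decay envelope, with decay length 1/kᵢ.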
Reading 2/20/2008: Section 2.13
Okay, so this is ashamedly late, but I've been wickedly sick the last few days, so please do forgive me.
The lead-up to (2.254) is simply gorgeous. We start with the general equation of motion, use our previous construction of Φ, put everything in terms of kinetic energy, and - bam! - all of a sudden we have a classical equation of kinetic and potential energy combined with energy flux. As suggested in the book, this is definitely Maxwell-style beauty.
I'm a little puzzled by the comment on p. 124 - why can't we do these calculations in exponential notation? Mathematically the two are fundamentally equivalent, so I don't see where the difference arises.
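(One guess, thinking about it more: energy and flux are quadratic in the fields, and taking real parts doesn't commute with multiplication. For complex z and w,

Re(z) Re(w) = ½ Re(zw) + ½ Re(zw̄),

so multiplying the exponential forms and taking the real part at the end silently drops the second term. Exponential notation is safe for anything linear in the fields, but not for quadratic quantities like energy.)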
My last thought is to wonder if there is some kind of fundamental equivalence (or at least similarity) between EM and P waves. The formulas on the bottom of p. 124 do seem to suggest that if we say
ρ₀ α ω² → √(ε₀/μ₀)
and
a² → E₀²
we get some kind of relationship?
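Making that concrete (using the standard time-averaged plane-wave fluxes, so this comparison is my own, not the book's): the acoustic energy flux is ⟨F⟩ = ½ ρ₀ α ω² a², while the EM Poynting flux is ⟨S⟩ = ½ √(ε₀/μ₀) E₀². Both have the form ½ × (impedance-like factor) × (amplitude)², with ρ₀ α ω² in the role of √(ε₀/μ₀) and the displacement amplitude a in the role of E₀ - which is exactly the substitution above. Whether anything deeper than "both are linear wave equations with quadratic energy fluxes" is lurking here, I don't know.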
Tuesday, February 19, 2008
Reading 2/18/2008: Section 2.11
Sorry this is late, I've been rather sick. Anyway, this section is a bit of a beast, but whew - good stuff.
So we can use the symmetry of ξ to plug one equation in the acoustic approximation into the other, and we get something nice (2.137). Fantastic.
What exactly does Helmholtz's theorem state? Wikipedia gives a good enough answer, and I assume this was discussed in class, but I'll let the question stand, as it seems to be an important, if intuitive, result.
Equation (2.144) is awesome. With some fairly basic steps, we've jumped from our general equation for isotropic acoustic-approximated media to the wave equation. BAM. There are huge assumptions on smoothness throughout all the derivations here, but the mathematician in me is throwing up the white flag - it ain't worth the battle.
As far as the derivation of necessity goes (2.147-148), my only question is: is there a cute proof that zero curl and divergence imply constancy? I feel like there must be, but it's not coming to me offhand and I'm too busy to try to find it.
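(For the record, here's a sketch - note it needs a hypothesis at infinity. If ∇×F = 0 and ∇·F = 0, then ∇²F = ∇(∇·F) − ∇×(∇×F) = 0, so each component of F is harmonic. By Liouville's theorem for harmonic functions, a bounded harmonic function on all of R³ is constant, so a bounded F must be constant. The boundedness matters: F = (x, −y, 0) has zero curl and zero divergence but isn't constant.)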
The derivation of the dispersion relations is a little unclear, but I think it's mostly a notational issue. We have a wave propagating in direction kᵢ. For one, φ and A have fixed values on the plane normal to kᵢ only at constant time, as I gather, though this isn't said. If we denote the speed by c and let r be a position coordinate in the direction of kᵢ (presumably with magnitude k), then by definition dr/dt = c. If we follow a plane of constant phase, we end up with the dispersion relation. It makes sense after several readings, but the statement feels unclear.
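Spelled out the way I'd have wanted: take φ = A exp(i(k·r − ωt)). A surface of constant phase satisfies k·r − ωt = const; following it along the propagation direction gives k(dr/dt) = ω, so the phase speed is c = ω/k. Plugging the plane wave into the wave equation ∇²φ = (1/c²)∂²φ/∂t² gives −k²φ = −(ω²/c²)φ, i.e. the dispersion relation ω = ck.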
The notion that P- and S-waves represent the two modes of propagation in an isotropic medium is new to me, and actually very cool, though it makes perfect sense now.
The stuff on pages 99-101 is standard, having seen Physics 52 and 116, but the whole tossing in of an incident and a reflected wave out of thin air has always seemed a little hand-wavy to me. That said, I'll accept it. (Not like I have a choice ;-)) All that said, why does symmetry of the medium imply that the incident and reflected waves must lie in the same plane? Intuitively this is clear, but mathematically...?
That was mostly it for the chapter. The connection to earthquakes was a nice touch, too.
Until next time...
Wednesday, February 13, 2008
Reading 2/13/2008: Section 2.10
Section 2.10: Fantastic, absolutely fantastic stuff. As always, I'm a little skeptical of (and amused by) the physicist's tendency to pull a sine or cosine wave out of thin air, but it sure works nicely. The Taylor series in (2.105) serves no apparent purpose to me, as this is just the chain rule, but it's a minor point. The rest is actually pretty intuitive after the introduction of the propagating plane wave and the fairly natural acoustic approximation.
Not a whole lot else here. Sorry this is late, but it's been a long, long day.
Sunday, February 10, 2008
Reading 2/10/2008
Section 2.8: This section was very cool. I'm guessing that the energy function defined herein is going to be the key to many later results. That said, I'm still trying to get used to physicists' differential notation. Usually I need to convert this in my mind to something that is actually mathematically well-defined, but the only way I see to do that here is to change d[whatever] to Δ[whatever] and then take limits.
I'm a little confused by the implication of (2.70) leading to (2.71). The notion of an exact differential is not quite rigorously defined; is this perhaps equivalent to the existence of a gradient function, as in a conservative field? That's the explanation that makes sense to me. Then (2.72) would become a dot product of the gradient with dr, which would make considerably more sense to my mathematical sensibilities.
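Pinning that down in coordinates (my phrasing, assuming this is what the book's (2.70)-(2.72) amount to): saying dU = Πᵢⱼ dξᵢⱼ is exact is saying that U exists as a function of the strain components with ∂U/∂ξᵢⱼ = Πᵢⱼ - exactly the conservative-field picture, with the ξᵢⱼ as coordinates on R⁹ and Π as the gradient. The integrability condition is then just equality of mixed partials, ∂Πᵢⱼ/∂ξₖₗ = ∂Πₖₗ/∂ξᵢⱼ, which is where the symmetry cᵢⱼₖₗ = cₖₗᵢⱼ comes from.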
Equation (2.75) is very cool and intuitive. (Although I feel like there's got to be a more elegant, if less physical, way to get here.) We've got a function whose gradient, when the strain is considered as a vector in R⁹, is literally the stress tensor.
I don't necessarily see why (2.78) needs explanation - the derivative of one component with respect to another is very clearly a Kronecker delta under any circumstances.
We get a beautifully simple equation in the end, but my only question is: how are we assured that the components of the tensor c are space-invariant? Is this an assumption, or is it universally true?
Section 2.9: This is very intuitive stuff. In essence, if we can find rotations that leave the stress tensor (and therefore the potential energy) unchanged, then these rotations are symmetries and, through equation (2.90), force symmetries on the elastic constants.
In the text, we've essentially noted that without knowing the tensor itself, we can argue physically that a certain structure must have certain elastic symmetries. But it seems we could actually reverse this argument. If we construct the elastic tensor of a structure, we can find all of its symmetries, even if they are not obvious due to coordinate choice.
So let's denote the elastic 4-tensor by T. If I remember correctly, we denote the action of a rotation A on T (which is really a pullback, but...) by writing
A*T(u,v,w,z) = T(Aᵀu, Aᵀv, Aᵀw, Aᵀz)
where u, v, w, z are vectors. This agrees with (2.87). Note that we are using here that Aᵀ = A⁻¹. Writing it this way, we're actually looking for rotations A such that
A*T = T
In essence, we can just find all rotations that pull back under * to a transformation fixing T. I still feel like this could be formulated as an eigenvalue problem, which was my initial motivation for this tangent, making such a task even simpler, but I can't think of how at the moment.
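Even without the eigenvalue formulation, the "reverse" search from a couple of paragraphs up is easy to do numerically. Here's a sketch in Python (the function names and the isotropic example are mine, not the book's):

    import numpy as np

    def pull_back(c, A):
        # Action of a rotation A on a rank-4 tensor:
        # (A*c)_ijkl = A_ip A_jq A_kr A_ls c_pqrs
        return np.einsum('ip,jq,kr,ls,pqrs->ijkl', A, A, A, A, c)

    def is_symmetry(c, A, tol=1e-10):
        # A is a symmetry of the material iff the pullback fixes c
        return np.allclose(pull_back(c, A), c, atol=tol)

    # Example: the isotropic tensor c_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)
    lam, mu = 2.0, 1.0
    d = np.eye(3)
    c = (lam * np.einsum('ij,kl->ijkl', d, d)
         + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

    # Every rotation should fix an isotropic tensor; test a random one
    Q, _ = np.linalg.qr(np.random.randn(3, 3))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]  # flip a column so det Q = +1 (a proper rotation)
    print(is_symmetry(c, Q))  # True

For an anisotropic tensor you could scan a family of candidate rotations the same way and read off the symmetry group, even in unhelpful coordinates.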
On another note, I'd prefer a slightly better justification for why (2.97) is a sufficiently general isotropic tensor.
The derivation at the end is slick, though.
Tuesday, February 5, 2008
Reading 2/6/2008: 2.7
Section 2.7: Not much to question here. I might ask how we calculate/find/arrive at the components of the 4-tensor, but other than that, it's all pretty straightforward.
Monday, February 4, 2008
Reading 2/4/2008: 2.4-2.6 (Continued)
Two points.
1. A lot of people have asked why there exists a set of coordinates wherein the stress tensor is diagonal. This is actually just a fact from linear algebra. Remember that Sᵢⱼ can be thought of as a matrix S. Since the stress tensor is symmetric, the matrix is symmetric in the sense that Sᵀ = S; this is because Sᵢⱼ is just the component in the ith row and jth column. Check it if you don't believe it.
Now, from linear algebra there's a theorem that says every symmetric matrix admits an orthogonal diagonalization. That is, there exist an orthogonal matrix P and a diagonal matrix D such that
S = P⁻¹DP
Note that the diagonal elements of D are just the eigenvalues of S. Also, P is invertible because every orthogonal matrix is (det P = ±1).
Now why does this answer the question? Because if we let the columns of P become our new basis for R³, then S "looks like" D. More precisely, the linear transformation S has representation D with respect to the basis formed by the columns of P. Thus those columns represent the coordinate system in which S is diagonal.
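To see it concretely, a quick numerical sketch (the stress tensor here is made up):

    import numpy as np

    # A made-up symmetric stress tensor
    S = np.array([[3.0, 1.0, 0.0],
                  [1.0, 2.0, 0.5],
                  [0.0, 0.5, 1.0]])

    # eigh is the routine for symmetric matrices; the columns of P are
    # orthonormal eigenvectors, and the eigenvalues are the principal stresses
    eigvals, P = np.linalg.eigh(S)
    D = np.diag(eigvals)

    # In the basis formed by the columns of P, S "looks like" D
    # (P.T plays the role of P^-1 since P is orthogonal)
    print(np.allclose(P.T @ S @ P, D))  # True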
2. With regard to my little tangent before - maybe this will help clarify a bit.
SO(3) is the set of rotations in three-space. Its elements are just 3x3 matrices that are not only orthogonal (as all rotations are) but actually have determinant +1, to rule out reflections. It's the Special Orthogonal Group. We call it a group because it's actually a more sophisticated mathematical object, but that's not important; the key point is that the group operation is matrix multiplication, and every element has an inverse (its transpose). It might help to think of matrices as vectors in R⁹.
What's interesting is that SO(3) is actually a Lie group. This basically means that the group is also a smooth manifold - locally it "looks like some version of Rⁿ". This is plausible to us, because we can think of a matrix easily as just a vector in R⁹. But it gives the group lots of special properties, because it means everything is sufficiently nice.
We can define a "path" in SO(3) to be a set of matrices parameterized by a real variable t. Say &gamma(t) might be a bunch of matrices that vary smoothly at t varies from 0 to 1. What do we mean by vary smoothly? Well that's simple - each entry in the matrix is just a real function of t, and all of those functions are smooth.
So now we have this structure on our set of rotations. How do we think about an infinitesimal rotation? Well, it should be close to the identity - because no rotation at all is just multiplying by the identity matrix. And it should have some notion of "direction", because if we integrated a bunch of infinitesimal rotations (in some sense), we ought to be able to get to a real rotation.
What we do is let γ be an arbitrary path starting at the identity (so γ(0) = I) and defined from t = 0 to some later value of t (it doesn't matter which). We can then define an infinitesimal rotation by taking the derivative of this path at 0. Why does this make sense? Well, intuitively, if γ(1) is the rotation we want to move "towards", then we should be able to get there by integrating γ'(t) from 0 to 1 - integrating the infinitesimal rotations. This is just the fundamental theorem of calculus in disguise.
A Lie algebra is just the set of all objects of the form γ'(0) where γ is, again, a path starting at the identity matrix and going somewhere else. The calculation I made in my previous post was investigating what the Lie algebra of SO(3) (denoted by so(3), to be confusing!) actually looks like. All I did was let γ be an arbitrary path; I then wrote γ in terms of an arbitrary rotation matrix (every matrix in SO(3) can be written in terms of three angles, which we can then think of as functions of t), and then differentiated with respect to t and set t = 0. That's where the matrix graphic in my post comes from. Since γ was arbitrary, dθ/dt, dφ/dt, and dψ/dt are all arbitrary as well. That's why we calculate that so(3) is just the set of antisymmetric matrices.
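You can also check the punchline numerically, without grinding through the symbolic calculation. A sketch (the particular angle functions are arbitrary choices of mine):

    import numpy as np

    def R_yz(a):  # rotation by angle a in the yz plane
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def R_xy(a):  # rotation by angle a in the xy plane
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    # A path gamma(t) in SO(3) with gamma(0) = I; the angle functions
    # (2t and -0.7t) are arbitrary - any smooth functions vanishing at 0 work
    def gamma(t):
        return R_xy(2.0 * t) @ R_yz(-0.7 * t)

    # Numerically differentiate the path at t = 0
    h = 1e-6
    B = (gamma(h) - gamma(-h)) / (2 * h)

    print(np.allclose(B, -B.T, atol=1e-6))  # True: gamma'(0) is antisymmetric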
Hopefully that makes slightly more clear what I was talking about.
Sunday, February 3, 2008
Reading 2/4/2008: 2.4-2.6
Section 2.4: In deriving the continuity equation, we're relying heavily on the continuity and differentiability of the density and velocity. Does this mean we'll need entirely new techniques when analyzing systems containing the interface between two different materials (i.e. discontinuous density)? Are there situations when continuous velocity breaks down?
I thought I'd mention, with respect to equation (2.19), that one easy way to understand why a total derivative becomes a partial derivative is to write it as

d/dt ∫ ρ(x, y, z, t) dx dy dz = ∫ ∂ρ/∂t dx dy dz.
Written this way, the dV = dx dy dz variables can be thought of as dummy variables, much like dummy indices, since they are being integrated over. Consequently, t is the only free variable, and so passage from total differential to partial differential is purely notational, since we do not use total differentials for functions of more than one variable. (Unless, of course, they are "secretly," as in classical/quantum mechanics, a function of just one variable - time. But we can ignore that here.)
As a side note, I'm a little surprised none of this has been adapted to be used in Math 13/61 courses. It's a fantastic showing of the power of the divergence theorem.
Section 2.5: I kind of wish the explanation had been done strictly in 2D, because it took a little while to figure out where the extra components were going. That said, Figures 2.6/2.7 are very cool - it's quite clear that with symmetry we have no torque and without symmetry there must be one.
I wish a little more could be said about the stress ellipsoid. If my understanding is correct, once we've transformed to principal axes, directions xᵢ in which Πᵢᵢ is large have small extent. Thus the greater the stress in one direction, the smaller the extent of the ellipsoid in that direction. Also, in the case of negative stress, how exactly are compressive and extensional forces differentiated? And where do the new arrows come from in Figure 2.11?
Section 2.6: This is the physics-math barrier coming up again, but for the life of me I can't remember how physicists get away with saying that if D is an "infinitesimal rotation" about P, then δ = D × dx. I know infinitesimal rotations are elements of the Lie algebra of SO(3), but there's got to be a better intuitive way of understanding where the cross product comes from.
WARNING: TANGENT
So, I'm just thinking out loud here; sorry if this is useless. But I want to see if I can explore the answer to the question I just posed. Here I'm going to assume we've rigorously defined an infinitesimal rotation as an element of the Lie algebra of SO(3), the group of rotations in three dimensions, but that we don't know anything about so(3). The reason is that I can't remember offhand what the Lie algebra looks like.
Okay, so let's suppose we have the identity I of SO(3) and a path γ with γ(0) = I. We would like to calculate γ'(0) for an arbitrary path. First, we express γ as a matrix by multiplying the matrices for an xy rotation of angle θ, a yz rotation of angle φ, and an xz rotation of angle ψ. Notice that if we then write each of the angles as a function of time, with all angles zero at time zero, then we have an arbitrary path in SO(3) beginning at the identity. We finally differentiate the matrix and plug in t = 0. This series of calculations is extremely involved, but the result is the matrix
    ⎡    0      −dθ/dt   −dψ/dt ⎤
    ⎢  dθ/dt      0      −dφ/dt ⎥
    ⎣  dψ/dt    dφ/dt      0    ⎦

which represents an arbitrary element of so(3). Note that each derivative above is evaluated at t = 0. Okay, cool, so it looks like so(3) is just the set of antisymmetric matrices, since those derivatives at zero can be anything we want them to be. We've established that any infinitesimal rotation is just an antisymmetric matrix. Set a = dφ/dt, b = dψ/dt, and c = dθ/dt. Then we can construct a vector D = (−a, b, −c), where the negatives will be clear momentarily. We can finally recover the antisymmetric matrix B = γ'(0) using a trick from the book and writing
Bᵢⱼ = εᵢⱼₖ Dₖ
So in sum, it looks like we can legitimately make a "nice" bijection between the set of infinitesimal rotations, so(3), and individual vectors. In retrospect, I suppose the cross product comes in since if we multiply an element B of so(3) by a vector x = xⱼ, we get

(Bx)ᵢ = Bᵢⱼxⱼ = εᵢⱼₖDₖxⱼ = −(D × x)ᵢ,
which is essentially (2.40). So if we express an infinitesimal rotation as a vector D, we can represent the result of applying the rotation to a vector x (up to a factor of -1) by taking the cross product of D and x. That is VERY cool.
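The whole correspondence is easy to sanity-check numerically, too. A sketch (the vector D is an arbitrary pick of mine):

    import numpy as np

    # The Levi-Civita symbol eps_ijk
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

    D = np.array([0.3, -1.2, 0.5])      # an arbitrary infinitesimal-rotation vector
    B = np.einsum('ijk,k->ij', eps, D)  # B_ij = eps_ijk D_k, an antisymmetric matrix

    x = np.array([1.0, 2.0, 3.0])
    print(np.allclose(B @ x, -np.cross(D, x)))  # True: B @ x equals -(D × x)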
Okay, sorry if that made no sense or helped absolutely no one. It was really cool to me, so I left it in. When I wrote all that, I hadn't even read the next line, but now I realize that what we're doing in (2.42) is recovering the ACTUAL infinitesimal rotation, which is of course the antisymmetric matrix Aᵢⱼ in the book. Even the negative sign resurfaces!
END TANGENT
I just have to say, the way in which we derive δS is gorgeous. Eliminate translations by defining δ, and brilliantly get rid of infinitesimal rotations by removing the antisymmetric part. Not intuitive, but both practically simple and mathematically incredible.
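And since the decomposition is one line of linear algebra, here's a final sketch (the displacement gradient G is random):

    import numpy as np

    G = np.random.randn(3, 3)  # a made-up displacement gradient, G_ij = du_i/dx_j

    strain = 0.5 * (G + G.T)    # symmetric part: the strain
    rotation = 0.5 * (G - G.T)  # antisymmetric part: the infinitesimal rotation

    print(np.allclose(strain + rotation, G))   # True: nothing is lost in the split
    print(np.allclose(rotation, -rotation.T))  # True: the rotation part is antisymmetric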