A question for Qfwfq (or anyone else who could answer)


mang73


I'd say it's much like the symmetric case.

 

A_ij = -A_ji

 

(aA)_ij = aA_ij = -aA_ji = -(aA)_ji

 

B_ij = -B_ji

 

(A + B)_ij = A_ij + B_ij = -A_ji - B_ji = -(A_ji + B_ji) = -(A + B)_ji

 

It should be easy enough to do the general linear combination.
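
If you like, you can also check a random case numerically. Here is a small Python/numpy sketch (the matrices and coefficients are arbitrary; it's just an illustration, not part of the proof):

import numpy as np

# Any matrix minus its transpose is antisymmetric, so build two examples that way.
M1 = np.random.rand(3, 3)
M2 = np.random.rand(3, 3)
A = M1 - M1.T
B = M2 - M2.T

# An arbitrary linear combination aA + bB.
a, b = 2.0, -3.5
C = a * A + b * B

# C should satisfy C_ij = -C_ji, i.e. C + C^t = 0.
print(np.allclose(C + C.T, 0))   # expected: True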


Hey, thanks a lot. You are a life saver. Listen, there is this question I am trying to solve, and I have looked in about 5 linear algebra books trying to find a similar problem so I could solve it, but I can't. I am not sure if you can help me with it, but here it goes:

 

Let T: V -> W be a linear map between two vector spaces. Let w0 be an element of W and let T^-1(w0) be the subset of V whose elements are mapped to w0, i.e.

 

T^-1(w0) = { u element of V | T(u) = w0 }.

 

Show that

 

1) T^-1(w0) is a subspace of V if and only if w0 is the zero element of W.

 

2) T^-1(w0) = { v0 + v | v element of Ker(T) }, where v0 is any element in T^-1(w0).

 



It might be better to keep questions about linear algebra in the same thread. I'm glad you have found the answers helpful.

 

1) If and only if: The "if" is easy, because if w0 is the zero of W then T^-1(w0) is T^-1(0), which is ker T, and we have already seen that ker T is a subspace. To show the "only if", suppose w0 != 0. Take T(u) = w0 and T(v) = w0; then T(au + bv) = aT(u) + bT(v) = aw0 + bw0 = (a + b)w0. Is this equal to w0 for every a and b? No: since w0 != 0 it equals w0 only when a + b = 1, so the set is not closed under linear combinations and cannot be a subspace.
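
If it helps to see it with numbers, here is a little Python/numpy illustration; the map T and the target w0 are just made up for the example:

import numpy as np

# T: R^3 -> R^2 given by a matrix, and a nonzero target w0.
T = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
w0 = np.array([1.0, 1.0])

u = np.array([1.0, 1.0, 0.0])   # T u = w0
v = np.array([0.0, 1.0, 1.0])   # T v = w0

print(T @ u, T @ v)    # both equal w0
print(T @ (u + v))     # equals 2 w0, not w0: T^-1(w0) is not closed under addition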

 

2) Let's show that T(v0 + v) is w0. Easy: T(v0 + v) = T(v0) + T(v) = w0 + 0, by the definitions of v0 and v. This shows that, for a given v0, every v0 + v is "good". We still need to show that any other member of T^-1(w0) is v0 plus some v in ker T. This is the same as saying that if u is also in T^-1(w0) then u - v0 is in ker T. True, because: T(u - v0) = T(u) - T(v0) = w0 - w0 = 0.
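
Same thing with numbers, again with a made-up T, if that helps:

import numpy as np

T = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
w0 = np.array([1.0, 1.0])

v0 = np.array([1.0, 1.0, 0.0])   # one particular element of T^-1(w0)
k = np.array([-1.0, 0.0, 1.0])   # an element of ker T, since T k = 0

print(T @ v0)         # w0
print(T @ k)          # the zero vector
print(T @ (v0 + k))   # still w0, exactly as in the proof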

 

Tricky? Not as easy as the other proofs, but with a bit of practice you can acquire the right approach. It's harder when a test is looming; it's a lot easier when you're less anxious and not worried about teachers and marks and so on. Some teachers make it harder.

 

I remember when I was doing this stuff in a first-year course. The teacher wasn't the best, and by the time it got to applying these things to geometry, it seemed terrible. I finally scraped through that course's exams, but now these things seem a lot more obvious; I would have done better if I had just been more relaxed and concentrated.


Yes, this means V is the direct sum of ker and Im, usually written with a circled + but we don't have that symbol here! The second proposition is actually part of the definition of direct sum, but it's easier to show it on its own and then use it to help show the first proposition. I wouldn't advise you to try proving the first without the second. Now, by definition of ker and Im, they both contain 0. Let's suppose v is in both ker and Im: Pv = 0 and also v = Pu for some u. Can we also have v != 0??? From PP = P we have:

 

PPv = Pv, PPu = Pu

 

and considering v = Pu we can write

Pv = PPu = Pu = v

but Pv = 0, so this would mean v = 0; therefore 0 is all of ker I Im. We don't have the upside-down U for intersection here, so let's use I instead. :)

 

The first proposition, direct sum, means that for any v in V there is one and only one u in Im, and one and only one w in ker, such that v can be written as u + w. Given any v, Pv is in Im, so let's try calling it u, and we can also consider v - Pv = v - u. If we show that this is in ker, we can call it w. Obviously v = u + w! So what is P(v - Pv)? It's equal to Pv - PPv = Pv - Pv = 0.
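
You can also watch this decomposition happen numerically in Python/numpy; the matrix P below is just some idempotent matrix I made up, nothing special:

import numpy as np

# A 2x2 matrix with P P = P (an oblique projection).
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
print(np.allclose(P @ P, P))   # True: P is idempotent

v = np.array([3.0, 5.0])
u = P @ v        # the piece in Im P
w = v - P @ v    # the piece that should be in ker P

print(np.allclose(u + w, v))   # v = u + w
print(np.allclose(P @ w, 0))   # P w = 0, so w is indeed in ker P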

 

Now we only need to show that u = Pv and w = v - Pv are the only good u and w in Im and ker respectively. Write v = u' + w' with u' also in Im and w' also in ker, and ask: can u' != u or w' != w??? This is where we need the second proposition, ker I Im = {0}. From the two ways of writing v we would have:

 

u' + w' = u + w, or:

 

u' - u = w - w'

 

but each of these differences is clearly in Im and ker respectively, so they can be equal to each other only if they are 0.

 

Why should the second proposition be proven before the first?

 

If you consider U and W that together span all of V but whose intersection isn't {0}, then for a given v of V you can find more than one pair, u and w, that sum to v. This means the direct sum U + W is actually bigger than V: the intersection gets counted twice. It might seem confusing because U and W are both subspaces.

 

Remember that you can even make the direct sum of a space V with itself (more precisely, between two "copies" of V) and its dimension will be twice that of V. :)


Wow. ;) You amaze me every time with your generosity. I truly appreciate the fact that you spend this much time answering my questions. Thank you. As for now, I am panicking like crazy. I just found out that for tomorrow's homework there were 3 problems I wasn't aware of. I am not sure how to solve them. If you can solve even one of them by 2:00 PM California (Pacific) time, that would be great.

 

QUESTION 1: Find the matrix associated with the following linear map. The vectors are written horizontally with a transpose sign for typographical reasons. (I don't know if other books type it like this too, but the "t" transpose sign is like a power except on the left side rather than the right side. Could you also tell me what that means? I will type it like "^t".)

 

F: R^4 --> R^2 given by F[^t(x1,x2,x3,x4)] = ^t(x1,x2) (the projection)

 

Question 2: Find the matrix R(theta) associated with the rotation for theta = pi/4.

 

 

Question 3: Let c be a number, and let L: R^n --> R^n be a linear map such that L(X) = cX. What is the matrix associated with this linear map?

 

Thank you.


Wow. ;) You amaze me every time with your generosity. I truly appreciate the fact that you spend this much time answering my questions. Thank you.
No trouble, I'm not busy today and wasn't yesterday; other times I might need to come back the next morning or after the weekend.

 

I like helping people to learn and to understand, but maybe I shouldn't quite be doing your homework for you... ;) ;) ;)

 

Try not to panic! Things are easier if you just think without worrying. ;)

 

Could you also tell me what that means? I will type it like "^t".
I'm not sure if you want to know what transposition means. If so: rows become columns and vice versa, just "flip it over". The symbol is arbitrary; sometimes a T-like sign, like a + without the top bar, is written as a superscript like an exponent. I don't find it in my ASCII table.
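
For instance, just to picture it in Python/numpy:

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A)     # 2 rows, 3 columns
print(A.T)   # the transpose: 3 rows, 2 columns, rows and columns swapped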

 

F: R^4 --> R^2 given by F[^t(x1,x2,x3,x4)] = ^t(x1,x2) (the projection)
Easy! Just think that a "trivial" projection operator is... the identity. The identity matrix is all zeros, except for ones along the diagonal:

 

1 0 0 0...

0 1 0 0...

0 0 1 0...

0 0 0 1...

.............

 

Now, if you want only some of a vector's components to "survive", what matrix do you need? ;)
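
If you'd rather experiment than guess, here's a little Python/numpy sketch you can use to test whatever candidate matrix M you come up with (the test vector is arbitrary):

import numpy as np

x = np.array([7.0, 8.0, 9.0, 10.0])   # an arbitrary vector in R^4

I4 = np.eye(4)
print(I4 @ x)   # the identity leaves every component alone

# Your candidate 2x4 matrix M should satisfy M @ x == x[:2]
# (that is, only x1 and x2 "survive") for every choice of x.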

 

Question 2: Find the matrix R(theta) associated with the rotation for theta = pi/4.
Rotations in Euclidean space are orthogonal matrices with determinant +1, and they are simple to write in 3 dimensions if the axis of rotation is one of the Cartesian axes. You need sines and cosines of the angle. Start from the identity matrix (which means a rotation by zero angle!!!) and remember that cos(0) = 1 and sin(0) = 0. That should be quite suggestive!

 

In 3-D, suppose you rotate around the x1 axis; that means only the x2 and x3 components will be recombined by the transformation, so find the right sine and cosine expressions to fill in:

 

1 0 0

0

0

 

The only point to be careful about: (^t R) R must give the identity (orthogonal) and the determinant must be +1, so get the + and - signs of each element right. You will find two possibilities, which just correspond to the angle being + or -. Hmmmm... it's easy now! ;)
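
Once you've filled the matrix in, you can check your answer with a few lines of Python/numpy; the sign layout below is just one of the two possibilities mentioned above:

import numpy as np

theta = np.pi / 4
c, s = np.cos(theta), np.sin(theta)

# Rotation about the x1 axis: x1 untouched, x2 and x3 recombined.
R = np.array([[1.0, 0.0, 0.0],
              [0.0,   c,  -s],
              [0.0,   s,   c]])

print(np.allclose(R.T @ R, np.eye(3)))    # orthogonal: (^t R) R = identity
print(np.isclose(np.linalg.det(R), 1.0))  # determinant +1
print(R @ np.array([1.0, 0.0, 0.0]))      # the rotation axis is left fixed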

 

Question 3: Let c be a number, and let L: R^n --> R^n be a linear map such that L(X) = cX. What is the matrix associated with this linear map?
If c = 1, what does L(X) = cX become? It becomes the identity!!! So what would c times the identity matrix be???
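
And the check is almost a one-liner in Python/numpy; the n and c below are arbitrary:

import numpy as np

n, c = 4, 2.5               # arbitrary size and scalar
M = c * np.eye(n)           # c times the identity matrix

X = np.array([1.0, -2.0, 0.5, 3.0])
print(np.allclose(M @ X, c * X))   # True: M X = c X for every X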

 

Easy! Don't panic!!! ;)


Q1: Consider a triangular nxn matrix, say a matrix such that all components below the diagonal are equal to zero. (NOTE: I couldn't draw the large bracket around the matrix.)

 

 

a11 a12 . . . a1n
0   a22 . . . a2n
0   0   .      .
.   .      .   .
0   0   . . . ann

 

Let's say this matrix is A.

Now, what is D(A)?

 



Q2: If A is an nxn matrix whose determinant is NOT equal to zero, and B is a given vector in n-space, show that the system of linear equations AX = B has a unique solution. If B = O, this solution is X = O. (Note: I don't think it's a 0, it's an O.)


Q3: Using the fact that if A,B are two nxn matrices then

 

Det(AB) = Det(A) Det(B),

 

prove that a matrix A such that Det(A)=0 does not have an inverse.

 

I hope you also saw the 2 questions above. There were 5 more questions, but they weren't proofs; they were just finding determinants, which I did myself. Thanks for the help.

