Can't Think of a Breaking Bad Pun For the Title: Let's Do Some Math!
BeoPuppy:
--- Quote from: pwhodges on 12 Dec 2013, 02:35 ---Maybe the exercise is deliberately intended to get you to find out about the rule yourself - it's one style of teaching...
--- End quote ---
Sounds like being pushed off a cliff to see if the students evolve wings or not.
(Thanks, T.Pratchett!)
ankhtahr:
Nope, we found solutions for both which were relatively simple without it.
I still remember our solution for d): it involved simply showing that lim cos(x) is 1, so lim (cos(x) - 1) is 0, so lim sin(cos(x) - 1) is 0.
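Spelled out (taking the limit as x → 0, which I'm assuming from the exercise, and using that sin and cos are continuous):

\lim_{x \to 0} \cos(x) = \cos(0) = 1
\;\Rightarrow\; \lim_{x \to 0} \bigl(\cos(x) - 1\bigr) = 0
\;\Rightarrow\; \lim_{x \to 0} \sin\bigl(\cos(x) - 1\bigr) = \sin(0) = 0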
And as far as I know the rule will be introduced in a few weeks.
ankhtahr:
Turns out it's hard to get back into Linear Algebra after being forced to skip a week.
Exercise Sheet
I have to read up on a lot right now. I don't even really know what vector spaces are.
I have until Monday evening for this sheet.
PthariensFlame:
--- Quote from: ankhtahr on 14 Dec 2013, 13:35 ---I don't even really know what vector spaces are.
--- End quote ---
A vector space is a module over a field, with some appropriate additional laws. :P
Loki:
Some informal thoughts:
You don't need to know what vector spaces are for the first one, just what linear (in)dependency is. Nevertheless: a vector space is a set of vectors, together with addition and scaling, which fulfills some axioms. A subspace is a subset of that set which also fulfills the axioms (i.e. is a vector space in itself).
Side note: a vector space is spanned by some vectors, which means that by adding vectors together and scaling them, you can get the whole vector space. Remember how you were calculating points in R² in school? You needed a*(1,0) + b*(0,1) to get to any point (a,b). The set {(1,0), (0,1)} was a basis for R². Incidentally, the set {(1,0), (0,1), (13,37)} would also span R², but you'd never need to use (13,37) to reach your points. The vectors {(1,0), (0,1)} were the bare minimum you needed; that's why a basis is also called a minimal spanning system. There are many spanning systems for a vector space. For example, you could use {(4,0), (0,2)}, because you can scale them down to {(1,0), (0,1)} and proceed from there as usual.
If you only used {(1,0)}, then you could get all vectors which lie on the x-axis, but no other vectors. All the vectors (x,0) form a subspace of R². Incidentally, if you were to omit the 0 (because it's kinda redundant, right?), you'd have the number line (that is, R). Congratulations! You've discovered that R (as the x-axis) is a subspace of R²!
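If it helps to see the spanning idea concretely, here's a tiny numpy sketch (just an illustration, not part of the exercise sheet):

import numpy as np

# standard basis of R^2
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# any point (a, b) is reached as a*e1 + b*e2
a, b = 3.0, -7.0
print(a * e1 + b * e2)        # [ 3. -7.]

# (13, 37) is redundant: it is itself a combination of e1 and e2,
# so adding it to the spanning set gains nothing
print(13.0 * e1 + 37.0 * e2)  # [13. 37.]

# {e1} alone only reaches points of the form (x, 0): the x-axis,
# a one-dimensional subspace of R^2
print(5.0 * e1)               # [5. 0.]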
Now, to some tasks.
1a) There are multiple approaches I can think of.
Because c1*v1 + c2*v2 + c3*v3 = 0 (I have substituted lambda with c) has at least one non-trivial solution, the set {v1, v2, v3} is linearly dependent. That means you can "take away" any vector whose coefficient is non-zero and the remaining ones will still span the same space; with all three coefficients non-zero, <v1, v2, v3> = <v1, v2> = <v2, v3> = <v1, v3>.
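To make the "take away" step concrete (a sketch, assuming the coefficient you solve for is non-zero): if c_3 ≠ 0, then

v_3 = -\frac{c_1}{c_3} v_1 - \frac{c_2}{c_3} v_2 \;\in\; \langle v_1, v_2 \rangle
\quad\Rightarrow\quad \langle v_1, v_2, v_3 \rangle = \langle v_1, v_2 \rangle

and the same rearrangement with c_1 or c_2 gives the other two equalities.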
Another way would be to start with a trivial case <v1>=<v2>. Then <v1, v2> = <v1> = <v2>. It would then follow from the equation that <v1> = <v3> = <v1, v3>. You'd then have to show that you get to the same conclusion even if <v1>!=<v2> (this is more formal, but harder and probably pointless).
1b) I would start by showing both directions separately.
=>: let (v_1, ..., v_n) be a linearly indep. system. It then spans some subspace of V, let's call this subspace U. (v_1, ..., v_s) then spans some subspace of U, let's call it U1. Similarly, (v_(s+1), ..., v_n) spans some subspace U2. Because (v_1, ..., v_n) is lin. indep., every non-empty subset of it is also lin. indep. (this might be a bit of a pain to prove; when in doubt, use induction). That is, for every s, {v_1, ..., v_s} is linearly independent and {v_(s+1), ..., v_n} is linearly independent. In particular, no v in U1 other than 0 is also in U2. (I am doing it a bit sloppily right now.)
Because of this, the only element in U1 intersect U2 is 0.
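The step I'm glossing over, written out: take any w in U1 ∩ U2. Then

w = \sum_{i=1}^{s} a_i v_i = \sum_{j=s+1}^{n} b_j v_j
\quad\Rightarrow\quad \sum_{i=1}^{s} a_i v_i - \sum_{j=s+1}^{n} b_j v_j = 0,

and linear independence of (v_1, ..., v_n) forces every a_i and b_j to be 0, so w = 0.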
<=: let's have <v_1, ..., v_s> intersect <v_(s+1), ..., v_n> = {0} for all s. Then it particularly follows that <v_1> intersect <v_2, ..., v_n> = {0}, so v_1 (being non-zero) is not in <v_2, ..., v_n>. That alone isn't quite enough; repeating the argument for the other values of s (or arguing directly, see the sketch below) gives that {v_1, ..., v_n} is lin. indep.
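Sketch of the direct argument (assuming, as the exercise presumably does, that the v_i are non-zero): suppose

\sum_{i=1}^{n} c_i v_i = 0.

For any s,

\sum_{i=1}^{s} c_i v_i = -\sum_{j=s+1}^{n} c_j v_j \;\in\; \langle v_1, \ldots, v_s \rangle \cap \langle v_{s+1}, \ldots, v_n \rangle = \{0\}.

With s = 1 this gives c_1 v_1 = 0, hence c_1 = 0; with s = 2 it then gives c_2 v_2 = 0, hence c_2 = 0; and so on, so (v_1, ..., v_n) is lin. indep.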
3b) Let V have n vectors in its basis and U have k vectors in its basis, with k <= n (i.e. U is k-dimensional). Extend the basis of U to a basis of V (basis extension); the (n-k) added vectors are linearly independent (since they are a subset of the linearly independent basis of V). They span a subspace W. None of the elements of W are in U, except 0 (does this look familiar?), and vice-versa. Thus V = U ⊕ W, i.e. W is a complement of U.
I will leave the special case U=V as an exercise to you, because I am not sure if a complement can be an empty set.
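A concrete instance of the construction, just to see it once (in V = R^3 with the standard basis): take U = <(1,0,0)>, so k = 1. Extending {(1,0,0)} to a basis of V adds (0,1,0) and (0,0,1), so W = <(0,1,0), (0,0,1)>. Then

U \cap W = \{0\}, \qquad \dim U + \dim W = 1 + 2 = 3 = \dim V,

so V = U ⊕ W and W is a complement of U.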