Stickytext

I am not from Louisville.
I have never been to Louisville.
I don't even like baseball that much.
This blog is a course log for CSC165H1 in the fall 2014 semester.

Friday 24 October 2014

w-VII

WEEKSEVENWEEKSEVENWEEKSEVENWEEKSEVENWEEKSEVENWEEKSEVENWEEKSEVEN
WEEKSEVEN  
WEEKSEVEN              
WEEKSEVEN                     
WEEKSEVEN                            
WEEKSEVEN                                  
WEEKSEVEN                                       
WEEKSEVEN                                           
WEEKSEVEN                                               
WEEKSEVEN                                                   
WEEKSEVEN                                                      
WEEKSEVEN                                                        
WEEKSEVEN                                                         
WEEKSEVEN                                                         
WEEKSEVEN                                                         
WEEKSEVEN                                                         
WEEKSEVEN                                                         
WEEKSEVEN                                                         


Okay, so week seven in the course was really swell. Like, actually. We are basically coming to the end of constructing proofs, and I seem to understand the proper logical construction of arguments and proofs much better now. I think that attempting to teach proofs alongside new material runs the risk of sweeping the mechanics of proof aside while focusing on the ``meat'' of the material; learning epsilon-delta proofs in a calculus class is a good example. Getting a firm handle on logic and arguments has been extremely valuable, even though I already have three-or-so years of university-level mathematics education. It is a little embarrassing to admit that, but it has genuinely been worth the effort.

As far as new material this week, we did proof by cases and an introduction to algorithm analysis. Let's talk about the algorithm stuff.

I guess it is fundamentally interesting to have some perspective on why scaling factors and lower-order terms have no bearing on how we characterize the efficiency of code. This makes me think of ``big data'', and what ``big data'' even means. I recall a talk I attended about computational biology where single datasets for single genes run to tens of terabytes or more. One point I came away with was that generating new datasets, or re-generating the same dataset whenever one wants to look at the data, can be more efficient and cost-effective than keeping all produced data in disk storage somewhere. It is easy to take limits of very large numbers mathematically, but sometimes difficult to comprehend what actually using numbers of that size in real applications looks like.

On the other hand, it is easier to understand our lack of interest in scaling factors for small inputs. For a given integer, how much longer does it take to simply return that number than to initialize an accumulator and add one to it until we reach that number? For small numbers, we could not perceive the difference in running time, and even for not-so-small numbers, we would not be particularly disturbed by a few seconds of run time. What concerns us more is how much longer it would take if we doubled or tripled that number. Computers are fundamentally important to the utility of large data and are only becoming more important still (as an aside, a great deal of science is currently being held back by a reluctance to embrace them), and it is at these large scales that we really care about computational and algorithmic efficiency. I look forward to learning more about this.
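
Just to make that concrete for myself, here is a quick MATLAB sketch (my own toy, nothing from the course) comparing the two: handing the number straight back takes essentially the same time no matter what n is, while counting up to it takes time roughly proportional to n, so doubling n roughly doubles the second timing and leaves the first alone.

% Rough illustration: constant-time ``just return it'' versus
% linear-time ``count up to it''.  Try doubling n and compare.
n = 1e7;

tic;
x = n;                 % simply hand the number back: constant time
t_return = toc;

tic;
acc = 0;
while acc < n
    acc = acc + 1;     % count up to the number: time grows with n
end
t_count = toc;

fprintf('return: %g s, count: %g s\n', t_return, t_count);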

Thanks for the slog, don't roll a log, motorcycle hog, flip a POG, hot hot dog,

J

Friday 17 October 2014

w-VI

Here
Here is
Here is a
Here is a post
Here is a post about
Here is a post about week
Here is a post about week six
Here is a post about week six in
Here is a post about week six in the
Here is a post about week six in the course.

Oh, so we did some proofs about the ``floor'' function. I guess it really drives home the point that we need a complete and rigorous understanding of the ``floor'' operation, in the form of an actual definition, before we can start to prove anything about it. Further, it demonstrates the value of establishing intermediate/lesser results on the way to whatever it is that we actually want to prove. Not that we can always just show a bunch of stuff and accidentally stumble onto the right answer, but sometimes we can.
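
For the record, the definition as I understand it (my own paraphrase, so don't treat it as the official course wording): floor(x) is the greatest integer that is at most x, i.e. floor(x) is an integer, floor(x) <= x, and every integer z with z <= x satisfies z <= floor(x). A handy intermediate result that falls straight out of this is x - 1 < floor(x) <= x: if we had floor(x) + 1 <= x, then the integer floor(x) + 1 would have to be at most floor(x), which is absurd. That is exactly the kind of ``lesser'' fact worth pinning down before the main event.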

I guess the next point is that disproving something is equivalent to proving its negation. This is obviously how we go about disproving things, but it also connects back to the concept of proof by contradiction, and maybe even proof by contraposition. I guess the idea is that, typically, constructing a single example is easier than demonstrating that something is true for all possibilities. Negation exchanges universal and existential quantifiers, so we should consider the applicability of the contrapositive and of the negation whenever we set out to prove things.
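
A quick toy illustration of my own (not one from lecture): to disprove the claim that for every natural number n, n^2 >= 2^n, we prove its negation, namely that there exists a natural number n with n^2 < 2^n, and the single witness n = 5 (25 < 32) settles it. The negation turned a universal claim, which would have needed an argument covering every n, into an existential one, which only needs one good example.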

Finally, there is the idea of constructing proofs for limits in the ``epsilon and delta'' sense. While it is easy to imagine the concept whereby a function continuous at some point has infinitely many points arbitrarily close to it, it is possibly more challenging to find what scaling factor limits the relative sizes of epsilon and delta. For a function describing a line, the slope gives the relationship between epsilon and delta. For anything less trivial, we must consider how the function's vertical and horizontal spans compare. For the example of a parabola, however, the slope keeps growing, so a large enough neighbourhood of any point contains places where the function is arbitrarily steep. This is why, in the class example, everything is simplified when we restrict the size of the delta that we pick for any epsilon. Regardless of epsilon, we can just impose a maximum size on delta, so that for very large values of epsilon, rather than ``chasing'' the largest delta allowed for that epsilon, we simply fall back on some fixed small neighbourhood of the point of interest which easily satisfies the requirements of the limit. The ``art'' in this construction is knowing how to pick the cap on delta so as to get a ``nice'' relationship (scalar multiplication and/or exponentiation) between epsilon and delta. It just makes for a much more satisfying solution and an easier/simpler ``recipe'' (for delta, given epsilon).
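
To pin the parabola case down, here is my reconstruction of the technique (the exact numbers from class may have differed): to show that x^2 approaches a^2 as x approaches a, given epsilon > 0 pick delta = min(1, epsilon / (2|a| + 1)). Then whenever 0 < |x - a| < delta, we have |x + a| <= |x - a| + 2|a| < 1 + 2|a|, and so |x^2 - a^2| = |x - a| * |x + a| < delta * (2|a| + 1) <= epsilon. The cap of 1 on delta is exactly the trick described above: it bounds the troublesome factor |x + a| once and for all, instead of letting it be ``chased'' by large epsilon.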

That's about all.

Oh, and I totally got perfect on the midterm, which in some way is tacit permission to continue with this ``style'' of SLOG slogging.

Thanks for sloggin' around town, king without a crown, ground-beef brown, all fall down,

J

Friday 10 October 2014

w-V

APOSTABOUTWEEKFIVE-OHHIMARK

Alright. Week five in the course was pretty cool and great.


``There was a mid-term!''

And then there was some stuff about proofs and proof structures. I don't really have any brilliant or lucid insight into structuring and writing out proofs; however, I will say that graph paper makes dealing with the course-requested indenting structure much easier for hand-written proof (structures).

As I see it, the only really salient point to talk about or even think about is pre-proof structuring. More specifically, how to know when and where to attempt direct proof, or contraposition, or contradiction, or induction or whatever. Some observations, true or otherwise:

  1. Proving something that we intuitively think of as ``obvious'' is often easier by contradiction; for example, that there are infinitely many prime numbers (a sketch of that argument follows this list).
  2. When trying to make claims about integers and their squares (as we've done a few times in the lectures), it is best to start with the integer itself and any conditions on it, and then move towards making claims about its square. The set of integers is not closed under taking square roots, so we must be cautious with how we handle these numbers.
  3. Induction is best (possibly even necessary) when we are looking at an indeterminate number of objects. For example, if we say ``the finite intersection of open sets is itself open'', we wouldn't want to take every possible collection of open sets and look at its intersection. It is enough to prove that the intersection of any two open sets is open. From there we can reason through ``adding'' more sets to the intersection one by one, knowing that each result is open because at every step we are intersecting an open set (itself obtained by intersecting two open sets) with one new open set. Another example is the uniqueness of polynomial decomposition up to scalars (units), where we cannot say in advance how many irreducible polynomials we might ``pull out'' of a given polynomial.
  4. If you try to apply rigorous proof (and/or proof structure) in regular conversation, your friends will probably get mad at you.
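
Since point 1 brings it up, here is the standard contradiction argument as I remember it: suppose there are only finitely many primes p_1, p_2, ..., p_k, and consider q = (p_1 * p_2 * ... * p_k) + 1. None of the p_i divides q, since each leaves a remainder of 1, yet q > 1 must have some prime factor, and that prime factor cannot be on our supposedly complete list. That is a contradiction, so there are infinitely many primes.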


I guess another point about making proofs is that we must be cautious about what we take to be ``assumed knowledge''. For example, we are asked to prove:


If we know analysis or calculus or topology or something, we could invoke the intermediate value theorem. We would not be finding an example to prove an existential claim, but would instead be directly applying another existential truth to this example. Obviously, without knowing this theorem, it is easier to find a single example than to prove the entire intermediate value theorem, but I certainly have a knee-jerk reaction to this posed question to invoke my entire body of knowledge of calculus/analysis, including any and all theorems about polynomials, limits, and continuous functions. But I'm pretty sure that I'm not supposed to do that. It is better that I demonstrate these things with minimal assumed knowledge, perhaps just guess-and-check methods (in this case) with only some algebra, so that I am using nothing beyond the standard operations of addition and multiplication. Probably.
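
To illustrate with a stand-in of my own invention (not the actual posed question): to prove that there exists a real number x with x^3 - x - 6 = 0, guess-and-check on small integers finds x = 2 right away (8 - 2 - 6 = 0), whereas the heavyweight route would note that the polynomial is continuous, takes the value -6 at x = 0 and 18 at x = 3, and then invoke the intermediate value theorem to conclude a root exists somewhere in between.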

That's about all that I have to say about that.

Thanks for plig-a-plog sliggin' my slog,

J

Friday 3 October 2014

w-IV

Here is a new post.
It is about the fourth week.
In the course, I mean.

-------------------------------------------------------------

I was told that I should talk about ``the paper folding problem'', and so I shall, so as not to disappoint my avid readers.

The lecture had an explanation about rotational symmetry, but I didn't much care for it.

A question is posed:
If we fold a piece of paper in half, right hand over top to left hand, and unfold it, we have a single crease in our paper pointing downwards. If we fold it once and then again, taking the far right edge over top to the left side once more, and then completely unfold it, we now have multiple creases. Some of these will point downwards and others upwards. How can we predict the direction and order of these folds when this process is repeated an arbitrary number of times?
To begin, we can simply do this to a piece of paper and look at the results:

N Folds
1 D
2 DDU
3 DDUDDUU
4 DDUDDUUDDDUUDUU

In this case, N is the number of times the folding process is repeated, `U' denotes an upward-pointing fold, and `D' a downward-pointing one. Firstly, we notice that the number of folds is easily predictable: each folding doubles the number of segments that the paper is divided into, and the number of folds is one less than the number of segments. This says that we will have 2^N - 1 folds where the process has been repeated N times.
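
As a quick sanity check on that count: if c(N) is the number of creases after N folds, then folding once more keeps every old crease and adds one new crease in the middle of each of the 2^(N-1) existing segments, so c(N) = c(N-1) + 2^(N-1) = 2*c(N-1) + 1 with c(1) = 1, which unrolls to c(N) = 2^N - 1 (for N = 4 that gives 15, matching the table above).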

Now, to the order and orientation of folds. When the folding process is repeated, we never change or remove existing folds. So if we can understand how new folds are added, we can understand recursively how to predict the folding pattern for arbitrarily large and/or non-physical values of N. Knowing that any existing folds must persist, we can rewrite the table to make this clearer.

N Folds
1 D
2 D       D       U
3 D   D   U   D   D   U   U
4 D D U D D U U D D D U U D U U

Now what we are particularly interested in is the folds that are added at each stage. Firstly, we see that from 1 to 2, we add a D and a U on either side of the centre crease. From 2 to 3, we add a D and a U on either side of each crease that did not come from the original fold. Already, we see a pattern forming. Looking at all of the folds created from 3 to 4, we note that they always alternate (D, U, D, U, ...). If we consider the implications of this, we understand that it makes perfect sense. When the paper is already folded, it is a continuous strand that passes some number of times from our left hand to our right hand and back, because after the first fold, both ends are kept in the left hand. Because the piece of paper is continuous, after it passes from the left to the right hand, it immediately passes back to the left from the right. Any two consecutive segments that we are about to introduce a new fold to run antiparallel to each other immediately preceding the fold. Thus any two consecutive new folds must have opposite directionality (U then D, or D then U). It should be clear that a recursive/iterative solution can therefore easily be arrived at.

The problem, while very symmetric in the introduction of new folds, has a degree of asymmetry in that the paper does not begin to receive folds symmetrically until we have already folded it in half, uniting the two ends of the paper and effectively creating a `loop'.

So I guess my solution for a general N might look something like this in MATLAB:

% Up  = 1
% Down = 0
% N = ???  % You need to assign N a value and remove the first '%'
folds_old = 0;   % after one fold: the single downward crease
n = 1;           % number of folds represented by folds_old

% assuming N > 1

while n < N

    n = n + 1;
    folds_new = NaN(2^n - 1,1);
    % Setting even elements to existing folds 
    folds_new(2:2:end) = folds_old;
    % Set first of odd folds to down
    folds_new(1:4:end) = zeros(size(folds_new(1:4:end)));
    % Set second of odd folds to up
    folds_new(3:4:end) = ones(size(folds_new(3:4:end)));
    folds_old = folds_new;

end

I'm sure there's a way to do this in Python, but I am familiar with MATLAB indexing things and it took, like, a single minute to do, so whatever. If you are unfamiliar, K(A:B:C) in MATLAB takes every Bth element of K starting at index A up to indices no greater than C. Note that indexing starts at 1.

I hope that this has been sufficient or good or whatever. 

Thanks for sloggin' in my bog,

J