Louisville SLOGger
Stickytext
I am not from Louisville.
I have never been to Louisville.
I don't even like baseball that much.
This blog is a course log for CSC165H1 in the fall 2014 semester.
Monday 3 November 2014
w-VIII
No post from last week. My lymph nodes were, and still are, extremely swollen and I don't feel like myself. I feel almost ``drunk'' all the time and am unable to concentrate on my work or do anything effectively. It is not a lot of fun.
Friday 24 October 2014
w-VII
WEEKSEVENWEEKSEVENWEEKSEVENWEEKSEVENWEEKSEVENWEEKSEVENWEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
WEEKSEVEN
Okay, so week seven in the course was really swell. Like, actually. We are basically coming to the end of constructing proofs, and I have come to understand the proper logical construction of arguments/proofs much better. I think that attempting to teach proofs alongside new material runs the risk of sweeping the mechanics of proof aside while focusing on the ``meat'' of the material. For example, learning epsilon-delta proofs in a calculus class. Getting a firm handle on logic and arguments has been extremely valuable, even though I already have three-or-so years of university-level mathematics education. It is a little embarrassing to admit that, but it has genuinely been worth it.
As far as new material this week goes, we did proof by cases and an introduction to algorithm analysis. Let's talk about the algorithm stuff.
I guess it is fundamentally interesting to have some perspective on how scaling factors and lower-order terms have no bearing on how we understand the efficiency of code. This makes me think of ``big data'', and what ``big data'' even means. I recall a talk I attended about computational biology where single datasets for single genes run to tens of terabytes or more. One point that I came away with was that generating new datasets, and/or re-generating the same dataset whenever one wants to look at the data, can be more efficient and cost-effective than storing all of the produced data on disk somewhere. It is easy to imagine such sizes mathematically, taking limits of very large numbers, but sometimes it is difficult to comprehend actually working with these numbers in real applications.
On the other hand, it is easier to understand why we ignore scaling factors for small inputs. For an integer, how much longer does it take to simply return that number versus initializing an accumulator and adding one to it until we reach that number? For small numbers, we could not perceive the difference in running time. Even for not-so-small numbers, we would not be particularly disturbed by a few seconds of run time. What is more concerning to us is how much longer it would take if we doubled or tripled that number. Computers are fundamentally important to the utility of large data and are only becoming more important still. (As an aside, a great deal of science is currently being held back by a reluctance to embrace the power of computers.) This is where we really care about computational and algorithmic efficiency. I look forward to learning more about this.
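To convince myself of this, here is a quick Python sketch of my own (nothing from the course; the function names are made up) comparing a constant-time version with a linear one:
import time

def return_directly(n):
    # Constant time: the answer is already n.
    return n

def count_up_to(n):
    # Linear time: reach the same answer with n additions.
    total = 0
    for _ in range(n):
        total = total + 1
    return total

for n in [10, 100000, 10000000]:
    start = time.time()
    count_up_to(n)
    print(n, "additions took", round(time.time() - start, 4), "seconds")
For n = 10 the difference is imperceptible; what matters is how the linear version's time grows when n is doubled or tripled, while return_directly does not care at all.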
Thanks for the slog, don't roll a log, motorcycle hog, flip a POG, hot hot dog,
J
Friday 17 October 2014
w-VI
Here
Here is
Here is a
Here is a post
Here is a post about
Here is a post about week
Here is a post about week six
Here is a post about week six in
Here is a post about week six in the
Here is a post about week six in the course.
Oh, so we did some proofs about the ``floor'' function. It really drives home the point that we need a complete and rigorous understanding of the ``floor'' operation before we can start to prove anything about it. Further, it demonstrates the value of showing intermediate/lesser results on the way to whatever it is that we actually want to prove. Not that we can always just show a bunch of stuff and accidentally stumble onto the right answer, but sometimes we can.
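As a small sketch of my own (not the exact claim from lecture), everything hangs on the defining property: $\lfloor x \rfloor$ is the unique integer $n$ with $n \le x < n + 1$. From that alone we can prove, say, that $\lfloor x + 1 \rfloor = \lfloor x \rfloor + 1$:
Let $n = \lfloor x \rfloor$, so that $n \le x < n + 1$.
Then $n + 1 \le x + 1 < (n + 1) + 1$, and $n + 1$ is an integer.
By the uniqueness in the defining property, $\lfloor x + 1 \rfloor = n + 1 = \lfloor x \rfloor + 1$.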
I guess the next point is that disproving something is equivalent to proving its negation. This is useful for disproving things directly, but it also comes back to the concept of proof by contradiction, and maybe even proof by contraposition. The idea is that, typically, constructing a single example is easier than demonstrating that something is true for all possibilities. We can use the negation operator to exchange existential for universal quantifiers, and we should consider the applicability of the contrapositive and the negation whenever we set out to prove things.
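Written out symbolically (a generic example of my own), the quantifier exchange looks like
$\neg\big(\forall x \in X,\ P(x) \Rightarrow Q(x)\big) \iff \exists x \in X,\ P(x) \wedge \neg Q(x)$,
so disproving a universally quantified implication only requires exhibiting one $x$ that satisfies $P$ but not $Q$.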
Finally, there is the idea of constructing proofs for limits in the ``epsilon and delta'' sense. While it is easy to imagine the concept of a function continuous at some point having points arbitrarily (``infinitely'') close to that point, it can be more challenging to find the scaling factor that limits the relative sizes of epsilon and delta. For a function prescribing a line, the slope gives the relationship between epsilon and delta. For anything less trivial, we must consider how the function's vertical and horizontal spans compare. For the example of a parabola, we realize that over an arbitrarily large neighbourhood of a point the function's slope can become arbitrarily large. This is why, in the class example, everything is simplified when we restrict the size of the delta that we pick for any epsilon. Regardless of epsilon, we can just pick a maximum size for delta, so that for very large values of epsilon, rather than ``chasing'' the upper limit on delta for that epsilon, we instead choose some other small neighbourhood of the point of interest which easily satisfies the requirement for the limit. The ``art'' in this construction is knowing how to pick a limit on delta so as to get a ``nice'' relationship (scalar multiplication and/or exponentiation) between epsilon and delta. It just makes for a much more satisfying solution and an easier/simpler ``recipe'' (for delta, given epsilon).
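For the record, here is the kind of ``recipe'' I mean, using an example of my own choosing (not necessarily the one from class): to show $\lim_{x \to 3} x^2 = 9$, given $\varepsilon > 0$ pick $\delta = \min(1, \varepsilon/7)$.
If $0 < |x - 3| < \delta$, then $|x - 3| < 1$, so $2 < x < 4$ and $|x + 3| < 7$.
Hence $|x^2 - 9| = |x - 3|\,|x + 3| < 7\delta \le \varepsilon$.
Capping $\delta$ at 1 is exactly the trick of refusing to ``chase'' large values of epsilon.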
That's about all.
Oh, and I totally got perfect on the midterm, which in some way is tacit permission to continue with this ``style'' of SLOG slogging.
Thanks for sloggin' around town, king without a crown, ground-beef brown, all fall down,
J
Friday 10 October 2014
w-V
APOSTABOUTWEEKFIVE-OHHIMARK
Alright. Week five in the course was pretty cool and great.
``There was a mid-term!''
And then there was some stuff about proofs and proof structures. I don't really have any brilliant or lucid insight into structuring and writing out proofs, however I will say that graph paper makes dealing with the course-requested indenting structure much easier for hand-written proofs (and proof structures).
As I see it, the only really salient point to talk about, or even think about, is pre-proof structuring. More specifically, how to know when and where to attempt direct proof, or contraposition, or contradiction, or induction, or whatever. Some observations, true or otherwise:
- Proving something that we intuitively think of as ``obvious'' is often easier by contradiction; for example, that there are infinitely many prime numbers (see the sketch after this list).
- When trying to make claims about integers and their squares (as we've done a few times in the lectures), it is best to start with the integer itself and any conditions on it, and then move towards making claims about its square. The set of integers is not closed under taking square roots, so we must be cautious with how we handle these numbers.
- Induction is best (possibly even necessary) when we are looking at an indeterminate number of objects. For example, if we say ``the finite intersection of open sets is itself open'', we wouldn't want to take every possibility for any open sets and look at their intersection. It is enough to prove that the intersection of any two open sets is open. From there we can reason through ``adding'' more sets to the intersection one by one, knowing that each result will be open because we are intersecting an open set (the result of the previous intersections) with a new open set. Another example is the uniqueness of polynomial decomposition up to scalars (units), where we cannot say how many irreducible polynomials we might ``pull out'' of a polynomial.
- If you try to apply rigorous proof (and/or proof structure) in regular conversation, your friends will probably get mad at you.
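Here is the infinitude-of-primes sketch mentioned in the first bullet, in my own words:
Assume, for contradiction, that there are only finitely many primes $p_1, p_2, \ldots, p_k$. Let $N = p_1 p_2 \cdots p_k + 1$. Each $p_i$ divides the product but leaves remainder 1 when dividing $N$, so no $p_i$ divides $N$. But $N > 1$, so it has some prime factor, and that prime is not on our list, contradicting the assumption that the list was complete. Hence there are infinitely many primes.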
I guess also about making proofs, we must be cautious about what we take to be ``assumed knowledge''. For example, we are asked to prove:
If we know analysis or calculus or topology or something, we could invoke the intermediate value theorem. We would not be finding an example to prove an existential condition, but would instead be directly applying another existential truth to this example. Obviously, without knowing this theorem, it is easier to find a single example than to prove the entire intermediate value theorem, but I certainly have a knee-jerk reaction to this posed question to invoke my entire body of knowledge of calculus/analysis, including any and all theorems about polynomials, limits, and continuous functions. But I'm pretty sure that I'm not supposed to do that. It is better that I demonstrate these things with minimal assumed knowledge, perhaps just guess-and-check methods (in this case) with only some algebra, so that I can use the standard operations of addition and multiplication. Probably.
That's about all that I have to say about that.
Thanks for plig-a-plog sliggin' my slog,
J
Friday 3 October 2014
w-IV
Here is a new post.
It is about the fourth week.
In the course, I mean.
-------------------------------------------------------------
I was told that I should talk about ``the paper folding problem'', and so I shall, so as not to disappoint my avid readers.
The lecture had an explanation about rotational symmetry, but I didn't much care for it.
A question is posed:
If we fold a piece of paper in half, right hand over top to left hand, and unfold it, we have a single crease in our paper pointing downwards. If we fold it once, and then again from the far right edge over top to the left side, and completely unfold it, we now have multiple creases. Some of these will point downwards and others upwards. How can we predict the direction and order of these folds when repeating this process an arbitrary number of times?
To begin, we can simply do this to a piece of paper and look at the results:
N | Folds |
1 | D |
2 | DDU |
3 | DDUDDUU |
4 | DDUDDUUDDDUUDUU |
In this case, N is the number of times the folding process is repeated, `U' denotes an upward-pointing fold, and `D' a downward-pointing one. Firstly, we notice that the number of folds is easily predictable: each repetition of the process doubles the number of segments that the paper is divided into, and the number of creases is one less than the number of segments. This says that we will have 2^N - 1 folds when the process has been repeated N times.
Now, to the order and orientation of folds. When the folding process is repeated, we never change or remove existing folds. So if we can understand how new folds are added, we can understand recursively how to predict the folding pattern for arbitrarily large and/or non-physical values of N. Knowing that any existing folds must persist, we can rewrite the table to make this clearer.
N | Folds |
1 | D |
2 | D D U |
3 | D D U D D U U |
4 | D D U D D U U D D D U U D U U |
Now, what we are particularly interested in are the folds that are added at each stage. Firstly, we see that from 1 to 2, we add a D and a U on either side of the centre crease. From 2 to 3, we add a D and a U on either side of each crease that is not the original crease. Already, we see a pattern forming. Looking at all of the folds created from 3 to 4, we note that they always alternate (D, U, D, U, ...). If we consider the implications of this, we understand that it makes perfect sense. When the paper is already folded, it is a continuous strand that passes some number of times from our left hand to our right hand and back, because after the first fold, both ends are kept in the left hand. Because the piece of paper is continuous, after it passes from the left hand to the right, it immediately passes back from the right to the left. Any two consecutive segments that we are about to introduce a new fold to run antiparallel to each other immediately preceding the fold. Thus any two consecutive new folds must have opposite directionality (U vs. D or D vs. U). It should be clear that a recursive/iterative solution can therefore easily be arrived at.
The problem, while very symmetric in the introduction of new folds, has a degree of asymmetry in that the paper does not begin to receive folds symmetrically until we have already folded it in half, uniting the two ends of the paper and effectively creating a `loop'.
So I guess my solution for a general N might look something like this in MATLAB:
% Up = 1
% Down = 0
% N = ??? % You need to assign N a value and remove the first '%'
folds_old = 0; % the single downward crease for N = 1
n = 1;
% assuming N > 1
while n < N
    n = n + 1;
    folds_new = NaN(2^n - 1,1);
    % Set even-indexed elements to the existing folds
    folds_new(2:2:end) = folds_old;
    % New folds alternate: odd positions 1, 5, 9, ... point down
    folds_new(1:4:end) = zeros(size(folds_new(1:4:end)));
    % and odd positions 3, 7, 11, ... point up
    folds_new(3:4:end) = ones(size(folds_new(3:4:end)));
    folds_old = folds_new;
end
I'm sure there's a way to do this in Python, but I am familiar with MATLAB indexing things and it took, like, a single minute to do, so whatever. If you are unfamiliar, K(A:B:C) in MATLAB takes every Bth element of K starting at index A up to indices no greater than C. Note that indexing starts at 1.
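And since I mentioned Python, here is roughly how I imagine the same idea would look there (my own translation of the MATLAB above, using 'D' and 'U' instead of 0 and 1):
def fold_sequence(N):
    # Fold pattern after folding the paper N times (N >= 1).
    folds = ['D']  # the single downward crease for N = 1
    for _ in range(N - 1):
        new_folds = []
        direction = 'D'  # new creases alternate D, U, D, U, ...
        for old in folds:
            new_folds.append(direction)  # new crease before each existing one
            direction = 'U' if direction == 'D' else 'D'
            new_folds.append(old)        # existing creases persist
        new_folds.append(direction)      # one final new crease at the end
        folds = new_folds
    return ''.join(folds)

print(fold_sequence(4))  # should print DDUDDUUDDDUUDUU, matching the table above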
I hope that this has been sufficient or good or whatever.
Thanks for sloggin' in my bog,
J
Friday 26 September 2014
w-III
Here is a very nice and good post about week three in the course.
So the third week was, firstly, about conjunction, disjunction, and negation; then about truth tables; and then about the manipulation of these.
The first section is almost just an introduction of conjunction and disjunction as extensions of the ideas of intersection and union. The interesting part comes with negation and the variety of equivalent statements that can be made by moving around the scope of the negation and making appropriate changes to quantifiers, etc. At this point, actually, it becomes simplest to work through these examples in natural language. We all have some sort of idea of how to explain the opposite of some situation, but now we have a better understanding and framework for moving easily between natural language and symbolic language, so this becomes the simplest manner of manipulating these operators.
The bit about truth tables is always interesting because sometimes it is easy to want to just look at an expression and have an answer, but oftentimes a truth table lets us take a step back and slowly work through the expression in a rigorous way so as to avoid making mistakes. I always liked the idea that if two truth tables are identical then the two expressions are equivalent, in a complete and rigorous way. It is entirely possible for two expressions to be equivalent while looking entirely different. I would liken it to `the formula for prime numbers', which was mostly just a tricky way of enumerating values using an alternate formulation of Euler's totient function and which, while it looked nice in some sort of closed form, provided no new or interesting information or insight. A truth table for two True/False arguments has only so many possibilities, but we can formulate an infinity of absurd expressions that we could demonstrate to be equivalent to some obvious or simple expression. I'm not sure to what ends, but rigour over a small number of elements is a very `easy' thing to investigate, so equivalence proofs become all but trivial.
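For instance, a quick table (my own, nothing fancy) shows that P => Q says exactly the same thing as (not P) or Q:
P | Q | P => Q | (not P) or Q |
T | T | T | T |
T | F | F | F |
F | T | T | T |
F | F | T | T |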
Finally, there is the idea that P=>Q, for P False, always evaluates to True. We cannot evaluate what P implies when we have `not P', so we must call the implication true. It reminds me of the concepts of open and closed sets in mathematics, maybe in topology. The idea is that sets can be open or closed, but they can also be both or neither. If a set's complement is open then that set is closed, and if a set's complement is closed then the set is open. The implication about the complement is what describes the set, and complementation is its own inverse, so that the complement of a set's complement is the set itself. It is just another interesting example of properties that we must be careful to define and manipulate in a very careful way, because while they are not easily relatable to physical or extant objects, the math itself is complete, consistent and rigorous. If we simply permit weirdness and shirk our human need to relate things to the physical world (an important part of learning about the quantum universe), we can use our math to learn more and extend our knowledge and understanding. We just need to have a little bit of faith. Which is actually pretty scary for a `scientist', I guess.
Okay, cool.
J
Friday 19 September 2014
w-II
TOP SECRET
THIS IS A POST ABOUT WEEK TWO IN THE COURSE
I'M NOT YELLING, THIS IS JUST HOW COVER PAGES
FOR TOP SECRET DOCUMENTS LOOK
I JUST IMAGINE GEORGE W. BUSH IN HIS TEXAS DRAWL:
"HOW COULD ANYONE IGNORE ALL THESE CAPITAL LETTERS? IT'S IMPOSSIBLE!"
--------------------------
Week two in the course touched on material that was new to me, which is always nice because I like that sort of thing. I am perfectly familiar and comfortable with set notation and the like, but predicates are definitely a new thing.
The rest of the lecture was a lot of parsing statements to identify P and Q if they were to be made to match the mathematical statement P=>Q. This started off slow and easy but later progressed to some rather challenging examples that required careful consideration to understand what was being said in plain English and what the implications were in a mathematical (P=>Q) sense. Overall pretty interesting, but it will likely require me to go review some of the trickier examples. Another horrible failing of the English language: all these equivalent ways of describing a P=>Q situation, some of them much less clear than the others.
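Just to collect a few of the standard phrasings in one place (my own list, not one handed out in lecture), all of the following say P => Q:
- If P, then Q.
- Q if P.
- P only if Q.
- Q whenever P.
- P is sufficient for Q.
- Q is necessary for P.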
Also a fun example of vacuous truth. Such an interesting mathematical/philosophical phenomenon, and there's another good one about it in the course notes. I guess the idea is that we might run into such a situation when we are programming. Nonetheless they are important to know about and understand.
My final note about the lecture is that there were two characters seated immediately behind me who seemed to think that their unsolicited sarcastic comments were in some way valuable or insightful. Sadly they were mistaken. It was pretty annoying. Almost as annoying as me griping about it on my SLOG, but just not quite.
The final piece of the things we did this week that I feel some way about is the Venn diagram with 'x' and 'o' markings. The idea is to draw a Venn diagram of two subsets of a third larger set and place an 'x' in any area where we know that no element can exist, and an 'o' in any area where we know some element must exist. In our tutorial, we were always interested in a single one of the two subsets, and in placing indicators 'x' or 'o' in either its intersection with the other subset or the part of it disjoint from the other subset. This one set in which we were interested was known to have three elements. Because it is known to be non-empty, when either the disjoint or the intersection area has an 'x', it is known that the other must have an 'o'. Because the union of the disjoint part and the intersection is the whole (sub)set, if one section is empty, the other must necessarily contain any elements that are a part of that (sub)set, and there is at least one such element (indicated by 'o') by the (sub)set's known non-emptiness. A set's non-emptiness is a required condition for this 'x'-'o' complement to apply. For our example problems, we can and must use this to place an 'o' (in its correct spot) in any situation where we have placed an 'x'. Reviewing the other examples, we see that sets are rarely given as non-empty, so it is necessary to take caution when attempting to apply this 'rule'.
I guess my final point is the role that our use of natural language takes in understanding and thus solving these problems. Suppose we pose the question:
If "some men are doctors", and "some doctors are tall", does it follow that "some men are tall"?It is maybe not entirely clear, but the answer to the question (in this case) "Are some men tall?" is "no". There is an entire psychological aspect to this problem where if we simply ask "In real life, are some men tall?" everyone would be answering "yes". It is true. We can construct a reasonable definition for what constitutes "tall" based on human height measurements and distributions, and then look at the set of professional basketball players and find at least one single example to demonstrate that there exists a man (an element in the set of men) who is tall. No problem, all done. But at the same time, if we simply replace "men", "doctors" and "tall" and we instead ask:
If "some cats are pets", and "some pets are dogs", does it follow that "some cats are dogs"?It might be easier to evoke a correct response of "Of course not! What a ridiculous question! Why would you waste my time even asking that?". But this is because the question that we ask with the foreword "does it follow" is false even with the foreword "in real life". What about further abstraction? What happens when we ask:
If "some A are B", and "some B are C", does it follow that "some A are C"?
Maybe we can prompt more advanced or specific thought about the question itself, rather than about men and/or medical degrees and/or height and/or cats and/or dogs and/or pets. Just some thoughts. I guess the point is that abstracting away from concrete statements can help us solve questions posed in our natural language, which can often be unclear or confusing because of our familiarity with these words and their association with extant things or concepts. What about asking:
If "A ∩ B ≠ ∅", and "B ∩ C ≠ ∅", does it follow that "A ∩ C ≠ ∅"?
I'm not sure, but I think it's interesting for sure.
Please someone let me know if I'm on the right track with this whole SLOG thing.
That's all for now.
Thanks for slig-sluggin' it with me,
J