Wednesday 3 December 2014

CSC165 week12 SLOG:

    This week has been a long one: the third assignment is really not easy, and the new material taught in class was hard to comprehend. For the assignment, the fifth question, ∀f,g ∈ F, f ∈ O(g) ∨ f ∈ Ω(g), was very tricky to me, and I did it wrong on my first try because I did not consider the definitions of big-O and big-Omega carefully. By the introduction of disjunction, if A is true, then A ∨ B is true. I thought big-O and big-Omega were opposites of each other, so if f ∈ O(g) is false then f ∈ Ω(g) must be true, and I could use introduction of disjunction to prove the claim. However, there are functions which oscillate. For example, f(n) = n^(1 + cos(πn)) equals n^2 when n is even and 1 when n is odd, so compared with g(n) = n it is sometimes far above cg and sometimes far below cg, no matter which constant c we pick; hence f is neither in O(g) nor in Ω(g). So instead we should disprove the claim by proving its negation, ∃f,g ∈ F, f ∉ O(g) ∧ f ∉ Ω(g).
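    To convince myself, I tried a small Python sketch of my own (not part of the assignment): it prints the ratio f(n)/g(n) for the oscillating f, and the ratio swings between big and tiny, so no single constant c can bound f above or below by cg past any breakpoint:
def f(n):
    # oscillates like n^(1 + cos(pi*n)): n^2 on even n, 1 on odd n
    return n * n if n % 2 == 0 else 1

def g(n):
    return n

for n in range(2, 11):
    print(n, f(n) / g(n))   # swings up (n even) and down (n odd)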
   
     In the lecture, we explored computability further and were introduced to a new proof technique called induction. I think induction works much like the recursive functions we wrote for linked lists in CSC148. When we write a recursive function, we first write an appropriate base case, and then a recursive case that calls the function again on the rest of the list; together they handle every item. When we prove a claim ∀n ∈ ℕ, P(n) by induction, we likewise prove a base case first, for the smallest possible input (here, zero), and then link all the other inputs together by finding the relationship between each input and the next: just as a linked list points to its next node, we show P(n) ⇒ P(n+1) for every n. Then P(0) implies P(1), P(1) implies P(2), and so on for all natural numbers.
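    For example, here is a small recursive function in the CSC148 style (a sketch of my own, not from the lecture); the base case mirrors proving P(0), and the recursive call mirrors the step from P(n-1) to P(n):
def sum_to(n):
    # returns 0 + 1 + ... + n for a natural number n
    if n == 0:
        return 0                  # base case, like proving P(0)
    return sum_to(n - 1) + n      # recursive step, like P(n-1) => P(n)

print(sum_to(5))   # 15
By induction, sum_to(n) equals n*(n+1)/2 for every natural number n: it holds for 0, and whenever it holds for n-1 it holds for n.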

    -end- :)

Saturday 29 November 2014

CSC165 week11 SLOG:

     This week we had the winter break, so I thought I would not have much to write about this course. But just like Forrest Gump said, "life is like a box of chocolates": I never expected two totally new and confusing ideas, the halting problem and computability, to be brought into my life in this week's lecture. Both ideas are about what algorithms can and cannot do.

    At the beginning of the lecture, two great people were introduced: Alonzo Church (who invented the lambda calculus) and Alan Turing (who described the Turing machine). They showed that there are problems that cannot be solved by any algorithm; in other words, they are uncomputable. An important idea is that the programming languages we run on modern computers are "Turing complete", that is to say, they can solve any problem a Turing machine can. A Turing machine is not a real machine; it is a system of rules, states and transitions conceived by Turing. The lambda calculus is another Turing-complete system.

    Then the professor gave us some Python code and described how it works and how its running time is calculated, in order to lead us to the halting problem. I learnt that I should use my head to reason about whether a piece of code halts rather than just running it, because a very long running time is not the same as running forever (never halting). For example, here is a function from the lecture:
def collatz(n):
    # n is a natural number
    while n > 1:
        if n % 2 == 0:
            n = n // 2      # halve even numbers (integer division keeps n natural)
        else:
            n = 3 * n + 1   # 3n + 1 for odd numbers
    return "Reached 1."
If we choose a very big odd n, it may run for a very long time, and since there are infinitely many natural numbers we could use as inputs, testing can never settle the question; in the end we would need a proof of whether it always halts, not more runs. (Whether collatz halts on every input is in fact a famous open problem.)
       Next, some uncomputable functions were introduced to us. We can only call a function non-computable when it is well-defined, with an output for every input in its domain, yet still no program can compute it. What's more, if we build a new function out of an existing one by adding only computable steps, then the original being computable implies the new one is computable. By the contrapositive, if the new function is non-computable, the original one must be non-computable too; this is how non-computability spreads from the halting function to other functions.
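    Here is how such a reduction looks in Python, as a sketch of my own (halts_on_nothing is a hypothetical function, not one from the lecture; neither function can actually be implemented):
def halts_on_nothing(g):
    # hypothetical: would return True iff calling g() eventually halts
    ...

def halt(f, i):
    # builds the halting function out of halts_on_nothing
    # using only computable extra steps
    def wrapped():          # a computable wrapping step
        f(i)
    return halts_on_nothing(wrapped)
If halts_on_nothing were computable, then halt would be computable too; since halt is known to be non-computable, halts_on_nothing must be non-computable as well.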
       Finally, two terminologies, one-to-one and onto, were taught to help us count finite sets. A function f: A -> B is one-to-one (1-1) when equal outputs imply equal inputs, i.e. f(x) = f(y) ⇒ x = y; this prevents over-counting. A function f: A -> B is onto when every element of B is the output of some input, i.e. ∀b ∈ B, ∃a ∈ A, f(a) = b; this prevents under-counting.
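    To make the two definitions concrete, here is a tiny Python check of my own on finite sets:
def one_to_one(f, domain):
    # no two different inputs share an output
    outputs = [f(a) for a in domain]
    return len(outputs) == len(set(outputs))

def onto(f, domain, codomain):
    # every element of the codomain is hit by some input
    return set(codomain) <= {f(a) for a in domain}

double = lambda x: 2 * x
print(one_to_one(double, range(3)))        # True: 0, 2, 4 are all different
print(onto(double, range(3), range(5)))    # False: 1 and 3 are never hit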
     All in all, I now understand the ideas of computability and the halting problem, but I still lack the ability to prove whether a given function is computable. I think we will be taught how to deal with these problems in more detail next week.

Reference:
http://en.wikipedia.org/wiki/Alonzo_Church
http://en.wikipedia.org/wiki/Lambda_calculus
http://en.wikipedia.org/wiki/Alan_Turing
http://simple.wikipedia.org/wiki/Turing_complete
http://simple.wikipedia.org/wiki/Turing_machine

Thursday 20 November 2014

CSC165 week10 SLOG:

    In this week's lecture, we learnt the proofs of some general statements about big-Oh (O) and big-Omega (Ω). Wikipedia says that "a description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function." In computer science, big-O is used to describe the worst-case running time of an algorithm, and conversely, big-Omega is used to describe the best-case running time. Based on these, since the formal definition of O(n^3 + 2n + 1) is {f : N -> R(>=0) | ∃c ∈ R+, ∃B ∈ N, ∀n ∈ N, n >= B => f(n) <= c(n^3 + 2n + 1)}, the formal definition of Ω(n^3 + 2n + 1) is {f : N -> R(>=0) | ∃c ∈ R+, ∃B ∈ N, ∀n ∈ N, n >= B => f(n) >= c(n^3 + 2n + 1)}. The proof of membership in Ω(n^3 + 2n + 1) is similar to the proof for O(n^3 + 2n + 1); the only difference is that now we need a very small c instead of a very big one, in order to satisfy f(n) >= c(n^3 + 2n + 1) instead of f(n) <= c(n^3 + 2n + 1).
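    Before writing such a proof, I sometimes test my chosen c and B numerically in Python first (a sanity check of my own on small values, of course, not a proof):
def check_lower_bound(f, g, c, B, n_max=1000):
    # check f(n) >= c*g(n) for every B <= n <= n_max
    return all(f(n) >= c * g(n) for n in range(B, n_max + 1))

f = lambda n: n ** 3
g = lambda n: n ** 3 + 2 * n + 1
print(check_lower_bound(f, g, c=0.25, B=1))   # True: n^3 >= (n^3 + 2n + 1)/4 once n >= 1
This supports choosing the small c = 1/4 with B = 1 to show n^3 ∈ Ω(n^3 + 2n + 1).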

    An example of a general statement about big-O and big-Omega is: for F = {f : N -> R(>=0)} and f,g ∈ F, f ∈ Ω(g) => g ∈ O(f). The main strategy for proving this kind of statement is to use the formal definitions of big-O and big-Omega shown above. Since the main part of a big-O or big-Omega proof is choosing a wise c and B, and this question involves both notations, we first take the c and B that f ∈ Ω(g) gives us: ∃c ∈ R+, ∃B ∈ N, ∀n ∈ N, n >= B => f(n) >= c·g(n). Then we need to choose a wise c' and B' for g ∈ O(f): ∃c' ∈ R+, ∃B' ∈ N, ∀n ∈ N, n >= B' => g(n) <= c'·f(n). Since c > 0, we can pick c' = 1/c > 0 and B' = B; then for every n >= B', f(n) >= c·g(n) implies g(n) <= (1/c)·f(n) = c'·f(n), which is exactly what g ∈ O(f) requires.


Wednesday 12 November 2014

csc165 week9 SLOG:
    In this week's lecture, we continued learning about big-O proofs. We learnt the formal definition of big-O in week 8's lecture; for example, the formal definition of O(n^3 + 2n + 1) is
{f : N -> R(>=0) | ∃c ∈ R+, ∃B ∈ N, ∀n ∈ N, n >= B => f(n) <= c(n^3 + 2n + 1)}.
    To prove 5n^2 + 5n + 1 ∈ O(n^3 + 2n + 1), we should first consider whether the claim is even true; if it is false, we should disprove it by proving its negation instead. In this case the claim is true. My first thought was to let n >= 1, so that n * n >= n, and I can ensure n >= 1 by picking B = 1, which is a natural number. Then I compare the two functions through their highest-order terms. Since n >= 1, 5n^2 + 5n + 1 <= 5n^2 + 5n*n + 1*n^2 = 11n^2. Now I choose c to be the positive real number 11, so that c(n^3 + 2n + 1) = 11n^3 + 22n + 11 >= 11n^3, since n and 11 are both positive. Also 11n^3 = 11n * n^2 >= 11n^2, since n >= 1. Chaining these together, 5n^2 + 5n + 1 <= 11n^2 <= 11n^3 <= 11(n^3 + 2n + 1) for all n >= 1, so 5n^2 + 5n + 1 ∈ O(n^3 + 2n + 1).
     And if a claim about big-O is wrong, we should disprove it. For example, n^4 ∉ O(n^3): to disprove n^4 ∈ O(n^3), I need to prove the negation of that claim. The claim n^4 ∈ O(n^3) can be written as ∃c ∈ R+, ∃B ∈ N, ∀n ∈ N, n >= B => n^4 <= c(n^3), so its negation is ∀c ∈ R+, ∀B ∈ N, ∃n ∈ N, n >= B ∧ n^4 > c(n^3). The main part of the proof is to pick a wise n; in this case I pick n = ⌈c⌉ + B + 1, which makes n > c and n >= B, so that n^4 = n * n^3 > c * n^3 ∧ n >= B for every c ∈ R+ and every B ∈ N.
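    To see the picked n at work, here is a quick check of my own: for any c and B, the witness n = ⌈c⌉ + B + 1 really does break the inequality n^4 <= c(n^3):
import math

def witness(c, B):
    # the n picked in the proof: bigger than both c and B
    return math.ceil(c) + B + 1

for c, B in [(1.0, 0), (100.5, 3), (10 ** 6, 50)]:
    n = witness(c, B)
    print(c, B, n, n >= B and n ** 4 > c * n ** 3)   # always True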
    We also learnt about proving big-O statements using limits. For example, to show 3^n ∉ O(n^3): since 3^n is non-polynomial, we can use a limit. By L'Hopital's rule, the ratio 3^n/n^3 approaches infinity as n gets larger and larger, and the definition of this limit is ∀c ∈ R+, ∃n' ∈ N, ∀n ∈ N, n >= n' => 3^n/n^3 > c, which is exactly what we need to defeat every possible c in the definition of O(n^3). More about proving such limits will be taught in detail next week.
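    Just to watch the limit behave, a tiny sketch of my own printing the ratio:
for n in [1, 5, 10, 20, 40]:
    print(n, 3 ** n / n ** 3)
Interestingly, the ratio dips at first (it is smaller at n = 5 than at n = 1) before exploding, which is exactly why the definition only demands the bound past some breakpoint n'.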

Wednesday 5 November 2014

csc165 week8 SLOG:

    In this week's lecture, we first learnt about upper bounds and lower bounds in more detail. An explanation of the upper bound, O, is that if a function is in O(n^2), then it grows no faster than n^2; so for a polynomial in O(n^2), the highest-order term must be n^2, n or a constant, and cannot be n^3 or any higher-order term. The lower bound, Ω, is the opposite: if a function is in Ω(n^2), then it grows no slower than n^2, so its highest-order term must be n^2 or a term of higher order.

    Then the idea of sorting algorithms was introduced. This part relies on our comprehension of the code. For instance, when we were taught to work out the worst-case running time of an insertion sort, we first needed to understand how the code runs. A good way to understand it is to break the code into parts, one part per loop, and think carefully about the relationships between the parts: whether one is nested inside another or they are independent. If a while loop A is inside another while loop B, then the steps loop A takes are counted inside the total steps of loop B. What's more, the loop guard is an indispensable part of the count: it is executed one last time for each loop, to discover that the loop is over. All in all, I feel this is the most difficult part I have learnt in this course so far, and doing well at analyzing sorting algorithms requires me to develop a much better comprehension of the code.
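    To practise breaking code into loop parts, here is a plain insertion sort (a standard version of my own, not the exact code from lecture) with the two nested loops marked:
def insertion_sort(L):
    # outer loop (part B): runs once for each index i
    for i in range(1, len(L)):
        key = L[i]
        j = i - 1
        # inner loop (part A), nested inside part B: its steps are
        # included in the outer loop's total, and its guard runs one
        # last time to discover that the loop is over
        while j >= 0 and L[j] > key:
            L[j + 1] = L[j]
            j -= 1
        L[j + 1] = key
    return L

print(insertion_sort([5, 2, 4, 1, 3]))   # [1, 2, 3, 4, 5]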

    Then I learnt how to prove that the worst-case complexity of insertion sort, Wis(n), is in O(n^2) and in Ω(n^2). Formally, Wis ∈ O(n^2) means ∃c ∈ R+, ∃B ∈ N, ∀n ∈ N, n >= B => Wis(n) <= cn^2, and Wis ∈ Ω(n^2) means ∃c ∈ R+, ∃B ∈ N, ∀n ∈ N, n >= B => Wis(n) >= cn^2. Since both definitions start by ∃-quantifying two numbers, I think the first step and main strategy is to think over and pick two wise numbers c and B, where a "wise" number is one that makes the rest of the proof go through. Then, to prove the worst case is in O(n^2), we substitute our c into the inequality and prove that whenever n >= B, Wis(n) <= cn^2.
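    To get a feel for why Wis(n) is quadratic, I added a rough step counter to the insertion sort above (my own counting scheme, not the official one from lecture) and fed it reversed lists, which are the worst case:
def insertion_sort_steps(L):
    steps = 0
    for i in range(1, len(L)):
        key = L[i]
        j = i - 1
        while j >= 0 and L[j] > key:
            L[j + 1] = L[j]
            j -= 1
            steps += 1          # one step per shift
        steps += 1              # the guard check that ends the while loop
        L[j + 1] = key
    return steps

for n in [10, 50, 100]:
    worst = list(range(n, 0, -1))   # reversed list: every item shifts all the way back
    print(n, insertion_sort_steps(worst), "<=", 2 * n * n)
The counts stay below 2n^2 for every n tried, which fits choosing c = 2 (with a suitable B) in the O(n^2) proof.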

Monday 27 October 2014

csc165 week7 slog:

    In this week's lecture, some inference rules for proofs were introduced. These rules help me understand the logic of proofs better and save me time when thinking and proving. For example, before I knew these rules, I would usually transform "A => B" into "¬A ∨ B" first, then transform "¬B => ¬A" into "¬A ∨ B" as well, and only then conclude that the two are equivalent. Now I can eliminate these redundant steps and get "¬B => ¬A" directly from "A => B" by the contrapositive rule.
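    As a sanity check of my own, a few lines of Python confirm that A => B and its contrapositive ¬B => ¬A agree on every truth assignment:
from itertools import product

def implies(a, b):
    return (not a) or b   # A => B is the same as (not A) or B

for A, B in product([True, False], repeat=2):
    print(A, B, implies(A, B) == implies(not B, not A))   # True in all four cases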

    In addition, I gained a brand new comprehension of how programs work by learning sorting strategies and the algorithms involved in sorting. I learnt that "running time" in computer science is not actual wall-clock time, like the seconds we measure by testing in Python; the "running time" is the number of steps the algorithm takes. If we really want to compare two sorting strategies' running times, we compare how their step counts grow, and for this comparison only the highest-order term matters and constant factors do not. For instance, the running times 8n^3 and 100n^3 count as the same, and so do 8n^3 + n and 8n^3 + n^2 + n, since each pair shares the same highest-order term; this relationship is expressed with asymptotic notation, the asymptotic upper bound O(f(n)), and in both examples the running times are in O(n^3). An example of two running times that really are different is n^2 + n versus n^3 + n.
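    A quick numeric sketch of my own shows why constants and lower-order terms wash out: the ratio of two running times with the same highest-order term settles toward a constant, while with different highest-order terms it keeps growing:
for n in [10, 100, 1000, 10000]:
    same = (8 * n ** 3 + n ** 2 + n) / (8 * n ** 3 + n)   # settles toward 1
    diff = (n ** 3 + n) / (n ** 2 + n)                    # keeps growing like n
    print(n, round(same, 5), round(diff, 1))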

     The idea of linear search was also taught in the lecture. Linear search looks for a value by going through a list item by item, and we worked out its running time by counting, step by step, what the code does. So if I want the correct running time of a function, I should understand the code first; then counting its steps becomes very straightforward.
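    Here is a simple linear search with a step counter (a sketch of my own in the spirit of the lecture); the worst case is when x is not in the list at all, which costs about one step per item:
def linear_search(L, x):
    steps = 0
    for i in range(len(L)):
        steps += 1               # one step per comparison
        if L[i] == x:
            return i, steps      # found x at index i
    return -1, steps             # x not in L: the worst case, len(L) steps

print(linear_search([3, 1, 4, 1, 5], 4))   # (2, 3)
print(linear_search([3, 1, 4, 1, 5], 9))   # (-1, 5)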

Monday 20 October 2014

csc165 week6 slog:

    In this week's lecture, a new term, non-boolean function, was introduced to us: a function that returns a non-boolean value instead of True or False. For example, x + 1, 2x and sin(x) are non-boolean functions, while x > 1 and 8x != x are boolean ones. Another new term, "floor", was also used this week; we write "the floor of x" as ⌊x⌋. I think it is really easy to mix up ⌊x⌋ with the absolute value of x, |x|. The floor of x is the largest integer smaller than or equal to x: for example, if x = 3.99 then ⌊x⌋ equals 3, and if x = 8 then ⌊x⌋ is 8. Learning more terminology helps me talk in a more professional way and explain things more easily when I discuss math and computer science with others.

    Then the professor taught us some new proof methods, such as disproving a claim, proof by cases, and proofs about limits. As far as I can tell, to disprove a claim is to prove that its negation is true. For example, if the claim S is "Everyone likes playing basketball", then we disprove it by proving ¬S: "There is someone who does not like playing basketball".
    As for proofs about limits, I think of them as proofs of claims with one or more undecided variables. For example, consider the claim S1: ∀x ∈ R, ∃y ∈ R, ∀z ∈ R, x > y => x > z. The y in S1 is the undecided variable, and we get to pick a value of y (which may depend on x) that makes the claim hold for every x and z; for instance, picking y = x makes x > y false, so the implication holds vacuously. The key point of such proofs is choosing a wise value for the undecided variable, and I find that visualizing the proof with a graph sometimes helps me choose and check the value I pick.