Monday 1 December 2014

Last post: The Halting Problem

Computer science represents something fundamental about the world. We try to solve problems by implementing an algorithm and harnessing the speed of a computer. The question is: can we solve everything in this manner? That is, can every question be solved with an algorithm?

Here is where the idea of halt was taught in class, and it kinda shook my world. To think that halt, a function we can describe perfectly well, cannot be implemented because it would create a paradox!
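
To convince myself, I wrote the classic argument down as (hypothetical) Python. Note that halt here is a made-up stub standing in for the impossible oracle, so this is a sketch of the paradox, not working code:

def halt(f, i):
    # Hypothetical oracle: returns True iff f(i) eventually halts.
    # No such implementation can exist -- that's the whole point.
    raise NotImplementedError

def confused(f):
    # Do the opposite of whatever halt predicts f does when fed itself.
    if halt(f, f):
        while True:     # if halt says f(f) halts, loop forever
            pass
    else:
        return 42       # if halt says f(f) loops, halt immediately

# Does confused(confused) halt? If halt(confused, confused) returns True,
# then confused(confused) loops forever; if it returns False, it halts.
# Either way halt gave the wrong answer, so halt cannot be implemented.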

Finally, this brings us to the idea of countability. Some sets are bigger than other sets, and a set is countable only if it can be mapped to the natural numbers. In this light, the halting problem is an extension of this idea.

The set of solvable problems is smaller than the set of all problems: programs can be counted, but problems cannot, and the halting problem just happens to lie on the other side.
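
One way I convinced myself of the "programs can be counted" half: a program is just a finite string of characters, so we can list all of them in order of length and match the list up with the naturals. A quick sketch (the two-letter alphabet is just a toy assumption):

from itertools import count, product

def enumerate_strings(alphabet="ab"):
    # List every finite string over the alphabet, shortest first.
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = enumerate_strings()
for n in range(7):
    print(n, "<->", repr(next(gen)))   # 0 <-> '', 1 <-> 'a', 2 <-> 'b', ...

The set of all problems (say, all functions from the naturals to {True, False}) cannot be listed this way, so some problems have no program.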

Wednesday 26 November 2014

Big-Oh of non-polynomial functions

I'm back! And today, we're going to be talking about how to apply calculus to solve big-oh notation type questions.

So, how would we prove that n is in O(2^n)?

Let's take a graphical look:

http://gyazo.com/50b4f15c85b4a932c322fb648be480d5

It seems that as x grows larger and larger, 2^x "overpowers" the rate of growth of x. So how can we formalize this "overpowering" factor? The answer comes from calculus. If 2^x grows much faster than x, then as x gets arbitrarily large, 2^x/x should also get arbitrarily large. So in calculus terms, the limit as x -> infinity of (2^x/x) must be infinity.

But that was just intuition; we need a proper definition of "the limit as n -> infinity of some f(n) is infinity", which would be: for all e in the positive reals, there exists an n0 in the naturals such that every natural n greater than or equal to n0 satisfies f(n) > e. In our case, f(n) = 2^n/n.

Side note: if the limit does exist for some f(n) (in other words, the limit as n -> infinity of f(n) = c), then the definition would be: for all e in the positive reals, there exists an n0 in the naturals such that every natural n greater than or equal to n0 satisfies c - e < f(n) < c + e.

While this may seem wordy, it's just saying that as n gets arbitrarily large, f(n) either grows without bound, or it ends up squeezed arbitrarily close to c.

So how does this help us solve something like n is in O(2^n)?

Well, we know that the limit as n -> infinity of n/2^n is 0, which is the same as saying the limit of 2^n/n is infinity.
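
Here's a quick numeric check of that claim (not a proof, just poking at it in Python):

for n in [1, 5, 10, 20, 30]:
    print(n, n / 2**n, 2**n / n)   # first ratio shrinks to 0, second blows up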

We now fill in the parameters of the definition of Big-Oh notation, which I restate below.

g(n) is in O(f(n)) when:

There exists c in the positive reals and B in the naturals such that for all natural n, n >= B implies g(n) <= c*f(n).

So here is the proof that n is in O(2^n):

Let c = 1. # c is ours to pick, since it's existentially quantified

Since we know that the limit is infinity, we can make 2^n/n exceed whatever e we want, so I chose e = c = 1.

Then there exists an n0 in the naturals such that for every natural n >= n0, 2^n/n > 1.
Let B = n0. # B is also ours to pick. This is how we satisfy the breakpoint condition: if some other breakpoint were already in play, we'd take the max of it and n0 instead, so both conditions would hold.
Then B is in the naturals, since n0 is.
Assume n is a natural number and n >= B.
Then n >= n0, so 2^n/n > 1.
Then n < 2^n = c*2^n. # since n > 0 and c = 1
Then for every natural n >= B, n <= c*2^n.
Then there exists c in the positive reals and B in the naturals such that for all natural n, n >= B implies n <= c*2^n.
Then n is in O(2^n).
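
As a sanity check (again, evidence rather than proof), we can ask Python to verify the final inequality for a bunch of values. Here I just try B = 1, since the inequality happens to hold right from the start:

c, B = 1, 1   # the constants from the proof; B plays the role of n0
assert all(n <= c * 2**n for n in range(B, 1000))
print("n <= c * 2**n held for every n checked")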

EDIT:

It's come to my attention recently that a lot of people (judging from the SLOGs) are having trouble with this idea of limits. Simply put: in f(x)/g(x), ask which one increases faster at infinity, f(x) or g(x). If f(x), then the limit goes to infinity. If g(x), then the limit goes to 0. It's as simple as that.

Saturday 15 November 2014

Algorithm analysis

Start of fall break. I will be updating a lot over the next few days as I work on catching up on material. Ever since the second midterm test (I got 15/30), I've realized that I've fallen quite a bit behind in this course. So, during this fall break, I'll be trying my best to catch up. Talk about the procrastination bug striking. Man, that really bites. :\

Today, Saturday, I will be working on learning how to count running time, and I'll document my progress as I try to figure it out.

Linear Search:
def ls(A, x):
1    i = 0
2    while i < len(A):
3        if A[i] == x:
4            return i
5        i = i + 1
6    return -1

After tracing the program through a couple of times, I've figured out the idea behind this. Assume element x is found at position j. What we notice is that line 1 executes exactly once: we NEVER return to it, so the running time of this program (denoted R(ls)) is 1 so far. Then, we notice that lines 2, 3, and 5 execute whenever the element at the current index does not equal x. This contributes 3 steps, but the question is: how many times does it contribute 3?

Do we contribute 3 once? Twice? Infinitely many times? This is why I specifically stated that element x is found at position j. Indices 0, 1, 2, ..., (j-1) each contribute 3, because at those indices the element is not found. But how many indices is that? From 0 to j-1 there are (j-1) - 0 + 1 = j of them! Thus, our R(ls) is now updated to 1 + 3j.

What happens when the element is found? Let's take a look. Line 2 executes because i is still less than the length of the list, so we add one to our R(ls). Now R(ls) = 1 + 3j + 1. Then we check if A[j] == x, counting as one more step: R(ls) = 1 + 3j + 2. This check yields true, because as we stated, the item is found at index j, so we execute the return statement on line 4, adding one last +1 to R(ls) and ending the run of the function.

Our final R(ls) is 1 + 3j + 3 = 3j + 4, so we're done.

BUT WAIT. This only covers the case where we do find the element. What if we don't? Well, LET'S TRACE IT!

We go through line 1 again, only once. However, lines 2, 3, and 5 get run len(A) times, since we loop through the whole list, plus one more step for the final i < len(A) check, and plus one for the return -1 statement.

Thus, if we don't find the item, R(ls) = 1 + 3*len(A) + 2 = 3*len(A) + 3.
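
To double-check both formulas, I wrote an instrumented copy of ls that counts one step per executed line, using the same counting convention as above:

def ls_steps(A, x):
    # Returns (result, steps), counting one step per executed line of ls.
    steps = 1                  # line 1: i = 0
    i = 0
    while True:
        steps += 1             # line 2: the i < len(A) check
        if not i < len(A):
            break
        steps += 1             # line 3: the A[i] == x check
        if A[i] == x:
            steps += 1         # line 4: return i
            return i, steps
        steps += 1             # line 5: i = i + 1
        i = i + 1
    steps += 1                 # line 6: return -1
    return -1, steps

A = [4, 8, 15, 16, 23, 42]
print(ls_steps(A, 15))   # found at j = 2: 1 + 3*2 + 3 = 10 steps
print(ls_steps(A, 99))   # not found: 1 + 3*6 + 2 = 21 steps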

That'll be it for my post for today.

Friday 14 November 2014

Exam feedback + Assignment #1

Exam feedback is here! I did pretty well by my expectations with 34/35. Couldn't have asked for anything better.

In the following weeks, CSC165 focused on proofs and an introduction to Big-Oh notation, and assignment one took place.

For assignment one, I felt that we did pretty well on the logic ideas. I learned a lot of boolean algebra back in high school, so coming here and adapting to formal mathematical language was kind of easy for me. What was truly confusing about assignment 1 was how to draw Venn diagrams for question 3. In a sense, it really tested your understanding of what these sentences mean in terms of where items can or cannot belong.

For proofs, the thing I had the most trouble with was the idea of floor. It never really clicked with me how the second part of the floor definition captures that floor(x) is the biggest such integer. Perhaps someone can explain it to me?

Tuesday 14 October 2014

First Term Test

A great first term test just happened!

My preparation style was to look at last year's term tests, cover up the answers, and go from there. I found this to be the most effective way to prepare for the term test.

As for the course this weekend, proofs seemed relatively straightforward, since I'm taking MAT137, which covers similar material in its first few weeks. If you guys have any questions, feel free to leave a comment here.

Sunday 28 September 2014

Week 2 and 3. What did we learn?

These two weeks, I've learnt that we can mesh together logic and set theory, which seemed mutually exclusive at first. For example, take the following sentences.

a implies b. In set theory, this means that a is a subset of b. This becomes apparent when we consider Venn diagrams: the region of a outside of b must be empty, implying that all of a lies inside b, thus making a a subset of b.

a and b. In set theory, the set that makes this true is the intersection of a and b. This is also apparent through Venn diagrams, and now, with set theory, I can understand more about logic.

a or b. In set theory, like 'and', there is a corresponding operator: union, which contains everything within a or b, or possibly both. This even applies to mutually exclusive (disjoint) sets!
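
A toy check of these correspondences with Python sets (a and b here are made-up example sets):

a = {x for x in range(20) if x % 4 == 0}   # "x is a multiple of 4"
b = {x for x in range(20) if x % 2 == 0}   # "x is even"

print(a.issubset(b))   # True: "multiple of 4 implies even", i.e. a is a subset of b
print(a & b)           # intersection: the set where "a and b" holds
print(a | b)           # union: the set where "a or b" holds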

As I stated in the last post, I found vacuous truths to be one of the more confusing parts of logic. However, using my understanding of when an implication is true, plus set theory, I can verify that a vacuous truth always makes an implication true. To make an implication false, you must find some existing counterexample, and in the case of a vacuous truth, we will never find one! In set theory terms, a vacuous truth corresponds to an empty set, and the empty set is a subset of every other set. That's why a vacuous truth will always be true.
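
Python even agrees: all() over an empty collection returns True, exactly because there is no counterexample to find:

empty = []
print(all(x > 9000 for x in empty))   # True: vacuously, every element is huge
print(set().issubset({1, 2, 3}))      # True: the empty set is a subset of anything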

Finally, De Morgan's laws presented a new way for statements to change form. After doing truth tables, I wondered: what if some expression had the same truth values as an implication, or an 'and' statement? Would they be considered "equivalent"? It turns out that having the same truth values in every row is exactly what equivalence means, and De Morgan's laws (for example, not(a and b) is equivalent to (not a) or (not b)) are prime examples of it.
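
We can brute-force this equivalence over every row of the truth table (here for not(a and b) versus (not a) or (not b)):

from itertools import product

for a, b in product([True, False], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
print("same truth value in all 4 rows, so the two statements are equivalent")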

Lastly, the experiment on Friday with paper folding brought to my attention how interesting it is to approach a problem and solve it just through experimentation. Usually, I'd expect there to be some known manipulation you have to pull before being able to solve it. Now I realize that we have to experiment, play around, and understand a bit more before we can identify the crux move that leads to a solution.

Wednesday 17 September 2014

An Introduction to Mathematical Analysis of Computer Science.

CSC165 First Week



The first few lectures opened up an interesting new way for me to approach computer science. No longer is it just about design and implementation; now there are mathematical, logical, and readability concerns. It has become important for me not only to learn how to program properly, but to analyze programs and understand how and why they work.

The course material we covered in the first few weeks consisted of a lot of logic and quantifiers, the ideas of ambiguity and precision, and even a new heuristic approach to problem solving.

One of the things I did struggle with in the first week was the idea of using a Python expression to evaluate 'all' and 'any' operations. Originally confusing was all(x in s2 for x in s1): to me, it seemed that it was going to loop through each x in s2 and check it against all x in s1, but upon hearing from the professor that the opposite was true, I understood that it loops through each x in s1 and checks whether it is in s2.
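
A small demonstration with made-up sets s1 and s2:

s1 = {1, 2}
s2 = {1, 2, 3}

# Loops over each x in s1 and checks whether that x is in s2:
print(all(x in s2 for x in s1))   # True: every element of s1 is in s2
print(all(x in s1 for x in s2))   # False: 3 is in s2 but not in s1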

Additionally, with logic being introduced into the course, if-then statements took on a different meaning. From the computer science context I drew on to understand if-then statements, I expected an if-then sentence to be true if and only if the antecedent is true, and false otherwise. Learning that this was not the case really rocked my foundations and made me reconsider my ideas. Going back to fundamentals and representing these ideas with set theory helped rebuild and reinforce what was taught in class, and doing so led to success in the tutorials.
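
For reference, the truth table that eventually clicked for me: "p implies q" behaves like (not p) or q, and is only false when p is True and q is False. A tiny loop can print the whole table:

for p in [True, False]:
    for q in [True, False]:
        print(p, q, (not p) or q)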

The first week of class was very rocky because I didn't really know what was being taught and was confused about the concepts. Upon further research, testing examples, and a return to the fundamentals of set theory, I was able not only to understand the material, but to "prove" it to myself.