Sunday, March 30, 2014

Week #11

This week's classes focused on memory models, tracing, the Fibonacci sequence and how it can be implemented in Python, and the consequences of recursion. The idea of memory models (which, prior to taking the course, was just a cool phrase in my opinion) is quite interesting. It's fairly basic, with variables referring to objects and different instances. I was very ill and couldn't attend the test, so I decided instead to read up on everything I had difficulties with.

I found that picturing a memory model in your head is a great way to solve problems involving Binary Search Trees. While working on BST examples in labs, my teammate and I mainly drew out what the results would be, and it really helped in figuring out the final answer.

It has been a very busy semester and I really need to do well on my final exam. I am not as fond of Python as others are, but I see the need to get better at it, and I think this is my last chance.

For this week's SLOG, Danny asked (in addition to our normal entry) to write about sorting algorithms:

Sorting has also been mentioned constantly in class, and even Obama has a popular video online in which he describes bubble sort as the least efficient way to sort a million 32-bit integers (Link to Obama video). It turns out a lot of different sorting algorithms have been developed over the years, each with pros and cons.

Selection Sort: Works by partitioning the list in two, with the front part sorted and the back part containing values that have yet to be sorted. It takes a long time because it repeatedly scans the unsorted section for the next smallest item and places it at the end of the sorted section.
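To make sure I actually understood it, here's my own short sketch of selection sort in Python (not the course's code):

```python
def selection_sort(lst):
    """Sort lst in place: the front part lst[:i] is always sorted,
    and each pass swaps the smallest remaining item into position i."""
    for i in range(len(lst)):
        # find the index of the smallest item in the unsorted back part
        smallest = min(range(i, len(lst)), key=lambda j: lst[j])
        lst[i], lst[smallest] = lst[smallest], lst[i]
    return lst
```

The double scan (one pass per position, one scan per pass) is exactly why it's slow on long lists.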

Quick Sort: Much more of a 'divide and conquer' sorting algorithm, quicksort operates by first dividing a large list into two smaller sub-lists (low elements and high elements) around a pivot, and then using recursion to sort the sub-lists. As its name states, it is a fast sorting algorithm, much more efficient than selection or bubble sort for a long list of integers/items. Hackerrank.com had a great image helping the visualization of the quicksort algorithm.
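Here's a rough sketch of the idea in Python (my own simple version that builds new lists, rather than the in-place partitioning real implementations use):

```python
def quick_sort(lst):
    """Return a new sorted list: pick a pivot, split the rest into
    low and high sub-lists, and recursively sort each side."""
    if len(lst) <= 1:          # base case: nothing left to divide
        return lst
    pivot, rest = lst[0], lst[1:]
    low = [x for x in rest if x < pivot]
    high = [x for x in rest if x >= pivot]
    return quick_sort(low) + [pivot] + quick_sort(high)
```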

Merge Sort: Merge sort, like quicksort, is a comparison-based sorting algorithm that follows the same 'divide and conquer' pattern. It divides the list into n sublists, each containing one element, then repeatedly merges sublists to produce new sorted sublists until only one sublist is left, which is the sorted list. As with the previous sorting algorithms, I prefer to look at a visualization of the sorting instead of just reading about it if I want to learn what exactly it does. The CS department of the University of Alberta has a great visualization in which you can clearly see the original list getting decomposed into multiple ones until it ends up sorted.
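The decomposition-then-merging can be sketched in a few lines of Python (again my own version, not the course's):

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # one of the two slices is empty, so this keeps things sorted
    return result + left[i:] + right[j:]

def merge_sort(lst):
    """Split into halves, sort each half recursively, then merge."""
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))
```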

Efficiency: Describes how the amount of resources an algorithm uses relates to the size of its input. The normal way to estimate the complexity of an algorithm (via analysis) is Big O notation, which represents the complexity of an algorithm as a function of the size of the input n. So efficiency is what compares different algorithms and 'ranks' them from best to worst sorting algorithm depending on the case. There are quite a few examples of Big O notation, such as:
O(log n) - logarithmic - finding an item in a sorted array with a binary search, or in a balanced search tree.
O(n) - linear - finding an item in an unsorted list/array.
O(n log n) - heapsort, mergesort, quicksort (on average), etc.
O(n^2) - quadratic - multiplying two n-digit numbers by a simple algorithm, bubble sort, shell sort, selection sort, insertion sort, quicksort (in the worst case), etc.
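To convince myself about the O(log n) case, I sketched a binary search in Python (my own version): each comparison throws away half of the remaining range, which is where the logarithm comes from.

```python
def binary_search(sorted_lst, target):
    """Return the index of target in sorted_lst, or -1 if absent.
    Each step halves the search range, so this runs in O(log n)."""
    low, high = 0, len(sorted_lst) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_lst[mid] == target:
            return mid
        elif sorted_lst[mid] < target:
            low = mid + 1      # target must be in the right half
        else:
            high = mid - 1     # target must be in the left half
    return -1
```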

Interesting SLOG of the week:
http://compsci148life.blogspot.ca/

Not a lot of posts, but I really liked the author's definitions of the sorting algorithms. They were precise (even though quite long), and I felt they covered the whole spectrum in terms of defining exactly what the different sorting algorithms are about.

Sunday, March 23, 2014

Week #10

This week was spent on revision; the test is coming up on Wednesday. I am working on an aid sheet full of help for different parts of the syllabus. However, I am not feeling well and haven't been for the past few days, and I hope I'll be able to attend.

I finished the assignment right on time. It wasn't perfect; actually, it had quite a few hiccups, especially when it came to regex_match and is_regex. My code looked right to me, but I had an issue spitting out the list of valid regexes depending on the input. I got very, very frustrated with it, but I had to upload it as time was running out. I've worked on every assignment and exercise by myself this year, and I don't think that was the best of ideas. I think I've figured out one of the most essential parts of programming: asking for help. Every start-up has a group of people; hell, even our tutorials focus on working in pairs. Next year's programming class will definitely push me towards working in groups, and I am looking forward to it.

This week's readings focused on sorting, but the first lecture focused on running-time analysis, with big-Oh and logarithmic questions coming to haunt me from my other classes into this one. I am so used to using logs in math that seeing them in a programming course takes me aback a little, but I am excited to see how to use old knowledge to grasp new material.

The Big-Oh hierarchy sounded hard when first looking through the slides, but the annotated ones were more useful, with Danny summing it up nicely in this way:

The rest was all about sorting, which I found to be fairly simple when explained properly. It makes sense, way more sense than anything containing recursion.
While going through the different sorting methods I tried to find ways to remember some of them. I also couldn't help but notice bubble sort being listed; I know it's an obvious one, but I can't help remembering when Barack Obama mentioned how inefficient it was at a talk.

I was hoping the tutorial would be about sorting, but it was focused on linked-list traversal, with the task being to complete:

def inorder(root: BTNode) -> (LLNode, LLNode):
"""Return the first and last nodes in a linked list that contains every
value from the binary tree rooted at root, listed according to an inorder
traversal.
"""

and then to write the code for longest, which took the root of a binary tree as an argument and returned the first node in a linked list containing every value on a longest path from the root of the binary tree to one of its leaves. It was quite challenging, although with the help of my teammate and the TA it wasn't that big an issue; I just hope I'll remember it all when it comes to the test.
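After the tutorial I tried to reconstruct the inorder function on my own. Here's my rough sketch, with minimal stand-in BTNode and LLNode classes since I don't have the exact course definitions in front of me:

```python
class BTNode:
    """Minimal binary tree node (stand-in for the course's BTNode)."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

class LLNode:
    """Minimal linked list node (stand-in for the course's LLNode)."""
    def __init__(self, value, nxt=None):
        self.value, self.nxt = value, nxt

def inorder(root):
    """Return the (first, last) nodes of a linked list containing every
    value from the tree rooted at root, in inorder."""
    node = LLNode(root.value)
    first = last = node
    if root.left:
        # the left subtree's list comes before this node
        left_first, left_last = inorder(root.left)
        left_last.nxt = node
        first = left_first
    if root.right:
        # the right subtree's list comes after this node
        right_first, right_last = inorder(root.right)
        node.nxt = right_first
        last = right_last
    return first, last
```

Returning both ends of each sub-list is what makes the stitching constant-time at every node.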

Interesting SLOG of the week:
http://dirtystevecsc.blogspot.ca/

His post for Week 10 was fairly similar to mine, and that's why it stood out to me. He talked about how he felt the questions in the course were easy, but the lack of trial and error that comes with written tests was too much of a burden. I agree with him completely; Computer Science tests lack a major component in my opinion: the computer part. Like Steve said (I'm assuming his name is Steve based on the dirty-steve-csc blog name), the tests feel a lot more like trying to recall confusing syntax.

Sunday, March 16, 2014

Week #9

While last week's focus was linked lists, this week's was trees, binary search trees to be more precise. They may seem simple at first, with just a simple rule: for any node, say n, if element x is smaller than n's value it should be attached on the left and become the 'left child', while if it's greater than n it is placed on the right as the 'right child'. But beyond the simple cases there can be quite a spike in difficulty when the binary search tree has multiple levels of children, with the difficulty coming from questions such as where a child goes when it's smaller than some ancestors and larger than others, and they all intertwine.
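Writing the insertion rule out as code helped me see that the multi-level case is just the simple rule applied recursively. Here's my own sketch (attribute names are my guesses, not the course's node class):

```python
class BSTNode:
    """Minimal binary search tree node."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def insert(node, value):
    """Insert value into the BST rooted at node and return the root.
    Smaller values descend left, larger (or equal) descend right,
    so the comparison at each level decides the path for us."""
    if node is None:
        return BSTNode(value)
    if value < node.value:
        node.left = insert(node.left, value)
    else:
        node.right = insert(node.right, value)
    return node
```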

The tree representation I posted from 'How to Think Like a Computer Scientist' shows how each node is connected to a left node and a right node, with each node having an associated value, or cargo.

My main challenge with binary search trees came from replacing values in the trees and from how recursion in general is used in tree traversal. The tree-traversal chapter also covered a topic that will come in very handy for the assignment due next week on regular expressions, as it went over how to build expression trees.

I haven't really been able to look for another SLOG that caught my eye, due to my focus on finishing the second assignment.

Sunday, March 9, 2014

Week #8

This week was all about linked lists and nodes. I felt relatively calm about the whole topic, as it basically sounded like a train, with the end of each car connected to the next one via the nxt variable.

class LListNode:
    """Node to be used in a linked list"""

    def __init__(self: 'LListNode', value: object = None,
                 nxt: 'LListNode' = None) -> None:
        """Create a new LListNode"""

        self.value, self.nxt = value, nxt

    def __repr__(self: 'LListNode') -> str:
        """String representation of this LListNode"""

        return 'LListNode(' + str(self.value) + ', ' + str(self.nxt) + ')'

For each node created, there is a link to the next node (self.nxt), which for me is best understood as the end of one train car attached to the beginning of the next. Danny told us in a slide during lecture that if we omit __str__, Python will use __repr__ instead to convert objects to strings. We also learned that we could use LListNode to re-implement stacks.

I found the first part of Assignment 2 quite confusing. I think the ways to tackle the problems were so varied that it left me a little lost looking for the right way to implement all the functions.


Interesting SLOG for this week:

http://lavendercsc148.blogspot.ca/

This is my second time using this blog for comments, but it really is just that good. This week Lavender focused on the second part of Assignment 2, which I wish I had read prior to submitting that assignment. Lavender goes in depth describing each case and how to handle it, which left me fairly speechless, as the code was simple yet extremely effective; I can only wish I had written it as such. She goes over each case of the regex_match code, with each case being a different regex symbol.

Sunday, March 2, 2014

Week #7

This week, my main focus was getting ready for the midterm. I knew I had to finish the first part of Assignment 2 for the following week, but first, the midterm. I studied a lot less than I would have wanted to during reading week, as I honestly focused on way too many different classes. The pain of being in six courses.

This week we were given an assigned topic, recursion. Recursion is a way of programming in which a function's definition uses the function itself, so the function is applied within its own definition. Recursion consists of a base case, which handles the smallest inputs directly (e.g. if n <= 1: return 1), and a recursive step, which breaks larger inputs down into those smaller cases.
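The classic tiny example of a base case plus a recursive step is the factorial (my own sketch):

```python
def factorial(n):
    """Compute n! recursively."""
    if n <= 1:                        # base case: smallest input
        return 1
    return n * factorial(n - 1)       # recursive step: shrink the problem
```

Each call peels one factor off and hands the rest to a smaller version of the same problem, until the base case stops the descent.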

Recursion has been (as is quite evident from my other SLOGs) quite difficult for me to apply. I grasp everything about it: it is super useful as it can break down problems that need deeper comprehension, lists of lists come to mind, and I know I need to create a case that applies to the smallest element being iterated on. The only issue I have is the following part, where you just let go and let the code run itself. I am getting better at it, though, and hope to be a pro by the end of the course.

Sunday, February 16, 2014

Week #6

This week I finally grasped the full extent of how useful TreeLists can be. I was reading over treelist.py, which was posted last week. I knew it might be on the midterm during the week, so I read it over, and fell against an old enemy of mine, recursion:

def inorder(tl: 'TreeList') -> list:
    """
    Return a list of values of tl inorder

    >>> inorder([5, [4, None, None], [3, [2, None, None], [1, None, None]]])
    [4, 5, 2, 3, 1]
    """

    return ((inorder(tl[1]) if tl[1] else []) +
            [tl[0]] +
            (inorder(tl[2]) if tl[2] else []))

The return statement basically combines a recursive call on the left subtree, the root's value, and a recursive call on the right subtree, in that order. It took me a long time to understand; I got the basic idea of inorder visiting each element, but I was very confused as to why exactly this happens. I wrote it down as a question to hopefully ask Danny during office hours, or a TA, or a classmate.

I learned something new in the form of TreeLists. I knew what they were and could visualize them, but writing them down as code was a different challenge. Using posted code and resources (such as Coding Like a Pythonista) I was able to get a full grasp. I also have a bunch of YouTube videos just waiting to be watched that explain it in different ways. I get the general aspects of it; I struggle, however, with its implementation with recursion.

Recursion keeps on challenging and frustrating me. I feel like I am a very logical person; I love understanding the nooks and hidden aspects of things. However, the logic of taking a 'leap of faith' has me all mixed up. I hope to get it soon, and I won't give up on it.

Interesting SLOG of the week:

http://lavendercsc148.blogspot.ca/


For Week #6, Lavender broke down Assignment 2, which was due the previous week, with this blog entry covering the first part (part 2 is discussed in my Week 7 SLOG). While part 2 focuses on the regex_match code, this week is all about the __init__ required for the assignment. She talked about where she had difficulties, which coincided with mine; however, she found a better solution (lucky me!!! (<- really hard to convey sarcasm over the internet)) and used a helper function, which I should have done...

Sunday, February 9, 2014

Week #5

This week I learned about nested-list recursion and how to implement it inside methods. I also struggled with the first assignment: I kept getting an error that would only show in IDLE whenever I executed the PyGame module, stopping me from seeing any of the listed towers of cheese. I spent a lot of time going back to old notes to get a fresher point of view for the assignment. I am also preparing for the midterm at this point.
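The nested-list pattern clicked for me with a small example like this (my own sketch): check each item, and if it's itself a list, recurse into it.

```python
def nested_sum(lst):
    """Sum all the numbers in a possibly nested list of numbers."""
    total = 0
    for item in lst:
        if isinstance(item, list):
            total += nested_sum(item)   # recursive step: dig into sublists
        else:
            total += item               # base case: a plain number
    return total
```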

Recursion has been my greatest challenge; it took me a very long time (and I still struggle with it) to grasp its more difficult uses. I understand the concept very well and am able to solve simple methods easily, but I need to deepen my understanding of how exactly recursion works.


Interesting SLOG of the week:

http://veraslog.blogspot.ca/

Quite a gem of a blog I found this week, with each entry being bulky (not in a bad way) and rich in detail. Vera (I'm assuming) answered the question Danny assigned us about recursion and OOP (object-oriented programming). He/she wrote such a great definition of recursion that I need to post it here:

"Recursion is an unusually simple but difficult concept to grasp. It’s an extremely efficient way to handle certain situations but is often neglected or overlooked. My programming teacher in high school always said “You either understand recursion or you don’t. Usually it just hits you one day and you get it”. The “What is recursion?” question is regularly asked by programmers. If you search on Google, you get a variety of answers ranging from “Recursion is a method where the solution to a problem depends on solutions to smaller instances of the same problem” or “Recursion is the process of defining a problem in terms of itself”. In other words, recursion is a function that calls itself a number of times.  It is important because it allows programmers to efficiently solve a mathematical model/algorithm that has definite boundaries/parameters. There are a variety of examples where recursion would prove handy such as; traversing trees, modelling mathematical formulas such as factorials (our Towers of Hanoi algorithm too!), fractals, etc. Recursive functions are usually easier to read and understand than solving with loops. Other benefits are that it’s simple, uncomplicated, and time-efficient as opposed to loops."