Sunday, April 5, 2015

Last Impression

      It's the last week of CSC148, and it's been a pleasure to be in this course.
      We did three assignments and two term tests this semester, and all of them went fine. We covered classes at the beginning, recursion after that, and big O at the end.
      The classes unit included subclasses, and in particular functions implemented in a general class that depend on behaviour supplied by the subclasses. That was what we did in A1, where we implemented the game Subtract Square.
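      To make that concrete, here is a minimal sketch of the idea, a method written once in a general class that relies on a method each subclass must supply. The class and method names are illustrative, not the actual A1 code:

```python
class GameState:
    """General class for a two-player game state."""

    def possible_moves(self):
        # Each subclass must supply its own version of this.
        raise NotImplementedError

    def is_over(self):
        # Written once here, but it depends on the subclass's
        # possible_moves: the game is over when no move is left.
        return self.possible_moves() == []


class SubtractSquareState(GameState):
    """Subtract Square: on each turn, subtract a perfect square."""

    def __init__(self, value):
        self.value = value

    def possible_moves(self):
        return [i * i for i in range(1, int(self.value ** 0.5) + 1)]
```

      In this shape, is_over works for any game, as long as the subclass defines its own possible_moves.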
      Recursion is mainly a function calling itself, and we did a lot of exercises on it. In A2, we used it in Minimax.
      Big O was the last thing we talked about, and only briefly. We touched on it in CSC165: finding an upper bound of a function, which here means the function describing a method's running time. It helps us make functions run faster, and that was A3: perfecting Minimax.
      In general, this course went well, although the strike was kind of a disruption. Despite the cancelled labs, the course ran smoothly, and Danny and Diane did their best.

Sunday, March 29, 2015

Impression of Week 10

      The key point this week was efficiency.
      I remember we did two things: how to check efficiently whether an element is in a list, and how to sort a list efficiently.
      We all agreed that the efficient way to check whether an element is in a sorted list is rather like searching a binary tree. If the value we are looking for is greater than the middle value of the list, check the right half of the list; otherwise check the left half. This is a recursive function: we keep checking the middle value of the right or left half until we find the value or run out of list. In this way, we reduce the running time from linear to logarithmic, which is a huge improvement for a really large list.
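      The search described above can be sketched like this. This is my own sketch, not the lecture code; note that the slices make copies, so a version that passes indices instead would avoid that extra cost:

```python
def binary_search(L, value):
    """Return True iff value is in the sorted list L."""
    if len(L) == 0:
        return False
    mid = len(L) // 2
    if L[mid] == value:
        return True
    elif value > L[mid]:
        # value can only be in the right half
        return binary_search(L[mid + 1:], value)
    else:
        # value can only be in the left half
        return binary_search(L[:mid], value)
```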
      But the function above needs a sorted list, which leads to our second problem: sorting a list. We considered the sorting algorithms we already knew: bubble, insertion and selection. Although they differ slightly in running time, they are all proportional to the square of the length of the list. What we can do is reduce one factor of the length to lg of the length. Here we introduce a function called quick sort.
def quick_sort(L):
    ''' (list) -> list

    Return a new list with the same elements as L in
    ascending order.

    >>> L = []
    >>> quick_sort(L)
    []
    >>> L = [17]
    >>> quick_sort(L)
    [17]
    >>> L = [3, 1, 2]
    >>> quick_sort(L)
    [1, 2, 3]
    '''
    if len(L) < 2:
        # copy of L
        return L[:]
    else:
        return (quick_sort([i for i in L if i < L[0]]) +
                [L[0]] +
                quick_sort([i for i in L[1:] if i >= L[0]]))
      Tracing one example: in each step, this function divides the list into three parts: the first element, a left part with all the values less than the first, and a right part with all the values greater than or equal to it. In each step we go through all the values, so the number of iterations per step is proportional to the length of the list, while the number of steps is usually about proportional to lg(length). So the running time of the whole function is proportional to length * lg(length). For a really large list, this function saves a lot of time.
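      One way to check the claim about the number of steps is to measure the recursion depth directly. This is my own sketch, mirroring quick_sort's partition: on a nicely balanced list the depth is about lg(length), but on an already sorted list the first-element pivot makes the depth equal to the length, which is quick sort's worst case:

```python
def quick_sort_depth(L):
    """Return the recursion depth quick_sort would reach on L."""
    if len(L) < 2:
        # base case: no further recursive calls
        return 1
    smaller = [i for i in L if i < L[0]]
    larger = [i for i in L[1:] if i >= L[0]]
    return 1 + max(quick_sort_depth(smaller), quick_sort_depth(larger))
```

      For example, quick_sort_depth(list(range(16))) is 16, while a balanced ordering of the same sixteen values reaches depth 4.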

Revisit

      I looked over my slogs this week and found there's not much to talk about.
      I'll talk about the first slog, the one on why geeks should write. That one was five minutes' work, and I did not realize it would be marked so strictly. I did not think about it very critically.
      Geeks write to record and to communicate; that was my claim in the first slog. Now I think I need one more: writing helps geeks think. I got that thought while doing my slog. Writing helps me clear my thoughts and get organized. While writing, I found out which things I knew and which I didn't, as I went over everything in my mind. Writing about things I knew helped me review; writing about things I didn't know made me learn.
      That's how I feel about my slog. During the strike, my slog was the only extra thing besides the lectures. I wrote about things that happened in the lectures, and I gained a better understanding of the course.

Sunday, March 22, 2015

Impression of Week 9

      We did some testing of Minimax.
      We were given three SubtractSquareStates and wanted to know what outcome of our Minimax would suggest the function was correct. For states with current value 9, 10 and 11, the outcomes are all win or all lose. From those we cannot conclude whether our Minimax is working, so we want a state with fewer winning moves to test our code on. In the unit test for Minimax, Danny used a state whose current value is 25; that state has only one winning move. If that winning move is returned, which is very unlikely for a random strategy, our Minimax is very likely correct.
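      That property of 25 can be checked with a small solver. This is my own sketch of the game logic, not Danny's unit test, and it assumes the usual Subtract Square rules: on your turn you subtract a perfect square, and whoever faces 0 has no move and loses. Under those rules, 25 indeed has exactly one winning move, subtracting 25 itself:

```python
def wins(n):
    """Return True iff the player to move at value n can force a win."""
    return any(not wins(n - i * i) for i in range(1, int(n ** 0.5) + 1))


def winning_moves(n):
    """Return every perfect square whose subtraction leads to a win."""
    return [i * i for i in range(1, int(n ** 0.5) + 1)
            if not wins(n - i * i)]
```

      The other moves from 25 (to 24, 21, 16 and 9) all leave the opponent in a winning position, which is why a random strategy almost never finds the right move there.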
      Later in that week, we saw some other functions.
      N = 5
      M = 2

      def f(N):
          return M * g(N + 1)

      def g(N):
          return M * N

      def h(N):
          return f(g(N))
      What we learned from them was which definition a name refers to when it is defined more than once. The rule is that Python uses the definition in the nearest enclosing scope: local first, then enclosing functions, then global, then built-ins. In f, for example, N is f's parameter, while M comes from the global scope.
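      Running the snippet confirms the rule. Tracing an assumed call h(3): g(3) uses the global M = 2 to give 6, then f(6) returns M * g(7) = 2 * 14 = 28. The global N = 5 never matters inside the functions, because each parameter N shadows it:

```python
N = 5
M = 2


def f(N):
    # this N is f's parameter; M is found in the global scope
    return M * g(N + 1)


def g(N):
    # this N is g's own parameter, unrelated to f's N
    return M * N


def h(N):
    return f(g(N))


print(h(3))  # g(3) = 6, then f(6) = 2 * g(7) = 2 * 14 = 28
```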

Saturday, March 14, 2015

Impression of Week 8

      We didn't do much in lecture this week because it was a test week. We had a test on Wednesday, and it wasn't as bad as I thought it would be; it was actually pretty easy.
      In Monday's lecture, all we did was one function: deleting a node with certain data from a BST. In this function we have several things to consider: what if the node is None; how to use the BST property to avoid unnecessary traversal; whether the node is a leaf or an internal node; and what we should return.
      So, when implementing the function, we should take all of these into consideration. As Danny suggested, we should have a list of all the possible situations:

def delete(node, data):
    ''' (BTNode, data) -> BTNode

    Delete, if it exists, node with data and return resulting tree.

    >>> b = BTNode(8)
    >>> b = insert(b, 4)
    >>> b = insert(b, 2)
    >>> b = insert(b, 6)
    >>> b = insert(b, 12)
    >>> b = insert(b, 14)
    >>> b = insert(b, 10)
    >>> b = delete(b, 12)
    >>> print(b)
            14
        10
    8
            6
        4
            2
    <BLANKLINE>
    '''
    # Algorithm for delete:
    # 1. If this node is None, return that
    # 2. If data is less than node.data, delete it from left child and
    #     return this node
    # 3. If data is more than node.data, delete it from right child
    #     and return this node
    # 4. If node with data has fewer than two children,
    #     and you know one is None, return the other one
    # 5. If node with data has two non-None children,
    #     replace data with that of its largest child in the left subtree,
    #     and delete that child, and return this node
    return_node = node
    if node is None:
        pass
    elif node.data > data:
        node.left = delete(node.left, data)
    elif node.data < data:
        node.right = delete(node.right, data)
    elif node.left is None:
        return_node = node.right
    elif node.right is None:
        return_node = node.left
    else:
        node.data = _find_max(node.left).data
        node.left = delete(node.left, node.data)
    return return_node

      Making a to-do list and following it is a good way to be thorough and organized.
     

Sunday, March 8, 2015

Impression of Week 7

      LinkedList was introduced to us this week, and we are using it to practise iteration and mutation.
      The way to solve LinkedList problems is iteration: we go over the nodes in the linked list until we find what we want, then we make mutations.
      For example, the __contains__ method we did in the lecture:
    def __contains__(self, value):
        ''' (LinkedList, object) -> bool

        Return whether LinkedList (self) contains value.

        >>> lnk = LinkedList()
        >>> lnk.prepend(0)
        >>> lnk.prepend(1)
        >>> lnk.prepend(2)
        >>> lnk.__contains__(1)
        True
        >>> lnk.__contains__(3)
        False
        '''
        cur_node = self.front
        while cur_node:
            if cur_node.value == value:
                return True
            cur_node = cur_node.nxt
        return False
      In this example, we are checking whether value is in a linked list. Our idea was to go over all the nodes in the linked list and check each one to see whether it holds value. We store the node we are checking in a variable called cur_node, and as we go deeper into the linked list, cur_node changes. If cur_node's value equals value, we return True; if not, we set cur_node to the next node and keep checking. If we go over all the nodes and nothing is returned, that means the value is not in the linked list, so we return False.
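      The surrounding class is not shown above. A minimal sketch of a LinkedList that makes the method runnable might look like this; the node attribute names value and nxt are assumed from the method body:

```python
class LinkedListNode:
    """One node of a singly linked list."""

    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt


class LinkedList:
    """Singly linked list keeping a reference to its front node."""

    def __init__(self):
        self.front = None

    def prepend(self, value):
        # the new node becomes the front, pointing at the old front
        self.front = LinkedListNode(value, self.front)

    def __contains__(self, value):
        cur_node = self.front
        while cur_node:
            if cur_node.value == value:
                return True
            cur_node = cur_node.nxt
        return False
```

      Because __contains__ is a special method, Python's in operator uses it automatically, so 1 in lnk works as well as lnk.__contains__(1).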

Sunday, March 1, 2015

Recursion

      Recursion is something new that we just learned in class. It is somewhat different from what we already knew about Python.
      Recursion is used in functions to solve problems with an unknown number of layers, problems that cannot be solved simply by for or while loops. One example is checking the elements of arbitrarily nested lists, which is the main problem we've been solving with recursion so far. Recursion and loops do the same job, repeating some lines of code, but the difference is that a recursive function calls itself inside its own body.
      This is an example of recursion:
    def sum_list(L):
        '''
        (arbitrarily nested list of int or int) -> int
        Return the sum of all ints in L.
        >>> sum_list([1, [2, 3], [4, 5, [6, 7], 8]])
        36
        '''
        if isinstance(L, list):
            return sum([sum_list(x) for x in L])
        else:
            return L
      This function sums up all the elements in the list. The recursive idea is that we keep looking inside nested lists until there are no nested lists left. As we can see, the function contains an if statement, and that if statement is important when we are writing a recursion. How do we do that?
      First, we need a base case, a case that can be handled without recursing. The base case is also the thing we are looking for with the recursion; here, it is when L is an int, so we don't need to look inside any list, and there is nothing to sum up, so we just return that int. That gives us the first part of the code. Then we consider the other case: if L is a list, we should check the elements of L. If an element is an int, we return it; if it is a list, we keep checking the elements inside it. This is where we need recursion: we call the function on every element of L so that it does the same job one level down, then use a list comprehension to collect the results and sum them up. The recursion basically looks like this: 'if base case: return directly; else: recurse'.
      When running this function on the doctest, the process should be as follows:
      sum_list([1, [2, 3], [4, 5, [6, 7], 8]])
-> sum([sum_list(1), sum_list([2, 3]), sum_list([4, 5, [6, 7], 8])])
-> sum([1, sum([sum_list(2), sum_list(3)]), sum([sum_list(4), sum_list(5), sum_list([6, 7]), sum_list(8)])])
-> sum([1, sum([2, 3]), sum([4, 5, sum([sum_list(6), sum_list(7)]), 8])])
-> sum([1, 5, sum([4, 5, sum([6, 7]), 8])])
-> sum([1, 5, sum([4, 5, 13, 8])])
-> sum([1, 5, 30])
-> 36
      Basically, what recursion does is stop and return something when we find what we need, an int in this example; if not, we keep checking deeper.
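      The same 'if base case, else recurse' shape works for other questions about nested lists. For example, here is a sketch of a function measuring how deeply a list is nested (my own example, not from lecture):

```python
def depth(L):
    """Return the nesting depth of L: 0 for an int,
    1 for a flat list, 2 for a list of lists, and so on."""
    if isinstance(L, list):
        # the + [0] handles an empty list, which has nothing to recurse on
        return 1 + max([depth(x) for x in L] + [0])
    else:
        # base case: an int has no nesting at all
        return 0
```

      On the doctest list from sum_list, depth([1, [2, 3], [4, 5, [6, 7], 8]]) returns 3, since [6, 7] sits three lists deep.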