Thursday, November 28, 2013

Memoization

Lastly, we discussed redundancy in recursive programming. Using the Fibonacci function as an example, we covered how redundant recursive calls can greatly slow down the Python interpreter, causing output to be returned only after an extremely long time. Avoiding redundancy in recursive programming was another new concept to me. I had never thought about structuring recursive code so that there is no need to perform redundant calls. Storing every recursive call and its result in a dictionary (so that if a call is already in the dictionary, there is no need to perform it again) can greatly improve the efficiency of code, both performance-wise and time-wise.
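A minimal sketch of what I mean, in Python (my own example, not the lecture's exact code): each Fibonacci value is added to the dictionary the first time it is computed, so no call is ever repeated.

```python
def fib(n, memo={0: 0, 1: 1}):
    """Memoized Fibonacci: look the answer up in the dictionary first,
    and only recurse (once per n) if it is not already there."""
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(30))  # 832040, near-instant; the naive version makes over a million calls
```

Without the dictionary, fib(30) would recompute fib(28) twice, fib(27) three times, and so on, which is exactly the redundancy discussed above.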

Tracing

Unlike scopes and namespaces, tracing was a concept I was quite familiar with. Tracing is going over a piece of code and figuring out, or seeing, what results will be outputted or returned. In my high school computer courses, we often had to do a lot of tracing for various programs, so it is not a challenge for me to trace code, even if the code is recursive. I found it interesting that during the lecture, Professor Heap did not discuss the use of trace tables. Having to fill out trace tables was very common for me back in high school; I guess they are not as common, or at least not as used, in computer science courses here at the University of Toronto.
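Here is a quick example of how I trace recursive code, using a small function of my own invention (the trace is written out as comments in place of a trace table):

```python
def mystery(n):
    """Return the sum 0 + 1 + ... + n."""
    if n == 0:
        return 0
    return n + mystery(n - 1)

# Trace of mystery(3):
#   mystery(3) -> 3 + mystery(2)
#   mystery(2) -> 2 + mystery(1)
#   mystery(1) -> 1 + mystery(0)
#   mystery(0) -> 0          (base case)
# Unwinding: mystery(1) = 1, mystery(2) = 3, mystery(3) = 6
print(mystery(3))  # 6
```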

Scopes and Namespaces

This week, Professor Heap introduced scopes and namespaces. During my programming, I would often run into problems where the interpreter could not locate the "global" name of a variable, or a variable was local and therefore unusable outside the scope of its function. The "global", "local", and "nonlocal" scopes for variable names were concepts I had never really touched upon before. With the global and nonlocal keywords, the scope in which a name can be used and reassigned can easily be changed. If I ever had to modify a variable outside of its function, the global keyword would certainly be useful. Namespaces proved to be extremely useful programming tools.
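A small sketch of both keywords (the function names are my own): global lets a function reassign a module-level name, while nonlocal lets an inner function reassign a variable in its enclosing function.

```python
counter = 0

def increment():
    global counter  # reassign the module-level name, not a new local one
    counter += 1

def make_accumulator():
    total = 0
    def add(x):
        nonlocal total  # reassign the enclosing function's variable
        total += x
        return total
    return add

increment()
acc = make_accumulator()
acc(5)
print(counter, acc(2))  # 1 7
```

Without the global line, counter += 1 inside increment() would raise an UnboundLocalError, which is exactly the kind of problem I kept running into.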

Sunday, November 24, 2013

Hiding Attributes, Equality and Reduce

The last two lectures (November 18 and 20) consisted of material I have covered in the past, as well as material I had never touched on. Hiding attributes was a concept I learned back in high school, so the reasoning for why one would hide attributes was not new to me. The main thing I learned during those lectures was how to hide attributes in Python: by putting an underscore at the start of the variable name (i.e. self._char instead of self.char) and by using the built-in function property(), which can make an attribute read-only or control how it is rewritten. I found property() to be extremely useful for protecting important attributes from code that should not change a class's internals.
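A minimal sketch of both ideas together, using the self._char example from above (the class name is my own):

```python
class Letter:
    def __init__(self, char):
        self._char = char  # leading underscore: treat as internal

    @property
    def char(self):
        """Read-only access to the hidden attribute."""
        return self._char

a = Letter('a')
print(a.char)    # 'a' — reading works through the property
# a.char = 'b'   # would raise AttributeError, since no setter is defined
```

Because only a getter is defined, the attribute is read-only from the outside; adding a @char.setter method would make it rewritable again, which is the choice property() gives the programmer.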

Equality, otherwise known as __eq__ in Python, is used to change what the notion of equivalence means. By default, the equality operator == only evaluates to True for two objects of a user-defined class if they are the same object, meaning that they are stored at the same memory location. The special method __eq__ allows the programmer to customize == so that two objects count as equal whenever their values are equivalent, rather than only when they are the same object. This way, True is returned whenever two objects are equivalent to one another, even if they are not the same object! The same kind of customization applies to other operators, such as changing what '+' means or what '*' means, making the concept of customizing operators quite useful for programmers.
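A short example of my own showing the difference between "same object" and "equivalent values":

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        # Equivalent coordinates count as equal, even for distinct objects.
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

p = Point(1, 2)
q = Point(1, 2)
print(p is q, p == q)  # False True: different objects, equivalent values
```

Without the __eq__ method, p == q would be False here, since p and q live at different memory locations.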

Reduce() is a function that reduces a list or another iterable to a single value. As Professor Heap stated during the lecture, say there is a list of integer values and I wanted to add those values together. The function reduce() takes in three arguments (in this case, the function that does the actual addition of values, the list, and the initial value that would be returned if the list were empty, which is 0) and reduces the list to a single value; that value would be the total sum of the numbers in the list. It is a confusing concept to grasp, and I am still working out exactly how it works. The best way I look at the concept of reduce() is through the built-in functions sum(), max() and min(). These functions are, in a way, reduce() functions; they iterate over a list or iterable and reduce it to a single value (sum() reducing the list to its sum, max() to its maximum value, and min() to its minimum value).
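The sum example from the lecture looks roughly like this (in Python 3, reduce() lives in the functools module rather than being a built-in):

```python
from functools import reduce

nums = [1, 2, 3, 4]
# Arguments: the combining function, the iterable, and the initial value
# (also what is returned if the iterable is empty).
total = reduce(lambda acc, x: acc + x, nums, 0)
print(total)                                  # 10, same as sum(nums)
print(reduce(lambda acc, x: acc + x, [], 0))  # 0, the initial value
```

Swapping the lambda for lambda acc, x: max(acc, x) would make this behave like max(), which is why I find the comparison to sum(), max() and min() helpful.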

Term Test #2

On November 13, we had our second term test. It consisted of both previous topics, such as recursion, and new topics, such as binary search trees and linked lists. Because this was my second term test, I had a better understanding of how university-level computer science tests work and the level of difficulty they are at. However, recursion still proved to be a force to be reckoned with. I feel as though being under the pressure of a test causes me to overthink and miss the smallest and most obvious details. I aim to be better prepared and calmer for the upcoming final exam.

Friday, November 8, 2013

Big O Notation and Sorting Algorithms

Big O notation is how programmers describe the efficiency of a search or sorting algorithm. The efficiency is, as we have seen during lectures, based on how long an algorithm takes to search or sort through items as the number of items, n, grows. For example, searching a balanced binary search tree takes O(log n) time, while selection sort and the worst case of quicksort take O(n squared) time. Merge sort runs in O(n log n) time: the list of items is split into two groups, those two groups are split into two, and so on (giving log n levels of splitting), and merging the pieces back together takes O(n) work at each level. Much like with linked lists and binary trees, big O notation and sorting algorithms were concepts I studied in high school, so they were not something I had to learn and understand from the beginning.
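A sketch of merge sort that makes the two halves of the cost visible (my own code, not the lecture's): the recursive splitting gives the log n levels, and the while loop does the O(n) merging at each level.

```python
def merge_sort(lst):
    """Sort lst in O(n log n): split in halves, sort each, merge them."""
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])
    right = merge_sort(lst[mid:])
    # Merge the two sorted halves in linear time.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9]))  # [1, 2, 5, 8, 9]
```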

Binary Search Trees

A binary search tree is a particular binary tree whose elements are stored according to a less-than/greater-than rule: the item we want to insert into the tree will go either to the left of the root or to the right of the root depending on its value. For example, if the root of the tree is 5 and the item we want to insert is 10, then 10 is inserted into the right subtree, since 10 is greater than 5. If the item we want to insert is 2, then it is inserted into the left subtree, since 2 is less than 5. Deleting a node from a binary search tree requires that, if the node has two children, it is replaced by the largest node in its left subtree (its in-order predecessor); if the node has only one child, then that child replaces it; if the node has no children, however, then it is as simple as removing it from the tree. Much like linked lists, binary search trees were concepts I studied in high school, so most of this was review to me. However, I did gain a better understanding of the concept through the lectures.
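The insertion rule above can be sketched in a few lines of Python (the class and function names are my own, not the course's code):

```python
class BSTNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(node, value):
    """Insert value using the less-than / greater-than rule:
    smaller values go into the left subtree, larger into the right."""
    if node is None:
        return BSTNode(value)
    if value < node.value:
        node.left = insert(node.left, value)
    else:
        node.right = insert(node.right, value)
    return node

root = insert(None, 5)
insert(root, 10)  # 10 > 5, so it goes to the right of the root
insert(root, 2)   # 2 < 5, so it goes to the left of the root
print(root.left.value, root.value, root.right.value)  # 2 5 10
```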