Place these files in a lab7 folder. Before leaving lab, zip up the lab7 folder and hand the zip file in.
The correct operation of a hash table relies on a hash function that maps keys (in our case, these are strings) to non-negative integers that indicate which bucket the key belongs to. The hash table will work as long as the hash function maps each key to a non-negative integer less than the size of the hash table, but it will only perform well if it distributes keys relatively uniformly across buckets.
Consider the following hash functions:
Perhaps the simplest possible hash function is one that just ignores the key and always returns 0:
def h0(key, ht_size):
    return 0
Another hash function obtains the ASCII code of the first character in the key, divides that code by the size of the hash table, and returns the remainder of that division. This can be coded in Python as:
def h1(key, ht_size):
    return ord(key[0]) % ht_size
Another hash function totals up the ASCII codes of all of the characters in the key, divides that total by the size of the hash table, and returns the remainder of that division:
def h2(key, ht_size):
    x = 0
    for i in range(len(key)):
        ascii_code = ord(key[i])
        x = x + ascii_code
    return x % ht_size

This scheme examines all of the letters in the string, but it is not sensitive to their order, so the strings "act", "cat", and "tac" all hash to the same value.
It is also possible to build a hash function that is position sensitive, based on the idea of treating each character of the string as a digit in a number. The most familiar way of representing a number is to use a radix-10 system with the Arabic numerals: "0", "1", "2", "3", "4", "5", "6", "7", "8", "9". So, for example, "52473" means (((5*10+2)*10+4)*10+7)*10+3. One can, however, also represent numbers using other systems with different sets of symbols. For example, one could represent numbers using the letters "a" - "z" for the numbers 0 through 25 to obtain the radix-26 system described in your textbook. Characters in the computer are often stored using 7-bit ASCII codes (with values 0-127), so we can treat a string of ASCII characters as a radix-128 number.
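For example, the three-character string "cat" would be treated as the radix-128 number ((99*128)+97)*128+116 = 1634548, since the ASCII codes of "c", "a", and "t" are 99, 97, and 116.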
To make this into a Python hash function, one need only convert the string into a Python integer and then take the remainder modulo the size of the hash table:
def h3(key, ht_size):
    x = 0
    for i in range(len(key)):
        ascii_code = ord(key[i])
        x = 128 * x + ascii_code
    return x % ht_size
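Unlike h2, this function is sensitive to the order of the characters. For example, with a table of size 100,000, the anagrams that collided under h2 now land in different buckets (values computed from the definition of h3 above):

>>> h3("act", 100000)
2036
>>> h3("cat", 100000)
34548
>>> h3("tac", 100000)
13059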
Hash Functions
For each of the hash functions, h0, h1, h2, and h3, find the result of hashing the following strings for a table of size 100,000.
For each of these hash functions, would you get any collisions (i.e., more than one key hashing to the same bucket) if you were to insert these keys into a hash table?
Record the results in experiments.txt.
Hash Table Statistics
Our hash tables are implemented as Python lists where each element is a bucket. That bucket is a list that holds the entries that hashed to that bucket's index. In order to evaluate the effectiveness of different hash functions it is useful to be able to gather some statistics about the hash table after we've inserted some entries.
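To picture this concretely, here is a rough sketch of the list-of-buckets idea (using the h1 defined earlier; toy_insert is an illustrative name only, and the actual code in hash_table.py may differ):

# A tiny table with four empty buckets
table = [[] for _ in range(4)]

def toy_insert(table, key, hash_fun):
    index = hash_fun(key, len(table))   # ask the hash function which bucket the key belongs to
    table[index].append(key)            # chain the key onto that bucket's list

toy_insert(table, "cat", h1)            # ord("c") % 4 == 3, so "cat" lands in bucket 3
toy_insert(table, "car", h1)            # same first letter, so "car" chains into the same bucket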
Define a Python function largest_bucket(ht) in largest_bucket.py that returns the maximum number of entries contained in any single bucket of the hash table ht. One straightforward approach is to scan every bucket and keep track of the largest bucket size seen so far (a sketch appears after the example below).
Example:
>>> largest_bucket([["a","b"],[],["w","x","y","z"],["c"]])
4
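A possible sketch (your version does not have to match it):

def largest_bucket(ht):
    largest = 0
    for bucket in ht:                # examine every bucket in the table
        if len(bucket) > largest:    # remember the biggest bucket size seen so far
            largest = len(bucket)
    return largest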
Define a Python function mean_bucket(ht) in mean_bucket.py that returns the arithmetic mean of the number of entries in the non-empty buckets of the hash table ht. One straightforward approach is to total the entries in the non-empty buckets and divide by the number of non-empty buckets (a sketch appears after the example below).
Example:
>>> mean_bucket([["a","b"],[],["w","x","y","z"],["c"]])
2.33333333333333
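A possible sketch (your version may differ):

def mean_bucket(ht):
    total = 0
    count = 0
    for bucket in ht:
        if len(bucket) > 0:            # only non-empty buckets count toward the mean
            total = total + len(bucket)
            count = count + 1
    return total / count               # assumes at least one non-empty bucket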
What is the point of gathering these statistics? Ideally, would we want the maximum bucket size and the mean bucket size to be small or large? Write a brief explanation in experiments.txt.
Hash Table Experiments
Download and save the code from hash_table.py into your lab7 folder. Look at the code.
This code implements hash table creation (new_hash_table), insertion (ht_insert), search (ht_search), and the four hash functions (h0, h1, h2, h3). There is a "main" function called run(hash_fun) that sets up a hash table, inserts a large number of words, and then does some searches. It measures the elapsed time for these operations, and reports some statistics about the buckets (using your functions largest_bucket and mean_bucket).
Load hash_table.py by typing python3 -i hash_table.py into the terminal, and call run("h0"), run("h1"), run("h2"), and run("h3"), which will perform the timing and statistical measurements for hash tables with the respective hash function. In interpreting the results, take note of the fact that run() inserts 100,000 words but only searches 1000 times.
Record the output of these runs in experiments.txt. How well do the different hash functions perform for the insertion and search operations? What correlation do you see between how evenly the hash function distributes keys across buckets and the search performance of the hash functions?
Challenge: Devise a Better Hash Function
Design your own hash function. Modify the run function so that you can experiment with your hash function. How does your function compare with the four described above? Try to beat h0, h1, h2, and h3.
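As one possible starting point (a sketch only, with no guarantee that it beats h3; you will need to measure it with run): many string hash functions use a small prime multiplier such as 31 in place of 128.

def my_hash(key, ht_size):
    x = 0
    for ch in key:               # position sensitive, like h3
        x = 31 * x + ord(ch)     # a small prime multiplier, as used by many string hashes
    return x % ht_size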
Save your hash function in my_hash_function.py. Include the results of your experimental evaluation of your hash function in experiments.txt.