Read chapter 4 of the book Blown To Bits, pages 109-137. (Feel free to read the whole chapter if the material interests you.)
Suppose that Alice simulates the roll of a pair of dice by defining the roll function below, calling it twice:
def roll()
  return rand(6) + 1
end

Bob realizes that the roll of a pair of dice results in a sum of 2 through 12, inclusive, so he simulates the roll of a pair of dice by defining the roll function below, calling it only once:
def roll()
  return rand(11) + 2
end
Are these equivalent in terms of their behavior over time as we generate roll after roll? Why or why not?
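If you want to experiment before answering, the following sketch (not part of the assignment; the names alice_roll and bob_roll and the trial count are chosen here just for illustration) tallies many rolls from each approach so the two distributions can be compared side by side:

def alice_roll
  rand(6) + 1        # one simulated die: 1..6
end

def bob_roll
  rand(11) + 2       # one simulated draw: 2..12
end

trials = 100_000
alice_counts = Hash.new(0)
bob_counts   = Hash.new(0)

trials.times do
  alice_counts[alice_roll + alice_roll] += 1   # sum of two simulated dice
  bob_counts[bob_roll] += 1                    # single simulated draw
end

(2..12).each do |sum|
  puts "#{sum}: Alice #{alice_counts[sum]}  Bob #{bob_counts[sum]}"
end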
Refer to the kernel layout diagram at http://en.wikipedia.org/wiki/File:Kernel_Layout.svg.
What role does the kernel play? (You may look this up online and quote any reliable source you find, citing where you got the information. But if you quote from a website, you should still summarize what you quoted in your own words. Don't just copy someone else's words without thinking about what they mean.)
[ [ [255,0,0] , [0,255,0]     ],
  [ [0,0,255] , [255,255,255] ],
  [ [0,0,0]   , [0,0,0]       ] ]
We can remove the red components of an image using the following function in Ruby:
def remove_red(image)
  num_rows = image.length
  num_columns = image[0].length
  for row in 0..num_rows-1 do
    for column in 0..num_columns-1 do
      # Keep the green and blue components of this pixel,
      # but replace the red component with 0.
      green = image[row][column][1]
      blue = image[row][column][2]
      image[row][column] = [0, green, blue]
    end
  end
  return nil   # the image is modified in place
end
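For reference, here is one way the function might be exercised on the sample image above (the variable name picture is chosen here just for illustration):

picture = [ [ [255,0,0] , [0,255,0]     ],
            [ [0,0,255] , [255,255,255] ],
            [ [0,0,0]   , [0,0,0]       ] ]

remove_red(picture)    # modifies picture in place; returns nil
p picture
# => [[[0, 0, 0], [0, 255, 0]], [[0, 0, 255], [0, 255, 255]], [[0, 0, 0], [0, 0, 0]]]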
It is possible to speed up the remove_red function by modifying it to process the different pixels concurrently instead of following the order given by the loops. Could the same be done for the problem of implementing a function that returns an array of n pseudorandom numbers generated using a linear congruential generator? Why or why not?
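If it helps to think about this question, here is a minimal sketch of a linear congruential generator in Ruby; the function name lcg_sequence and the constants a, c, and m are illustrative choices made here, not something specified by the assignment:

def lcg_sequence(n, seed)
  # Linear congruential generator: x_{k+1} = (a * x_k + c) mod m
  a = 1664525
  c = 1013904223
  m = 2**32
  values = []
  x = seed
  n.times do
    x = (a * x + c) % m    # the next value is computed from the current one
    values << x
  end
  values
end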