CS 15-212-ML: Assignment 1
Due Wednesday, September 9, 12:00 noon (electronically); papers at recitation.
Maximum Points: 50 (+10 extra credit)
Write a function bigEven of type int -> bool that decides whether its
integer argument is even and greater than one hundred (100). For example,

  bigEven(120)  =>  true
  bigEven(135)  =>  false
  bigEven(100)  =>  false
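One possible implementation is sketched below (a sketch, not the only correct solution; it combines a parity test with the comparison using andalso):

```sml
(* bigEven : int -> bool *)
(* bigEven n = true iff n is even and n > 100 *)
fun bigEven (n : int) : bool = n mod 2 = 0 andalso n > 100
```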
Write a function oddSum of type int -> int that, on input n, calculates
the sum of the first n odd numbers. Calculate recursively, i.e. you may
not directly compute n^2. You may assume that the input is greater than
or equal to 1, but you should document this.
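A recursive sketch along the intended lines (one possible solution; the i-th odd number is written here as 2i - 1):

```sml
(* oddSum : int -> int *)
(* oddSum n = 1 + 3 + ... + (2n - 1), the sum of the first n odd numbers *)
(* invariant : n >= 1 *)
fun oddSum 1 = 1
  | oddSum n = (2 * n - 1) + oddSum (n - 1)
```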
Prove by induction that oddSum is a correct implementation, i.e. that on
input n it computes n^2. Carefully state the inductive hypothesis and do
not omit the boundary condition!
You don't need to type your proof in (writing it by hand and
handing it in at recitation is usually faster and easier).
Write a function horner : int * int -> int. The Horner method is an
algorithm for the efficient computation of the sum of powers of a
number.
(* horner (x, n) = 1 + x + x^2 + ... + x^n *)
(* val horner : int * int -> int *)
(* invariant : n >= 0 *)
fun horner (x, 0) = 1
  | horner (x, n) = 1 + x * horner (x, n-1)

Show by induction that the function horner correctly computes the sum of
the powers of x from 0 to n.
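To see the recursion at work, it may help to unfold a small call by hand; the trace below (written as an SML comment) steps through horner (2, 3):

```sml
(* horner (2, 3)
   => 1 + 2 * horner (2, 2)
   => 1 + 2 * (1 + 2 * horner (2, 1))
   => 1 + 2 * (1 + 2 * (1 + 2 * horner (2, 0)))
   => 1 + 2 * (1 + 2 * (1 + 2 * 1))
   => 15        (* = 1 + 2 + 4 + 8 = 2^0 + 2^1 + 2^2 + 2^3 *)
*)
val fifteen = 1 + 2 * (1 + 2 * (1 + 2 * 1))
```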
The integer logarithm base b of x is defined as the unique integer n
such that b^n <= x < b^(n+1). For example, the integer logarithm of 10
base 2 is 3, since 2^3 <= 10 < 2^4. Write a function
intLog : int * int -> int such that intLog (b, x) recursively computes
the integer logarithm of x base b without using the logarithm functions
included in the SML Basis Library. You may assume that b >= 2 and
x >= 1, but you should document this. Your function should not use
values of type real in order to avoid round-off errors.
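One recursive sketch that stays entirely within type int (a possible solution, not the only one; it repeatedly divides by the base):

```sml
(* intLog : int * int -> int *)
(* intLog (b, x) = the unique n with b^n <= x < b^(n+1) *)
(* invariants : b >= 2, x >= 1 *)
fun intLog (b, x) = if x < b then 0 else 1 + intLog (b, x div b)
```

For instance, intLog (2, 10) evaluates to 3, matching the definition above.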
Prove that intLog correctly computes the integer logarithm as described
above.
Suppose we are given a function f and wish to calculate the derivative
at a. We might start with the approximation

  [f(a1) - f(a)] / [a1 - a]

and then refine it with successively closer points a2 and a3. We can
hope that as ai approaches a, our approximation becomes more refined.
For the purposes of this assignment, we will use an initial
approximation of a + delta, where delta is an input to the function. On
each iteration, we will allow our current position to approach a by
half the remaining distance. Since we do not have the exact value of the
derivative at a, we will accept an approximate value if the absolute
value of the difference between the values on the previous and current
iterations is less than a given (positive) epsilon.
Write an SML function

  diff : real * real -> (real -> real) -> real -> (real * real)

where diff (epsilon, delta) f a approximates the derivative of f at a up
to epsilon, starting at range delta. It returns a pair, where the first
component yields the current range (remember, we increase our accuracy
by halving the range to a on each iteration), and the second the
approximate value, as described above. Differentiate using
epsilon = 10^-6 (that is, 0.000001) and delta = 1.0.
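A possible shape for such a function (a sketch under the reading above: halve the range, recompute the difference quotient, and stop once successive approximations agree within epsilon):

```sml
(* diff : real * real -> (real -> real) -> real -> (real * real) *)
(* diff (epsilon, delta) f a = (range, approximation of f'(a))     *)
(* invariants : epsilon > 0.0, delta > 0.0                         *)
fun diff (epsilon, delta) f a =
  let
    (* difference quotient over the interval [a, a + h] *)
    fun slope h = (f (a + h) - f a) / h
    fun iterate (h, prev) =
      let
        val h' = h / 2.0          (* approach a by half the distance *)
        val curr = slope h'
      in
        if Real.abs (curr - prev) < epsilon
        then (h', curr)
        else iterate (h', curr)
      end
  in
    iterate (delta, slope delta)
  end
```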
Hint: Use the context browser to access information about the
mathematical functions from the Math and Real structures. You can access
the SML Basis Library for reals at
http://portal.research.bell-labs.com/orgs/ssr/sml/real-chapter.html.
Fix epsilon = 10^-6 (= 0.000001) and delta = 1.0 and write a function

  diffx : (real -> real) -> (real -> real)

where diffx f is the function g : real -> real which, on input x,
returns the value of the derivative of f at x.
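Given a diff function of the type specified earlier, diffx can be little more than a partial application that discards the range component of the pair (a sketch; it assumes diff is already defined):

```sml
(* diffx : (real -> real) -> (real -> real)                    *)
(* diffx f x = approximate value of f'(x), with the fixed      *)
(* parameters epsilon = 1E~6 and delta = 1.0                   *)
fun diffx f = fn x => #2 (diff (1E~6, 1.0) f x)
```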
To find a zero of a function f, we first choose an initial guess x0.
Once we have guess xi, we obtain the next guess, xi+1, by finding the
x-intercept of the tangent line to f at (xi, f(xi)), as illustrated
below. This is given by the formula

  xi+1 = xi - [f(xi) / f'(xi)]
Write a function

  newton : real -> (real -> real) -> real -> real

where newton epsilon g x approximates a zero of function g up to epsilon
using initial guess x.
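A sketch of the iteration (one possible solution; it uses diffx for the derivative, and reads "up to epsilon" as stopping once |g(x)| < epsilon, which is one reasonable interpretation):

```sml
(* newton : real -> (real -> real) -> real -> real          *)
(* newton epsilon g x = an approximate zero of g, obtained  *)
(* by iterating xi+1 = xi - g(xi) / g'(xi)                  *)
fun newton epsilon g x =
  if Real.abs (g x) < epsilon
  then x
  else newton epsilon g (x - g x / diffx g x)
```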
Note: You will need to use your function diffx to solve this problem.
You will not be penalized in this question for any errors in your
function diffx.
Fix epsilon = 10^-6 (= 0.000001) and write a function

  solve : (real -> real) -> real

where solve f finds a zero of f using initial guess zero.
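With newton in hand, solve reduces to fixing its parameters (a sketch assuming the newton function above):

```sml
(* solve : (real -> real) -> real                     *)
(* solve f = a zero of f, found by Newton's method    *)
(* with epsilon = 1E~6 and initial guess 0.0          *)
fun solve f = newton 1E~6 f 0.0
```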
/afs/andrew/scs/cs/15-212-ML/studentdir/<your andrew id>/ass1/ass1.sml