Our approach is to expand and massage the Pegasus graph
so that every eventual machine instruction corresponds to its own
node in the graph (although not every node in the graph will
result in an instruction). This allows each atomic
instruction to be scheduled independently. Typically
when a node is decomposed, there will be data and/or token
edges among the resulting new nodes.
In addition, some new nodes and edges are added to ease the
subsequent register allocation task.
This massaging means that your scheduler should not need to
worry about many special cases for different operation types --
if it obeys the node latencies, data and token edges, and
function unit compatibility, it should generate a valid schedule.
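For instance, a check over a completed schedule might look like the following
C++ sketch. The SchedNode and SchedEdge structs, their field names, and
isValidSchedule are assumptions for illustration only, not the actual Pegasus
data structures, and the sketch ignores per-cycle resource conflicts.

    #include <vector>

    // Illustrative stand-ins for the scheduled graph; field names are
    // assumptions, not the provided infrastructure.
    struct SchedNode {
        int cycle;      // cycle this node was scheduled in
        int latency;    // node latency (may be negative for the special no-ops)
        int unit;       // function unit assigned by the scheduler
        int reqUnit;    // function unit class the operation requires
    };

    struct SchedEdge {  // a data or token edge between scheduled nodes
        int src, dst;   // indices into the node vector
    };

    // True if every edge's destination starts no earlier than its source's
    // start cycle plus latency, and every node sits on a compatible unit.
    bool isValidSchedule(const std::vector<SchedNode>& nodes,
                         const std::vector<SchedEdge>& edges) {
        for (const SchedEdge& e : edges)
            if (nodes[e.dst].cycle < nodes[e.src].cycle + nodes[e.src].latency)
                return false;           // latency/dependence constraint violated
        for (const SchedNode& n : nodes)
            if (n.unit != n.reqUnit)
                return false;           // function unit mismatch
        return true;
    }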
The main thing your scheduler does need to handle is
negative-latency no-ops. Besides honoring the latency on each
node, it must recognize that a node's ASAP value may initially
compute to a negative number and, when it does, clamp the ASAP to zero.
For example, consider a node sequence add-nop-branch,
with the following latencies: add(1), nop(-5), branch(6).
If the add's ASAP value is 10, then the nop's ASAP value is 10+1 = 11,
and the branch's is 11+(-5) = 6.
In another case, if the add's ASAP is 3, then the nop's ASAP is
3+1 = 4; the branch's ASAP is initially calculated as 4+(-5) = -1,
but since it can't have a negative ASAP, its ASAP is set to zero.
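The clamping step can be folded directly into the ASAP propagation. Below is
a minimal C++ sketch of that calculation; AsapNode, its field names, and
computeAsap are assumptions for illustration, not part of the provided
infrastructure.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Illustrative node for ASAP propagation; field names are assumptions.
    struct AsapNode {
        int latency;               // may be negative for the special no-ops
        std::vector<int> succs;    // successor indices (data and token edges)
        int asap = 0;
    };

    // Nodes are assumed to be indexed in topological order, so a single
    // forward pass propagates ASAP values along all edges.
    void computeAsap(std::vector<AsapNode>& nodes) {
        for (std::size_t i = 0; i < nodes.size(); ++i) {
            for (int s : nodes[i].succs) {
                // A negative latency can pull the sum below zero; clamp at zero.
                int candidate = std::max(0, nodes[i].asap + nodes[i].latency);
                nodes[s].asap = std::max(nodes[s].asap, candidate);
            }
        }
    }

Applied to the second example above, this gives the nop an ASAP of 3+1 = 4
and the branch max(0, 4+(-5)) = 0, matching the hand calculation.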
A more detailed description of the node decomposition and circuit
massaging can be found here.
TI Code Composer Studio
Look here for some miscellaneous information on getting started, but anything that Mahim has described in Assignment 0 is more current and
supersedes this older information.
Remember, if you want to use
printf() and similar library calls, you need to add the
run-time library for your system to the project.