In the Light example I chose not to make a distinction between flicking the light switch up and flicking it down. In both cases I simply named the action of flicking the switch ``flick.'' I chose to abstract from the direction in which the switch is flicked. (This abstraction gives the implementor of my Light model the freedom to implement the light switch with a button that pops in and out rather than with a lever that moves up and down.) When you are modeling a system you will often face such design decisions. When deciding whether to include some detail in your model, the question to ask yourself is ``Is this level of detail relevant at this level of abstraction?'' or, more precisely, ``Is this distinction observable by the environment?'' If the answer is ``no,'' i.e., the observer has no way of telling the two things apart, or you as the system designer do not want to give the observer a way of telling them apart, then you should abstract from the difference between them. For example, suppose I have an apple, an orange, and an eggplant. I might decide that I do not want an observer to be able to distinguish the apple from the orange, but only to distinguish fruits (the apple or the orange) from vegetables (the eggplant).
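To make this concrete, here is a minimal sketch of the Light as a state machine in Python. It is only an illustration, not the notation used in this book; the state names ``off'' and ``on'' and the transition table are my own.

\begin{verbatim}
# A minimal sketch of the Light state machine.  It assumes two states,
# "off" and "on", and a single abstract action "flick"; because the
# direction of the flick is not modeled, one action suffices, and an
# implementor may realize it with a lever or with a push button.

LIGHT_TRANSITIONS = {
    ("off", "flick"): "on",
    ("on", "flick"): "off",
}

def step(state, action):
    """Return the next state reached by taking the given action."""
    return LIGHT_TRANSITIONS[(state, action)]

state = "off"
for _ in range(3):
    state = step(state, "flick")
    print(state)  # on, off, on
\end{verbatim}

Had I modeled the direction, there would be two actions, ``flick up'' and ``flick down,'' and the observer could tell them apart; abstracting from the direction removes that distinction from the interface.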
As another example, think of my Car. Given the way I chose to model it, you (as the environment) don't get to see all of its states or all of its state transitions. For example, to go from the idle state to the accelerating state I may actually have shifted gears, say from first to second, then from second to third, and so on. It was my choice to abstract from some of its states (e.g., being in third gear) and state transitions (e.g., shifting from second to third). In choosing what to reveal to the observer, I hid those states and state transitions because they were irrelevant. The only information you have about the Car is what I reveal to you. These are design decisions that I made as a modeler.
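The following sketch contrasts a more detailed Car machine with the abstracted view I chose to present. The intermediate gear states and the ``shift'' action are hypothetical; they stand in for whatever detail the model hides.

\begin{verbatim}
# A detailed machine that passes through hidden gear states.  The gear
# states and the "shift" action are hypothetical, used only to show
# what the abstraction hides.
DETAILED = {
    ("idle", "accelerate"): "first_gear",
    ("first_gear", "shift"): "second_gear",
    ("second_gear", "shift"): "third_gear",
    ("third_gear", "shift"): "accelerating",
}

# The observer's view: the gear states and the shift transitions are
# collapsed into a single observable transition.
ABSTRACT = {
    ("idle", "accelerate"): "accelerating",
}
\end{verbatim}

If the ``shift'' transitions are hidden from the observer, the two machines look the same from the outside: an ``accelerate'' eventually leads to the accelerating state.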
Some state machine models allow you to make a distinction between external actions and internal actions. External actions are part of the system's interface and are observable by the system's environment. Internal actions are hidden and not observable.
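One simple way to capture this distinction, sketched below in Python rather than in any particular modeling notation, is to partition the set of actions and to let an observation of a run contain only the external actions. The action names here are again hypothetical.

\begin{verbatim}
# A sketch of partitioning actions into external (observable) and
# internal (hidden) actions.
EXTERNAL = {"accelerate", "brake"}   # part of the system's interface
INTERNAL = {"shift"}                 # hidden from the environment

def observable(trace):
    """Project a full trace onto the actions the environment can see."""
    return [action for action in trace if action in EXTERNAL]

full_trace = ["accelerate", "shift", "shift", "shift", "brake"]
print(observable(full_trace))  # ['accelerate', 'brake']
\end{verbatim}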