Requirements Analysis Document

15-413: Software Engineering, User Interface Group

Jason Almeida, John Langworthy, Chris Quirk, and Sandra Yoon

Table of Contents

i. Table of Contents
1 Introduction
1.1 Brief Summary of the Wright Flyer Simulation
1.2 Summary of the User Interface
1.2.1 Functional Requirements and Deliverables
1.2.2 Project Boundaries
1.2.3 Team Roles and Responsibilities
1.2.4 Development Process Model
1.2.5 Technical Process
2 Use Cases and Scenarios
2.1 Pilot Opens UI
2.2 Pilot Changes Control State
2.3 HLA Reports Position Change
2.4 HLA Reports Control Change
2.5 Collision Detection Event Received
2.6 Pilot Closes UI
2.7 Pilot Opens New Windows
2.8 Pilot Closes Windows
3 Inter-Federate Communications
3.1 Overview of Necessary Communication Types
3.2 Published Events
3.2.1 Start UI/Begin Simulation
3.2.2 Control State Change
3.2.3 Close UI/End Simulation
3.3 Subscribed Events
3.3.1 Change in Flyer Position
3.3.2 Change in Control State
3.3.3 Collision Detect
3.3.4 Environmental Interactions
4 Internal Implementation Plan
4.1 Overall Class Specifications and Attributes
4.1.1 Overview Diagram
4.1.2 General Class Descriptions
4.2 Sequence Diagrams
4.2.1 Pilot Opens UI
4.2.2 Pilot Changes Control State
4.2.3 HLA Reports Control/Position Change
4.2.4 Screen Redraw
4.2.5 Pilot Opens/Closes Window
4.2.6 Collision Detection Message Received
4.2.7 Pilot Closes UI
5 Non-Functional Requirements
5.1 Performance and Time Constraints
5.2 Required and Utilized Resources
5.2.1 Hardware Requirements
5.2.2 Operating Constraints
5.3 Quality Control
5.3.1 Conformance to HLA Specifications
5.3.2 Reliability
A. Glossary

  1. Introduction

    1.1 Brief Summary of the Wright Flyer Simulation

    As stated in our class-wide Software Project Management Plan (SPMP), the project aims to simulate the Wright flyer, its components, and its environment at the time of flight. In an attempt to create a reusable, expandable solution while teaching students about important emerging technologies, our simulation will be based on the High Level Architecture (HLA), a new DMSO standard simulation architecture. We will follow the constraints imposed by the HLA rules, including routing all communications through a Run-Time Infrastructure (RTI) and documenting our implementations in the style given in the HLA.

    1.2 Summary of the User Interface

    The User Interface (UI) must serve two rather similar situations: a pilot actively attempting to control the simulation, and a passive viewer who may not affect the simulation in any way and simply watches what happens. However, since the passive viewer implements a subset of the functionality provided by an active pilot solution, a definition of the architectural bounds will be established early so that each component is relatively independent and able to run without the other. After this distinction is made, very little attention will be paid to their differences, on the assumption that the architecture will allow for dual modes of operation.

      1.2.1 Functional Requirements and Deliverables

      Two distinct items will be delivered at the end of this class: the working source code for a UI, and the documentation that surrounds, defines, and explains it. The creation of both parts is imperative; delivery of one without the other would violate our major objective: to leave an extensible and usable framework upon which others can build.

      1.2.2 Project Boundaries

      Inter-group communications will be handled by liaison Jason Almeida. Such communications include details of federate communication (subscriptions, etc.) and the division of coding responsibilities among groups. John Langworthy should be included in communications to and from the HLA group, since one of his functions is integrating the project code into the HLA.

      1.2.3 Team Roles and Responsibilities

The members of this organization are structured in the following manner:

Figure 1-1: Organizational Structure

With the exception of Management Advisor Tim Richardson, all members of the group will be equally responsible for the code of this project. Each group member’s responsibilities are shown in the following table.

Group Member      Title                 Responsibilities
Jason Almeida     Liaison               Discussion of code boundaries between groups; code generation
John Langworthy   Technical Consultant  Advisement on HLA and OpenGL; code generation
Chris Quirk       Project Manager       Management of group responsibilities; code generation
Tim Richardson    Management Advisor    Communication with higher levels of management
Sandra Yoon       Secretary             Documentation of group progress; code generation

Table 1-1: Group Member Responsibilities

      1.2.4 Development Process Model

      The Software Development Life Cycle (SDLC) model to be used in this software project is the Spiral Model of Software Development [TRW1]. Since the period of this project is too short for a complete development cycle, only one or two coils of the spiral will be completed. The Federation Development and Execution Process (FEDEP) Model [DMSO1] models only one of these coils, which concerns the development of federates.

        Figure 1-2: Five Step Process of the FEDEP ([DMSO1] Figure 2-1)

        The FEDEP consists of five phases, which are described below.

        1. Define Federation Objectives

        The purpose of this phase is to define and document a set of needs that are to be addressed through the development and execution of an HLA federation, and to transform those needs into a more detailed list of specific federation objectives. This involves identifying needs and developing objectives.

        2. Develop Federation Conceptual Model

        The purpose of this phase is to develop an appropriate representation of the real world domain that applies to the federation problem space, and to develop the federation scenario. This involves developing scenarios, performing conceptual analysis, and developing federation requirements.

        3. Design and Develop Federation

        The purpose of this phase is to identify, evaluate, and select all federation participants, to develop a detailed plan for federation development and implementation, and to develop the FOM.

        4. Integrate and Test Federation

        The purpose of this phase is to plan the federation execution, establish all required interconnectivity between federates, and test the federation before execution. This involves planning execution, integrating the federation, and testing the federation.

        5. Execute Federation and Prepare Results

        The purpose of the last phase is to execute the federation, process the output of the federation execution, report results, and archive reusable federation products.

        The FEDEP Model will be followed in accordance with other groups’ efforts to create the entire Wright Flyer federation.

        A timeline has been established for major milestones of this project:

        15 February 1999    Requirements
        12 March 1999       Design
        31 March 1999       Specification
        9 April 1999        Code
        28 April 1999       Demonstration

      1.2.5 Technical Process

          Design Methodology

          For this project, our design methodology will likely be one of dynamic re-modeling while development occurs. This is primarily attributable to the relatively short (and fixed) period we have for this development. Were we simply to create a model for this project independently and follow it through to completion, it would doubtless fail to interoperate with the other federates. Conversely, if we exhaustively specified our interface requirements before developing, the time remaining for coding would be cut drastically. We must therefore balance the time spent on modeling against that spent coding, to create a solution that implements our desired features while operating correctly with the other federates.

            To compensate for these difficulties, we will begin by working on a generalized object model. After this is complete, we will focus on the parts that are independent of the interfaces, and allow the interfaces to be further specified by the other federation groups over time. In addition, changes required in the object model will be implemented simultaneously with development, in place of what would normally be major PDRs.

            The requirements stage will entail identifying what information we need to finish modeling, which implies that some initial modeling will be complete for this review. We are scheduled to complete the initial object modeling for the design review, and should already be working on code by then. The code at this point will probably consist of small sub-projects that help isolate working pieces, and may be reused in the integrated final project. The code should coalesce into a federate for the UI by the time of the specification review (enough to give the final specification). An independently working copy of the code will be required on (as the name implies) the code due date, and the remaining time between the specification and the final project due date will be used to achieve a working simulation incorporating the other parts of the Wright Flyer. The patching of the model will mainly occur between the design review and the specification review. The combination of this parallel specification process and the items we will review at various points in the course will create a working development method.

          Language and Libraries

          The user interface will be developed as a Win32 application (specifically on Windows NT) in the C++ language. This application will interface with the High Level Architecture (HLA) to exchange information with the other computers/executables simulating the various flyer components. To provide an engine for the 3D rendering required to view the plane and its environs, we will use OpenGL. This choice was made based on OpenGL’s availability for Win32 and the previous experience of the developers, who have all taken a computer graphics course that used OpenGL on UNIX workstations.

            Although every team member’s prior experience with C++ made it a strong choice, there is one major difficulty: we must develop a Windows application using either the standard C libraries or the Microsoft Foundation Classes (MFC) (as opposed to the Java AWT). Currently, we are evaluating Windows application development libraries (particularly MFC and the MSVC AppWizard) for their ability to aid in the project and comply with any existing standards (such as OMT).

          Team Structure

All members of the group will have coding responsibilities in addition to their other duties (manager, secretary, liaison, or technical consultant), because of the variety of jobs required and the overriding time constraint. While jobs have not been assigned yet, the most likely divisions are: Windows interface, information processing, 3D modeling/rendering, and HLA/RTI interface.

The Windows interface job is primarily concerned with creating an interface usable under Windows: receiving key presses and allowing the OpenGL buffer to be updated periodically. Hopefully much of the display code can be adapted from existing code, making this mostly a find-and-modify job rather than original code creation.

The 3D modeling and rendering job will have the task of making models of the flyer, ground, and environment, and then using OpenGL to paint them to the screen. Most of this data should come from models loaded at run time and from the output of the information processing portion.

The information processing programmer will be in charge of disseminating information about the airplane and environment. This is an intermediate step between the acquisition of data from the HLA and its use in the display. Little or no processing should be required on the input, leaving this area to perform work for the control displays and object rendering.

The last area will be in charge of retrieving data from the RTI (and from the other federates via the RTI). This part differs from information processing in that it entails knowledge of the interface and centers on getting information into objects, requiring a greater knowledge of the RTI and of the object structures of other groups.

Some job descriptions are still generalized, and some portions of the model are not explicitly accounted for because the design is not finalized and jobs have not yet been defined. However, these areas are approximately the divisions that will later be chosen.

  2. Use Cases and Scenarios

  The following set of use cases is meant to give a reasonable estimate of the functionality required of the User Interface. Although they are not specified in detail, they allow the developer to draw a clean line between what is expected of the User Interface and how it should interact with the other components of the simulation.

     

    2.1 Pilot Opens UI
  1. User starts program
  2. Internal operation: load Environment and Airplane Model into Render Engine and Collision Detect.
  3. Create Render Engine and cockpit window. Other classes are also instantiated.
  4. Register publishes and subscribes with HLA.
    2.2 Pilot Changes Control State
  1. Event Handler receives keystroke or mouse button input.
  2. Publish to the HLA
    1. Event Handler tells HLA interface
    2. HLA interface tells RTI.
    2.3 HLA Reports Position Change
  1. HLA interface receives Change Attributes Event.
  2. HLA passes changes to Airplane Status.
  3. Airplane Status passes changes to Collision Detect.
  4. Airplane Status passes changes to all existing Render Engines.
  5. Each Render Engine draws the changes in its window.
    2.4 HLA Reports Control Change
  1. HLA interface receives Change Attributes Event.
  2. HLA passes changes to Airplane Status.
  3. Airplane Status passes changes to Instrument Panel.
  4. If the UI handles position:
  5. Airplane Status passes changes to Collision Detect.
  6. Airplane Status passes changes to all existing Render Engines.
  7. Each Render Engine draws the changes in its window.
    2.5 Collision Detection Event Received
  1. HLA interface receives Collision Event.
  2. HLA interface notifies Airplane Status.
  3. Airplane Status notifies the Render Engine.
  4. The user is notified.
    2.6 Pilot Closes UI
  1. Event Handler receives close.
  2. Event Handler notifies the HLA interface with a Stop Simulation Event.
  3. HLA interface publishes the Stop Simulation Event to the RTI.
  4. HLA interface tells the Application to stop the simulation.
  5. Application closes windows and stops.
    2.7 Pilot Opens New Windows
  1. Event Handler receives Open Window Event.
  2. Event Handler notifies Airplane Status of Open Window with parameters.
  3. Airplane Status creates Render Engines and Windows and adds them to the list of Render Engines.
    2.8 Pilot Closes Windows
  1. Window receives Close Event.
  2. Window tells Render Engine.
  3. Render Engine tells Airplane Status to remove from list.
  4. Render Engine destroys window and itself.
  3. Inter-Federate Communications

  The UI federate must be able to communicate with the other federates in the simulation. To meet HLA specifications, all inter-federate communication must go through the RTI. This ensures portability and extensibility by allowing federates to be added to and removed from a federation.

    3.1 Overview of Necessary Communication Types

    This federate will use two types of communication. The more common type is a signal, in which the federate signals or is signaled by another federate; this occurs in all of the published and subscribed events below. The second type occurs only in Environmental Interactions: since the UI federate needs access to large amounts of environmental data stored in another federate, all environment data is sent over the RTI to make a local copy.

    3.2 Published Events

    Published events involve the UI federate notifying other federates that values have been updated and that the simulation should be modified to accommodate the new values. Other federates will subscribe to these events as necessary.

      3.2.1 Start UI/Begin Simulation

      The user can initiate a simulation from the command line. All other federates will be subscribed to this event, meaning that they will be notified when a simulation has been initiated. This event will be published using the Publish Service of the RTI’s Declaration Management Services.

      3.2.2 Control State Change

      When the UI federate’s event handling machinery detects that the state of the aircraft’s controls has changed, this event is published. Any federates involved in the physical simulation of the aircraft will be subscribed. This event will be published by the Publish Service. Note that this event will not be published when the UI is simply acting as a passive viewer: in that case the event handler is circumvented and will not detect a change in controls.

      3.2.3 Close UI/End Simulation

      The user can terminate a simulation at any point. All federates will need to subscribe to this event. This event will be published by the Publish Service.
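The signal-style events above can be sketched as a small publish/subscribe dispatcher. The class below is a deliberately simplified stand-in for the RTI, not the actual RTI Declaration Management API (which is considerably richer); all names here are invented for illustration.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Toy stand-in for the RTI: federates subscribe callbacks to named event
// classes, and a publish signals every subscriber. Hypothetical names.
class RtiStub {
public:
    using Callback = std::function<void(const std::string&)>;

    // A federate subscribes to an event class by name.
    void subscribe(const std::string& eventClass, Callback cb) {
        subscribers_[eventClass].push_back(std::move(cb));
    }

    // A federate publishes an event; every subscriber is signaled.
    void publish(const std::string& eventClass, const std::string& payload) {
        for (auto& cb : subscribers_[eventClass])
            cb(payload);
    }

private:
    std::map<std::string, std::vector<Callback>> subscribers_;
};
```

A control-state change would then travel from the UI’s event handler to the physics federates entirely through the dispatcher, mirroring the HLA rule that all inter-federate communication passes through the RTI.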

    3.3 Subscribed Events

    Subscribed events are events that the UI needs to be aware of for its part of the simulation. In general, they are all inputs into the UI federate that modify the UI’s course of action.

      3.3.1 Change in Flyer Position

      In order to render the simulation views, the UI needs to be aware of a change in position of the aircraft. This will be published by a federate that is involved in the physical simulation of the aircraft. This event will be subscribed to with the RTI’s Subscribe Service, one of the Declaration Management Services.

      3.3.2 Change in Control State

      When the UI is a passive viewer, it will not inform the rest of the simulation of a change in control state. Instead, another federate will publish the event and the UI will subscribe to it via the Subscribe Service.

      3.3.3 Collision Detect

      If the aircraft collides with the terrain during the simulation, the simulation should gracefully exit. All federates (including the UI) will be subscribed to this event, which will be published by a federate involved in the physical simulation. This event is subscribed to using the Subscribe Service.

      3.3.4 Environmental Interactions

    Environmental interactions at present include only the initial dump of environmental data from another federate. Due to the design of this simulation, the terrain data is owned by another federate. However, for rendering purposes, a large subset of the terrain is needed at all times. Instead of constantly requesting that the RTI send relevant portions of the terrain, the entire terrain database is sent over the RTI as the simulation is initialized. This event uses several services in succession.

    The Publish Service is used to tell the environment-holding federate (which is subscribing) that the UI is ready to receive environmental data. Inside the message is a list of features that the UI supports: for example, terrain mesh data, objects (trees, rocks), etc.

    The Subscribe Service notifies the UI when a copy of the environment data is ready.

    The Register Object Service, part of the Object Management Services, is used to take ownership of the new environment data.
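The handshake above fills a local cache once, after which terrain queries need no further RTI traffic. The sketch below illustrates that idea; the class name matches the EnvironmentCache described later, but the method names and the simple height-grid representation are assumptions.

```cpp
#include <cassert>
#include <map>
#include <utility>

// Sketch of the EnvironmentCache idea: the full terrain database arrives
// once over the RTI at startup, and all later queries are answered from
// the local copy. Grid representation and names are illustrative only.
class EnvironmentCache {
public:
    // Called once when the environment federate's terrain dump arrives.
    void loadTerrain(std::map<std::pair<int, int>, double> heights) {
        heights_ = std::move(heights);
        loaded_ = true;
    }

    bool isLoaded() const { return loaded_; }

    // Local lookup; no RTI round-trip is needed during rendering.
    double heightAt(int x, int y) const {
        auto it = heights_.find({x, y});
        return it == heights_.end() ? 0.0 : it->second;
    }

private:
    std::map<std::pair<int, int>, double> heights_;
    bool loaded_ = false;
};
```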

  4. Internal Implementation Plan

    4.1 Overall Class Specifications and Attributes
      4.1.1 Overview Diagram

      4.1.2 General Class Descriptions

      Application is a class containing the basic Windows interface. It will spawn all other Windows objects and negotiate with the Windows runtime interface.

      Window contains the base functionality to create a sub-window within the application; both View and InstrumentPanel inherit this functionality so the pilot can adjust his view to be most useful.

      View is actually one of three classes of viewpoint: First Person (pilot’s view), Stealth (following the plane from a fixed relative position), or Ground (following from a fixed absolute position). In theory, as many views as desired may be launched, but in practice refresh rates will limit this.

      InstrumentPanel is a display of the plane status including position, altitude, heading, speed, wind, etc.

      AirplaneStatus class holds a local copy of the subscribed data from the Plane, and interprets it for both display on the InstrumentPanel and manipulation of the view points.

      RenderEngine performs the OpenGL rendering of the Plane and Environment, based on the current status.

      EnvironmentCache holds a local copy of the environment, downloaded at the start of the simulation (received from the environment group), and uses that data to show the ground (minimally), objects, trees, weather, etc. This is turned into polygons for the render engine.

      AirplaneModel is the group of objects used to illustrate the Wright Flyer. While the Plane class keeps track of what is happening to the flyer, the AirplaneModel shows what it looks like at a given time. This will be very elementary in the first revision, but could in the future be affected by the changing status of the Plane (via Airplane Status).

      Plane is the behavioral portion of the flyer that will receive control inputs either from the user or from some other piloting mechanism (e.g. scripts, AI). The class will respond with periodic updates of the current status (position, heading, etc.) that will be read by the Airplane Status.

      Environment class will be the initial supplier of the world map, and will supply information such as weather conditions and (tentatively) collision detection data.

      Note that Plane and Environment simply represent data subscribed to via the RTI, and the Event Handler publishes a standardized set of data, also over the RTI. The Plane and Environment designations are primarily encapsulations of the data being supplied.
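As a rough illustration of how some of these classes might relate, here is a minimal modern-C++ sketch. The class names follow the descriptions above, but the methods, return values, and ownership scheme are placeholders, not a committed design; Draw() returns a string only so the structure can be exercised without a display.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Base sub-window; both View and InstrumentPanel inherit from it.
class Window {
public:
    virtual ~Window() = default;
    virtual std::string Draw() const = 0;  // would issue OpenGL/GDI calls
};

class View : public Window {
public:
    std::string Draw() const override { return "view"; }
};

class InstrumentPanel : public Window {
public:
    std::string Draw() const override { return "panel"; }
};

// Application spawns windows and redraws them all via its windowList.
class Application {
public:
    void OpenWindow(std::unique_ptr<Window> w) {
        windowList_.push_back(std::move(w));
    }
    std::size_t WindowCount() const { return windowList_.size(); }

    // Update the display by calling Draw on every object in the list.
    std::string DrawAll() const {
        std::string out;
        for (const auto& w : windowList_) out += w->Draw() + ";";
        return out;
    }

private:
    std::vector<std::unique_ptr<Window>> windowList_;
};
```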

       

    4.2 Sequence Diagrams

    The following are pictorial descriptions of how we plan to handle several situations. As a general rule, they are rather self-explanatory. Some elements have been abstracted, however: in particular, all events sent through the HLA are represented here as method calls, even though the implementation will be done in terms of the Publish and Subscribe services provided by the RTI. This is a concession that allows Rational Rose to create UML-compliant specification diagrams; it simplifies the presentation and does not detract in any great measure from the ideas presented.

      Most, if not all, of the diagrams presented here provide a more detailed complement to the use cases presented in section two. For clarification about the necessity of these events, and when and why they may be triggered, the reader should refer back to that section.

      4.2.1 Pilot Opens UI

        The pilot, at this point, notifies the User Interface that it should start the simulation. The UI first creates an AirplaneStatus object, which will reflect the current state of the aircraft, then tries to grab ownership of the controls, and retrieves the environment information necessary for rendering the ground, etc.

      4.2.2 Pilot Changes Control State

        Here the pilot causes an event (such as a mouse movement or keystroke) to be triggered. When the application next checks the EventQueue, it finds this Event, which it parses into a change to the controls, and updates the RTIinterface, which in turn notifies the Plane. It is worthwhile to note that we do not affect our AirplaneStatus object at this point; we wait for the HLA to notify us of the change. Although this may seem inefficient, it helps synchronize the UI with the remainder of the HLA, and allows the AirplaneStatus to work correctly in the situation where controls are not updated by us, but rather by a script or an AI.
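The event-queue step can be sketched as follows. The key bindings and control fields are invented for illustration; note that in the actual design the new control state would be published to the RTI rather than applied directly to AirplaneStatus.

```cpp
#include <cassert>
#include <queue>

// Hypothetical control state; field names and units are placeholders.
struct Controls {
    int elevator = 0;  // pitch control, arbitrary units
    int warp = 0;      // wing-warp (roll) control
};

// Parse one pending keystroke into a change to the controls. The caller
// would then publish the new state to the RTI and wait for the HLA to
// echo the change back, rather than updating AirplaneStatus itself.
bool ProcessNextEvent(std::queue<char>& eventQueue, Controls& c) {
    if (eventQueue.empty()) return false;
    char key = eventQueue.front();
    eventQueue.pop();
    switch (key) {
        case 'w': c.elevator += 1; break;  // nose up
        case 's': c.elevator -= 1; break;  // nose down
        case 'a': c.warp -= 1; break;      // warp left
        case 'd': c.warp += 1; break;      // warp right
        default: break;                    // unbound key: ignore
    }
    return true;
}
```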

      4.2.3 HLA Reports Control/Position Change

        When the control state or position of the plane is updated, this information is sent (through a Change Attributes event in the RTI) to our RTIinterface, which passes the changes on to AirplaneStatus after checking for incoming messages.

      4.2.4 Screen Redraw

        Redraw may run as a completely separate thread in this architecture. It could follow a simple event-based design, where a redraw event is generated upon damage to the window’s contents or updates to the plane’s position; on the other hand, it could simply be a thread that loops and redraws continuously.
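The event-based variant can be sketched with a damage flag, an assumed mechanism chosen for illustration; the threaded variant would simply loop on the redraw step without consulting the flag.

```cpp
#include <cassert>

// Sketch of event-driven redraw: a damage flag is set whenever the
// window contents are invalidated or the plane moves, and the redraw
// step runs only when the flag is set.
class RedrawController {
public:
    void MarkDamaged() { damaged_ = true; }  // window damage or position update

    // Returns true if a redraw was actually performed this tick.
    bool RedrawIfNeeded() {
        if (!damaged_) return false;
        ++redrawCount_;        // stand-in for the real OpenGL render pass
        damaged_ = false;
        return true;
    }

    int RedrawCount() const { return redrawCount_; }

private:
    bool damaged_ = false;
    int redrawCount_ = 0;
};
```

Note that multiple damage notifications between ticks coalesce into a single redraw, which is the main advantage of this design over a free-running loop.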

      4.2.5 Pilot Opens/Closes Window

        The maintenance of a windowList in the Application object is the most important concept here. This allows the Application to simply call Draw on every object in the list in order to update the display.

      4.2.6 Collision Detection Message Received

        If, instead of receiving simple Change messages from the RTI, we receive information saying that a collision has occurred, we have to clean up our information, warn the user, and stop running our part of the simulation. We’ll assume that all other components will be listening for this collision event as well, and as such we need not warn anyone that we’re shutting down.

      4.2.7 Pilot Closes UI

     

    On the other hand, if the pilot decides to quit the simulation, it is our responsibility to notify the rest of the simulation that it can stop. We first send out this message via the RTI, and then we go about closing our application.

  5. Non-Functional Requirements

    5.1 Performance and Time Constraints

    Our single most important performance requirement is that the system perform in near real time (greater than approximately 10 frames per second). If our throughput is much less than this, the simulation will not be very useful to a human pilot, although scripted runs and simple tests of component performance may still be of some worth. It is also important to note that our architecture allows several simultaneous views of the flyer, and each will consume some of the processing time available for rendering. Thus our 10 fps requirement applies to the single-view case; opening additional views may reduce the effective frame rate.
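As a back-of-the-envelope illustration of this constraint: at 10 frames per second the total render budget is 100 ms per frame, and each additional view divides that budget further. The helper below (illustrative only; it assumes views share the budget equally) computes the per-view share.

```cpp
#include <cassert>

// Per-view render budget in milliseconds, assuming the frame budget at
// the target rate is split evenly among the open views. At 10 fps with
// one view this is 100 ms; with four views, 25 ms each.
double PerViewBudgetMs(double targetFps, int viewCount) {
    if (targetFps <= 0.0 || viewCount <= 0) return 0.0;
    return (1000.0 / targetFps) / viewCount;
}
```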

    5.2 Required and Utilized Resources

    Much of this section is determined by exactly what we are provided with at the beginning of the course. However, these operating conditions matter for extensibility and the ability to run on other setups and platforms. Some conditions may not be strictly necessary, but we will assume them during the development process.

      5.2.1 Hardware Requirements

      We expect to run this simulation on a small network of fast Intel-based PCs. The UI itself will be a separate application, communicating with other components through the RTI. Because the UI may make great demands on the processing power of the host computer when graphics are complex, it should be run on a machine of its own. We therefore expect to need a network of roughly three to five computers.

      5.2.2 Operating Constraints

      Although the HLA uses mechanisms such as Routing Spaces and publishing data only to subscribed federates to minimize network traffic, it still may consume a large amount of bandwidth. We assume, then, that the small network running this simulation will carry little to no traffic unrelated to the HLA. And as the processor usage of the UI may be high, it should be run on a separate machine from the rest of the simulation, both for accuracy of display and to allow the actual simulation to run with as much computational power as necessary.

    5.3 Quality Control

    Many of the quality control specifications will be handled by an automated testing system that has not yet been set up. Therefore, we will focus on issues that are beyond the scope of such a system, and issues that are more easily dealt with in the planning stages.

      5.3.1 Conformance to HLA Specifications

      Here we have two major issues. First, we must follow the specifications for describing our object models and constituent structures; by creating the diagrams contained in this document in UML, we have provided a good starting point, and we simply must commit to continuing to follow the specifications as written. Second, we must follow the ten rules for federates during implementation. The explicit statement of these rules, as presented by the HLA group in class, and the reference material given are a good start, and again, a continued commitment during the coding stage is all that is necessary.

      5.3.2 Reliability

It is hard to create a reasonable plan for producing a reliable product; we must simply focus on making strong, modular components that can be isolated and tested extensively before being connected in the final product. Again, the presence of an automated testing system will be of great use here, but we cannot presuppose the details of that system.

  A. Glossary

FEDEP

Federation Development and Execution Process Model: a model of developing, executing, and collecting data from an HLA federation.
A link to the actual document can be found at
http://hla.dmso.mil/hla/federation/.

federate

A member of an HLA federation.

federation

A set of federates that interact with the RTI to achieve a simulation goal.

FOM

Federation Object Model: a specification of the exchange of public data among the federates in an HLA federation.

HLA

The "High Level Architecture": a military standard for implementation of a simulation. Documentation is most easily found at http://hla.dmso.mil/.

MFC

Microsoft Foundation Classes; a standard set of classes that wrap around commonly used data types, windowing components, etc.

OMT

Object Model Template: a standard for object modeling used by the HLA.

OpenGL

A 3-dimensional graphics package available on Win32, SGI, Sun, and other platforms.

RTI

Runtime Infrastructure: that which provides the services of federation management in an HLA Federation.

Win32

Microsoft’s operating system API, currently implemented only by Windows 95 and Windows NT.