Requirements Analysis Document
Learning Team: Jon Hsieh, James Lampe, Yun-Ching Lee, Wing Leung, Rudy Setiawan, Jon Wildstrom, Andrew Zimdars
1.0 General Goals
The subsystem monitors network and database activity and stores information about these
activities. The subsystem then analyzes the data to expedite the decision-making process of the
PAID system. First, it will deal with a large number of clients (over 6,000). Second, it will use machine
learning and other intelligent algorithms in its data analysis. Third, it will investigate
existing monitoring and diagnosis tools. Finally, it will present reports of its analysis through
a graphical user interface.
2.0 Current System
Daimler-Benz currently uses a variety of media to distribute aftersales information,
such as CD-ROMs, online services, microfiche, and paper. This system is inefficient in that information is
typically sent out only once a month. It is also inaccurate, as service outlets frequently work with outdated
information. No means of intelligently streamlining the information distribution process currently
exists.
3.0 Proposed System
3.1 Overview
The proposed learning subsystem will watch for "triggers" (data transactions) from the main database.
The system will selectively analyze this data and determine a more intelligent, streamlined, and efficient
way to handle future data transactions.
3.2 Functional Requirements
The learning subsystem will analyze behaviors and suggest actions to implement. It does this by
monitoring the behavior recorded by the database and responding to database triggers. Specifically,
it will store frequently accessed records in the local database and reschedule updates over the
network to avoid congestion and waiting lines. Second, responses to triggers from the database
must be at least semi-intelligent, keeping unreasonable or erroneous prompts to a minimum.
Depending on the user's specification of a cost/speed balance, the learning system should
recommend only behaviors that will enhance the cost/speed performance of the system; a minimal
sketch of such a preference-driven filter appears below. Third,
the system should be user friendly, in that it never completely takes control of the system away from
the user. This can probably be accomplished using preference settings on the client side.
Finally, the decision-making process should complete in a reasonably short period of time.
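To make the cost/speed preference concrete, the following minimal sketch (in Java, matching the project's implementation language) shows how recommendations could be filtered against a client-specified weighting. The Recommendation holder, its field names, and the weighting scheme are illustrative assumptions, not part of this specification.

    // Hypothetical sketch: filter recommended actions by the client's cost/speed preference.
    import java.util.ArrayList;
    import java.util.List;

    public class PreferenceFilter {

        /** Illustrative recommendation holder; names are not part of the specification. */
        public static class Recommendation {
            final String action;
            final double timeSavedSeconds;   // estimated speed benefit
            final double extraCost;          // estimated monetary/connection cost

            Recommendation(String action, double timeSavedSeconds, double extraCost) {
                this.action = action;
                this.timeSavedSeconds = timeSavedSeconds;
                this.extraCost = extraCost;
            }
        }

        /** 0.0 = care only about cost, 1.0 = care only about speed (assumed scale). */
        private final double speedWeight;

        public PreferenceFilter(double speedWeight) {
            this.speedWeight = speedWeight;
        }

        /** Keep only recommendations whose weighted benefit is positive. */
        public List<Recommendation> filter(List<Recommendation> candidates) {
            List<Recommendation> accepted = new ArrayList<Recommendation>();
            for (Recommendation r : candidates) {
                double benefit = speedWeight * r.timeSavedSeconds
                               - (1.0 - speedWeight) * r.extraCost;
                if (benefit > 0) {
                    accepted.add(r);
                }
            }
            return accepted;
        }
    }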
3.3 Non-Functional Requirements
3.3.1 User Interface and Human Factors
There are three ways the user interacts with the learning subsystem. First, the subsystem will have a learning
preferences panel on the client side. The user can use this panel to decide how much control they wish to give
to the system, or how frequently they wish to be prompted about updates or changes. Second, the server can access
reports of the network activity that the learning subsystem monitors through the server-side user interface. In general,
these two interfaces may be more advanced features of the overall user interface, but detailed documentation should help
expedite the learning process. Finally, when the user wishes to download large files and may not want to keep the
connection open long enough to finish the download, the subsystem will alert the user to the problem: in the form of
a button panel, it will show the estimated download time and give the user the opportunity to refine or cancel the
request. A sketch of how this estimate could be computed follows.
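A minimal sketch of the estimate behind that button panel, assuming the client can measure its connection speed; the threshold value and method names are illustrative assumptions only.

    // Hypothetical sketch: estimate download time and decide whether to prompt the user.
    public class DownloadEstimator {

        /** Prompt if the estimated download would take longer than this (assumed threshold). */
        private static final long PROMPT_THRESHOLD_SECONDS = 120;

        /** Estimated seconds to transfer the given number of bytes at the measured rate. */
        public static long estimateSeconds(long downloadSizeBytes, long bytesPerSecond) {
            if (bytesPerSecond <= 0) {
                throw new IllegalArgumentException("connection speed must be positive");
            }
            return (downloadSizeBytes + bytesPerSecond - 1) / bytesPerSecond; // round up
        }

        /** True when the client UI should show the refine/cancel button panel. */
        public static boolean shouldPrompt(long downloadSizeBytes, long bytesPerSecond) {
            return estimateSeconds(downloadSizeBytes, bytesPerSecond) > PROMPT_THRESHOLD_SECONDS;
        }
    }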
3.3.2 Documentation
Documentation will include an explanation of both the learning preferences panel on the client side and the
network activity reports on the server side. It will describe, with examples, the functionality that
each option on the learning preferences panel enables. It will also explain how to interpret the statistics
generated by the network activity reports in the server interface.
3.3.3 Hardware Consideration
The Java-based subsystem should ideally run on any platform. Because the subsystem
will need to analyze data transactions from as many as 6,000 dealers, considerable RAM will be
needed to carry out the complex calculations. On the client side, the learning subsystem must take the user's
connection speed and download size into account. It will inform the user of the estimated download time and prompt the user
if they wish to refine the download request. If the user's cache or memory resources are too small, the subsystem may
prioritize file downloads, provided that a file hierarchy is available; a sketch of such prioritization follows.
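A sketch of one possible prioritization policy, assuming the provided file hierarchy exposes each file's depth; the class and the ordering rule are illustrative assumptions, not specified behavior.

    // Hypothetical sketch: order pending downloads when the client's cache is limited,
    // assuming the provided file hierarchy exposes a depth (shallower = more general).
    import java.util.Comparator;
    import java.util.List;

    public class DownloadPrioritizer {

        /** Illustrative description of a pending download; not part of the specification. */
        public static class PendingFile {
            final String path;
            final int hierarchyDepth;  // position in the provided file hierarchy
            final long sizeBytes;

            PendingFile(String path, int hierarchyDepth, long sizeBytes) {
                this.path = path;
                this.hierarchyDepth = hierarchyDepth;
                this.sizeBytes = sizeBytes;
            }
        }

        /** Shallower files first; among equals, smaller files first so something useful arrives early. */
        public static void prioritize(List<PendingFile> pending) {
            pending.sort(Comparator
                    .comparingInt((PendingFile f) -> f.hierarchyDepth)
                    .thenComparingLong(f -> f.sizeBytes));
        }
    }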
3.3.4 Performance Characteristics
The subsystem will perform off-line data analysis, which will be its most time-consuming
process. Nonetheless, the decision-making process should be relatively fast if efficient
algorithms are implemented. The actions that the decision-making process recommends should, in turn,
speed up the information transaction process for the different scenarios. However, if the user's available
memory or connection speed limits their downloading capabilities, the subsystem will either prompt the user to
refine the request or automatically prioritize the download.
3.3.5 Error Handling and Extreme Conditions
The major sections of the subsystem are persistent storage (event logs and behavior files),
the data analysis functions, and the connection to the database/event service. We assume
that the persistent data is robust and stable; if we are unable to reach the persistent data,
the subsystem cannot operate. If we receive too many triggers/events to process,
we fall back to a less intelligent but fast approach (logging and a default response); the
backlog is then handled by later data analysis cycles, and a minimal sketch of this fallback
appears at the end of this subsection. If the data analysis process recommends actions incorrectly,
the learning preferences panel on the client side will offer an option to re-evaluate the recommended behaviors.
To minimize the cases where the user cannot finish a download before disconnecting, the subsystem
will inform them of the estimated download time. If the user disconnects before the download is complete,
the subsystem must forget the download.
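A minimal sketch of the overload fallback described above, assuming a bound on concurrent analysis and a default "no change" response; the names and the bound are assumptions, not specified behavior.

    // Hypothetical sketch: when too many triggers arrive at once, log the event and answer
    // with a default response instead of running the full analysis; the backlog is mined later.
    import java.util.concurrent.Semaphore;

    public class TriggerHandler {

        /** Assumed bound on how many triggers are analyzed immediately. */
        private final Semaphore analysisSlots = new Semaphore(100);

        /** Returns the recommended action for this trigger (a default when overloaded). */
        public String handle(String triggerEvent) {
            logForLaterAnalysis(triggerEvent);     // always logged for the off-line mining cycle
            if (!analysisSlots.tryAcquire()) {
                return "NO_CHANGE";                // fast, less intelligent default response
            }
            try {
                return analyze(triggerEvent);      // normal decision-making path
            } finally {
                analysisSlots.release();
            }
        }

        private void logForLaterAnalysis(String triggerEvent) { /* append to the event log */ }

        private String analyze(String triggerEvent) {
            return "NO_CHANGE";                    // placeholder for the full analysis
        }
    }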
3.3.6 System Interfacing
The learning subsystem interacts only with the database/event service subsystem. It receives database
events as input from the database and outputs recommended actions back to the database. Additionally,
the learning subsystem is the only subsystem that uses its data, so we can safely choose any
data format we wish. We assume that the database will allow as much granularity as possible, so that we may break
downloads into individual files.
3.3.7 Quality Issues
Fundamentally, the subsystem will attempt to recommend near-optimal actions. For it to be
considered reliable, data transactions will need to become substantially more streamlined and efficient in
each of the applicable scenarios. The system needs to recognize when it is making erroneous
recommendations and correct the process, either through user input or a system checking process.
In cases where the network goes down or database malfunctions leave the system
unable to access new data, the system will recognize this and not act until connections are
restored. Because the data analysis process restarts at arbitrary time intervals,
the subsystem can easily resume operation after a crash once it is restarted. Data files
will be saved continually to ensure this; a brief checkpointing sketch follows.
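As an illustration of continual saving, the sketch below checkpoints the behavior data with Java serialization; the file name is an assumption, and the real subsystem might instead use the database subsystem's persistent storage.

    // Hypothetical sketch: periodically checkpoint the learned-behavior data so the
    // subsystem can resume after a crash; the file name is an assumption.
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public class BehaviorCheckpoint {

        private static final String CHECKPOINT_FILE = "learned-behavior.ser"; // assumed name

        /** Write the current behavior data to disk. */
        public static void save(Serializable behavior) throws IOException {
            try (ObjectOutputStream out =
                         new ObjectOutputStream(new FileOutputStream(CHECKPOINT_FILE))) {
                out.writeObject(behavior);
            }
        }

        /** Reload the last saved behavior data after a restart. */
        public static Object load() throws IOException, ClassNotFoundException {
            try (ObjectInputStream in =
                         new ObjectInputStream(new FileInputStream(CHECKPOINT_FILE))) {
                return in.readObject();
            }
        }
    }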
3.3.8 System Modifications
The data analysis process is a good candidate for future modification. Even if the subsystem initially relies on
one major data-mining algorithm to analyze data, additional algorithms better suited to specific scenarios
may be added later. The object-oriented design should be easily extendable to
accommodate such modifications; the sketch below shows one possible extension point.
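A minimal sketch of such an extension point, assuming each mining algorithm implements a shared interface; the interface name, method signature, and placeholder miners are illustrative, not the project's actual design.

    // Hypothetical sketch: pluggable data-mining algorithms behind a common interface,
    // so scenario-specific miners can be added without changing the rest of the subsystem.
    import java.util.List;

    interface MiningAlgorithm {
        /** Analyze the logged events and return recommended actions. */
        List<String> analyze(List<String> eventLog);
    }

    /** Default, general-purpose miner (placeholder logic). */
    class FrequencyMiner implements MiningAlgorithm {
        public List<String> analyze(List<String> eventLog) {
            return List.of("CACHE_FREQUENTLY_ACCESSED_RECORDS");
        }
    }

    /** Example of a later, scenario-specific addition (placeholder logic). */
    class ConnectionCostMiner implements MiningAlgorithm {
        public List<String> analyze(List<String> eventLog) {
            return List.of("RESCHEDULE_UPDATES_TO_OFF_PEAK_HOURS");
        }
    }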
3.3.9 Physical Environment
The subsystem resides on the server side, with no interaction with a physical environment.
3.3.10 Security Issues
The internal subsystem has no authentication or security issues to deal with.
3.3.11 Resource Issues
The learning subsystem relies on persistent data storage provided by the database subsystem.
3.4 Constraints
The learning problem is essentially a data-mining problem. The resources needed to carry
out the calculations are the greatest concern for the system. The Java development environment
is required; thus, any third-party software packages not implemented
in Java must be wrapped in Java. Finally, domain-specific knowledge may be limited.
3.5 System Model
3.5.1 Scenarios
Sam's Busy Workshop (Scenario 3)
Participating Actor Instances
eventDispatcher: EventServices
samUpdateEvent, samBehaviorEvent: Event
samRecord: EventRecord
dealerUpdateLog: LogDB
behaviorFile: LearnedBehavior
Flow of Events
- (Entry) The event dispatcher posts Sam's update event on a subscribed channel.
- Sam's record notes this event and updates itself in the dealer update log.
- Sam's record also orders the behavior file to recommend an action.
- (Exit) Based on previous events it has learned, the behavior file publishes a behavior event specialized to Sam's needs to the event dispatcher.
Bratt's Impatient Customers (Scenario 3)
Participating Actor Instances
eventDispatcher: EventServices
brattDelayUpdateEvent, brattUpdateReminder: Event
brattRecord: EventRecord
dealerUpdateLog: LogDB
behaviorFile: LearnedBehavior
Flow of Events
- (Entry) The event dispatcher posts Bratt's "delay-update" event on a subscribed channel.
- Bratt's record notes this event and updates itself in the dealer update log.
- Bratt's record also orders the behavior file to recommend an action.
- (Exit) Based on previous events it has learned, the behavior file will later publish an update reminder to the event dispatcher.
Klaus and the M-Class (Scenario 4)
Participating Actor Instances
eventDispatcher: EventServices
klausInfoEvent, klausStatusQuoEvent, klausStoreLocalEvent: Event
klausRecord: EventRecord
dealerUpdateLog: LogDB
dealerInfoMiner: DataMiner
alarmClock: Scheduler
behaviorFile: LearnedBehavior
Flow of Events (Early accesses)
- (Entry) The event dispatcher posts Klaus's information request event on a subscribed channel.
- Klaus's record notes this event and updates itself in the dealer update log.
- Klaus's record also orders the behavior file to recommend an action.
- (Exit) Based on previous events it has learned, the behavior file publishes an event recommending no change in Klaus's local storage to the event dispatcher.
Flow of Events (Data mining)
- (Entry) The scheduler wakes up the dealer information mining agent.
- The data miner analyzes the dealer update log.
- (Exit) Based on its analysis, the data miner updates the recommendations in the behavior file.
Flow of Events (Later access)
- (Entry) The event dispatcher posts Klaus's information request event on a subscribed channel.
- Klaus's record notes this event and updates itself in the dealer update log.
- Klaus's record also orders the behavior file to recommend an action.
- (Exit) Based on previous events it has learned, the behavior file publishes an event to the event dispatcher recommending local storage of the M-Class information for Klaus.
Sam's Free Connection (Scenario 5)
Participating Actor Instances
eventDispatcher: EventServices
samEvent, samBehaviorEvent: Event
samRecord: EventRecord
dealerUpdateLog: LogDB
behaviorFile: LearnedBehavior
Flow of Events
- (Entry) The event dispatcher posts Sam's event on a subscribed channel.
- Sam's record notes this event and updates itself in the dealer update log.
- Sam's record also orders the behavior file to recommend an action, providing the information that Sam has an inexpensive network connection.
- (Exit) The behavior file publishes a recommendation to the event dispatcher that does not attempt to minimize connection costs.
Klaus's Expensive Connection (Scenario 5)
Participating Actor Instances
eventDispatcher: EventServices
klausEvent, klausBehaviorEvent: Event
klausRecord: EventRecord
dealerUpdateLog: LogDB
behaviorFile: LearnedBehavior
Flow of Events
- (Entry) The event dispatcher posts Klaus's event on a subscribed channel.
- Klaus's record notes this event and updates itself in the dealer update log.
- Klaus's record also orders the behavior file to recommend an action, providing the information that Klaus has an expensive network connection.
- (Exit) The behavior file publishes a recommendation to the event dispatcher that attempts to minimize connection costs.
Bratt's Mobile Garage (Scenario 6)
Participating Actor Instances
eventDispatcher: EventServices
frankInfoEvent, mobileInfoEvent: Event
truckMobileFixRecord: EventRecord
mobileRepairLog: LogDB
behaviorFile: LearnedBehavior
Flow of Events
- (Entry) The event dispatcher posts Bratt's request for Frank's truck information on a subscribed channel.
- The mobile repair record for that type of truck notes this event and updates itself in the mobile repair log.
- The mobile repair record also orders the behavior file to recommend an action.
- (Exit) Based on previous events it has learned, the behavior file publishes to the event dispatcher a request for the list of records that Bratt will need.
3.5.2 Use Case Model
Participating actors are EventService and the Scheduler. EventService (the database) posts an event,
which is put into the log database ("update dealer log"). The dealer log requests information
and puts it into the behavior file. The behavior file sends recommendations back to EventService.
The Scheduler, meanwhile, wakes up the DataMiner, which analyzes the data and updates the behavior file.
3.5.3 Object Model
Object Explanations:
EventService: publishes the requests that the learning subsystem is interested in monitoring through patchRequest().
EventRecord: a record of a request patched through EventService.
LogDB: contains many EventRecords. Submits events through submitEvent() when the DataMiner requests EventRecords, and creates new EventRecords through LogEvent().
DataMiner: more than one may exist, one for each scenario. Performs the learning functions through analyzeBehavior(). Activated by the Scheduler because its computation is likely to be expensive. Requests EventRecords from the LogDB through requestER(). Updates the LearnedBehavior object after analyzing data through updateBehavior().
Scheduler: activates the DataMiner at periodic intervals through startDM().
LearnedBehavior: based on the recommendations of the DataMiner, sends requests to EventService through sendEvent() to perform its intelligent functions. Modifies the Scheduler through modifyScheduler() so that the Scheduler can activate the DataMiner more intelligently. After receiving the initial request from EventService, dispatches it to the LogDB to create a new EventRecord through postRequest().
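To summarize the object model in code, the following interface sketch restates the responsibilities above in Java. Only the method names are taken from the object explanations; every parameter and return type is an assumption made for illustration.

    // Hypothetical sketch of the object model's interfaces; parameter and return types
    // are assumptions, only the method names come from the object explanations above.
    import java.util.List;

    interface EventService {
        void patchRequest(String request);        // publish a request the learning subsystem monitors
    }

    interface LogDB {
        void LogEvent(String request);            // create a new EventRecord for a patched request
        List<String> submitEvent(int count);      // hand EventRecords to the DataMiner on request
    }

    interface DataMiner {
        void analyzeBehavior();                   // perform the learning functions
        List<String> requestER(LogDB log);        // fetch EventRecords from the LogDB
        void updateBehavior(LearnedBehavior lb);  // push new recommendations after analysis
    }

    interface Scheduler {
        void startDM(DataMiner miner);            // activate the DataMiner at periodic intervals
    }

    interface LearnedBehavior {
        void sendEvent(EventService service);     // send recommended actions back to EventService
        void modifyScheduler(Scheduler s);        // tune when the Scheduler activates the DataMiner
        void postRequest(LogDB log);              // dispatch the initial request to the LogDB
    }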
3.5.4 Dynamic Model
The Scheduler wakes up the DataMiner. The DataMiner requests one or more EventRecords from the LogDB.
The LogDB sends these EventRecords to the DataMiner. The DataMiner analyzes the data and updates
the LearnedBehavior object. LearnedBehavior then updates the Scheduler so that the Scheduler can wake the
DataMiner at a more intelligent time interval.
EventService patches a request to the LearnedBehavior object. Based on what LearnedBehavior
"knows", it sends an event back to EventService. LearnedBehavior also posts a request to the LogDB, which logs
the event posted by EventService.