We call these models prescriptive because they prescribe a set of process elements (framework activities, software engineering actions, tasks, work products, quality assurance, and change control mechanisms) for each project. Each process model also prescribes a workflow, that is, the manner in which the process elements are interrelated. All software process models can accommodate the generic framework activities, but each applies a different emphasis to these activities and defines a workflow that invokes each framework activity (as well as software engineering actions and tasks) in a different manner.
http://wikistudent.ws/Unisa
The waterfall model is the oldest paradigm for software engineering. However, over the past two decades, criticism of this process model has caused even ardent supporters to question its efficacy. Among the problems that are sometimes encountered when the waterfall model is applied are:
o Real projects rarely follow the sequential flow that the model proposes. Although the linear model can accommodate iteration, it does so indirectly. As a result, changes can cause confusion as the project team proceeds.
o It is often difficult for the customer to state all requirements explicitly. The waterfall model requires this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.
o The customer must have patience. A working version of the program will not be available until late in the project time-span. A major blunder, if undetected until the working program is reviewed, can be disastrous.
In an interesting analysis of actual projects it was found that the linear nature of the waterfall model leads to blocking states in which some project team members must wait for other members of the team to complete dependent tasks. In fact, the time spent waiting can exceed the time spent on productive work. The blocking state tends to be more prevalent at the beginning and end of a linear sequential process. Today, software work is fast-paced and subject to a never-ending stream of changes (to features, functions and information content). The waterfall model is often inappropriate for such work. However, it can serve as a useful process model in situations where requirements are fixed and work is to proceed to completion in a linear manner.
Incremental development is particularly useful when staffing is unavailable for a complete implementation by the business deadline that has been established for the project. Early increments can be implemented with fewer people. If the core product is well received, additional staff (if required) can be added to implement the next increment. In addition, increments can be planned to manage technical risks.
1.4.1 PROTOTYPING.
Often, a customer defines a set of general objectives for software, but does not identify detailed input, processing, or output requirements. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human-machine interaction should take. In these, and many other situations, a prototyping paradigm may offer the best approach. Although prototyping can be used as a standalone process model, it is more commonly used as a technique that can be implemented within the context of any one of the process models. Regardless of the manner in which it is applied, the prototyping paradigm assists the software engineer and the customer to better understand what is to be built when requirements are fuzzy. The prototyping paradigm begins with communication. The software engineer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory. A prototyping iteration is planned quickly and modeling (in the form of a quick design) occurs. The quick design focuses on a representation of those aspects of the software that will be visible to the customer or end-user. The quick design leads to the construction of a prototype. The prototype is deployed and then evaluated by the customer or user. Feedback is used to refine requirements for the software. Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done. Ideally, the prototype serves as a mechanism for identifying software requirements. If a working prototype is built, the developer attempts to make use of existing program fragments or applies tools that enable working programs to be generated quickly. In most projects, the first system built is barely usable. It may be too slow, too big, awkward in use or all three. 
There is no alternative but to start again and build a redesigned version in which these problems are solved. When a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. The prototype can serve as the first system, the one we throw away. But this may be an idealized view. It is true that both customers and developers like the prototyping paradigm. Users get a feel for the actual system and developers get to build something immediately. Yet, prototyping can be problematic for the following reasons:
o The customer sees what appears to be a working version of the software, unaware that the prototype is held together with string, unaware that in the rush to get it working we haven't considered overall software quality or long-term maintainability. When informed that the product must be rebuilt so that high levels of quality can be maintained, the customer cries foul and demands that a few fixes be applied to make the prototype a working product.
o The developer often makes implementation compromises in order to get a prototype working quickly. After a time, the developer may become comfortable with these choices and forget all the reasons why they were inappropriate. The less-than-ideal choice has now become an integral part of the system.
Although problems can occur, prototyping can be an effective paradigm for software engineering. The key is to define the rules of the game at the beginning; that is, the customer and developer must both agree that the prototype is built to serve as a mechanism for defining requirements. It is then discarded (at least in part), and the actual software is engineered with an eye toward quality.
The spiral model is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the waterfall model. It provides the potential for rapid development of increasingly more complete versions of the software. The spiral development model is a risk-driven process model generator that is used to guide multi-stakeholder concurrent engineering of software-intensive systems. It has two main distinguishing features. One is a cyclic approach for incrementally growing a system's degree of definition and implementation while decreasing its degree of risk. The other is a set of anchor point milestones for ensuring stakeholder commitment to feasible and mutually satisfactory system solutions. Using the spiral model, software is developed in a series of evolutionary releases. During early iterations, the release might be a paper model or prototype. During later iterations, increasingly more complete versions of the engineered system are produced. A spiral model is divided into a set of framework activities defined by the software engineering team. Each of the framework activities represents one segment of the spiral path. As this evolutionary process begins, the software team performs activities that are implied by a circuit around the spiral in a clockwise direction, beginning at the center. Risk is considered as each revolution is made. Anchor point milestones are a combination of work products and conditions that are attained along the path of the spiral and are noted for each evolutionary pass. The first circuit around the path of the spiral might result in the development of a product specification; subsequent passes around the spiral might be used to develop a prototype and then progressively more sophisticated versions of the software. Each pass through the planning region results in adjustments to the project plan.
Cost and schedule are adjusted based on feedback derived from the customer after delivery. In addition, the project manager adjusts the planned number of iterations required to complete the software. Unlike other process models that end when software is delivered, the spiral model can be adapted to apply throughout the life span of the computer software. The spiral model is a realistic approach to the development of large-scale systems and software. Because software evolves as the process progresses, the developer and customer better understand and react to risk at each evolutionary level. The spiral model uses prototyping as a risk reduction mechanism but, more importantly, enables the developer to apply the prototyping approach at any stage in the evolution of the product. It maintains the systematic stepwise approach suggested by the classic life cycle but incorporates it into an iterative framework that more realistically reflects the real world. The spiral model demands a direct consideration of technical risk at all stages of the project and, if properly applied, should reduce risks before they become problematic. But like other paradigms, the spiral model is not a panacea. Problems include:
o It may be difficult to convince customers (particularly in contract situations) that the evolutionary approach is controllable.
o It demands considerable risk assessment expertise and relies on this expertise for success. If a major risk is not uncovered and managed, problems will occur.
The concurrent process model defines a series of events that will trigger transitions from state to state for each of the software engineering activities, actions, or tasks. The concurrent process model is applicable to all types of software development and provides an accurate picture of the current state of a project. Rather than confining software engineering activities, actions, and tasks to a sequence of events, it defines a process network in which activities exist concurrently; events generated at one point in the process network trigger transitions among the states.
Component integration issues are considered. A software architecture is designed to accommodate the components. Components are integrated into the architecture. Comprehensive testing is conducted to ensure proper functionality.
The component-based development model leads to software reuse, and reusability provides software engineers with a number of measurable benefits. Based on studies of reusability, component-based development can lead to a reduction in development cycle time, a reduction in project cost, and an increase in productivity. Although these results are a function of the robustness of the component library, there is little question that the component-based development model provides significant advantages for software engineers.
A distinct aspect-oriented process has not yet matured. However, it is likely that such a process will adopt characteristics of both the spiral and concurrent process models. The evolutionary nature of the spiral is appropriate as aspects are identified and then constructed. The parallel nature of concurrent development is essential because aspects are engineered independently of localized software components and yet, aspects have a direct impact on these components.
The construction phase of the unified process is identical to the construction activity defined for the generic software process. Using the architectural model as input, the construction phase develops or acquires the software components that will make each use-case operational for end-users. To accomplish this, analysis and design models that were started during the elaboration phase are completed to reflect the final version of the software increment. All necessary and required features and functions of the software increment are then implemented in source code. As components are being implemented, unit tests are designed and executed for each. In addition, integration activities (component assembly and integration testing) are conducted. Use-cases are used to derive a suite of acceptance tests that are executed prior to the initiation of the next unified process phase. The transition phase of the unified process encompasses the latter stages of the generic construction activity and the first part of the generic deployment activity. Software is given to end-users for beta testing, and user feedback reports both defects and necessary changes. In addition, the software team creates the necessary support information that is required for the release. At the conclusion of the transition phase, the software increment becomes a usable software release. The production phase of the unified process coincides with the deployment activity of the generic process. During this phase, the on-going use of the software is monitored, support for the operating environment is provided, and defect reports and requests for changes are submitted and evaluated. It is likely that at the same time the construction, transition and production phases are being conducted, work may have already begun on the next software increment.
This means that the five unified process phases do not occur in a sequence, but rather in a staggered concurrency. A software engineering workflow is distributed across all unified process phases. In the context of the unified process, a workflow is analogous to a task set. That is, a workflow identifies the tasks required to accomplish an important software engineering action and the work products that are produced as a consequence of successfully completing the tasks. It should be noted that not every task identified for a unified process workflow is conducted for every software project. The team adapts the process to meet its needs.
representation of the software architecture. In addition, the elaboration phase revisits risks and the project plan to ensure that each remains valid. The construction phase produces an implementation model that translates design classes into software components that will be built to realize the system, and a deployment model that maps components into the physical computing environment. Finally, a test model describes tests that are used to ensure that use-cases are properly reflected in the software that has been constructed. The transition phase delivers the software increment and assesses work products that are produced as end-users work with the software. Feedback from beta testing and qualitative requests for change are produced at this time.
5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity, the art of maximizing the amount of work not done, is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
Agility can be applied to any software process. However, to accomplish this, it is essential that the process be designed in a way that allows the project team to adapt tasks and to streamline them, conduct planning in a way that understands the changeability of an agile development approach, eliminate all but the most essential work products and keep them lean, and emphasize an incremental delivery strategy that gets working software to the customer as rapidly as feasible for the product type and operational environment.
Extreme programming uses an object-oriented approach as its preferred development paradigm. Extreme programming encompasses a set of rules and practices that occur within the context of four framework activities:
1. Planning. Stories describe required features and functionality of the software. Each story is written by the customer and placed on an index card. The customer assigns a value or priority to the story. Members of the XP team assess each story and assign a cost, measured in development weeks. If the story requires more than 3 weeks, the customer must split the story into smaller ones. Customers and the XP team decide how to group stories into the next release or the next increment. Once a commitment is made for a release, the team orders the stories that will be developed in one of three ways:
o All stories will be implemented immediately.
o The stories with the highest value will be implemented first.
o The riskiest stories will be implemented first.
After the 1st release, the team computes project velocity, the number of customer stories implemented during the 1st release. Project velocity can then be used to:
o Help estimate delivery dates and schedules for other releases.
o Determine whether an over-commitment has been made for all stories across the entire development project.
As work proceeds, the customer can add, change, split, or eliminate stories, and the team modifies its plans accordingly.
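The project-velocity arithmetic described above is simple enough to sketch. The following is a minimal illustration; the story counts and week figures are hypothetical, not taken from any real release:

```python
# Project velocity in XP: stories implemented during the first release,
# used to make rough estimates for later releases. Numbers are illustrative.

def project_velocity(stories_done: int, weeks_in_release: int) -> float:
    """Customer stories implemented per week during the first release."""
    return stories_done / weeks_in_release

def estimate_weeks(remaining_stories: int, velocity: float) -> float:
    """Rough schedule estimate for the stories still on the plan."""
    return remaining_stories / velocity

# Suppose 8 stories were finished in a 4-week first release.
v = project_velocity(stories_done=8, weeks_in_release=4)    # 2.0 stories/week
print(estimate_weeks(remaining_stories=10, velocity=v))     # 5.0 weeks
```

If the estimate exceeds the time available, the team has evidence of an over-commitment and can renegotiate the story grouping with the customer.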
2. Design. A simple design is always preferred over a complex one. The design provides implementation guidance for a story. The design of extra functionality is discouraged. CRC cards identify and organize the object-oriented classes that are relevant. If a difficult design problem is encountered, create a prototype; a spike solution is the prototype that is implemented and evaluated. The only work products are CRC cards and spike solutions. Refactoring is improving the code design after it has been written. Design occurs both before and after coding commences.
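Refactoring, as noted above, improves the design of code after it has been written without changing its behavior. A minimal, hypothetical before/after sketch:

```python
# Before refactoring: terse names, logic buried in a loop (hypothetical code).
def total(items):
    t = 0
    for i in items:
        t += i[0] * i[1]
    return t

# After refactoring: same behavior, but the design communicates intent.
def line_total(price: float, quantity: int) -> float:
    return price * quantity

def order_total(lines):
    """Sum of price * quantity for each (price, quantity) line."""
    return sum(line_total(price, qty) for price, qty in lines)

# Behavior is preserved: both versions agree on the same input.
assert total([(2.0, 3), (1.5, 2)]) == order_total([(2.0, 3), (1.5, 2)])
```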
3. Coding. Develop unit tests to exercise each story before coding. When the code is complete, it can be unit tested immediately. Pair programming is having two people work together at one computer to create code for a story, for quality assurance. As pair programmers complete their work, their code is integrated with the work of others to uncover errors early. This continuous integration provides a smoke testing environment.
4. Testing. Tests should be automated so that they can be executed easily and repeatedly. A regression testing strategy is applied whenever code is modified or refactored. As the individual unit tests are organized into a universal testing suite, integration and validation testing of the system can occur on a daily basis. This provides the team with a continual indication of progress. Acceptance tests, or customer tests, are specified by the customer and focus on overall features that are visible to the customer.
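The test-first discipline in the coding activity (write the unit test for a story, then the code that satisfies it) can be sketched as follows. The story, function, and figures are hypothetical:

```python
# Test-first sketch: the unit test below is conceptually written before
# apply_discount exists, so the code can be unit tested the moment it is
# complete. Story and names are illustrative assumptions.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Story: 'a customer with a voucher pays a discounted price'."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountStoryTest(unittest.TestCase):
    # Automated, so it can be executed easily and repeatedly (regression).
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(50.0, 10), 45.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run the whole suite with: python -m unittest <module name>
```

Organizing such tests into one suite is what makes the daily integration and validation runs that XP calls for practical.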
2. Collaboration. Motivated people work together in a way that multiplies their talent and creative output beyond their absolute numbers. People working together must trust one another to:
o Criticize without animosity.
o Assist without resentment.
o Work as hard as or harder than they do.
o Have the skill set to contribute to the work at hand.
o Communicate problems in ways that lead to effective action.
3. Learning. ASD teams learn in 3 ways:
o Focus groups, where the customer or users provide feedback on software increments.
o Formal technical reviews, where ASD team members review the software components developed.
o Postmortems, where the team becomes introspective, addressing its own performance.
2. Business study. Establishes the functional and information requirements. Defines the basic application architecture and identifies the maintainability requirements for the application.
3. Functional model iteration. Produces a set of incremental prototypes for demonstration. All DSDM prototypes are intended to evolve into deliverables. Additional requirements are gathered by eliciting feedback from users as they exercise the prototype.
4. Design and build iteration. Revisits prototypes built during the functional model iteration to ensure that each will provide operational business value. The functional model iteration and the design and build iteration can occur concurrently.
5. Implementation. The latest software increment is placed into operation. Note that the increment may not be 100% complete, and changes may be requested as the increment is put into place. DSDM work continues by returning to the functional model iteration.
1.9.4 SCRUM.
Scrum is an agile process model whose principles are consistent with the agile manifesto. Small working teams are organized to maximize communication, minimize overhead, and maximize sharing of unspoken, informal knowledge. The process must be adaptable to both technical and business changes. Frequent software increments. Work and people are partitioned. Constant testing and documentation. A product is declared done whenever required. Scrum principles are used to guide development activities within a process that incorporates the following framework activities: requirements, analysis, design, evolution and delivery. Software process patterns define some development activities:
o Backlog is a prioritized list of project requirements or features. Items can be added to the backlog at any time.
o Sprints are work units that are required to achieve a requirement defined in the backlog and that must fit into a time-box. During the sprint, the backlog items it uses are frozen.
o Scrum meetings are short daily meetings where these key questions are asked:
1. What did you do since the last team meeting?
2. What obstacles are you encountering?
3. What do you plan to accomplish by the next meeting?
A scrum master assesses the responses from each person. The meetings help the team uncover potential problems early. These daily meetings lead to knowledge socialization.
o Demos are an increment delivered to the customer so that functionality can be demonstrated and evaluated. The demo may not contain all planned functionality, but only the functions delivered within the established time-box.
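The backlog and sprint time-box described above can be sketched as a small data structure. The field names and backlog items below are illustrative assumptions, not prescribed by Scrum:

```python
# Minimal sketch of a prioritized backlog and a frozen sprint selection.
from dataclasses import dataclass, field

@dataclass(order=True)
class BacklogItem:
    priority: int                              # lower number = higher priority
    description: str = field(compare=False)    # not used when sorting

backlog = [
    BacklogItem(2, "export report as PDF"),
    BacklogItem(1, "user login"),
]
# Items can be added to the backlog at any time.
backlog.append(BacklogItem(3, "email notifications"))

# At sprint planning, the highest-priority items that fit the time-box are
# selected; a tuple models the rule that they are frozen during the sprint.
sprint_backlog = tuple(sorted(backlog)[:2])
print([item.description for item in sprint_backlog])
# ['user login', 'export report as PDF']
```

New requests go onto the backlog, not into the running sprint, which is what keeps the time-box stable.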
1.9.5 CRYSTAL.
Crystal is an approach that puts a premium on maneuverability. Its primary goal is delivering useful, working software; the secondary goal is setting up for the next game. To achieve maneuverability, there is a set of methodologies to choose from, which all have core elements in common. Reflection workshops are conducted before, during, and after an increment is delivered. Crystal is called a family of agile methods because the Crystal approach emphasizes collaboration and communication among people who have varying degrees of interest in the software project. The method is also tolerant of varying team cultures and can accommodate both informal and formal approaches. The Crystal family is actually a set of sample agile processes that have been proved effective for different types of projects. The intent is to allow agile teams to select the member of the Crystal family that is most appropriate for their project.
o Because features are small, users can describe them more easily and better review them, and their design and code are easier to inspect effectively.
o Features can be organized into a hierarchy.
o The team develops operational features every two weeks.
o Project planning, scheduling, and tracking are driven by the feature hierarchy, rather than an arbitrarily adopted software engineering task set.
Five framework activities or processes:
1. Develop an overall model.
2. Build a features list.
3. Plan by feature.
4. Design by feature.
5. Build by feature.
FDD provides greater emphasis on project management guidelines and techniques than many other agile methods. To determine if software increments are properly scheduled, FDD defines six milestones:
1. Design walkthrough.
2. Design.
3. Design inspection.
4. Code.
5. Code inspection.
6. Promote to build.
6. Strive for collaboration. Collaboration and consensus occur when the collective knowledge of members of the team is combined to describe product or system functions or features. Each small collaboration serves to build trust among team members and creates a common goal for the team.
7. Stay focused; modularize your discussion. The more people involved in any communication, the more likely that discussion will bounce from one topic to the next. The facilitator should keep the conversation modular, leaving one topic only after it has been resolved.
8. If something is unclear, draw a picture. Verbal communication goes only so far. A sketch or drawing can often provide clarity when words fail to do the job.
9. Once you agree to something, move on; if you can't agree to something, move on; if a feature or function is unclear and cannot be clarified at the moment, move on. Communication, like any software engineering activity, takes time. Rather than iterating endlessly, the people who participate should recognize that many topics require discussion and that moving on is sometimes the best way to achieve communication agility.
10. Negotiation is not a contest or a game. It works best when both parties win. There are many instances in which the software engineer and the customer must negotiate functions and features, priorities, and delivery dates. If the team has collaborated well, the parties have a common goal. Therefore, negotiations will demand compromise from all parties.
7. Adjust granularity as you define the plan. Granularity refers to the level of detail that is introduced as a project plan is developed. A fine-granularity plan provides significant work task detail that is planned over relatively short time increments. A coarse-granularity plan provides broader work tasks that are planned over longer time periods. In general, granularity moves from fine to coarse as the project timeline moves away from the current date. Over the next few weeks or months, the project can be planned in significant detail; activities that won't occur for many months do not require fine granularity.
8. Define how you intend to ensure quality. The plan should identify how the software team intends to ensure quality. If formal technical reviews are to be conducted, they should be scheduled. If pair programming is to be used during construction, it should be explicitly defined within the plan.
9. Describe how you intend to accommodate change. Even the best planning can be delayed by uncontrolled change. The software team should identify how changes are to be accommodated as software engineering work proceeds.
10. Track the plan frequently and make adjustments as required. Software projects fall behind schedule one day at a time. Therefore, it makes sense to track progress on a daily basis, looking for problem areas and situations in which scheduled work does not conform to actual work conducted. When slippage is encountered, the plan is adjusted accordingly.
Input provided by end-users, control data provided by an external system, or monitoring data collected over a network all cause the software to behave in a specific way.
4. The models that depict information, function, and behavior must be partitioned in a manner that uncovers detail in a layered fashion. Analysis modeling is the first step in software engineering problem solving. It allows the practitioner to better understand the problem and establishes a basis for the solution. Complex problems are difficult to solve in their entirety. For this reason, we use a divide-and-conquer strategy: a large, complex problem is divided into sub-problems until each sub-problem is relatively easy to understand. This concept is called partitioning, and it is a key strategy in analysis modeling.
5. The analysis task should move from essential information toward implementation detail. Analysis modeling begins by describing the problem from the end-user's perspective. The essence of the problem is described without any consideration of how a solution will be implemented. Implementation detail indicates how the essence will be implemented.
propagation also increases and the overall maintainability of the software decreases. Therefore, component coupling should be kept as low as is reasonable. 8. Design representations should be easily understandable. The purpose of design is to communicate information to practitioners who will generate code, to those who will test the software, and to others who may maintain the software in the future. If the design is difficult to understand, it will not serve as an effective communication medium. 9. The design should be developed iteratively. With each iteration, the designer should strive for greater simplicity. Like almost all creative activities, design occurs iteratively. The first iterations work to refine the design and correct errors, but the later iterations should strive to make the design as simple as is possible.
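The coupling principle (keep component coupling as low as is reasonable) can be illustrated with a small, hypothetical sketch: the tightly coupled report reaches into another component's internals, while the loosely coupled one depends only on a narrow interface it is handed.

```python
# Illustrative only; the class names are invented for this sketch.

class Database:
    def __init__(self):
        self._rows = {"alice": 3}   # internal storage detail

# Tight coupling: the report depends on Database's private structure,
# so any change to that structure propagates into this component.
class ReportHigh:
    def count_for(self, db: Database, user: str) -> int:
        return db._rows.get(user, 0)

# Loose coupling: the report depends only on a callable it is given;
# the storage can change without touching this component.
class ReportLow:
    def __init__(self, lookup):
        self._lookup = lookup       # any function: user -> count

    def count_for(self, user: str) -> int:
        return self._lookup(user)

db = Database()
report = ReportLow(lambda user: db._rows.get(user, 0))
print(report.count_for("alice"))   # 3
```

The knowledge of the storage format is confined to the one lambda at the composition point, which is what keeps change propagation, and hence maintenance cost, low.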
5. Create nested loops in a way that makes them easily testable.
6. Select meaningful variable names and follow other local coding standards.
7. Write code that is self-documenting.
8. Create a visual layout (for example, indentation and blank lines) that aids understanding.
3. Validation principles: After you've completed your first coding pass, be sure you:
1. Conduct a code walkthrough when appropriate.
2. Perform unit tests and correct errors you have uncovered.
3. Refactor the code.
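The naming and layout principles above (meaningful variable names, self-documenting code, a visual layout that aids understanding) can be illustrated with a hypothetical before/after pair:

```python
# Before: cryptic names force the reader to reverse-engineer the intent.
def f(a, b):
    return [x for x in a if x > b]

# After: the names and layout document the intent on their own.
def filter_scores_above_threshold(scores, threshold):
    passing_scores = [score for score in scores
                      if score > threshold]
    return passing_scores

print(filter_scores_above_threshold([40, 65, 80], 50))   # [65, 80]
```

Both functions behave identically; only the second communicates what it is for without a comment.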
relevant information must be assembled and thoroughly beta-tested with actual users. All installation scripts and other operational features should be thoroughly exercised in all possible computing configurations.
3. A support regime must be established before the software is delivered. An end-user expects responsiveness and accurate information when a question or problem arises. If support is ad hoc, or worse, nonexistent, the customer will become dissatisfied immediately. Support should be planned, support material should be prepared, and appropriate record-keeping mechanisms should be established so that the software team can conduct a categorical assessment of the kinds of support requested.
4. Appropriate instructional materials must be provided to end-users. The software team delivers more than the software itself. Appropriate training aids should be developed, troubleshooting guidelines should be provided, and "what's different about this software increment" descriptions should be published.
5. Buggy software should be fixed first, delivered later. Under time pressure, some software organizations deliver low-quality increments with a warning to the customer that bugs will be fixed in the next release. Customers will forgive late delivery of a high-quality product, but not the problems caused by delivery of a low-quality product.
Requirements engineering builds a bridge to design and construction. Some argue that it begins at the feet of the project stakeholders, where business need is defined, user scenarios are described, functions and features are delineated, and project constraints are identified. Others might suggest that it begins with a broader system definition, where software is but one component of the larger system domain. But regardless of the starting point, the journey across the bridge takes us high above the project, allowing the software team to examine the context of the software work to be performed; the specific needs that design and construction must address; the priorities that guide the order in which work is to be completed; and the information, functions and behaviors that will have a profound impact on the resultant design.
Some of these requirements engineering functions occur in parallel and all are adapted to the needs of the project. All strive to define what the customer wants, and all serve to establish a solid foundation for the design and construction of what the customer gets.
1.17.1 INCEPTION.
In some cases, a casual conversation is all that is needed to precipitate a major software engineering effort. But in general, most projects begin when a business need is identified or a potential new market or service is discovered. Stakeholders from the business community define a business case for the idea, try to identify the breadth and depth of the market, do a rough feasibility analysis, and identify a working description of the project's scope. All of this information is subject to change, but it is sufficient to precipitate discussions with the software engineering organization. At project inception software engineers ask a set of context-free questions. The intent is to establish a basic understanding of the problem, the people who want a solution, the nature of the solution that is desired, and the effectiveness of preliminary communication and collaboration between the customer and the developer.
1.17.2 ELICITATION.
It is not as simple as asking the customer, the users, and others what the objectives for the system or product are, what is to be accomplished, how the system or product fits into the needs of the business, and finally, how the system or product is to be used on a day-to-day basis. A number of problems explain why requirements elicitation is difficult:
Problems of scope. The boundary of the system is ill-defined, or the customers or users specify unnecessary technical detail that may confuse, rather than clarify, overall system objectives.
Problems of understanding. The customers or users are not completely sure of what is needed, have a poor understanding of the capabilities and limitations of their computing environment, don't have a full understanding of the problem domain, have trouble communicating needs to the system engineer, omit information that is believed to be obvious, specify requirements that conflict with the needs of other customers or users, or specify requirements that are ambiguous or untestable.
Problems of volatility. The requirements change over time.
To help overcome these problems, requirements engineers must approach the requirements gathering activity in an organized manner.
1.17.3 ELABORATION.
The information obtained from the customers during inception and elicitation is expanded and refined during elaboration. This requirements engineering activity focuses on developing a refined technical model of software functions, features, and constraints. Elaboration is an analysis modeling action that is composed of a number of modeling and refinement tasks. Elaboration is driven by the creation and refinement of user scenarios that describe how the end-user interacts with the system. Each user scenario is parsed to extract analysis classes, business domain entities that are visible to the end-user. The attributes of each analysis class are defined and the services that are required by each class are identified. The relationships and collaboration between classes are identified and a variety of supplementary UML diagrams are produced. The end result of the elaboration is an analysis model that defines the informational, functional and behavioral domain of the problem.
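As a sketch of what elaboration produces, consider a hypothetical user scenario such as "the homeowner arms the security system." The class name, attributes, and services below are illustrative assumptions, not taken from the text; they show how a parsed scenario yields an analysis class with defined attributes and required services:

```python
# Hypothetical analysis class extracted from the scenario
# "the homeowner arms the security system" (all names are illustrative).

class SecuritySystem:
    """Analysis class: a business-domain entity visible to the end-user."""

    def __init__(self):
        # Attributes identified during elaboration
        self.armed = False
        self.master_password = "1234"   # placeholder value for the sketch

    # Services (operations) required by the class
    def arm(self, password):
        """Arm the system if the password matches; return the armed state."""
        if password == self.master_password:
            self.armed = True
        return self.armed

    def disarm(self, password):
        """Disarm the system if the password matches; return the armed state."""
        if password == self.master_password:
            self.armed = False
        return self.armed
```

In a full analysis model, collaborations between such classes (e.g. with sensor or zone classes) would then be captured in UML class and sequence diagrams.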
1.17.4 NEGOTIATION.
The requirements engineer must reconcile conflicts through a process of negotiation. Customers, users, and other stakeholders are asked to rank requirements and then discuss conflicts in priority. Risks associated with each requirement are identified and analyzed. Rough guesstimates of development effort are made and used to assess the impact of each requirement on project cost and delivery time. Using an iterative approach, requirements are eliminated, combined, or modified so that each party achieves some measure of satisfaction.
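One minimal way to sketch this negotiation step: attach stakeholder rankings and rough effort guesstimates to each requirement, then let a value-to-cost ratio guide which requirements fit an assumed effort budget. The requirement names, rankings, and budget below are hypothetical:

```python
# Hypothetical requirements with stakeholder ranking (1-10, higher = more
# valued) and a rough effort guesstimate in person-days.
requirements = [
    ("user login",      9, 5),
    ("report export",   4, 8),
    ("audit logging",   7, 3),
    ("animated splash", 2, 6),
]

budget_days = 10  # assumed delivery-time constraint for this sketch

# Order by value per unit of effort, then keep what fits the budget.
by_ratio = sorted(requirements, key=lambda r: r[1] / r[2], reverse=True)

selected, spent = [], 0
for name, rank, effort in by_ratio:
    if spent + effort <= budget_days:
        selected.append(name)
        spent += effort
```

Real negotiation is iterative and qualitative rather than a single formula, but a calculation like this makes the cost/value trade-off behind "eliminated, combined, or modified" concrete.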
1.17.5 SPECIFICATION.
In the context of computer-based systems, the term specification means different things to different people. A specification can be a written document, a set of graphical models, a formal mathematical model, a collection of usage scenarios, a prototype, or any combination of these. The specification is the final work product produced by the requirements engineer. It serves as the foundation for subsequent software engineering activities. It describes the function and performance of a computer-based system and the constraints that will govern its development.
1.17.6 VALIDATION.
The work products produced as a consequence of requirements engineering are assessed for quality during a validation step. Requirements validation examines the specification to ensure that all software requirements have been stated unambiguously; that inconsistencies, omissions, and errors have been detected and corrected; and that the work products conform to the standards established for the process, the project, and the product.
The primary requirements validation mechanism is the formal technical review. The review team that validates requirements includes software engineers, customers, users, and other stakeholders who examine the specification looking for errors in content or interpretation, areas where clarification may be required, missing information, inconsistencies, conflicting requirements, or unrealistic requirements.
In many cases, these traceability tables are maintained as part of a requirements database so that they can be quickly searched to understand how a change in one requirement will affect different aspects of the system to be built.
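A traceability table of this kind can be sketched as a simple searchable mapping from each requirement to the system aspects that depend on it. The requirement IDs and aspect names below are hypothetical placeholders:

```python
# Minimal sketch of a requirements traceability table kept as a searchable
# structure: each requirement maps to the system aspects (hypothetical
# names) that depend on it, so change impact can be queried quickly.

traceability = {
    "REQ-01": {"login screen", "session manager"},
    "REQ-02": {"report generator", "export module"},
    "REQ-03": {"session manager", "audit log"},
}

def impacted_by(requirement):
    """Return the system aspects affected if this requirement changes."""
    return sorted(traceability.get(requirement, set()))

def requirements_touching(aspect):
    """Reverse query: which requirements constrain a given system aspect?"""
    return sorted(r for r, aspects in traceability.items() if aspect in aspects)
```

The reverse query is what makes the database form valuable: when one requirement changes, both directions of the dependency can be inspected before the change is accepted.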
These questions help to identify all stakeholders who will have an interest in the software to be built. In addition, the questions identify the measurable benefit of a successful implementation and possible alternatives to custom software development. The next set of questions enables the software team to gain a better understanding of the problem and allows the customer to voice his or her perceptions about a solution:
How would you characterize good output that would be generated by a successful solution?
What problems will this solution address?
Can you show me the business environment in which the solution will be used?
Will special performance issues or constraints affect the way the solution is approached?
The final set of questions focuses on the effectiveness of the communication activity itself:
Are you the right person to answer these questions? Are your answers official?
Are my questions relevant to the problem that you have?
Am I asking too many questions?
Can anyone else provide additional information?
Should I be asking you anything else?
These questions will help to break the ice and initiate the communication that is essential to successful elicitation.
An agenda is suggested that is formal enough to cover all important points but informal enough to encourage the free flow of ideas. A facilitator controls the meeting. A definition mechanism is used. The goal is to identify the problem, propose elements of the solution, negotiate different approaches, and specify a preliminary set of solution requirements in an atmosphere that is conducive to the accomplishment of the goal.
Each of these work products is reviewed by all people who have participated in requirements elicitation.
require. For that reason, the analysis model is a snapshot of requirements at any given time. We expect it to change. As the analysis model evolves, certain elements will become relatively stable, providing a solid foundation for the design tasks that follow. However, other elements of the model may be more volatile, indicating the customer does not yet fully understand requirements for the system.
on a network link, or a large data file retrieved from secondary storage. The transforms may comprise a single logical comparison, a complex numerical algorithm, or the rule-inference approach of an expert system. Output may light a single LED or produce a 200-page report. In effect, we can create a flow model for any computer-based system, regardless of size and complexity.
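The input-transform-output view described above can be sketched in a few lines. The function names and the comma-separated input format are illustrative assumptions, chosen only to show the three stages of a flow model:

```python
# Sketch of the input -> transform -> output view of a flow model
# (function names and data format are illustrative only).

def read_input(raw):
    """Input stage: e.g. keystrokes, a network packet, or a stored record."""
    return raw.strip().split(",")

def transform(fields):
    """Transform stage: here, a single logical comparison per field."""
    return [f for f in fields if f.isdigit()]

def write_output(values):
    """Output stage: could just as well light an LED or fill a report."""
    return "valid numeric fields: " + ", ".join(values)

result = write_output(transform(read_input(" 12,abc,7 ")))
```

However trivial the stages, the same three-part shape scales up: any computer-based system can be decomposed into flows of data entering, being transformed, and leaving.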
Analysis patterns are integrated into the analysis model by reference to the pattern name. They are also stored in a repository so that requirements engineers can use search facilities to find and reuse them. Information about an analysis pattern is presented in a standard template that takes the form:
Pattern name: A descriptor that captures the essence of the pattern. The descriptor is used within the analysis model when reference is made to the pattern.
Intent: Describes what the pattern accomplishes or represents and/or what problem is addressed within the context of an application domain.
Motivation: A scenario that illustrates how the pattern can be used to address the problem.
Forces and context: A description of the external issues that can affect how the pattern is used and also the external issues that will be resolved when the pattern is applied. External issues can encompass business-related subjects, external technical constraints, and people-related matters.
Solution: A description of how the pattern is applied to solve the problem, with an emphasis on structural and behavioral issues.
Consequences: Addresses what happens when the pattern is applied and what trade-offs exist during its application.
Design: Discusses how the analysis pattern can be achieved through the use of known design patterns.
Known uses: Examples of uses within actual systems.
Related patterns: One or more analysis patterns that are related to the named pattern because:
1. The analysis pattern is commonly used with the named pattern.
2. The analysis pattern is structurally similar to the named pattern.
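The standard template above can be captured as a simple record and indexed by pattern name, matching the repository-and-search idea in the text. The example pattern and all field values below are hypothetical illustrations:

```python
# Sketch of the analysis-pattern template as a storable record.
# The example pattern and its field values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AnalysisPattern:
    pattern_name: str
    intent: str
    motivation: str
    forces_and_context: str
    solution: str
    consequences: str
    design: str
    known_uses: list
    related_patterns: list = field(default_factory=list)

repository = {}

def store(pattern):
    """Index the pattern by name so engineers can search and reuse it."""
    repository[pattern.pattern_name] = pattern

store(AnalysisPattern(
    pattern_name="Actuator-Sensor",
    intent="Represent sensors and actuators within a control system",
    motivation="A scenario in which a device reads inputs and reacts",
    forces_and_context="Hardware constraints; operator expectations",
    solution="Model each device as a class collaborating with a controller",
    consequences="Uniform device handling at the cost of indirection",
    design="Realizable via known design patterns",
    known_uses=["home security system"],
))
```

Indexing by pattern name mirrors how the analysis model itself refers to a pattern: the name alone is the handle, and the repository resolves it to the full template.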
Has the requirements model been partitioned in a way that exposes progressively more detailed information about the system?
Have requirements patterns been used to simplify the requirements model?
Have all the patterns been properly validated?
Are all the patterns consistent with customer requirements?
These and other questions should be asked and answered to ensure that the requirements model is an accurate reflection of the customer's needs and that it provides a solid foundation for design.
models that depict user scenarios, functional activities, problem classes and their relationships, system and class behavior, and the flow of data as it is transformed. Requirements analysis provides the software designer with a representation of information, function, and behavior that can be translated to architectural, interface, and component-level designs. Finally, the analysis model and the requirements specification provide the developer and the customer with the means to assess quality once software is built. Throughout analysis modeling, the software engineer's primary focus is on what, not how. What objects does the system manipulate, what functions must the system perform, what behaviors does the system exhibit, what interfaces are defined, and what constraints apply? Complete specifications of requirements may not be possible at this stage. The customer may be unsure of precisely what is required. The developer may be unsure that a specific approach will properly accomplish function and performance. These realities argue in favor of an iterative approach to requirements analysis and modeling. The analyst should model what is known and use that model as the basis for design of the software increment.
Only those elements which add value to the model should be used.
The data object description incorporates the data object and all of its attributes; there is no reference to operations.
1.26.3 RELATIONSHIPS.
Relationships indicate how data objects are connected to one another.
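A small sketch of data objects and relationships, using hypothetical entities (a bookstore and the books it handles), shows the idea: each data object carries attributes only, and named relationships connect pairs of objects:

```python
# Hypothetical data objects (attributes only, no operations) and the
# named relationships that connect them.

data_objects = {
    "bookstore": {"name", "address"},
    "book":      {"title", "isbn", "price"},
}

# Each relationship is (source object, relationship name, target object).
relationships = [
    ("bookstore", "orders", "book"),
    ("bookstore", "stocks", "book"),
    ("bookstore", "returns", "book"),
]

def related(obj):
    """List the (relationship, target) pairs for a given data object."""
    return [(verb, b) for a, verb, b in relationships if a == obj]
```

In an entity-relationship diagram the same information would be drawn as two boxes joined by labeled lines; the point is that the connection itself, not any operation, is what the model records.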