Saturday, May 16, 2009

MI 16 – 01 SOFTWARE PROJECT MANAGEMENT

Q. 1. Discuss the various Software Team Organization.

Ans. There are almost as many human organizational structures for software development as there are organizations that develop software. For better or worse, organizational structure cannot be easily modified. The following options are available for applying human resources to a project that will require n people working for k years:

1. n individuals are assigned to m different functional tasks; co-ordination is the responsibility of a software manager, who may have other projects to be concerned with.

2. n individuals are assigned to m different functional tasks (m < n) so that informal “teams” are established; an ad hoc team leader may be appointed; co-ordination among teams is the responsibility of a software manager.

3. n individuals are organized into t teams; each team is assigned one or more functional tasks; each team has a specific structure that is defined for all teams working on the project; co-ordination is controlled by both the team and the software project manager.

The “best” team structure depends on the management style of the organization, the number of people who make up the team, their skill levels, and the overall difficulty of the problem. Mantei suggests three generic team organizations:

Democratic Decentralized (DD): This software engineering team has no permanent leader. Rather, task coordinators are appointed for short durations and are then replaced by others who co-ordinate different tasks. Decisions on problems and approach are made by group consensus.

Controlled Decentralized (CD): This software engineering team has a defined leader who co-ordinates specific tasks and secondary leaders who are responsible for the subtasks. Problem solving then remains a group activity, but the implementation of solutions is partitioned among subgroups by the team leader.

Controlled Centralized (CC): Top-level problem solving and internal team co-ordination are managed by a team leader. The communication between the leader and team members is vertical.

Mantei describes seven project factors that need to be considered when planning the structure of software engineering teams:

· Problem difficulty

· The size of the resultant program in lines of code.

· The lifetime of the team.

· The degree to which the problem can be modularized.

· The required quality and reliability of the system to be built.

· The product delivery date.

· The degree of sociability (communication) required for the project.

Decentralized teams generate faster and better solutions than individuals; therefore, such teams have a greater probability of success when working on difficult problems. Since the CD team is centralized for problem solving, either a CD or a CC team structure can be successfully applied to simple problems.

A DD team structure is best suited to difficult problems. The length of time that the team stays together also affects team morale: DD team structures have been found to result in high morale and job satisfaction, and they are therefore good for teams that will be together for a long time. Because of the higher volume of communication required, the DD team structure is best applied to problems with relatively low modularity.

CC and CD teams have been found to produce fewer defects than DD teams, but these data have much to do with the specific quality assurance activities applied by the team. The earliest software team organization was a controlled centralized (CC) structure originally called the chief programmer team. This structure was first proposed by Harlan Mills and described by Baker. The nucleus of the team is composed of a senior engineer (the chief programmer), who plans, co-ordinates and reviews all technical activities of the team; technical staff, who conduct analysis and development activities; and a backup engineer, who supports the senior engineer and can replace the senior engineer with minimum loss in project continuity.

The chief programmer may be served by one or more specialists, support staff and a software librarian. The librarian serves many teams and performs the following functions: maintains and controls all elements of the software configuration (i.e., documentation, source listings, data, storage media); helps collect and format software productivity data; catalogs and indexes reusable software components; and assists the team in research, evaluation, and document preparation. The librarian acts as a controller, coordinator, and potential evaluator of the software configuration. To achieve a high-performance team:

· Team members must trust one another.

· The distribution of skills must be appropriate to the problem.

Regardless of team organization, the objective of every project manager is to help create a team that exhibits cohesiveness. A jelled team is a group of people so strongly knit that the whole is greater than the sum of its parts. Once a team begins to jell, the probability of success goes way up. But not all teams jell. In fact, many teams suffer from what Jackman calls “team toxicity”. She defines five factors that foster a potentially toxic team environment:

· A frenzied work atmosphere in which team members waste energy and lose focus on the objectives of the work to be performed.

· High frustration caused by personal, business, or technological factors that cause friction among team members.

· Fragmented or poorly coordinated procedures or a poorly defined or improperly chosen process model that becomes a roadblock to accomplishment.

· Unclear definition of roles resulting in a lack of accountability.

· Continuous and repeated exposure to failure that leads to a loss of confidence and a lowering of morale.

Jackman suggests a number of antitoxins that address these all-too-common problems. To avoid a frenzied work environment, the project manager should be certain that the team has access to all of the information required to do the job, and that major goals, once defined, are not modified unless it is absolutely necessary to do so. A software team can avoid frustration if it is given as much responsibility for decision making as possible: the more control over process and technical decisions the team is given, the less frustration its members will feel. The software project manager, working together with the team, should clearly define roles and responsibilities before the project begins. The team itself should establish its own mechanisms for accountability and define a series of corrective approaches to be taken when a member of the team fails to perform.

Every software team experiences small failures. The key to avoiding an atmosphere of failure is to establish team-based techniques for feedback and problem solving. In addition, failure by any member of the team must be viewed as a failure by the team itself. This leads to a team-oriented approach to corrective action, rather than the finger-pointing and mistrust that grow rapidly on toxic teams. In addition to the five toxins described by Jackman, a software team often struggles with the differing human traits of its members.

Q. 2. Briefly explain Problem Decomposition.

Ans. Problem decomposition, sometimes known as partitioning or problem elaboration, is an activity that sits at the core of software requirements analysis. During the scoping activity no attempt is made to fully decompose the problem. Rather, decomposition is applied in two major areas:

· Functionality that must be delivered.

· Process that will be used to deliver it.

A complex problem is partitioned into smaller problems that are more manageable. Since both cost and schedule estimates are functionally oriented, some degree of decomposition is often useful.

As an example, consider a project that will build a new word-processing product. Among the unique features of the product are continuous voice as well as keyboard input, extremely sophisticated automatic copy edit features, page layout capability, automatic indexing and a table of contents, and others. The project manager must first establish a statement of scope that bounds these features. For example, will continuous voice input require that the product be “trained” by the user? Specifically, what capabilities will the copy edit feature provide? Just how sophisticated will the page layout capability be?

As the statement of scope evolves, a first level of partitioning naturally occurs. The project team learns that the marketing department has talked with potential customers and found that the following functions should be part of automatic copy editing: spell checking, sentence grammar checking, reference checking for large documents, and section and chapter reference validation for large documents. Each of these features represents a sub-function to be implemented in software.
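To make this concrete, the sketch below (in Python, purely for illustration) captures the first-level partitioning as a simple data structure. The copy-editing sub-functions come from the paragraph above; the other feature groups are left empty because their decomposition has not yet been elaborated.

# Illustrative sketch of first-level problem decomposition. Only the
# copy-editing sub-functions are taken from the scope statement above;
# empty lists mark features whose partitioning is still unknown.
scope = {
    "automatic copy editing": [
        "spell checking",
        "sentence grammar checking",
        "reference checking for large documents",
        "section and chapter reference validation for large documents",
    ],
    "continuous voice input": [],
    "page layout capability": [],
    "automatic indexing and table of contents": [],
}

# Each leaf is a sub-function that can later be sized and estimated
# individually (see the sizing techniques in Q. 4).
for feature, subfunctions in scope.items():
    print(feature)
    for sub in subfunctions:
        print("  -", sub)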

Q. 3. Explain Extended Function Point Metrics.

Ans. The function point measure was designed to be applied to business information systems. To accommodate these applications, the data dimension was emphasized to the exclusion of the functional and behavioral dimensions. For this reason, the function point measure was inadequate for many engineering and embedded systems.

A function point extension called feature points is a superset of the function point measure that can be applied to systems and engineering software applications. The feature point measure accommodates applications in which algorithmic complexity is high. To compute the feature point, information domain values are again counted and weighted as described in the previous section. In addition, the feature point metric counts a new software characteristic: algorithms. An algorithm is defined as a bounded computational problem that is included within a specific computer program. Inverting a matrix, decoding a bit string, and handling an interrupt are all examples of algorithms.

Another function point extension for real-time systems and engineered products has been developed by Boeing. The Boeing approach integrates the data dimension of software with the functional and control dimensions to provide a function-oriented measure amenable to applications that emphasize function and control capabilities. Called the 3D function point, characteristics of all three software dimensions are counted, quantified, and transformed into a measure that provides an indication of the functionality delivered by the software.

The data dimension is evaluated in the same manner as described in the previous section. Counts of retained data (e.g., files) and external data (e.g., inputs, outputs, inquiries) are used along with measures of complexity to derive a data dimension count. The functional dimension is measured by considering the number of internal operations required to transform input data to output data. The control dimension is measured by counting the number of transitions between states. A state represents an externally observable mode of behavior, and a transition occurs as a result of some event that causes the software or system to change its state. For example, a wireless phone contains software that supports an auto-dial function. To enter the auto-dial state from the resting state, the user presses the auto key on the keypad. This event causes an LCD display to prompt for a code that will indicate the party to be called. Upon entry of the code and a press of the dial key, the wireless phone software makes a transition to the dialing state.
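The minimal sketch below shows how transitions might be enumerated and counted for the control dimension, following the auto-dial example above; the exact state and event names are assumptions for illustration only.

# Hedged sketch: counting state transitions for the control dimension.
# The states and events follow the wireless-phone auto-dial example;
# the names are illustrative assumptions, not a real specification.
transitions = [
    # (current state, triggering event, next state)
    ("resting", "press auto key", "auto dial"),
    ("auto dial", "enter code and press dial key", "dialing"),
]

# The control dimension contribution (T in eqn. 2.2 below) starts from
# the number of transitions counted; each is then weighted by complexity.
print("transitions counted:", len(transitions))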

To compute the 3D points, the following relationship is used:

index = I + O + Q + F + E + T + R    (eqn. 2.2)

where I, O, Q, F, E, T, and R represent complexity-weighted values for inputs, outputs, inquiries, internal data structures, external files, transformations, and transitions, respectively. Each complexity-weighted value is computed using the following relationship:

complexity weighted value = N_il W_il + N_ia W_ia + N_ih W_ih    (eqn. 2.3)

where N_il, N_ia, and N_ih represent the number of occurrences of element i (e.g., outputs) for each level of complexity (low, average, high), and W_il, W_ia, and W_ih are the corresponding weights. The overall complexity of a transformation for 3D function points is shown in the table below:

                        Semantic statements
Processing steps        1–5         6–10        11+

1–10                    Low         Low         Average
11–20                   Low         Average     High
21+                     Average     High        High
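The sketch below shows how eqns. 2.2 and 2.3 might be applied in practice. All weights and counts are hypothetical values chosen for illustration; they are not standard 3D function point weights.

# Hedged sketch of eqns. 2.2 and 2.3. Each element's complexity-weighted
# value is N_il*W_il + N_ia*W_ia + N_ih*W_ih (eqn. 2.3), and the index is
# the sum of the seven weighted values (eqn. 2.2).
# All weights below are illustrative assumptions, not standard values.
WEIGHTS = {  # (low, average, high) weights per element
    "inputs": (3, 4, 6),
    "outputs": (4, 5, 7),
    "inquiries": (3, 4, 6),
    "internal_data_structures": (7, 10, 15),
    "external_files": (5, 7, 10),
    "transformations": (7, 10, 15),
    "transitions": (5, 7, 10),
}

def weighted_value(counts, weights):
    # eqn. 2.3: counts and weights are (low, average, high) triples
    return sum(n * w for n, w in zip(counts, weights))

def three_d_index(counts_by_element):
    # eqn. 2.2: sum the complexity-weighted values over all elements
    return sum(weighted_value(counts_by_element[name], WEIGHTS[name])
               for name in WEIGHTS)

# Hypothetical (low, average, high) occurrence counts for each element.
counts = {
    "inputs": (5, 3, 1),
    "outputs": (4, 2, 0),
    "inquiries": (3, 1, 0),
    "internal_data_structures": (2, 1, 0),
    "external_files": (1, 0, 0),
    "transformations": (2, 2, 1),
    "transitions": (6, 2, 0),
}
print("3D function point index:", three_d_index(counts))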

Q. 4. Write Short Notes on:

1. Software Sizing

The accuracy of a software project estimate is based on a number of things:

· The degree to which the software planner has properly estimated the size of the product to be built.

· The ability to translate the size estimate into human effort, calendar time and dollars.

· The degree to which the project plan reflects the abilities of the software team.

· The stability of product requirements and the environment that supports the software engineering effort.

Since the estimate of the project is only as good as the estimate of the size of the work to be accomplished, sizing represents the software project planner’s first major challenge. In the context of project planning, size refers to a quantifiable outcome of the software project. If a direct approach is taken, size can be measured in lines of code (LOC). If an indirect approach is chosen, size is represented as function points (FP).

Putnam and Myers suggest four different approaches to the sizing problem:

Fuzzy Logic Sizing: This approach uses approximate reasoning techniques. To apply it, the software planner must identify the type of application, establish its magnitude on a qualitative scale, and then refine the magnitude within the original range.

Function Point Sizing: The planner develops estimates of the information domain characteristics.

Standard Component Sizing: Software is composed of a number of different components that are generic to a particular application area. For example, the standard components for an information system are sub-systems, modules, screens, reports, interactive programs, batch programs, files, LOC, and object-level instructions. The project planner estimates the number of occurrences of each standard component and then uses historical project data to determine the delivered size per standard component.

Change Sizing: This approach is used when a project encompasses the use of existing software that must be modified in some way as part of a project. The planner estimates the number and type (e.g., reuse, adding code, changing code, deleting code) of modifications that must be accomplished. Using an “effort ratio” for each type of change, the size of the change may be estimated.

Putnam and Myers suggest that the results of each of these sizing approaches be combined statistically to create a three-point or expected-value estimate. This is accomplished by developing optimistic (low), most likely, and pessimistic (high) values for size and combining them, as sketched below.
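A minimal sketch of this combination, assuming hypothetical LOC figures from the four approaches and treating the median as the most likely value (an assumption, not part of the method as stated):

# Hedged sketch: combining the sizes produced by the four approaches
# into a three-point estimate. All LOC figures are hypothetical.
approach_sizes = [12000, 13500, 14000, 16500]  # fuzzy, FP, component, change

s_opt = min(approach_sizes)                             # optimistic (low)
s_m = sorted(approach_sizes)[len(approach_sizes) // 2]  # most likely (assumed: median)
s_pess = max(approach_sizes)                            # pessimistic (high)

# Beta-weighted average, as in eqn. 3.1 below.
expected = (s_opt + 4 * s_m + s_pess) / 6
print(f"expected size: {expected:.0f} LOC")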

2. LOC Based Estimation

Lines of code and function points were described as measures from which productivity metrics can be computed. LOC and FP data are used in two ways during software project estimation:

· As an estimation variable to “size” each element of the software.

· As baseline metrics collected from past projects and used in conjunction with estimation variables to develop cost and effort projections.

LOC and FP estimation are distinct estimation techniques. The project planner begins with a bounded statement of software scope and from this statement attempts to decompose the software into various problem functions that can each be estimated individually. LOC or FP (the estimation variable) is then estimated for each function. Alternatively, the planner may choose another component for sizing such as classes or objects, changes, or business processes affected. The LOC and FP estimation techniques differ in the level of detail required for decomposition and the target of the partitioning.

When LOC is used as the estimation variable, decomposition is absolutely essential and is often taken to considerable levels of detail. For FP estimates, decomposition works differently. Rather than focusing on function, each of the information domain characteristics – inputs, outputs, data files, inquiries, and external interfaces – as well as the 14 complexity adjustment values discussed in the previous chapter are estimated. The resultant estimates can then be used to derive a FP value that can be tied to past data and used to generate an estimate.

Regardless of the estimation variable that is used, the project planner begins by estimating a range of values for each function or information domain value. Using historical data or intuition, the planner estimates an optimistic, most likely, and pessimistic size value for each function, or count for each information domain value. An implicit indication of the degree of uncertainty is provided when a range of values is specified. A three-point or expected value can then be computed. The expected value for the estimation variable (size), S, is computed as a weighted average of the optimistic (S_opt), most likely (S_m), and pessimistic (S_pess) estimates:

S = (S_opt + 4 S_m + S_pess) / 6    (eqn. 3.1)

This gives heaviest credence to the “most likely” estimate and follows a beta probability distribution. Once the expected value for the estimation variable has been determined, historical LOC or FP productivity data are applied. Any estimation technique, no matter how sophisticated, must be cross-checked with another approach.
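A minimal sketch of this procedure, assuming a hypothetical functional decomposition and hypothetical historical productivity and labor-rate figures (the function names and all numbers below are illustrative, not real project data):

# Hedged sketch of LOC-based estimation using eqn. 3.1. The function
# names, size ranges, productivity, and labor rate are all assumptions.

def expected_size(s_opt, s_m, s_pess):
    # eqn. 3.1: beta-weighted three-point estimate
    return (s_opt + 4 * s_m + s_pess) / 6

# (optimistic, most likely, pessimistic) LOC per decomposed function
functions = {
    "user interface and control": (1800, 2400, 2650),
    "geometric analysis": (4100, 5200, 7400),
    "database management": (2900, 3400, 3600),
}

total_loc = sum(expected_size(*rng) for rng in functions.values())

productivity = 620.0  # historical LOC per person-month (assumed)
labor_rate = 8000.0   # dollars per person-month (assumed)

effort = total_loc / productivity  # person-months
cost = effort * labor_rate         # dollars

print(f"estimated size:   {total_loc:.0f} LOC")
print(f"estimated effort: {effort:.1f} person-months")
print(f"estimated cost:   ${cost:,.0f}")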
