MBA management

Software


Software is described by its capabilities. The capabilities relate to the functions it executes, the features it provides and the facilities it offers. Software written for sales-order processing would have different functions to process different types of sales orders from different market segments. The features, for example, would be handling multi-currency computing and updating product, sales and tax status in MIS reports and books of accounts. The facilities could be printing of sales orders, e-mail to customers, reports and advice to the stores department to dispatch the goods. The facilities and features could be optional and based on customer choice.

The software is developed keeping in mind certain hardware and operating system considerations, known as platform. Hence, software is described along with its capabilities and the platform specifications that are required to run it. Like any other product, software is judged by a number of different quality attributes. The quality attributes required are:

• Bug-free, reliable execution
• Produces correct results
• Reusable structure
• Easy to maintain
• Efficient use of computing resources
• Easy to understand
• User friendly

Classes of Software


Software is classified into two classes: generic and customized. Generic software is designed for a broad customer market whose requirements are very common, fairly stable and well understood by the software engineer. These products are sold in the open market, and there could be several competitive products in the market. Database products, browsers, ERP/CRM and CAD/CAM packages, OS and system software are examples of generic software. Customized products are those developed for a customer whose domain, environment and requirements, being unique to that customer, cannot be satisfied by generic products.

Process control systems, traffic management systems, hospital management systems and manufacturing process control systems require customized software.

A generic product is managed by the developer and a customized product is managed by the customer. In other words, requirements and specifications in a generic product are controlled by the developer, whereas in the case of the customized product, these are controlled by the customer and influenced by the practices of that industry.

Both products are developed in a similar manner, through various phases of life cycle development, starting from the requirement study to implementation.

Attributes of Good Software


Maintainability: The ease of maintenance of software is very important and critical to both the software engineer and its user. If changes are not quickly effected, the execution of business processes will be disturbed. For all changes desired by the customer or user, the software engineer has to respond very fast. Both these requirements can be fulfilled effectively only if the software design and its architecture are so chosen that changes can be carried out in the shortest time, without affecting the overall integrity of the software. The change could be to correct mistakes, to expand the software's scope or to adapt it to new technology. Ease of maintenance is therefore an essential attribute of good software.

Dependability: Dependability is the result of a number of sub-attributes, namely reliability through assured performance, fully verified and validated processes for all functions and features, and secure and safe operation in all eventualities of hold-up, breakdown and power crises. Dependable software works in all conditions and situations.

Efficiency: The software is termed good if it uses the resources at its command in the most effective manner. The resources are memory, processor and storage. The software design and architecture should be such that it offers a quick response in the least processing time, using resources at an optimum level. There should not be a long processing cycle, or under- or over-utilization of physical resources.

Usability: Software becomes usable if it does not call for extra effort to be learnt. Usability increases with good documentation. In software operations a lot depends on the design of the user interface and the quality of the user manual.

Besides these attributes, the measures of good software are customer satisfaction and fulfillment of cost and budget constraints. Customer satisfaction depends on the degree to which customer requirements and expectations have been met. The development of software within the cost budget depends on efficient design and a high level of project management effort.

Software Scope


Software, being a commercial product, calls for an engineering approach that ensures it is designed with the correct choice of technology and architecture to

• Achieve customer satisfaction
• Ensure on-time delivery
• Be developed within the budgeted cost
• Provide ease of maintenance to meet changing requirements

Software Feasibility


A feasibility study is a detailed assessment of the need, value, and practicality of a proposed enterprise, such as systems development. Simply stated, it is used to prove that a project is either practical or impractical. The ultimate deliverable is a report that discusses the feasibility of a technical solution and provides evidence for the steering committee to decide whether it is worth going on with any of the suggestions.

At the beginning of every project, it is often difficult to determine if the project will be successful, if its cost will be reasonable with respect to the requirements of building the software, or if it will be profitable in the long run.

In general a feasibility study should include the following information:

• Brief description of the proposed system and its characteristics
• A cost/benefit analysis
• Estimates, schedules, and reports

Considerable research into the business and technical viability of the proposed system is necessary in order to develop the feasibility study.

Feasibility Study Components


There are actually three categories of feasibility:

Financial Feasibility

A systems development project should be economically feasible and provide good value to the organization. The benefits should outweigh the costs of completing the project. Financial feasibility also includes the time, budget, and staff resources used during all the stages of the project through completion.

A feasibility study will determine if the proposed budget is enough to fund the project to completion. When finances are discussed, time must also be a consideration. Saving time and ensuring user convenience have always been major concerns when companies develop products. Companies want to make sure that services rendered will be timely. No end user wants to wait a long time to receive service or use a product, however good it is, if another product is immediately available.

Key risk issues include:

1) The length of the project's payback (the shorter the payback, the lower the risk);

2) The length of the project's development time (the shorter the development time, the less likely objectives, users, and development personnel will change and, consequently, the lower the risk); and

3) The smaller the differences people make in cost, benefit, and life cycle estimates, the greater the confidence that the expected return will be achieved.
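The first risk issue above, payback length, can be made concrete with a simple payback computation. The sketch below is illustrative: the function name and the cash-flow figures are assumptions, not taken from the text.

```python
def payback_period(initial_cost, annual_net_benefits):
    """Return the year in which cumulative net benefits recover the cost.

    annual_net_benefits: expected net benefit per year, in order.
    Returns None if the project never pays back within the given horizon.
    """
    cumulative = 0.0
    for year, benefit in enumerate(annual_net_benefits, start=1):
        cumulative += benefit
        if cumulative >= initial_cost:
            return year
    return None

# Hypothetical project: costs 100 units, returns 30, 40, 50 units over three years.
print(payback_period(100, [30, 40, 50]))  # pays back in year 3
```

A shorter payback (a smaller return value here) corresponds to lower risk, per point 1 above.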

Technical Feasibility

A computer system should be practical to develop and easy to maintain. It is important that the necessary expertise is available to analyze, design, code, install, operate, and maintain the system. Technical feasibility addresses the possibility and desirability of a computer solution in the problem area. Assessments can be made based on many factors: for example, knowledge of current and emerging technical solutions, availability of technical personnel on staff, working knowledge of the technical staff, capacity of the proposed system to meet functional requirements, and capacity of the proposed system to meet performance requirements.

Developing new technology needs to take into account the current technology. Will today's technology be able to sustain what we plan to develop? How realistic is the project? Do we have the knowledge and tools needed to accomplish the job? Emerging technology is getting more advanced with each passing day; we need to know if our objectives can be realized. It is not enough to know that the product in development is technologically feasible; we must also make sure that it is at par with, or more advanced than, technology in use today.

Organizational or Operational Feasibility

A systems development project should meet the needs and expectations of the organization. It is important that the system be accepted by its operational users. The following requirements should be taken into consideration in determining if the system is operationally feasible: staff resistance or receptiveness to change, management support for a new system, nature and level of user involvement, direct and indirect impact of the new system on current work practices, anticipated performance and outcome of the new system compared to the old system, and viability of the development and implementation schedule.

The Feasibility Study Process


A feasibility study should follow a certain process. It should analyze the proposed project and produce a written description, define and document possible types of systems, and develop a statement of the probable types of systems. The feasibility study should analyze the costs of similar systems and produce an estimate for the next stage of the life cycle. Analysis of the current system is necessary in order to establish the feasibility of a future technical system. This will provide evidence for the functions that the new system will perform. Finally, a report should be written containing suggestions, findings, and necessary resources.

A feasibility report will be written and submitted to management containing all relevant information, including financial expense and expected benefits. Based on this report, management will make its determination about the future of the project. Much of the information will come from the analyst and the systems investigation. The report should include information on the feasibility of the project, the principal work areas for the project, any needs for specialist staff that may be required at later dates, possible improvements or potential savings, and costs and benefits, as well as recommendations. Charts and diagrams relative to the project, such as Gantt and PERT charts, should be included in the feasibility report. Obviously, the project cannot proceed until the feasibility report has been accepted.

Determining Feasibility


A proposal may be regarded as feasible if it satisfies the three criteria discussed at length earlier: financial, technical, and operational. Scheduling and legal issues must also be considered. It is possible to proceed with the project even if one or more of these criteria fail to be met. For example, management may find it is not possible to proceed with the project at one point in time but that the project can commence at a later date. Another option would be for management to make amendments to the proposed agenda and agree to proceed upon those conditions. Conversely, a project that may have been determined feasible may later be determined infeasible due to changes in circumstances.

Software Size Estimation


The first step in software estimation is assessing the size of the software proposed for development. The size estimate is then used to compute development effort and resource estimates, cost, and development time. The size of the software is directly linked to the requirement specifications. The size also varies with the complexity of the requirement specifications, business conditions and constraints, implementation platform and so on. It is further influenced by the software architecture chosen for meeting the requirements. In simple terms, the software engineer in the role of an estimator must have a clear understanding of the requirement specifications and the complete scope of the system. The first need is to get the precise requirements, which are frozen by the customer and users.

Line of Code (LOC)


Line of code (LOC) is a measure of the length of code the software engineer will write to deliver the software requirement. A number of studies, including those of Albrecht and Gaffney, have shown that there is a relationship between the number of lines, the programming language and the quality of design. Capers Jones provides a rough estimate of the average number of lines of code per delivered function point for various programming languages.

FP and LOC are used to calculate the productivity of skilled resources. The metric is 'LOC per man-month' or 'FP per man-month'. To calculate manpower productivity, you need to measure in precise terms the effort taken by skilled personnel to deliver the FPs of the entire software.

Function point and LOC metrics have been found reliable and are therefore used for planning resources, team building and costing, as well as for making business proposals.
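As a sketch of how these metrics are applied in practice (the function names and figures below are illustrative assumptions, not from the text):

```python
def fp_productivity(total_fp, person_months):
    """FP per man-month: delivered function points over effort spent."""
    return total_fp / person_months

def loc_estimate(total_fp, loc_per_fp):
    """Rough LOC from FP using a language-specific LOC-per-FP factor,
    in the spirit of the Capers Jones tables mentioned above."""
    return total_fp * loc_per_fp

# Illustrative: 430 FP delivered by a team in 86 person-months.
print(fp_productivity(430, 86))  # 5.0 FP per man-month
# Assumed factor of 50 LOC per FP (the factor varies widely by language).
print(loc_estimate(430, 50))     # 21500 LOC
```

Once such a productivity figure is established from past projects, it can be used to convert a size estimate for new software into an effort estimate.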

Function Point Analysis (FPA)


FPA is a popular method for estimating and measuring the size of software. The basis for FPA is the functionality of the software from the user's point of view. It considers users' requirements from the software, and its logical design. The size of the functionality is calculated in terms of the Function Point Count (FPC). Allan J. Albrecht of IBM is the father of FPC as a measure of the size of software. The advantage of FPC is that it can be used across programming languages and development technologies. The FPA and its later version, Mark II FPA, are promoted by IFPUG. IFPUG releases the Function Point Counting Practices Manual for consistency of FPC calculations across organizations.

IFPUG has set the objectives for FPA. These are:
  • Measure functionality of software from a user's perspective
  • Measure the functionality independent of development and implementation technology
  • Create simple and consistent measures to estimate the size of the software across projects and organizations

The FPA Method

The FPA method is simple to follow. It begins with determining users' requirements from the software. As mentioned earlier, the first step in the method is to determine the following on the basis of requirement analysis and the software requirement specification (SRS):

  • Identify the boundaries of the software scope
  • Count all functional requirements
  • Count all non-functional requirements
  • Identify design constraints
The enumeration of these inputs for FPA is critical to obtaining a correct size estimate. The accuracy of the enumeration depends upon the quality of the SRS considered by the software engineer.

To understand the method and its flow better, the terms used in the FPA method should be understood. They are given in the following table.

Data Functions   Relate to holding different types of data. Complexity relates to the nature of the data structure.
Transaction Functions   Consider all processes relating to acquiring data, processing it and offering it to users. The complexity of a transaction relates to how cumbersome the process is due to rules, checks and controls. This function provides a method to process the data.
Data File Types   Data is stored in files. The files are Internal Logical Files and External Interface Files. The distinction between internal and external is made by asking 'who creates the file and maintains it?'
If the software generates a file (record file) in the system, then it is internal. If the software needs a file (master file) created by another system (which is meant for many other software applications), it is termed external. An invoice records file is internal and a customer master file is external.
Data Elements and Record Elements   Files are characterized by the nature of their data elements and record elements. The number of data elements and record elements together suggests the complexity. For example, Date, Document No., Customer Name, Amount, Rate, Quantity and Tax Code are data elements.
Document No., Date, Customer Name and Amount together form a record. Files contain different sets of records.
Internal Logical File   A file stored in the system, maintained by the system and used within the system.
External Interface File   A file shared by the system with other systems. It is used by the system but not maintained or created by the system. This file is external to the system.
Complexity of Data and File   The complexity of a file is judged by the number of data element types and record element types in each file, be it an internal logical file or an external interface file. If the data element types in a file are high in number, the file is termed complex. A record is a group of data elements created for a specific processing need; the number of record types decides a file's complexity. Record element types are determined on the basis of updations, attributes of sub-entities and so on. For example, if in a file the Date, Amount and Quantity are updated, then this is termed one record element type. If, based on the quantity, the document is updated for dispatch and balance, then Document No., Quantity Dispatched and Quantity Balance become another record element type in the same file. If, for some reason, the name of the customer, a sub-entity, is updated for receivables, then this is a third record element type. So, such a sales file is made of three record element types, having, say, 12 data elements.


A broad guideline to assign Unadjusted Function Points (UFP) to a file is given in the following table. The user is advised to extrapolate UFP for intermediate scores of elements. The function point is called unadjusted when it is not yet evaluated for system characteristics. The UFP mentioned is a guideline for estimation.

File containing data element types/ record element types   Complexity   UFP
Less than 20/ Less than 2   Simple   5
Less than 50/ Less than 5   Normal   8
More than 50/ More than 5   Complex   15
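One possible reading of this guideline, expressed as a lookup function (the boundary handling is an assumption; as the text notes, intermediate scores call for judgment and extrapolation):

```python
def file_ufp(det, ret):
    """Classify a file by its data element types (DET) and record element
    types (RET) and return (complexity, UFP), per the broad guideline table.
    Boundary cases are one assumed reading; the source advises extrapolation."""
    if det < 20 and ret < 2:
        return "Simple", 5
    if det < 50 and ret < 5:
        return "Normal", 8
    return "Complex", 15

print(file_ufp(30, 4))  # ('Normal', 8)
```

This matches, for instance, a file with 30 data element types and 4 record element types being scored as normal with 8 UFP.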

Unadjusted Function Points (UFP)


A transaction is triggered when an event takes place in the environment. This event calls for updating some status through a transaction process. For example, if a person joins the organization, an Employee Master file will be updated, an Account Master will be updated, and so on. A transaction is effected by an external data input and concluded by an external output. When the new employee's record is entered, it is an external data input; when the transaction updates the strength of payroll employees by 'one' and reports it, it is an external output. When the transaction is of the type of viewing, assessing or querying, in which input or output status is not affected, it is called an External Query (EQ) transaction.

A broad guideline is given in the following table, suggesting UFP for a variety of transactions characterized by the file types referenced and record element types.

Transaction containing Data Element Type ( DET)/ Record Element Type (RET)   Complexity   UFP* (Unadjusted Function Points for System Characteristics)
Less than 5/ Less than 2   Simple   3
Less than 20/ Less than 4   Normal   5
More than 20/ More than 5   Complex   7


Once unadjusted function points are calculated for data function and transaction function in the total scope of software, they have to be summed up for the entire scope of the software. An example of total UFP calculation is shown in the following table using the guidelines in the previous two tables.

S.No.   Name Data/transaction   DET   RET/FTR   Complexity Simple/normal/complex   UFP
1.   File 1   30   4   Normal   8
2.   File 2   20   3   Simple   5
3.   File 3   60   2   Complex   15
4.   Function- 1   5   2   Normal   5
5.   Function- 2   6   3   Complex   7
6.   Function- 3   2   3   Simple   3
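The totaling step described above can be sketched programmatically. The transaction guideline is encoded the same way as the file guideline; note that the boundary handling is an assumed literal reading, whereas the worked example in the table applies the guideline judgmentally, so individual rows may be scored differently by an experienced estimator. The UFP values summed below are taken directly from the example table.

```python
def transaction_ufp(det, ret):
    """One literal reading of the transaction guideline: classify by data
    element types (DET) and record element types (RET), return (complexity, UFP)."""
    if det < 5 and ret < 2:
        return "Simple", 3
    if det < 20 and ret < 4:
        return "Normal", 5
    return "Complex", 7

# UFP values as listed in the example table (Files 1-3, Functions 1-3).
ufps = [8, 5, 15, 5, 7, 3]
total_ufp = sum(ufps)
print(total_ufp)  # 43
```

This total UFP for the entire scope is the figure that is subsequently adjusted for the general system characteristics.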


UFP calculation is a systematic method based on certain characteristics of the software proposed to be developed. Since it considers the data and transactions of the software, it largely depends on the designer's conceptualization of the design and architecture. Hence, UFPs may differ if two designers estimate the size based on their individual views of the software. After UFP calculation, the next step is to calculate FPs adjusted for the fourteen general system characteristics.

Software Rating on General System Characteristics


Unadjusted Function Points calculated in the earlier sections are for the functional requirements of data and transactions in the software. The method further requires rating the software on its system characteristics, which by and large are non-functional requirements. The IFPUG has identified 14 characteristics that influence the function points calculated for the software. The rating is a quantitative measure of the influence of these characteristics on function points. The rating is marked on a scale of 0 to 5. A lower rating means less influence on the software, and a higher rating denotes a higher influence. The total of the ratings for the 14 characteristics is the value of the rating arising out of general system characteristics. The rating suggested here is a subjective judgment of the designer; hence, this value may differ from designer to designer. The 14 characteristics considered by IFPUG are as under:

  • Data Communications
  • On-line Data Entry
  • Installation Ease
  • Distributed Data Processing
  • End-User Efficiency
  • Operational Ease
  • Performance
  • On-line Updates
  • Multiple Sites
  • Configuration
  • Complex Processing
  • Change Requirement
  • Transaction Rate
  • Reusability

These characteristics have an effect on design, development, architecture and implementation that require specific skills and resources. The value arrived at after consideration of these characteristics will change the UFP count.

Let us discuss these characteristics in more detail, so that their application for valuing non-functional requirements will be easier. Guidance has been given on the two extreme ratings; the situations in between have to be judged and an intermediate estimated value of the rating assigned.

1. Data Communication
  • If there is no need for communication, i.e. if the software system is a kind of standalone system, then Rating=0.
  • If the software system needs communication interfaces and facilities and uses multiple different communication protocols, then Rating=5.

2. Distributed Data Processing
  • If the software does not need to transfer data to other systems or does not use any other component(s) in processing, then it is considered a local processing requirement and not a distributed processing one. In such a case, Rating=0.
  • If the software has to automatically move to another 'software component outside the system', complete the processing, fetch the processed result back into the software and then proceed further, it is considered distributed processing. If the software has multiple such needs in execution, then Rating=5.

3. Performance
  • Very often, the customer or user specifies certain performance requirements in terms of cycle time, volume per period, response to query, etc. To achieve this performance level, the necessary features and facilities must be provided in the design and architecture. Performance analysis tools may also have to be used to evaluate the design and architecture. It may call for the development of a prototype or a demonstration through proof of design or concept. In the absence of such performance requirements, Rating=0.
  • If the non-functional requirement is stated in multiple terms across the software system, then Rating=5.

4. Configuration
  • If hardware or software platform is special with the specification having an impact on design considerations, then Rating=5.
  • If the configuration has no bearing on design, then Rating=0.

5. Transaction Rate
  • If the volume of transactions is so high as to affect the design, development and installation and to require special software development features, then Rating=5. If the volume is unpredictable, then also Rating=5.
  • If the transaction volume and its servicing rate are very low, then Rating=0.
6. On-line Data Entry
  • If data entry is offline and done in batches, then Rating=0.
  • If data entry is more than 30% on-line and interactive with control requirements, then Rating=5.

7. End- user Efficiency
  • If user capabilities require simplicity in usage through several features like function keys, menus, on-line help, drop-down menus and so on, then Rating=5.
  • If user capabilities do not require such features affecting the design and architecture, then Rating=0.

8. On-line Update
  • If the requirement calls for on-line update, automated recovery procedures, triggering stored procedures with minimum or no user intervention, then Rating=5.
  • If on-line updates of such nature are very minimal, then Rating=0.

9. Complex Processing
  • If software is required to use complex procedures full of commercial and scientific rules, thereby influencing the software design, then Rating=5.
  • If software is required to cater for simple procedures not affecting significantly the software design, then Rating=0.

10. Reusability
  • If the software's nature is such that it could be reused in other applications, and the proportion of such reusable components is very high, then Rating=5.
  • If the software is custom- and domain-specific and reusability is minimal, then Rating=0.

11. Installation ease
  • If software so developed requires installation utilities, their manuals, and external support, then Rating=5.
  • If the software does not need external assistance during installation, then Rating=0.

12. Operational Ease
  • Operational ease is considered very high when user intervention is minimal for startup, shutdown, backup, recovery, setup, etc., as the system design and architecture automatically take care of these; in that case, Rating=5.
  • If there are no such requirements on the system design and architecture, then Rating=0.

13. Multiple Sites
  • If the software is required for multiple sites, and the sites are diverse in nature, affecting the design, then Rating=5.
  • If the software is not required for multiple sites, or the sites have no diversity, then Rating=0.

14. Change Requirement
  • If the software is to be designed to quickly incorporate changes in reports, screens and queries, then Rating=5.
  • If the software is designed for a very minimal requirement of changes of this nature, then Rating=0.

    Once the ratings for these 14 characteristics are determined, the next step is to arrive at a value that will adjust the UFP for the influence arising from these characteristics. The formula used for the calculation of this value is:

    Value of Adjustment Factor (VAF) = (Total of Rating) X 0.01 + 0.65

    Software having a high VAF means that the requirements have a very strong non-functional component. The Function Point Count (FPC) is calculated by the formula

    FPC= UFP X VAF

    where UFP is the unadjusted function point count taken from the previous table, and VAF is the value of the adjustment factor.
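The two formulas above can be sketched as follows; the ratings used in the example are illustrative, not from the text:

```python
def vaf(ratings):
    """Value Adjustment Factor from the 14 general system characteristic
    ratings (each 0-5): VAF = total * 0.01 + 0.65, so 0.65 <= VAF <= 1.35."""
    assert len(ratings) == 14 and all(0 <= r <= 5 for r in ratings)
    return sum(ratings) * 0.01 + 0.65

def fpc(ufp, ratings):
    """Adjusted Function Point Count: FPC = UFP x VAF."""
    return ufp * vaf(ratings)

# Illustrative ratings totaling 35 give VAF = 1.00, leaving the UFP unchanged.
ratings = [3, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 2]
print(fpc(43, ratings))  # 43.0
```

Note the built-in bounds: with all ratings at 0 the UFP is scaled down by 0.65, and with all ratings at 5 it is scaled up by 1.35.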

Process Based Estimation


Imagine any project: building a new house, creating a new service, deploying a technology solution. Within any of these projects, there will be a logical approach from start to finish. Within project management, the flow of activities must be documented from initiation to closure. The five process groups do not by themselves progress the work; they serve more as a control mechanism to identify and oversee the flow of actions within the project.

Each process has unique activities that contribute to and coincide with the project work. The activities guide the project work from concept to completion. Specifically, the parts of the processes are the gears of the "project machine." The processes allow for a specific, manageable, and expected outcome of the project. Within each process, there are three common components:

• Inputs: Documented conditions, values, and expectations that start the given process.

• Tools and techniques: The actions used to evaluate and act upon the inputs to create the outputs.

• Outputs: The documented results of a process, which may serve as an input to another process.

Customizing Process Interactions


The processes are the mainstream, generally accepted order of operations. You can count on these processes existing and progressing in the preceding order. However, having said that, you can also count on the fact that these processes are made not of stone, but of flexible steel.

The following are some general guidelines about customizing project processes:

• Projects that are resource- dependent may define roles and responsibilities prior to scope creation. This is because the scope of the project may be limited by the availability of the resources to complete the scope.

• The processes may be governed by a project constraint. Consider a predetermined deadline, budget, or project scope. The project constraint, such as a deadline, will determine the activity sequencing, the need for resources, risk management, and other processes.

• Larger projects require more detail. Remember that projects fail at the beginning, not at the end.

• Subprojects and smaller projects have more flexibility with the processes based on the usefulness of the process. For example, a project with a relatively small team may not benefit from an in depth communications plan the same way that a large project with 35 project team members might.

Use Case Based Estimation


A use case model is a representation of a sequence of transactions initiated by the user (actor) from outside the system. In the process, the transaction uses internal objects to take the transaction to completion. A use case defines and describes what happens in the system in logical order, termed the system behavior. Use cases represent the flows of events that the user triggers. The user/actor is anything that initiates or triggers an action in the system.

It is not necessary that the user be a human being. It could be external hardware like a barcode reader, a card-swiping machine, an ATM, or any other system with an interface. Users can be of the same kind, type, role and responsibility, performing the same use case. Members of a library or club, ATM card holders and clerks in an accounts department are users of the same kind and type, with the same roles and responsibilities in their respective use cases.

A use case diagram is a graphical presentation of a user's view and a developer's understanding of the transactions performed in the use case scenario. A use case is modeled by a

• Boundary or frame
• Line of communication or participatory association between actor(s) and the use case
• Transaction steps
• Generalization among use cases

A use case may begin with no preconditions or with some preconditions. It concludes with the achievement of a specific goal.

Though a use case model is drawn as illustrated, a lot needs to be done to make use case diagrams more useful to the analyst, designer and developer. They need to be written in a standard format and documented properly for use in development. In fact, this use case document is an important, voluminous document, referred to very often for an understanding of the system. One can develop a template for a use case. Following is one suggestion for a template:

• System name
• Use case name with ID
• Use case goal(s)
• Post condition of the system when use case is complete
• Use case action sequence
• Sub- system name
• Application
• Page
• Precondition(s) to begin the action
• Actors

Let us explore our use case of railway ticket booking in the suggested format, as shown in the template.

System name: Railway Ticket Booking System. Sub-system: Ticket booking and collection (other sub-systems are reservation, cancellation and extension).

Use case name: Ticket collection

Actors:
• Passenger or passenger’s representative.
• Issuing clerk.

Use case goal: collect money and issue ticket(s)

Post-condition(s):

• Tickets are issued, the passenger walks away, and the system is available to the next passenger in the queue.

• The ticket/train database is updated to show the open availability of tickets.

Related use cases: cancellation, extension of journey.

A use case describes the scenario in terms of transactions, users and system performance. It is based on user tasks and explains user activities in natural English. It helps to

• Define the scope of each use case, and of the applications and system.

• Measure the size of the software development project in a number of use cases.

• Track and monitor progress of the development in terms of use cases.

• Validate the requirement of the system as told by the user.

• Design test cases per use case to ensure completeness.

• Ensure complete system documentation, as all use cases are included with the model, design and details.

• Realize tangible cost benefits through reuse, as the number of formal methods and processes required for development is reduced.

• Reduce the user’s need for training and hand-holding in learning to operate the system.

• Use systems more efficiently.

• Reduce time for system maintenance.

• Obtain user’s fast acceptance of the system.

COCOMO model


COCOMO (Constructive Cost Model) considers the size of the software and several other characteristics of the proposed software. Dr. Barry Boehm proposed this approach in 1981; it was later revised as software engineers started using OOD, component technologies, reuse strategies and automated tools for code generation, testing and so on.

A series of mathematical formulae is used to predict effort based on software size (function point count) and other factors such as maturity of process, capability of the development team, development flexibility, and risk and risk resolution. The COCOMO model uses an exponential relationship in estimation, mainly because effort does not normally increase linearly with size. As the size of the software increases, the development team grows, increasing system development overheads, namely communication, configuration management, integration and so on.

Further, the COCOMO model considers software product attributes, platform attributes, development team attributes and project management attributes, and weights them suitably to improve the estimate.


The latest version of the COCOMO model is COCOMO II, released in 1997. This model is calibrated on 161 data points, using Bayesian statistical analysis of empirical data on completed projects together with expert opinion. COCOMO II has three estimation models to estimate effort and cost: the Application Suite model, the Early Design model and the Post-Architecture model.

Application Suite Model


This model is used where software can be decomposed into several components and each component can be described in object points. The objects are screens, reports and 3GL components, which are easy to identify and count when the software system is split into different sub-system components.

Object points are an alternative to function points when 4GLs or similar languages are used for software development. It should be noted that object points are not the object classes understood in OOT.

The candidates for object points are screens, reports and the number of 3GL modules needed to supplement the 4GL code. These objects are given points as follows, depending on the level of complexity:

    Simple   Complex   Very complex
Screens   1   2   3
Reports   2   5   8
3GL modules   4   10   -

The advantage of using object points is that they can be estimated from the high-level design of the software, being simply counts of screens, reports and 3GL modules. The complexity level is the judgment of the software engineer.

The object point count needs modification, by way of reduction, as the software may use reusable components and libraries. Therefore:

Revised Object Points (ROP) = Object Points X (100 - % reuse)/100

For this ROP, the effort in man months is computed using a productivity constant, based on the software development team’s experience and capability. COCOMO II prescribes a productivity constant expressed as the number of object points per man month as in the following table.

LEVEL OF EXPERIENCE VERSUS PRODUCTIVITY CONSTANT IN ROP

Level of experience/capability   Very low   Low   Nominal   High   Very high
Productivity constant (object points per man month)   4   7   13   25   50

Man Month Effort (MME) = ROP/Productivity constant

For example, if the object points are 40 and the reuse possibility is 10%, then

ROP = 40 X (100 - 10)/100 = 40 X 0.90 = 36

Further, if the development team’s experience and maturity is at the Nominal level, the productivity constant is 13. Hence

MME = ROP/Productivity constant = 36/13 ≈ 3 man months

This model is also used to estimate the effort at the prototype level when requirements are not clear.
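The object point computation above can be sketched in a few lines of Python; the 10% reuse figure and the Nominal-level productivity constant of 13 are taken from the worked example:

```python
# Application Suite (object point) estimation -- a sketch of the worked
# example above. The productivity constant 13 corresponds to a team at
# the Nominal experience level.

def revised_object_points(object_points, reuse_percent):
    """ROP = Object Points x (100 - % reuse) / 100."""
    return object_points * (100 - reuse_percent) / 100

def man_month_effort(rop, productivity_constant):
    """MME = ROP / productivity constant (object points per man month)."""
    return rop / productivity_constant

rop = revised_object_points(40, 10)   # 40 x 0.90 = 36.0
mme = man_month_effort(rop, 13)       # 36 / 13, roughly 3 man months
print(rop, round(mme, 2))
```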

Early design level model: COCOMOII


The COCOMO II models use the base equation

MME = A X (Size)^B

where MME = man month effort

A = constant representing nominal productivity

B = factor indicating economies/diseconomies of scale, known as the scaling factor. Size is measured in KLoC.

It should be understood that if B = 1, size has no scaling impact on MME. If B is less than 1, man month effort grows more slowly than size (economies of scale), and if B is greater than 1, it grows faster than size (diseconomies of scale). That is, the man month effort is either less or more than the effort required when B = 1.

The emphasis in the COCOMO II model is on the scaling factors, which together give rise to ‘B’. The model uses five factors for arriving at economies/diseconomies of scale, namely Precedentedness (PREC), Development Flexibility (FLEX), Risk Resolution (RESL), Team Cohesion (TEAM) and Organization Process Maturity (PMAT). ‘B’, the value of the scaling factors (also known as drivers), is computed as below:

B = 0.91 + 0.01 X (sum of the five factor ratings)

The rating for each of the five factors is based on the organization’s level on each of these factors. The organization’s level is judged on a scale from very low to high. Based on this judgment, COCOMO II recommends a value for the rating. The following table gives the value of rating for scaling factors.

Value of rating for scaling factor:

Factor code   Very low   Low   Nominal   High   Factor name
PREC   6.20   4.96   3.72   2.48   Precedentedness
FLEX   5.07   4.05   3.04   2.03   Flexibility
RESL   7.07   5.65   4.24   2.83   Risk Resolution
TEAM   5.48   4.38   3.29   2.19   Team Cohesion
PMAT   7.80   6.24   4.68   3.12   Process Maturity


Let us now understand these five factors to judge the organization on the very low to high scale.

• PREC: Precedentedness.
Understanding of and experience in developing similar software. If the degree of learning that can be carried over to the new software is very low, the rating is 6.20.

• FLEX: Flexibility.
Flexibility is measured based on the degrees of freedom and comfort level the developer has, given the level of conformance required to pre-established or customer-laid-down standards, specifications, tools and schedules. If the developer has free play in this area, flexibility in development is very high and the rating is then 2.03.

• RESL: Risk Resolution.
If the organization and the software development team have considerable experience in risk management and are in a position to develop an RMMM plan for the project, then the level of risk resolution is very high. The rating value, therefore, is 2.83.

• TEAM: Team Cohesion.
If the capacity of the organization to provide a development team whose members work in cohesion towards common objectives is nominal, then the rating value is 3.29.

• PMAT: Process Maturity.
This is the SEI CMM level used for describing an organization’s development maturity. If the CMM level is very low, that is ‘1’, the rating value is 7.80.
Let us assume that in a given software development scenario, the organization’s level on these five factors is very low. Then

B = 0.91 + 0.01 X (6.20 + 5.07 + 7.07 + 5.48 + 7.80) = 0.91 + 0.01 X 31.62 = 1.2262

With A = 13 and size based on FPA of 10 KLoC,

MME = 13 X (10)^1.2262 ≈ 220 man months

For the same software, if the organization scores ‘high’ on all factors, then

B = 0.91 + 0.01 X (2.48 + 2.03 + 2.83 + 2.19 + 3.12) = 0.91 + 0.1265 = 1.0365 ≈ 1.04

Then MME = 13 X (10)^1.04 ≈ 142 man months.
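The Early Design arithmetic above can be checked with a short Python sketch. The ratings are the ‘very low’ column of the scaling-factor table, with A = 13 and size = 10 KLoC as in the text:

```python
# Early Design estimate -- a sketch reproducing the worked example.

def scaling_exponent(ratings):
    """B = 0.91 + 0.01 x (sum of the five scale-factor ratings)."""
    return 0.91 + 0.01 * sum(ratings)

def man_month_effort(a, size_kloc, b):
    """MME = A x (size)^B."""
    return a * size_kloc ** b

very_low = [6.20, 5.07, 7.07, 5.48, 7.80]   # PREC, FLEX, RESL, TEAM, PMAT
b = scaling_exponent(very_low)              # 0.91 + 0.3162 = 1.2262
mme = man_month_effort(13, 10, b)           # about 219, rounded to 220 in the text
print(round(b, 4), round(mme))
```

Swapping in the ‘high’ column of the table gives B ≈ 1.04 and the 142 man month figure the same way.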

Post architectural model: COCOMO II


In addition to the scaling factors, there are other factors relevant to development effort, as they have a large impact on MME. When these factors are considered, a modified MME is calculated. Many factors affect man month effort, but COCOMO II considers the following 16 significant factors. The COCOMO II equation is

MME (Modified) = MME X (product of the ratings of the 16 factors)
COCOMO II factors

S.No.   Code   Name   Description
1.   RELY   Software Reliability   Failure does not cause any inconvenience, Rating = Very low. Failure is fatal, Rating = High.
2.   DATA   Database Size   Database size is measured as (D) database size in bytes divided by (P) lines of code (D/P). If D/P < 10, Rating = Low. If D/P > 1000, Rating = High.
3.   CPLX   Software Complexity   Few control operations, few and simple calculations, simple data management, then Rating = Very low. If all these are high, Rating = High.
4.   RUSE   Required Reusability   No requirement of reusability, i.e., custom software, then Rating = Very low. If reusability of a substantial nature is required, Rating = High.
5.   DOCU   Documentation   Documentation need is standard but low, Rating = Very low. Documentation need is very high both pre- and post-development, Rating = High.
6.   TIME   Time Constraint on Execution   No constraint, Rating = Very low. Available time is almost equal to execution time, Rating = High.
7.   STOR   Main Storage Constraint   No constraint due to very high available storage, Rating = Very low. Available storage is almost equal to required storage, Rating = High.
8.   PVOL   Platform Volatility   If platform is stable, Rating = Very low. If platform is unstable and may change rapidly and frequently, Rating = High.
9.   ACAP   Analyst Capability   Inexperience, lack of knowledge etc., then capability is low, Rating = Very low. If analysts score high on experience, Rating = High.
10.   PCAP   Programmer Capability   Same as ACAP.
11.   PCON   Personnel Continuity   Very low turnover, Rating = High. 50% or more leave, Rating = Very low.
12.   AEXP   Analyst Experience   Minimum experience, Rating = Very low. More than adequate experience, Rating = High.
13.   PEXP   Programmer Experience   Same as AEXP.
14.   LTEX   Language and Tools Experience   Minimum experience, Rating = Very low. More than adequate experience, Rating = High.
15.   TOOL   Use of Software Tools   No use or occasional use, Rating = Very low. Sustained usage of a variety of tools, Rating = High.
16.   SITE   Site Environment   Single site, single location, not more than one or two sponsors, Rating = Very low. Multiple sites, multiple partners and locations, but supported by good communication infrastructure, Rating = High.


For the 16 factors, general guidelines are given to rate each factor. The value of the rating is as per the guidelines given in the following table.

Factor   Very Low   Low   Nominal   High

Product factors
• RELY   0.82   0.92   1.00   1.10
• DATA   0.80   0.90   1.00   1.14
• CPLX   0.73   0.87   1.00   1.17
• RUSE   0.85   0.95   1.00   1.07
• DOCU   0.81   0.91   1.00   1.11

Platform factors
• TIME   NRA*   NRA   1.00   1.11
• STOR   NRA   NRA   1.00   1.05
• PVOL   NRA   NRA   1.00   1.15

Personnel factors
• ACAP   1.42   1.19   1.00   0.85
• PCAP   1.34   1.15   1.00   0.88
• AEXP   1.22   1.10   1.00   0.89
• PEXP   1.19   1.09   1.00   0.91
• LTEX   1.20   1.09   1.00   0.91
• PCON   1.29   1.12   1.00   0.90

Project factors
• TOOL   1.17   1.09   1.00   0.90
• SITE   1.22   1.09   1.00   0.93

* NRA = No Rating Applied (treated as 1.00)


Let us now estimate the MME for a post-architecture scenario in our earlier example, where the size is 10 KLoC.

Let us evaluate two possibilities of an extreme nature. One assumption is that the organization scores very low on all architectural factors, and the second possibility is that the organization scores high on all factors.

Possibility 1: All ratings are ‘very low’.
Hence, the product of all 16 rating values (from the above table, with NRA taken as 1.00) is
0.82 X 0.80 X 0.73 X 0.85 X 0.81 X 1.00 X 1.00 X 1.00 X 1.42 X 1.34 X 1.22 X 1.19 X 1.20 X 1.29 X 1.17 X 1.22 ≈ 2.01

Possibility 2: All ratings are ‘high’.
Hence, the product of all 16 rating values (from the above table) is
1.10 X 1.14 X 1.17 X 1.07 X 1.11 X 1.11 X 1.05 X 1.15 X 0.85 X 0.88 X 0.89 X 0.91 X 0.91 X 0.90 X 0.90 X 0.93 ≈ 0.97

So MME 1 = 220 X 2.01 ≈ 442 man months (Possibility 1)
MME 2 = 142 X 0.97 ≈ 138 man months (Possibility 2)
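The adjustment product for Possibility 1 can be verified with a short sketch; the NRA entries (TIME, STOR, PVOL) are taken as 1.00:

```python
# Post-Architecture adjustment -- multiplying the 16 cost-driver ratings
# for the all-'very low' case from the table above.
from math import prod

very_low = [
    0.82, 0.80, 0.73, 0.85, 0.81,        # RELY, DATA, CPLX, RUSE, DOCU
    1.00, 1.00, 1.00,                    # TIME, STOR, PVOL (NRA)
    1.42, 1.34, 1.22, 1.19, 1.20, 1.29,  # ACAP, PCAP, AEXP, PEXP, LTEX, PCON
    1.17, 1.22,                          # TOOL, SITE
]
eaf = prod(very_low)                     # roughly 2.01
print(round(eaf, 2), round(220 * eaf))   # modified MME, close to the 442 above
```

Replacing the list with the ‘high’ column gives a product just under 1, shrinking rather than inflating the base estimate.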

The man month cost is estimated by multiplying man months by man month rate.

Agile Software Development


Agile software development refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. The term was coined in 2001 when the Agile Manifesto was formulated.

Agile methods generally promote a disciplined project management process that encourages frequent inspection and adaptation; a leadership philosophy that encourages teamwork, self-organization and accountability; a set of engineering best practices that allow for rapid delivery of high-quality software; and a business approach that aligns development with customer needs and company goals. Conceptual foundations of this framework are found in modern approaches to operations management and analysis, such as lean manufacturing, soft systems methodology, speech act theory (the network-of-conversations approach) and Six Sigma.

There are many specific agile development methods. Most promote development iterations, teamwork, collaboration, and process adaptability throughout the life-cycle of the project.

Agile methods break tasks into small increments with minimal planning, and do not directly involve long-term planning. Iterations are short time frames (“time boxes”) that typically last from one to four weeks. Each iteration is worked on by a team through a full software development cycle including planning, requirements analysis, design, coding, unit testing and acceptance testing, when a working product is demonstrated to stakeholders and documentation is produced as required. An iteration may not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration. Multiple iterations may be required to release a product or new features.

Team composition in an agile project is usually cross functional and self-organizing without consideration for any existing corporate hierarchy or the corporate roles of team members. Team members normally take responsibility for tasks that deliver the functionality an iteration requires. They decide individually how to meet an iteration’s requirements.

Agile methods emphasize face-to-face communication over written documents when the team is all in the same location. When a team works in different locations, they maintain daily contact through videoconferencing, voice, e-mail, etc.

Most agile teams work in a single open office, which facilitates such communication. Team size is typically small (5-9 people) to help make team communication and collaboration easier. Larger development efforts may be delivered by multiple teams working towards a common goal or on different parts of an effort. This may also require coordination of priorities across teams.

No matter what development disciplines are required, each agile team will contain a customer representative. This person is appointed by stakeholders to act on their behalf and makes a personal commitment to being available for developers to answer mid-iteration problem-domain questions. At the end of each iteration, stakeholders and the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment and ensuring alignment with customer needs and company goals.

Most agile implementations use routine, formal daily face-to-face communication among team members. This specifically includes the customer representative and any interested stakeholders as observers. In a brief session, team members report to each other what they did yesterday, what they intend to do today, and what their roadblocks are. This standing face-to-face communication prevents problems being hidden.

Agile emphasizes working software as the primary measure of progress. This, combined with the preference for face-to-face communication, produces less written documentation than other methods, though in an agile project documentation and artifacts rank equally with the working product. The agile method encourages stakeholders to prioritize them with other iteration outcomes based exclusively on the business value perceived at the beginning of the iteration.

Specific tools and techniques such as continuous integration, automated xUnit tests, pair programming, test-driven development, design patterns, domain-driven design and code refactoring are often used to improve quality and enhance project agility.

Cost Estimating


Cost estimating is the process of calculating the costs of the identified resources needed to complete the project work. The person or group doing the estimating must consider the possible fluctuations, conditions, and other causes of variances that could affect the total cost of the estimate.

There is a distinct difference between cost estimating and pricing. A cost estimate is the cost of the resources required to complete the project work. Pricing, however, includes a profit margin. In other words, a company performing projects for other organizations may do a cost estimate to see how much the project will cost to complete. The costs to be considered are as under:

• Personnel
• Software costs
• Training costs
• Marketing costs
• Hardware costs
• Communication, travel and stay costs
• Outsourcing costs
• Administrative costs

Considering the Cost Estimating Inputs


Cost estimating relies on several project components from the initiation and planning process groups. This process also relies on enterprise environmental factors, the processes and procedures unique to your organization, and the organizational process assets, such as historical information and forms & templates.

Using the work Breakdown Structure (WBS):
Of course, the WBS is included; it’s an input to five major planning processes: cost estimating, cost budgeting, resource planning, risk management planning, and activity definition.

Relying on the Resource Requirements:
The only output of resource planning serves as a key input to cost estimating. The project will have some requirement for resources; the skills of the labor, the availability of materials, and the function of equipment must all be accounted for.

Calculating Resource Rates:
The estimator has to know how much each resource costs. The cost should be in some unit of time or measure such as cost per hour, cost per metric ton, or cost per use.

If the rates of the resources are not known, the rates themselves may also have to be estimated. Of course, skewed rates will result in a skewed estimate for the project. There are four categories of cost:

Direct Costs:
These costs are attributed directly to the project work and cannot be shared among projects (airfare, hotels, and long distance phone charges, and so on).

Indirect Costs:
These costs are representative of more than one project (utilities for performing organization, access to a training room, project management software license, and so on).

Variable Costs:
These costs vary depending on the conditions applied in the project (the number of meeting participants, the supply and demand of materials, and so on).

Fixed Costs:
These costs remain constant throughout the project (the cost of a piece of rented equipment for the project, the cost of a consultant brought on to the project, and so on).

Estimating Project Costs


Management, customers, and other interested stakeholders are all going to be interested in what the project will cost to complete. Several approaches to cost estimating exist, which we’ll discuss in a moment. First, however, understand that cost estimates have a way of following the project manager around, especially the lowest initial cost estimate. The more accurate the information, the better the cost estimate will be.

Using Analogous Estimating


Analogous estimating relies on historical information to predict the cost of the current project. It is also known as top-down estimating. The process of analogous estimating takes the actual cost of a historical project as a basis for the current project. The cost of the historical project is applied to the cost of the current project, taking into account the scope and size of the current project as well as other known variables.

Analogous estimating is a form of expert judgment. This estimating approach takes less time to complete than estimating models, but is also less accurate. This top-down approach is good for fast estimates to get a general idea of what the project may cost.
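The top-down idea can be illustrated with a minimal sketch that scales a past project's actual cost by relative size; the figures below are hypothetical, and a real estimate would also adjust for complexity, platform and team differences:

```python
# Analogous (top-down) estimating -- a minimal sketch under the
# simplifying assumption that cost scales with size alone.

def analogous_estimate(historical_cost, historical_size, current_size):
    """Scale a completed project's actual cost by the size ratio."""
    return historical_cost * (current_size / historical_size)

# Hypothetical: a 12 KLoC project cost 240,000; a similar 15 KLoC
# project is proposed.
print(analogous_estimate(240_000, 12, 15))   # 300000.0
```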

Using Bottom-up Estimating


Bottom-up estimating starts from zero, accounts for each component of the WBS, and arrives at a sum for the project. It is completed with the project team and can be one of the most time-consuming methods used to predict project costs. While this method is more expensive because of the time invested to create the estimate, it is also one of the most accurate. A fringe benefit of completing a bottom-up estimate is that the project team may buy into the project work, since they see the cost and value of each item within the project.

Cost Budget


Cost budgeting is the process of assigning a cost to an individual work package. The goal of this process is to assign a cost to the work in the project so it can be measured for performance.

Cost budgeting and cost estimates may go hand-in-hand, but estimating should be completed before a budget is requested or assigned. Cost budgeting applies the cost estimates over time. This results in a time phased estimate for cost, allowing an organization to predict cash flow needs. The difference between cost estimates and cost budgeting is that cost estimates show costs by category, whereas a cost budget shows costs across time.

Developing the Project Budget


Many of the tools and techniques used to create the project cost estimates are also used to create the project budget.

• Cost aggregation: Costs are mapped to each WBS work package. The costs of each work package are aggregated to their corresponding control accounts. Each control account is then aggregated to the sum of the project costs.

• Reserve analysis: Cost reserves are for unknowns within a project. The contingency reserve is not part of the project’s cost baseline, but is included as part of the project budget.

• Parametric estimating: This approach uses a parametric model to extrapolate what costs will be for a project (for example, cost per hour or cost per unit). It can include variables and points based on conditions.

• Funding limit reconciliation: Organizations have only so much cash to allot to projects, and you can’t have all the monies right now. Funding limit reconciliation is an organization’s approach to managing cash flow against the project deliverables based on a schedule, milestone accomplishment, or date constraints. This helps an organization plan when money will be devoted to a project rather than using all of the funds available at the start. In other words, the monies for a project budget become available based on dates and/or deliverables. If the project doesn’t hit the predetermined dates and products that were set as milestones, the additional funding becomes questionable.

• Bottom-up budgeting: This approach is the most reliable, though it also takes the longest to create. It starts at zero and requires that each work package be accounted for.

• Computerized tools: The same software programs used in estimating can help predict the project budget with some accuracy.
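Cost aggregation from the list above can be sketched as a roll-up from work packages to control accounts to the project total; the package names and amounts below are illustrative, not from the text:

```python
# Cost aggregation -- a sketch rolling work-package estimates up to
# control accounts and then to the project total.

wbs = {  # control account -> {work package: estimated cost}
    "1.1 Requirements": {"1.1.1 Interviews": 4_000, "1.1.2 Specification": 6_000},
    "1.2 Construction": {"1.2.1 Coding": 30_000, "1.2.2 Unit testing": 10_000},
}

control_accounts = {ca: sum(pkgs.values()) for ca, pkgs in wbs.items()}
project_total = sum(control_accounts.values())
print(control_accounts)   # {'1.1 Requirements': 10000, '1.2 Construction': 40000}
print(project_total)      # 50000
```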

Make or Buy Decisions


A firm may be manufacturing a product by itself. It may receive an offer from an outside supplier to supply that product. The decision in such a case will be made by comparing the price that has to be paid with the saving that can be effected on cost. The saving will be only in terms of the marginal cost of the product, since generally no saving can be effected in fixed costs.

Similarly, a firm may be buying a product from outside; it may be considering manufacturing that product in the firm itself. The decision in such a case will be made by comparing the price being paid to outsiders with all the additional costs that will have to be incurred for manufacturing the product. Such additional costs comprise not only direct materials and direct labor but also the salaries of additional supervisors engaged, rent for premises if required, and interest on additional capital employed. Besides this, the firm must also take into account the fact that it will lose the opportunity of using surplus capacity for any other purpose if it decides to manufacture the product itself.

In case a firm decides to get a product manufactured from outside, besides the saving in cost it must also take into account the following factors:

(i) Whether the outside supplier would be in a position to maintain the quality of the product.

(ii) Whether the supplier would be regular in his supplies.

(iii) Whether the supplier is reliable; in other words, financially and technically sound.

If the answer is “No” to any of these questions, it will not be advisable for the firm to buy the product from outside.
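The marginal-cost rule above can be sketched as a simple comparison; the prices and the opportunity-gain term are hypothetical illustrations:

```python
# Make-or-buy -- compare the supplier's price with the marginal cost of
# making in-house, since fixed costs are generally incurred either way.

def prefer_buy(supplier_price, marginal_cost, opportunity_gain=0.0):
    """Buy when the price is below marginal cost plus any gain from
    freeing surplus capacity for other use."""
    return supplier_price < marginal_cost + opportunity_gain

print(prefer_buy(90.0, 100.0))         # True: buying beats marginal cost
print(prefer_buy(110.0, 100.0, 5.0))   # False: making stays cheaper
```

The qualitative factors (quality, regularity, reliability of the supplier) still gate the final decision, as the questions above note.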
Copyright © 2015 Mbaexamnotes.com

Review Questions
  • 1. What is software? What are the attributes of good software?
  • 2. What is feasibility study? What are the different categories of feasibility study? Explain.
  • 3. What are the processes involved in feasibility study? Explain.
  • 4. Write short notes on a) Software size estimation b) LOC c) FPA.
  • 5. Explain the FPA method of software estimation.
  • 6. What do you mean by process based estimation?
  • 7. What do you mean by use case based estimation?
  • 8. What is COCOMO model? Explain.
  • 9. What is cost estimation? What are the different costs to be considered in a project?
  • 10. What is bottom up estimating?
  • 11. What is cost budget? How will you develop a cost budget?
  • 12. ‘Make or buy decisions require critical thinking.’ Elucidate.
