Why effort estimation?
Taxonomy
Web Resources
Some thoughts about Generic Model-Based Models
Some thoughts about Non-Model-Based Methods
A starting point: applying Regression Analysis
Estimating with Incomplete Data
Estimation studies by application domain
Software Estimation Tools
Publications
Why effort estimation?
After discussing how a project can be sized with an FSM method, the next step is calculating the effort needed to build such a project.
The main applications where software resource modeling and estimation are necessary are:
* outsourcing software projects: a company may decide to outsource the development or maintenance of a new system, and such an estimate provides part of the answer to the "make or buy" question
* supplying software products or services: a company has to estimate the effort needed for a possible next project when bidding for a customer, before all the technical specifications are available, trying to be as accurate as possible at that moment
* project management: the estimate at the start of a project is the core input for the allocation of resources within the project, based on the historical data relevant to that kind of project
* productivity benchmarking: another application of estimation is productivity benchmarking against similar projects, comparing the possibly different sets of techniques and/or processes applied.
Before introducing the main families of techniques and methods for estimating project effort, some general tips on estimation may be useful. A 2002 presentation by Steve McConnell listed the "10 deadly sins of software estimation" (in reality, it listed the top 20: sins #20 to #11 briefly, and the top 10 in more detail).
Taxonomy
A very thorough survey is the 2000 ISERN technical report by Briand & Wieczorek, covering the field from the history of and motivation for resource estimation in Software Engineering up to the evaluation criteria adopted to assess an estimation method.
This is their classification schema:
A recent paper (in Italian) at CNIPA by Prof. S. Morasca uses the same approach in his discussion. Another technical report, by Boehm, Abts & Chulani, proposes a different taxonomy (figure on p. 4).
Web Resources
For each category, here are some direct links to the relevant resources:
Model-Based Methods
Generic Model-Based
Proprietary
* SLIM
Non-proprietary
* COCOMO I
* COCOMO II
Specific Model-Based
Data-Driven
* CART (Classification and Regression Trees)
* OSR (Optimized Set Reduction)
* Stepwise ANOVA
* OLS (Ordinary Least Squares) regression
* Robust regression (LMS, LBRS, LIRS)
Composite Methods
* COBRA
* Analogy-based methods
Non-Model-Based Methods
* Expert Judgement
* Delphi method and variants

Some thoughts about Generic Model-Based Models
Often Generic Model-Based models such as COCOMO and its variants are used and applied as-is, without taking into account the composition of the dataset on which the model was built. For instance, COCOMO I was formulated using 63 projects and COCOMO II using 83 projects, most of them written in COBOL, LISP and Ada.
Many people speak of such models as "open" models. But they are really "closed" models, in the sense that they propose a "black box" approach: you apply your data to a dataset that does not necessarily fit your organization, the expertise of your project teams, your programming languages and environments, or all the proxies proposed by the model. So, what about applying a model with a low fit to your reality? Which level of predictability should be expected in return?
In such cases, the basic values provided by the model should be calibrated before applying it; here are some papers about calibration of the Post-Architecture Model, also using a Bayesian approach. In any case, much attention must be paid to this issue in order to obtain reliable estimates.
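As a minimal sketch of such a calibration (far simpler than the Bayesian approach in the cited papers), the constants of a COCOMO-like equation Effort = A · Size^B can be re-fitted to an organization's own closed projects by ordinary least squares in log space; the function name below is illustrative only:

```python
import math

def calibrate(sizes, efforts):
    """Fit effort = A * size**B to local project data by
    ordinary least squares on the log-transformed values."""
    n = len(sizes)
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    B = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope -> exponent
    A = math.exp((sy - B * sx) / n)                # intercept -> constant
    return A, B
```

Once A and B are re-fitted on past projects that resemble the new one, the model's nominal constants are replaced by the local ones, which is the essence of calibration.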
Some thoughts about Non-Model-Based Methods
The first of the two techniques in this category is Expert Judgement. The lower part of the classification tree presented above reflects the option of not formalizing knowledge into models, relying instead on the experience of human experts to estimate each project case by case. Well-known and recognized guides on Project Management such as the PMBOK (Project Management Body of Knowledge) propose, in the "Estimate Activity Duration" process (process #6.3), "Expert Judgment" and "Analogous Estimating" as the first two "tools & techniques" for that process; the "Quantitatively based durations" techniques are classified and treated afterwards. This is a simple but telling indication of the relevance that non-model-based methods have nowadays in the estimation field, for several reasons. In particular, for the ICT sector there are several publications by Magne Jorgensen (Univ. of Oslo, Norway) on these aspects, using statistical analysis to identify the main drivers in human judgement for reducing estimation errors. A paper just published in the IEEE Transactions on Software Engineering provides an extensive review of studies on expert estimation of software development effort, discussing why this kind of estimation can perform better than using models. Another interesting paper, by Passing & Strahringer, describes a process with three levels of prescriptiveness (Basic, Intermediate, Advanced) and two levels of decomposition (workflows and activities), based on the first two estimation techniques listed in process #6.3 of the PMBOK 2000.
The second item classified here is the Delphi method. As discussed in this paper by Karl Wiegers, some flaws in the basic Delphi technique (no interaction among the experts composing the team) were detected by Barry Boehm and his Rand colleagues, who modified it into the Wideband Delphi. A very quick summary is available in this presentation by Robert L. Galen (pages 3-7). A template for formalizing Wideband Delphi estimations is also available from one of the Carnegie Mellon University (CMU) courses.
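The numerical bookkeeping of a Wideband Delphi session (collecting a round of anonymous estimates, then checking whether the experts have converged enough to stop iterating) can be sketched as follows; the 10% convergence threshold is an illustrative assumption, not part of the method's definition:

```python
def delphi_round(estimates, tolerance=0.10):
    """Summarize one round of anonymous expert estimates.

    Returns the mean estimate and whether the team has converged,
    i.e. whether the relative spread (max minus min, over the mean)
    is within the given tolerance. If not converged, the experts
    discuss their assumptions and another round is run.
    """
    mean = sum(estimates) / len(estimates)
    spread = (max(estimates) - min(estimates)) / mean
    return mean, spread <= tolerance
```

For example, a round of 100, 105 and 95 man/days converges under a 10% tolerance, while 50, 150 and 100 man/days calls for another discussion round.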
A starting point: applying Regression Analysis
Given the complexity of calibrating a database such as the COCOMO one, and the several factors to manage, the starting point is a basic comprehension and usage of statistics and mathematics in Software Engineering too, in particular of Regression Analysis, applied to your own projects database. Such a database should take into account several qualitative factors, mainly organizational ones defining the project, such as: the project typology (New Development, Customization, Maintenance, ...), the Project Manager responsible for the project, the number of staff involved in the project, the kind of Software Life Cycle selected (Waterfall, Spiral, ...) and its possible approaches (Sashimi, eXtreme Programming, SCRUM, lightweight methodologies, ...);
as well as quantitative data: at least (as said) the size, expressed preferably in an FSM method unit (Function Points, COSMIC fsu, ...), but also the effort distribution by SLC phase, the number of defects, and so on. To comply with SW-CMM requirements, it could be interesting to isolate the effort devoted to SQA and SCM activities (the last two columns in the next figure), since these activities should be performed by external groups.
Applying the simple formulas for linear or exponential regression yields the two parameters needed to derive the effort value for the new project.
These are the formulas for linear regression (effort y against size x, over n historical projects), giving y = a + b·x:

b = (n·Σxy − Σx·Σy) / (n·Σx² − (Σx)²)
a = (Σy − b·Σx) / n

and exponential regression, y = a·x^b, obtained by applying the same two formulas to the log-transformed data:

b = (n·Σ(ln x · ln y) − Σln x · Σln y) / (n·Σ(ln x)² − (Σln x)²)

and finally

a = exp((Σln y − b·Σln x) / n)
Click here for a simple MS Excel template based on these formulas.
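For readers preferring code to spreadsheets, the linear-regression formulas above translate directly into a few lines of Python (no external libraries; the sample size/effort figures are invented for illustration):

```python
def linear_regression(x, y):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Historical (size in FP, effort in man/days) pairs -- invented sample data
sizes = [120, 250, 400, 610]
efforts = [95, 210, 330, 480]
a, b = linear_regression(sizes, efforts)
new_effort = a + b * 500  # estimate for a hypothetical new 500 FP project
```

The exponential variant is obtained, as shown above, by passing the log-transformed values to the same function and exponentiating the intercept.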
After applying regression analysis, the total number of man/days for the new project can be derived on the basis of the assumptions taken into account. The next step is the distribution of effort by SLC phase.
Thus, an interesting figure to derive from the projects database is the (re)calculation of some statistics, as shown in the last table (Max, Min, Avg and Median values, plus the average percentage contribution of each phase to the whole project). For instance, considering those sample data, the Coding phase takes on average 23.29% of the total effort, SQA activities only 3.47%, and so on. So, grouping projects by similarity (in a worksheet such as this one, using Excel filters), it is possible to apply those figures to the new project, or to start from them in order to determine the new effort distribution.
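Applying such a phase breakdown to a new total estimate is then a one-line computation. In this sketch, the 23.29% (Coding) and 3.47% (SQA) shares come from the sample data discussed above, while the remaining percentages are invented placeholders:

```python
def phase_breakdown(total_effort, avg_pct):
    """Distribute a total effort estimate across SLC phases.

    avg_pct maps each phase to its average share (in %) observed
    on similar historical projects; the shares should sum to ~100.
    """
    return {phase: total_effort * pct / 100.0
            for phase, pct in avg_pct.items()}

# Coding and SQA shares from the sample data above; the rest are placeholders
shares = {"Analysis": 18.0, "Design": 22.0, "Coding": 23.29,
          "Testing": 25.24, "SCM": 8.0, "SQA": 3.47}
plan = phase_breakdown(1000, shares)  # 1000 man/days to distribute
```

The important caveat, repeated below, is that the shares must come from projects genuinely similar to the one being planned.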
But what can be done if your own historical database is not yet in place? Does any study with a "standard" effort distribution exist?
Capers Jones, in a 2002 paper, proposed a table listing the 25 most applied activities in software projects from 4 domains (Web, MIS, Systems and Military projects), for applications of about 1000 FPs in size or larger. A 2001 paper reported on page 2 the SLC phase distribution for Rubin's data, even if Code/Unit Test is a single category there. Another presentation, from the Ottawa SPIN, gives on slide 10 some standard figures (rules of thumb).
In any case, applying those figures as-is could be dangerous, since (as said) a given distribution should only be applied to projects with several similarities. Thus, much attention must be paid to this aspect during the planning phase.
Estimating with Incomplete Data
A common and frequent problem occurs when building (or simply consulting) a historical project database for estimation purposes: some data may be incomplete, and the most common practice is simply to ignore those observations. This practice (called listwise deletion) can reduce the accuracy of cost estimation models.
Several techniques, grouped under the MDT (Missing Data Techniques) acronym, have therefore been created to overcome this common situation, with the aim of improving estimation accuracy.
The most common ones are:
Deletion Methods
* Listwise deletion: uses only those observations without any missing values
* Pairwise deletion: all recorded values in each observation are considered, and only the missing values are ignored
Imputation Methods
* Mean Imputation: each missing value is substituted with the mean of the observed values
* Hot-Deck Imputation: each missing value is filled with values taken from other observations in the same dataset
* Cold-Deck Imputation: as hot-deck imputation, but the donor is selected from the results of a previous survey
* Regression Imputation: each missing value is replaced with a value predicted by a regression model
* Multiple Imputation Methods: as regression imputation, but imputing more than one value, drawn from a predicted distribution of values
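As a toy illustration of the two simplest MDTs, here is listwise deletion next to mean imputation on a tiny dataset with one missing (None) value; the data values are invented:

```python
def listwise_deletion(rows):
    """Keep only observations with no missing (None) values."""
    return [r for r in rows if None not in r]

def mean_imputation(rows):
    """Replace each missing value with the mean of the observed
    values in the same column."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) /
             sum(1 for v in c if v is not None) for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(r)]
            for r in rows]

# size, effort, team size -- the second observation misses the team value
data = [[100, 80, 5], [200, 150, None], [300, 240, 7]]
```

Listwise deletion drops the second observation entirely, while mean imputation keeps it, filling the gap with the mean team size of the other projects.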
A reference publication is:
Strike K., El-Emam K. & Madhavji N., Software Cost Estimation with Incomplete Data, National Research Council of Canada - Institute for Information Technology, Technical Report NRC-43618, January 2000
Other papers referring/using MDTs are:
P. Sentas, L. Angelis, I. Stamelos & G.L. Bleris, Multiple Logistic Regression as Imputation Method Applied on Software Effort Prediction, METRICS 2004 Conference, September 2004
N. Ohsugi, M. Tsunoda, A. Monden & K. Matsumoto, Effort Estimation Based on Collaborative Filtering, PROFES 2004 Conference, April 2004, Nara (Japan)
Estimation studies by application domain
Several papers report experimental data from applying estimation techniques, or derive formulas from field data, starting (for the MIS domain) with the one by Walston & Felix (1977) in the IBM Systems Journal, one of the first papers on this issue. Another good source, with several papers on estimation from 1999 onwards, is here.
MIS projects
This is the most studied domain, with several published works. Here is a list of some valuable ones available online.
One of the most interesting and cited studies on the usage of Function Points in building productivity models, by Abran & Robillard
A review of FPA-based predictive models is reported by Meli & Santillo
Several papers report estimation models based on the COSMIC-FFP method, such as the one presented by Abran, Symons & Oligny at ESCOM 2001, based on 12 projects from the ISBSG worldwide database
Dolado presented in 1999 a study comparing three estimation techniques (Linear Regression, Neural Networks and Genetic Programming) against 7 datasets expressed both in Function Points and LOC
A paper by Maxwell & Forselius appeared in IEEE Software, looking at the key productivity factors with the most impact for properly predicting project effort
Bayesian Belief Networks (BBN) have recently been investigated as a software productivity estimation tool by Fenton and also by Bibi, Stamelos & Angelis
BRACE is a software tool supporting practical estimation by analogy, using a simulation approach named bootstrap, by Stamelos, Angelis & Sakellaris
An ISERN 2000 Technical Report by Jeffery, Ruhe & Wieczorek compares several estimation techniques applied to the ISBSG dataset R6
Maintenance projects
Most of the studies about Software Maintenance come from the Netherlands. On the functional measurement side there is NESMA (the Netherlands Software Metrics Association), whose Counting Practices Manual is strongly devoted to counting enhancement projects
Niessink and Van Vliet have published some papers on this topic, using FPA
Manfred Bundschuh presented this paper at IWSM 2002, with some notes on the "bath tub curve" for predicting the service effort in IT systems of Volkswagen AG
Abran, Silva & Primera presented a comparative estimation study with two sets of projects, using COSMIC-FFP v2.0 in the maintenance context
Web/Hypermedia projects
Particular attention is nowadays devoted to web/hypermedia projects. Holck & Clemmensen describe what should make web development different from MIS applications, stressing that estimates cannot be based on functionalities alone. Reifer, taking note of such differences, proposes WebMo, a COCOMO-like estimation model for web projects, whose sizing method is Web Objects: it is based on the Function Point Analysis (FPA) method but considers, in addition to the traditional data/transaction elements (ILF, EIF, EI, EO, EQ), other "predictors" with a better fit for the web (# of XML/HTML/query lines, # of multimedia files, # of scripts, # of web building blocks).
A Chilean experience is reported by Ochoa, Bastarrica and Parra, who propose a method called CWADEE, based on (while departing a bit from) WebMo/Web Objects; due to some project constraints, they applied the Data Web Points (DWP) metric and an estimation equation that takes into account 9 cost drivers, the # of DWP, and a representativeness coefficient.
Another 2003 publication applied the COBRA method to web projects (WebCOBRA), by Ruhe, Jeffery & Wieczorek, obtaining an MMRE of 0.17 against the dataset used.
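For reference, MMRE (Mean Magnitude of Relative Error), the accuracy indicator quoted above, is simply the average of |actual − estimated| / actual over the projects in the dataset; a one-function sketch:

```python
def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error: the average of
    |actual - estimate| / actual over all observations."""
    return sum(abs(a - e) / a
               for a, e in zip(actuals, estimates)) / len(actuals)
```

Lower is better: an MMRE of 0.17 means the estimates were off by 17% of the actual effort, on average.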
Using data-driven models such as CBR (Case-Based Reasoning), Stepwise Regression (SWR) and Regression Trees (CART), Mendes et al. recently proposed a comparative study of cost estimation models for web hypermedia applications, using a database of 34 projects.
An interesting research line on estimation issues is conducted by a Norwegian lab (Simula); look at the publications by Magne Jørgensen. Regarding estimation for web projects, this paper examines the relationship between the accuracy of expert effort estimates of web development projects and the estimators' type of competence.
Software Estimation Tools
About the tools to use for estimation, there is a recent (2002) paper by Capers Jones illustrating their evolution from the '60s onwards, as well as 10 generic features that software estimation tools can (or should) provide (in fact, the whole June 2002 issue of the Crosstalk journal was about Software Estimation).
Another webpage summarising several links to tools for Software Estimation is the one maintained by DACS.
Here is a list compiled by a Finnish consulting company
A useful tool, practically free, is an MS Excel add-in, namely the Analysis ToolPak. Simply by selecting it from the "Tools > Add-ins" menu, you get several statistical analyses (ANOVA, t-test, Multiple Regression analysis, ...)
Publications
Buglione L., Dimensionare il software: qual è il giusto "metro"?, White Paper, Bloom.it - Frammenti di Organizzazione, October 2003
Buglione L., Project Size Unit (PSU) - Measurement Manual, v1.01, October 2005, English version
Buglione L., FORESEEing for Better Project Sizing and Effort Estimation, MENSURA 2006, Cadiz (Spain), November 6-8, 2006
Buglione L., Meglio Agili o Veloci? Alcune riflessioni sulle stime nei progetti XP, XPM.it, February 2007
Bégnoche L., Abran A. & Buglione L., A Measurement Approach Integrating ISO 15939, CMMI and ISBSG, Proceedings of the 4th Software Measurement European Forum (SMEF 2007), Rome (Italy), May 9-11, 2007, ISBN 9788870909425, pp. 111-130
Buglione L. & Abran A., Improving Estimations in Agile Projects: Issues and Avenues, Proceedings of the 4th Software Measurement European Forum (SMEF 2007), Rome (Italy), May 9-11, 2007, ISBN 9788870909425, pp. 265-274
Buglione L., Project Size Unit (PSU) - Measurement Manual, v1.21, November 2007, English, Spanish and Italian versions available
Buglione L., Some thoughts on Productivity in ICT projects, WP-2007-01, White Paper, version 1.1, March 2008
Buglione L. & Abran A., Performance Calculation and Estimation with QEST/LIME using ISBSG r10 Data, Proceedings of the 5th Software Measurement European Forum (SMEF 2008), Milan (Italy), 28-30 May 2008
Buglione L., Improving Estimation by Effort Type Proportions, Software Measurement News, Vol. 13, No. 1, 2008
Cross References to this page
R.S. Pressman & Associates, Inc.  Software Engineering Resources
Mohamed Fares  Metrics page
last update: September 9, 2011
