By Capers Jones


Abstract

In 2014, software is the main operational component of every major business and government organization in the world. But software quality is still not acceptable for many applications, and software schedules and costs are frequently much larger than planned. This short study discusses proven methods and results for achieving software excellence. The paper also quantifies what the term “excellence” means for both quality and productivity.

Introduction

Software is the main operating tool of business and government in 2014. But up through the end of 2013, software quality remained marginal, and software schedules and costs remained much larger than desirable or planned. About 35% of projects in the 10,000 function point size range were cancelled, and about 5% of software outsource agreements ended up in litigation. This short study identifies the major methods for bringing software under control and achieving excellent results.

The first topic of importance is to show the differences between excellent, average, and poor software projects in quantified form. Table 1 shows the essential differences between software excellence and unacceptable results for a mid-sized project of 1,000 function points, or about 53,000 Java statements.
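
As a rough illustration of this size conversion, here is a minimal sketch in Python; the figure of roughly 53 Java statements per function point is taken from the sentence above, and the function name is hypothetical.

# Minimal sketch: converting a size in function points to approximate Java statements,
# assuming roughly 53 Java statements per function point (figure from the text above).
JAVA_STATEMENTS_PER_FP = 53
def approximate_statements(function_points, statements_per_fp=JAVA_STATEMENTS_PER_FP):
    return function_points * statements_per_fp
print(approximate_statements(1000))  # about 53,000 Java statements for a 1,000 function point project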

The data in Table 1 comes from the author’s clients, which consist of about 600 companies, of which about 150 are Fortune 500 companies. About 40 government and military organizations are also clients, but Table 1 is based on corporate results rather than government results. Government software tends to have large overhead costs and extensive status reporting that are not found in the civilian sector. (Some big defense projects have produced so much paperwork that there were about 400 English words for every Ada statement, and the words cost more than the source code.)

(Note that the data in this report was produced using the Namcook Analytics Software Risk Master™ (SRM) tool. SRM can operate as an estimating tool prior to requirements or as a measurement tool after deployment.)

At this point it is useful to discuss and explain the main differences between the best, average, and poor results.

Software Quality Differences for Best, Average, and Poor Projects

Software quality is the major point of differentiation between excellent results, average results, and poor results.

While software executives demand high productivity and short schedules, the vast majority do not understand how to achieve them. Bypassing quality control does not speed projects up: it slows them down. In breach of contract litigation where the author has been an expert witness, the number one reason for enormous schedule slips has been entering testing with so many bugs that test schedules run at least double their planned duration.

The major point of this article is that high quality, achieved through a synergistic combination of defect prevention, pre-test inspections, and static analysis, is fast and cheap. Poor quality is expensive, slow, and unfortunately far too common, because most companies do not know how to achieve high quality. High quality does not come from testing alone. It requires defect prevention such as Joint Application Design or embedded users; pre-test inspections and static analysis; and formal test case development combined with certified test personnel.

The defect potential information in Table 1 includes defects from five origins: requirements defects, design defects, code defects, document defects, and “bad fixes” or new defects accidentally included in defect repairs. The approximate distribution among these five sources is:

1. Requirements defects 15%

2. Design defects 30%

3. Code defects 40%

4. Document defects 8%

5. Bad fixes 7%

Total defects 100%

However, the distribution of defect origins varies widely based on the novelty of the application, the experience of the clients and the development team, the methodologies used, and the programming languages. Certified reusable material also has an impact on software defect volumes and origins.
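
To show how these percentages translate into defect counts, here is a minimal sketch for a hypothetical 1,000 function point project; the distribution comes from the list above, while the assumed defect potential of 4.0 defects per function point is purely illustrative.

# Minimal sketch: defects by origin for a hypothetical project.
# The distribution percentages come from the list above; the defect potential
# of 4.0 defects per function point is an illustrative assumption.
DEFECT_ORIGIN_DISTRIBUTION = {
    "requirements": 0.15,
    "design": 0.30,
    "code": 0.40,
    "documents": 0.08,
    "bad fixes": 0.07,
}
def defects_by_origin(function_points, defects_per_fp=4.0):
    total = function_points * defects_per_fp
    return {origin: round(total * share) for origin, share in DEFECT_ORIGIN_DISTRIBUTION.items()}
print(defects_by_origin(1000))
# {'requirements': 600, 'design': 1200, 'code': 1600, 'documents': 320, 'bad fixes': 280}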

Because the costs of finding and fixing bugs have been the #1 cost driver for the entire software industry for more than 50 years, the most important differences between excellent and mediocre results are in the areas of defect prevention, pre-test defect removal, and testing.

All three examples are assumed to use the same set of test stages, including:

1. Unit test

2. Function test

3. Regression test

4. Component test

5. Performance test

6. System test

7. Acceptance test

The overall defect removal efficiency levels of these 7 test stages range from below 80% for the worst case up to about 90% for the best case.

Testing alone is not sufficient to top 95% in defect removal efficiency (DRE). Pre-test inspections and static analysis are needed to approach or exceed the 99% range of the best case.
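
For reference, DRE is conventionally measured as the percentage of total defects removed before delivery, with user-reported defects typically counted for an initial period of use. A minimal sketch of the calculation, with illustrative numbers:

# Minimal sketch: defect removal efficiency (DRE) as the fraction of total
# defects removed before delivery. The 3,960 / 40 split is illustrative.
def defect_removal_efficiency(removed_before_release, found_after_release):
    total = removed_before_release + found_after_release
    return removed_before_release / total
print(defect_removal_efficiency(3960, 40))  # 0.99, i.e. 99% DRE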

Excellent Quality Control

Excellent projects have rigorous quality control methods that include formal estimation of quality before starting, full defect measurement and tracking during development, and a full suite of defect prevention, pre-test removal and test stages. The combination of low defect potentials and high defect removal efficiency (DRE) is what software excellence is all about.

Companies that are excellent in quality control are usually the companies that build complex physical devices such as computers, aircraft, embedded engine components, medical devices, and telephone switching systems. Without excellence in quality these physical devices will not operate successfully. Worse, failure can lead to litigation and even criminal charges. Therefore all companies that use software to control complex physical machinery tend to be excellent in software quality.

Examples of organizations with excellent software quality in alphabetical order include Advanced Bionics, Apple, AT&T, Boeing, Ford for engine controls, General Electric for jet engines, Hewlett Packard, IBM, Motorola, NASA, the Navy for weapons, Raytheon, and Siemens.

Companies and projects with excellent quality control tend to have low levels of code cyclomatic complexity and high test coverage; i.e., test cases cover more than 95% of paths and risk areas.
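
Cyclomatic complexity is essentially the number of independent paths through a routine, which can be approximated as the number of binary decision points plus one. A minimal sketch of that approximation, with a hypothetical tokenized-code input and an illustrative keyword list:

# Minimal sketch: approximating McCabe cyclomatic complexity as decision points + 1.
# The token input and the keyword list are illustrative and language-dependent.
DECISION_KEYWORDS = {"if", "elif", "for", "while", "case", "and", "or", "catch"}
def approximate_cyclomatic_complexity(tokens):
    return 1 + sum(1 for token in tokens if token in DECISION_KEYWORDS)
print(approximate_cyclomatic_complexity(["if", "x", "and", "y", "for", "i"]))  # 4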

These companies also measure quality well and all know their defect removal efficiency (DRE) levels. (Any company that does not measure and know its DRE is probably below 85% in DRE.)

Excellent quality control achieves defect removal efficiency (DRE) levels between about 97% for large systems in the 10,000 function point size range and about 99.6% for small projects below 1,000 function points in size.

A DRE of 100% is theoretically possible but is extremely rare. The author has only noted DRE of 100% in two projects out of a total of about 20,000 projects examined.

Average Quality Control

In today’s world, agile is the new average. Agile development has proven to be effective for smaller applications below 1,000 function points in size. Agile does not scale up well and is not a top method for quality. Agile is weak in quality measurements and does not normally use inspections, which have the highest defect removal efficiency (DRE) of any known form of defect removal. Inspections top 85% in DRE and also raise testing DRE levels. Among the author’s clients that use agile, the average value for defect removal efficiency is about 92%. This is certainly better than the 85% industry average, but not up to the 99% actually needed to achieve optimal results.

Some but not all agile projects use “pair programming,” in which two programmers share an office and a workstation and take turns, one coding while the other watches and “navigates.” Pair programming is very expensive but improves quality by only about 15% compared to single programmers. Pair programming is much less effective at finding bugs than formal inspections, which usually bring three to five personnel together to seek out bugs using formal methods.

Agile is a definite improvement for quality compared to waterfall development, but it is not as effective as the quality-strong methods of the Team Software Process (TSP) and the Rational Unified Process (RUP).

Average projects usually do not know defects by origin, and they do not measure defect removal efficiency until testing starts; i.e., requirements and design defects are underreported and sometimes invisible.

A recent advance in software quality control, now frequently used by average as well as advanced organizations, is static analysis. Static analysis tools can find about 55% of code defects, which is much higher than most forms of testing.

Many test stages such as unit test, function test, regression test, etc. are only about 35% efficient in finding code bugs, or find one bug out of three. This explains why 6 to 10 separate kinds of testing are needed.
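
The arithmetic behind this point: if each test stage finds roughly 35% of the bugs that reach it, many stages are needed before cumulative removal approaches acceptable levels. A minimal sketch, assuming for simplicity that the stages are independent:

# Minimal sketch: cumulative defect removal from a series of test stages,
# assuming each stage independently removes about 35% of the defects that reach it.
def cumulative_removal(per_stage_efficiency, stages):
    remaining = 1.0
    for _ in range(stages):
        remaining *= (1.0 - per_stage_efficiency)
    return 1.0 - remaining
for stages in (1, 3, 6, 10):
    print(stages, round(cumulative_removal(0.35, stages), 3))
# 1 stage -> 0.35, 3 stages -> 0.725, 6 stages -> 0.925, 10 stages -> 0.987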

The kinds of companies and projects that are “average” would include internal software built by hundreds of banks, insurance companies, retail and wholesale companies, and many government agencies at federal, state, and municipal levels.

Average quality control has defect removal efficiency (DRE) levels from about 85% for large systems up to 97% for small and simple projects.

Poor Quality Control

Poor quality control is characterized by weak defect prevention and an almost total omission of pre-test defect removal methods such as static analysis and formal inspections. Poor quality control is also characterized by inept and inaccurate quality measures that ignore front-end defects in requirements and design. There are also gaps in measuring code defects. For example, most companies with poor quality control have no idea how many test cases might be needed or how efficient various kinds of test stages are.

Companies with poor quality control also fail to perform any kind of up-front quality prediction, so they jump into development without a clue as to how many bugs are likely to occur or what the best methods are for preventing or removing them.

One of the main reasons for the long schedules and high costs associated with poor quality is the fact that so many bugs are found when testing starts that the test interval stretches out to two or three times its planned length.

Some of the kinds of software that are noted for poor quality control include the Obamacare web site, municipal software for property tax assessments, and software for programmed stock trading, which has caused several massive stock crashes.

Poor quality control is below 85% in defect removal efficiency (DRE). In fact, for cancelled projects or those that end up in litigation for poor quality, DRE levels may drop below 80%, which is low enough to be considered professional malpractice. In litigation where the author has been an expert witness, DRE levels in the low 80% range have been the unfortunate norm.

Reuse of Certified Materials for Software Projects

So long as software applications are custom designed and coded by hand, software will remain a labor-intensive craft rather than a modern professional activity. Manual software development, even with excellent methodologies, cannot be much more than 15% better than average development, due to the intrinsic limits of human performance and the legal limits on the number of hours that can be worked without fatigue.

The best long-term strategy for achieving consistent excellence at high speed would be to eliminate manual design and coding in favor of construction from certified reusable components.

It is important to realize that software reuse encompasses many deliverables and not just source code. A full suite of reusable software components would include at least the following 10 items:

1. Reusable requirements

2. Reusable architecture

3. Reusable design

4. Reusable code

5. Reusable project plans and estimates

6. Reusable test plans

7. Reusable test scripts

8. Reusable test cases

9. Reusable user manuals

10. Reusable training materials

These materials need to be certified to near zero-defect levels of quality before reuse becomes safe and economically viable. Reusing buggy materials is harmful and expensive. This is why excellent quality control is the first stage in a successful reuse program.

The need to approach zero defects and achieve formal certification adds about 20% to the costs of constructing reusable artifacts and about 30% to the schedules for construction. However, using certified reusable materials subtracts over 80% from the costs of construction and can shorten schedules by more than 60%. The more times materials are reused, the greater their cumulative economic value.
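
A minimal sketch of this trade-off, using the percentages above together with a hypothetical baseline cost of 100,000 for building one component by custom development:

# Minimal sketch: economics of certified reuse, using the percentages from the text.
# Certification adds about 20% to the cost of building a component; each later use
# of the certified component is assumed to cost about 80% less than custom construction.
def reuse_economics(custom_cost, reuses):
    certified_build_cost = custom_cost * 1.20   # one-time cost to build and certify
    cost_per_reuse = custom_cost * 0.20         # roughly 80% cheaper than building custom
    total_with_reuse = certified_build_cost + reuses * cost_per_reuse
    total_custom = custom_cost * (1 + reuses)   # building it custom every single time
    return total_with_reuse, total_custom
for reuses in (1, 2, 5, 10):
    print(reuses, reuse_economics(100_000, reuses))
# Even after one reuse the certified component is cheaper (140,000 vs. 200,000 here),
# and the advantage grows with every additional reuse.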

One caution to readers: reusable artifacts may be treated as taxable assets by the Internal Revenue Service. It is important to check this topic out with a tax attorney to be sure that formal corporate reuse programs will not encounter unpleasant tax consequences.

The three samples in Table 1 showed only moderate reuse, typical for the end of 2013: the excellent project had 15% certified reuse (close to the current maximum), the average project had 10%, and the poor project had 5%.

In the future it is technically possible to make large increases in the volumes of reusable materials. By around 2025 we should be able to construct software applications with perhaps 85% certified reusable materials.

Table 2 shows the productivity impact of increasing volumes of certified reusable materials. Table 2 uses whole numbers and generic values to simplify the calculations.
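
In the same spirit of whole numbers and generic values, here is a minimal sketch of the relationship, assuming purely for illustration that development effort scales with the custom-built fraction of the application and that a generic baseline team delivers 10 function points per staff month:

# Minimal sketch: productivity impact of certified reuse, assuming (for illustration)
# that effort scales with the fraction of the application that must be custom built.
def productivity_with_reuse(base_fp_per_month, reuse_fraction):
    custom_fraction = 1.0 - reuse_fraction
    return base_fp_per_month / custom_fraction if custom_fraction > 0 else float("inf")
for reuse in (0.0, 0.15, 0.50, 0.85):
    print(f"{reuse:.0%} reuse -> {productivity_with_reuse(10.0, reuse):.1f} FP per staff month")
# 0% -> 10.0, 15% -> 11.8, 50% -> 20.0, 85% -> 66.7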

Software reuse from certified components instead of custom design and hand coding is the only known technique that can achieve order-of-magnitude improvements in software productivity. True excellence in software engineering must derive from replacing costly and error-prone manual work with construction from certified reusable components.

Because finding and fixing bugs is the major software cost driver, increasing volumes of high-quality certified materials can convert software from an error-prone manual craft into a very professional high-technology profession. Table 3 shows probable quality gains from increasing volumes of software reuse.

Since the current maximum for software reuse from certified components is only in the range of 15% or a bit higher, it can be seen that there is a large potential for future improvement.

Note that uncertified reuse in the form of mashups or materials extracted from legacy applications may top 50%. However, uncertified reusable materials often have latent bugs, security flaws, and even error-prone modules, so this is not a very safe practice. In several cases the reused material was so buggy that it had to be discarded and replaced by custom development.

Software Methodologies

Unfortunately selecting a methodology is more like joining a cult than making an informed technical decision. Most companies don’t actually perform any kind of due diligence on methodologies and merely select the one that is most popular.

In today’s world agile is definitely the most popular. Fortunately agile is also a pretty good methodology and much superior to the older waterfall method. However there are some caveats about methodologies.

Agile has been successful primarily for smaller applications < 1,000 function points in size. It has also been successful for internal applications where users can participate or be “embedded” with the development team to work out requirements issues.

Agile has not scaled up well to large systems > 10,000 function points in size. Agile has also not been visibly successful for commercial or embedded applications where there are millions of users, none of whom work for the company building the software, so their requirements have to be collected using focus groups or special marketing studies.

A variant of agile that uses “pair programming,” with two programmers working in the same cubicle, one coding and the other “navigating,” has become popular. However, it is very expensive, since two people are being paid to do the work of one. There are claims that quality is improved, but formal inspections combined with static analysis achieve much higher quality at much lower cost.

Another agile variation, extreme programming, in which test cases are created before the code itself is written, has proven to be fairly successful for both quality and productivity compared to traditional waterfall methods. However, both TSP and RUP are just as good, and even better for large systems.

There are dozens of methodologies available circa 2013, and many are good; some are better than agile for large systems, while older methods such as waterfall and cowboy development are at the bottom of the effectiveness list and should be avoided on modern applications.

For major applications in the 10,000 function point size range and above, the Team Software Process (TSP) and the Rational Unified Process (RUP) have the best track records for successful projects and the fewest failures.

Quantifying Software Excellence

Because the software industry has a poor track record for measurement, it is useful to show what “excellence” means in quantified terms.

Excellence in software quality combines defect potentials of no more than 2.5 bugs per function point with defect removal efficiency (DRE) of 99%. This means that delivered defects will not exceed 0.025 defects per function point.

By contrast, current average values circa 2013 are about 3.0 to 5.0 bugs per function point for defect potentials and only 85% to 90% DRE, leading to as many as 0.75 bugs per function point at delivery.
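
The arithmetic behind these figures is simple: delivered defects per function point equal the defect potential multiplied by the fraction of defects not removed. A minimal sketch using the values from the two paragraphs above:

# Minimal sketch: delivered defects per function point = defect potential * (1 - DRE).
def delivered_defects_per_fp(defect_potential_per_fp, dre):
    return defect_potential_per_fp * (1.0 - dre)
print(round(delivered_defects_per_fp(2.5, 0.99), 3))  # 0.025 -> excellence
print(round(delivered_defects_per_fp(5.0, 0.85), 3))  # 0.75  -> worst end of the current average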

Excellence in software productivity and schedules is not a fixed value but varies with the size of the application. Table 4 shows two “flavors” of productivity excellence: 1) the best that can be accomplished with 10% reuse, and 2) the best that can be accomplished with 50% reuse:

As can be seen from Table 4, software reuse is the most important technology for improving software productivity and quality by really significant amounts. Methods, tools, CMMI levels, and other minor factors are certainly beneficial. However, so long as software applications are custom designed and hand coded, software will remain an expensive craft and not a true professional occupation.

Summary and Conclusions

Because software is the driving force of both industry and government operations, it needs to be improved in terms of both quality and productivity. The most powerful technology for making really large improvements in both quality and productivity will be eliminating costly custom designs and labor-intensive hand coding, and moving toward manufacturing software applications from libraries of well-formed standard reusable components that approach zero-defect quality levels.

Today’s best combinations of methods, tools, and programming languages are certainly superior to waterfall or cowboy development using unstructured methods and low-level languages. But even the best current methods still involve error-prone custom designs and labor-intensive manual coding.

Disclaimers:

Copyright © 2013-2014 by Capers Jones. All Rights Reserved.



Capers Jones


Capers Jones is currently vice president and chief technology officer of Namcook Analytics LLC. The company designs leading-edge risk, cost, and quality estimation and measurement tools.
Prior to the formation of Namcook Analytics in 2012 Capers Jones was the president of Capers Jones & Associates LLC between 2000 and 2012.
He is also the founder and former chairman of Software Productivity Research LLC (SPR). Capers Jones founded SPR in 1984 and sold the company to Artemis Management Systems in 1998. He was the chief scientist at Artemis until retiring in 2000. SPR marketed three successful commercial estimation tools: SPQR/20 in 1984; CheckPoint in 1995; and KnowledgePlan in 1998. SPR also built custom proprietary estimation tools for AT&T and Bachman Systems.
Before founding SPR Capers was Assistant Director of Programming Technology for the ITT Corporation at the Programming Technology Center in Stratford, Connecticut. During his tenure Capers Jones designed three proprietary software cost and quality estimation tools for ITT between 1979 and 1983.
He was also a manager and software researcher at IBM in California where he designed IBM’s first two software cost estimating tools in 1973 and 1974 in collaboration with Dr. Charles Turk.
Capers Jones is a well-known author and international public speaker. Some of his books have been translated into five languages. His five most recent books are “The Technical and Social History of Software Engineering,” Addison Wesley 2014; “The Economics of Software Quality with Olivier Bonsignour,” Addison Wesley, 2011; “Software Engineering Best Practices,” McGraw Hill 2010; “Applied Software Measurement,” 3rd edition, McGraw Hill, 2008; and “Estimating Software Costs,” McGraw Hill, 2nd edition, 2007.
Among his older book titles are “Patterns of Software Systems Failure and Success” (Prentice Hall 1994); “Estimating Software Risks” (International Thomson 1995); “Software Quality: Analysis and Guidelines for Success” (International Thomson 1997); and “Software Assessments: Benchmarks, and Best Practices” (Addison Wesley Longman 2000).
Capers and his colleagues have collected historical data from thousands of projects, hundreds of corporations, and more than 30 government organizations. This historical data is a key resource for judging the effectiveness of software process improvement methods and also for calibrating software estimation accuracy.
Capers Jones’ data is also widely cited in software litigation in cases where quality, productivity, and schedules are part of the proceedings. He has also worked as an expert witness in 15 lawsuits involving breach of contract and software taxation issues.


Phone: 401-864-2632

E-mail: Capers.Jones3@gmail.com

Blog: http://Namcookanalytics.com

Web: http://www.Namcook.com

