Free Download Software Assessments, Benchmarks, and Best Practices, by Capers Jones
Reading Software Assessments, Benchmarks, And Best Practices, By Capers Jones online can be done easily wherever you are: waiting for the bus at the shelter, standing in a queue, or in any other spare moment. This book can keep you company during that time without leaving you weary, and it will also improve the quality of your life.
Software Assessments, Benchmarks, and Best Practices, by Capers Jones
Free Download Software Assessments, Benchmarks, and Best Practices, by Capers Jones
A suggestion for selecting the best book to read today, Software Assessments, Benchmarks, And Best Practices, By Capers Jones, can be found on this page. You can locate the best books available anywhere in the world here, not only titles published in this country but also those from other countries. We now invite you to read Software Assessments, Benchmarks, And Best Practices, By Capers Jones as one of your reading materials; it is one of the best books collected on this site. Browse the page and you will find many titles on offer.
The advantage of reading Software Assessments, Benchmarks, And Best Practices, By Capers Jones is an improvement in the quality of your life. Quality of life is not only about how much knowledge you acquire; even reading fun or entertaining books helps, because feeling good leads you to do things more completely. Moreover, this book will give you lessons and good reasons to act. Your time spent reading Software Assessments, Benchmarks, And Best Practices, By Capers Jones will not be wasted.
Never mind if you do not have enough time to go to the bookstore and search for your favourite title. Nowadays, the online edition of Software Assessments, Benchmarks, And Best Practices, By Capers Jones makes reading more convenient: you do not have to go out to find the book. Searching for and downloading the e-book through this article gives you a better option, since the online edition is a digital book you can obtain from the download link supplied.
Why choose this online edition of Software Assessments, Benchmarks, And Best Practices, By Capers Jones? You do not have to go anywhere to read it; you can read it whenever and wherever you want, whether in your spare time or when you are tired of the tasks at the office. Get Software Assessments, Benchmarks, And Best Practices, By Capers Jones right now and be among the first to finish reading it.
Billions of dollars are wasted each year on IT software projects that are developed and either released late or never used. In light of recent large-scale errors, the methods, tools, and practices used for software development have become the subject of significant study and analysis. One qualitative method for analysis is software assessment, which explores the methodologies used by businesses for software development. Another method of analysis is software benchmarking, which collects quantitative data on such topics as schedules and costs.

Renowned author Capers Jones draws on his extensive experience in economic analysis to present Software Assessments, Benchmarks, and Best Practices, a useful combination of qualitative and quantitative approaches to software development analysis. When assessment data and benchmarking data are analyzed jointly, it is possible to show how specific tools and practices impact the effectiveness of an organization's development efforts. The result is a clearer, bigger picture--a roadmap that allows an organization to identify areas for improvement in its development efforts.

With this book as your guide, you will learn to combine assessments and benchmarking for optimal software analysis, to identify best and worst practices for software development, to improve software quality and application effectiveness, and to reduce the costs of software maintenance by avoiding software errors.
- Sales Rank: #1604876 in Books
- Published on: 2000-05-11
- Released on: 2000-05-01
- Original language: English
- Number of items: 1
- Dimensions: 9.06" h x 1.58" w x 7.32" l, 2.47 pounds
- Binding: Paperback
- 688 pages
From the Inside Flap
During my writing, this book evolved considerably from the first plan. Originally I intended to divide the book into two major sections. The first section was to discuss a number of assessment and benchmark methods used in the United States and Europe. The second section was to present an overview of software productivity and quality benchmarks, and associated "best practices" derived from benchmark studies. The benchmarks and best practices in this book cover six major kinds of software project: (1) management information system (MIS) projects, (2) outsource projects, (3) systems and embedded software projects, (4) commercial software projects, (5) military software projects, and (6) personal software projects developed by end users.
However, as the writing commenced, the focus of the book began to change. It soon became clear that a complete discussion of benchmarks and best practices for each of the six kinds of software would be about twice as large as initially planned. I had planned to devote approximately 30 pages to the benchmark and best-practice information for each type of software. But to do justice to the available data, almost 60 pages were needed for five of the six forms of software. Furthermore, a discussion of how assessment and benchmark studies operate and their technical differences may be of interest to those of us in the assessment and benchmark business, but it is not necessarily of great interest to those outside the limited circle of benchmark consultants.
As a result, the discussion of assessment and benchmark methods was cut back, and the sections devoted to information gathered during assessments and benchmark studies were expanded. Instead of a book with two sections of roughly equal size, the book now has a briefer introductory section and greatly expanded discussions of each type of software and the issues that confront each type.
This book also emphasizes assessments and benchmark data from the United States. Although my colleagues and I have gathered data in more than 24 countries, the issues of international benchmarks are quite complex. The international variations in working years and working days, how overtime is treated, and European restrictions on some kinds of data collection made me decide to concentrate on U.S. data.
Readers should note that this is a book about assessments and benchmarks written by someone who is in the assessment and benchmark business. Because my company has been performing assessments and benchmarks since 1985, we have an obvious interest in the topic. However, this is not a marketing book, nor is it a book about how my company's assessments and benchmarks work. The topics of software assessments and benchmarks are important ones, and this book attempts to include the general principles under which all assessment and benchmark consulting groups operate.
In my view, and also in the view of my competitors, assessments and software benchmarks are important to the global economy. Software is the most labor-intensive product of the twentieth century, and the most error prone. Assessments, benchmarks, and empirical data are on the critical path to minimizing software project failures. Every software project manager, every software quality assurance specialist, and every software engineer should understand the basic concepts of software assessments and benchmarks. This is a view shared by all of the assessment and benchmark consulting groups.
The software industry has achieved a notorious reputation as being out of control in terms of schedule accuracy, cost accuracy, and quality control. A majority of large systems run late and exceed their budgets, and many are cancelled without ever reaching completion. Assessments and benchmarks, followed by planned process improvement programs, can aid in bringing software under management control. These are not "silver bullet" methods. Assessments, benchmarks, and process improvement programs require effort and can be expensive, but project failures are far more expensive.
This book discusses the kinds of complex software projects that benefit from assessments and benchmark studies. Small and simple projects are not the main focus of assessments and benchmarks. The proper focus of assessments, benchmarks, process improvements, and this book is on large and complex applications.
Chapter 1 provides an introduction to the topic of software assessments and benchmarks. This chapter discusses the kinds of data that should be collected. It also cautions against some common problems, such as depending on data without validating it, and using hazardous metrics such as lines of code. This chapter also discusses the need to keep client data protected, and suggests some coding methods that can be used to perform benchmarks without revealing proprietary client information.
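The book's specific data-coding scheme is not reproduced here, but as a rough, hypothetical sketch of the general idea, a benchmark record can be keyed to an opaque, one-way code instead of the client's name, so quantitative data can be pooled and compared without revealing who supplied it:

```python
# Hypothetical sketch (not the book's actual coding method): replace client
# names with opaque codes so benchmark records can be pooled and compared
# without exposing proprietary client information.
import hashlib

def client_code(client_name: str, salt: str) -> str:
    """Derive a stable, non-reversible code for a client name."""
    digest = hashlib.sha256((salt + client_name).encode("utf-8")).hexdigest()
    return "CLIENT-" + digest[:8].upper()

record = {
    "client": client_code("Example Insurance Co.", salt="benchmark-2000"),  # hypothetical client
    "project_type": "MIS",
    "size_function_points": 2500,
    "schedule_months": 18,
    "effort_staff_months": 210,
}
print(record["client"])  # prints an opaque code; the real name never appears in the shared data
```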
Chapter 2 deals with the history of software process assessments and discusses some of the kinds of information that are gathered during software process assessments. Although more than a dozen forms of assessment exist, the form made popular by the Software Engineering Institute is the best known. Some recent and more specialized forms of assessment, such as those performed for the year 2000 problem, have also been widely used since about 1998.
Chapter 3 deals with the related topics of software benchmarks and software baselines. Benchmarks collect and compare quantitative data against industry norms. Baselines measure the rate at which a company can improve productivity and quality when compared with an initial starting point. Of course, sometimes productivity and quality can get worse instead of better.
Chapter 4 discusses 36 key factors that should be recorded during assessment and benchmark studies. If these 36 key factors are recorded, the data gathered by almost any benchmark and assessment consulting group, or by any company or government agency, could be compared meaningfully.
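The 36 factors themselves are defined in the book and are not listed here; the sketch below only illustrates, with hypothetical field names, how such a record might be structured so that data collected by different consulting groups, companies, or agencies lines up for comparison:

```python
# Hypothetical sketch only: the field names are illustrative, not the book's
# actual 36 factors. The point is a common record layout that makes data
# from different benchmark groups directly comparable.
from dataclasses import dataclass

@dataclass
class BenchmarkRecord:
    project_class: str           # classification factor, e.g. "MIS", "outsource", "systems"
    size_function_points: float  # project-specific factors
    schedule_months: float
    effort_staff_months: float
    primary_language: str        # technology factor
    team_experience_level: str   # sociological factor
    country: str                 # international factors
    work_hours_per_month: float

    def productivity(self) -> float:
        """Function points delivered per staff month."""
        return self.size_function_points / self.effort_staff_months

example = BenchmarkRecord("MIS", 1000, 14, 80, "COBOL", "average", "US", 132)
print(round(example.productivity(), 1))  # 12.5 function points per staff month
```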
Chapter 5 addresses an important topic that is somewhat ambiguous in the software literature: when we speak of "best practices," what exactly do we mean? Chapter 5 discusses some criteria for including or excluding tools and technologies from best-practice status. It is suggested that any technology considered a potential best practice needs empirical results from at least ten companies and 50 projects.
Chapter 6 discusses an important follow-on activity to assessment and benchmark studies. Both assessments and benchmark studies are diagnostic in nature, rather than therapeutic. These studies can identify problems, but they cannot cure them. Therefore, a natural follow-on activity to either an assessment or a benchmark analysis, or both, would be to implement a process improvement program.
Chapter 7 deals with benchmarks and best practices for MIS projects. These are software applications that companies and government agencies build for their own internal use. MIS applications are often keyed to large corporate database access, and their main purpose is to convert raw data into useful information. Although productivity is often fairly high for small MIS projects, large MIS projects tend to experience higher-than-average failure rates. Quality at the large end is often poor as well.
Chapter 8 deals with benchmarks and best practices for outsource software projects. The emphasis in this chapter is on projects under contract for MIS, rather than for military or systems software outsourcing. The major outsource vendors such as Andersen, Electronic Data Systems, and IBM concentrate on the MIS market because it is the largest market for their services. In general, outsource projects have higher productivity and quality levels than in-house MIS projects; however, litigation between clients and outsourcers does occur from time to time.
Chapter 9 deals with benchmarks and best practices for systems and embedded software, which are applications that control physical devices such as computers, telephone switching systems, aircraft flight controls, or automobile fuel injection systems. The close coupling of systems software to physical hardware devices has led to very sophisticated quality control methods. The systems software community has the best track record for large applications, those larger than 10,000 function points.
Chapter 10 deals with benchmarks and best practices for commercial software. Commercial software applications are intended for the mass market, and some of these applications are used by millions of customers on a global basis. The commercial and systems software domains overlap in the arena of operating systems because commercial products such as Windows 98 are both systems and commercial software. The commercial world needs to deal with special issues such as translation and nationalization of packages, piracy, and very extensive safeguards against viruses.
Chapter 11 deals with benchmarks and best practices for military software, with special emphasis on the U.S. armed services and the Department of Defense. The military software domain is fairly good at building large and complex applications, although military software productivity is lower than in any other domain. The legacy of U.S. military standards has left the defense community with some very cumbersome practices. Plans and specifications in the military domain are approximately three times larger than those of equivalent civilian projects. The bulk is due primarily to military oversight requirements, rather than to the technical needs of the project.
Chapter 12 deals with benchmarks and best practices for end user software development. As the century ends, there are more than 12,000,000 U.S. office workers who know how to write computer programs if they wish to do so. By the middle of this century, the number of computer-literate workers in the United States will top 125,000,000. Indeed, there are some signs that computer literacy will actually pull ahead of conventional literacy in the sense of being able to read and write. End user applications are currently in a gray area outside the scope of normal assessments and benchmarks. More importantly, end user applications are also in a gray area in terms of intellectual property law. As end user applications become more and more numerous, it is important to set policies and guidelines for these ambiguous applications.
As this book is written, benchmarks based on function point metrics are dominant in the software world, except for military software, in which benchmarks based on lines of code still prevail. This book utilizes function point metrics and cautions against lines-of-code metrics for benchmarks involving multiple programming languages. Version 4.1 of the function point rules defined by the International Function Point Users Group is the standard metric used throughout.
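The figures below are purely illustrative and are not data from the book, but they show the paradox the author cautions against: when the same functionality is built in a low-level and a high-level language, a lines-of-code metric makes the low-level project look more productive, while function points rank the projects correctly.

```python
# Illustrative numbers only (not data from the book): why lines-of-code
# metrics are hazardous when comparing projects in different languages.
projects = {
    # language: (function points, lines of code, effort in staff months)
    "assembly": (100, 32000, 40),
    "java":     (100, 5300, 12),
}

for lang, (fp, loc, months) in projects.items():
    loc_per_month = loc / months
    fp_per_month = fp / months
    print(f"{lang:8s}  {loc_per_month:7.0f} LOC/month   {fp_per_month:5.1f} FP/month")

# The assembly version delivers far more "lines per month" (800 vs. roughly 440),
# yet it needed more than three times the effort to build the same 100 function
# points of functionality. Function points per staff month rank the projects
# correctly; LOC per month ranks them backwards.
```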
From the Back Cover
Billions of dollars are wasted each year on IT software projects that are developed and either released late or never used. In light of recent large-scale errors, the methods, tools, and practices used for software development have become the subject of significant study and analysis. One qualitative method for analysis is software assessment, which explores the methodologies used by businesses for software development. Another method of analysis is software benchmarking, which collects quantitative data on such topics as schedules and costs.
Renowned author Capers Jones draws on his extensive experience in economic analysis to present Software Assessments, Benchmarks, and Best Practices, a useful combination of qualitative and quantitative approaches to software development analysis. When assessment data and benchmarking data are analyzed jointly, it is possible to show how specific tools and practices impact the effectiveness of an organization's development efforts. The result is a clearer, bigger picture--a roadmap that allows an organization to identify areas for improvement in its development efforts.
With this book as your guide, you will learn:
- To combine assessments and benchmarking for optimal software analysis
- To identify best and worst practices for software development
- To improve software quality and application effectiveness
- To reduce costs of software maintenance by avoiding software errors
About the Author
Capers Jones is a leading author and speaker on software productivity and measurement as well as the acknowledged expert on the economic impact of the year 2000 software problem. He is a frequent speaker at software engineering conferences. Formerly a senior researcher at IBM's Santa Teresa software laboratory and Assistant Director of Applied Technology at the ITT Programming Technology Center, Jones is Chairman and Founder of Software Productivity Research. He is also a member of the IEEE Computer Society and the International Function Point Users Group (IFPUG).
Most helpful customer reviews
19 of 21 people found the following review helpful.
This author is a very good writer and technical expositor.
By Ron Radice, Principal Partner, Software Technology Transition
This author is a very good writer and technical expositor. I find him easy to read and the book hard to put down, which is not something one can say about every technical text. I particularly liked the author's frequent lists of pros and cons. He is well aware that every design process is a series of tradeoffs. Knowing what they are is a big help. This book is the culmination of a technology development scenario. It mentions competitive and ancillary software metric methodologies and closes each chapter with an extensive bibliography. The best previous books on this subject were written by this author. This book is the 1999/2000 upgrade.
8 of 8 people found the following review helpful.
Comprehensive and supported by data
By Mike Tarrani
Jones is a master at data collection, distilling it, and drawing supportable conclusions. Like his other books (especially Estimating Software Costs, ISBN 0079130941), this one is wide in scope and deep with data and techniques.
He begins with background material on software process assessments, comparing his company's technique to SEI's, and correlating the two. Note that Jones' approach predates the one developed by SEI and was first published in his 1986 book "Programming Productivity", ISBN 0070328110. This book is a natural extension of that earlier work.
The next part of this book is an exhaustive survey of benchmarks and baselines, including pitfalls and an interesting discussion on activity-based software benchmark data. This material is a lead-in to 36 key factors that Jones identifies, including software classification, project-specific, technology, sociological, ergonomic, and international factors.
Subsequent chapters address best and worst practices, process improvement, and benchmarks and best practices for various software classes and development approaches, including internal IS, outsourced development, systems, commercial, military and end-user software development and delivery. Each class is treated in a comprehensive manner and the findings are well supported.
This book is an ideal resource for any organization wishing to establish a baseline before implementing initiatives such as CMMI, SPICE, etc. More importantly, much of this book is as applicable to the SEI assessment approach as it is to Jones's SPR methodology. I also recommend using Software Program Managers Network (ASIN B0001M00RA) in conjunction with this book (paste the ASIN in the search box at the top of this page to reach it).
16 of 17 people found the following review helpful.
An assessment guide for the software development process
By Binoo Mathen
The success or failure of software projects depends on many parameters. Software assessments and benchmarks provide qualitative methods and quantitative data on the factors that lead to project success and failure. Assessing organizational standards and comparing them with industry standards helps identify weaknesses and strengths in the software development process. The book provides valuable insight into industry best practices and also lays down a clear framework for process improvement and software project excellence. For an industry driven by metrics, the book is a must-read for any professional in the field.
Software Assessments, Benchmarks, and Best Practices, by Capers Jones PDF
Software Assessments, Benchmarks, and Best Practices, by Capers Jones EPub
Software Assessments, Benchmarks, and Best Practices, by Capers Jones Doc
Software Assessments, Benchmarks, and Best Practices, by Capers Jones iBooks
Software Assessments, Benchmarks, and Best Practices, by Capers Jones rtf
Software Assessments, Benchmarks, and Best Practices, by Capers Jones Mobipocket
Software Assessments, Benchmarks, and Best Practices, by Capers Jones Kindle