New technologies and innovations cause even some of the best-run firms to slip and sometimes fall. As you look ahead, it may be helpful to identify innovations that will get your company the most competitive traction. I would like to offer three mental models that may explain what’s happening in the industries in which you are competing. I hope these innovation models will give you a way to frame what signposts might be important for you to look for along the road between here and the future. The puzzle for the first model came from living in the Boston area and watching Digital Equipment collapse. The collapse was such a baffle because in the 1970s and 1980s Digital was one of the most widely admired companies in our economy. I remember a 1986 BusinessWeek article that praised Digital’s management. The article recounted home-run product after home-run product that they had introduced to the market. It likened the company to a freight train—it had so much momentum that it blew apart any competitor that got in its way. It warned IBM that it was on the track and had better move over. Then about 1988, the freight train just fell off the cliff, and the company began to unravel very quickly.

In 1989, BusinessWeek ran a post-mortem about what had gone wrong in this once-great company. Everything that went wrong was blamed on inept management. It was such a puzzle to contrast the praise that had been heaped upon those people with the indictment handed to the very same people just three years later. I didn’t even know how to frame the question. Some framed it by asking, “How can good managers get that bad that fast?” Attributing Digital’s fall to a management team who had it together at one point and lost it at another was never comfortable for me, because every minicomputer company in the world fell off the cliff in unison: Digital, Data General, Prime, Wang, Nixdorf, and the Hewlett-Packard minicomputer business. You wouldn’t expect these companies to collude to collapse together. There had to be a more fundamental reason.

The real puzzle was in the early 1980s when Harvard Business School curriculum was filled with case studies about Digital. Everybody admired Digital’s management. Those were the years they decided that the personal computer wasn’t a big deal, and Unix wasn’t important. The two decisions that led to their demise were made when everybody thought this was a well-run company. I began to wonder if there might not be something about the paradigms of good management, as they’re taught at business schools, that might actually cause companies to stumble. When I began to frame it in those terms and look back in history, it actually seemed quite common. For example, in the mid-1960s, if you looked at the covers of Fortune and Forbes, the people who ran Sears had the same position in the deity of management that Bill Gates and Andy Grove have today. Yet it was right then that they decided that discount retailing wasn’t a big wave and that credit cards wouldn’t be an important factor in retailing. The very decisions that led Sears to stumble from the pinnacle of its industry were made when they were on top.

The first innovation model came out of looking at the falls of companies like Digital and Sears. However, it initially came from a deep study of the disk drive industry. It turned out to be a very fruitful industry to study, because in five of six product generations a new company entered and rose to the top, only to be toppled by another entering company. I’d hoped that I could dig a really deep and thorough hole in that one industry and try to understand why nobody could stay on top of it for longer than a single generation. Then maybe I could crawl up out of the hole and put that model on, like a set of lenses, and look back into the histories of some very different industries to see if it helped explain those. If it had robustness in that sense, then maybe I could look forward and see who else might be threatened.

On the first model diagram, I plotted the performance of a product or service over time. The first piece of the model is represented by a dotted red line, which suggests that in every market there is a trajectory of improvement the customers can absorb over time. You can visualize this in the car industry. Every year companies introduce new and improved engines. But we can’t utilize all the performance they make available in the engine because we’ve got speed limits, traffic cops, and other factors that put a clamp on how much performance we can actually use.

The ability to absorb technology improvement exists in every market, and it’s limited by how fast people’s lives can change, how fast their work processes can change, or how fast they can learn new things. Performance that customers can absorb or utilize is depicted in a single line for simplicity but remember that there is a distribution of customers around the median. There are parallel dotted lines way at the high end of the market depicting very demanding customers and dotted lines that run across the bottom of the graph representing fairly simple customers at the low end that get over-satisfied with very little. The fact that there is a trajectory of improvement that customers can absorb or utilize is the first element of the model.

The second piece of the model is represented by a steeply sloping blue line. The line suggests that in every market there’s a distinctly different trajectory of improvement that innovators make available as they introduce new and improved products, generation after generation. The most important finding here is that the trajectory of technological progress almost always outstrips the ability of customers to absorb it. That means a company whose products or services are squarely positioned on what mainstream customers utilize at one point is prone to overshoot what those customers can use at another point, because it pursues higher-profit opportunities in the more sophisticated tiers of the market.

To visualize this point, think of the Pentium IV microprocessor in your computer. You probably don’t use anywhere near the megahertz available if you’re in a mainstream business application. There are people at the high-end of the industry that thank Intel for every megahertz. But overall, Intel has overshot what people in the mainstream use. This also means that a product or service that’s not good enough at one point to be embraced by customers in the mainstream of the market can improve at such a rapid rate that it intersects with their needs at another. The second piece of the model says the trajectory of technological progress in most markets almost always outstrips the ability of customers to utilize it.

The third piece of the model is the distinction between sustaining technological improvements along the blue trajectory line and disruptive technology, which is the down-market movement. This piece was hard for me to comprehend, because in academia there is a paradigm used to explain this phenomenon. It says that when a company gets big and successful, it gets more risk-averse and bureaucratic, loses its entrepreneurial zeal, and continues to seek out incremental innovation, but loses its ability to come up with radical breakthrough innovations. It turns out that the paradigm doesn’t hold up to the evidence. There is a very different way to divide the kinds of technologies or innovations that industry leaders spot and get right on top of from the kinds of innovations they strangle on. The line cuts a lot more cleanly along the sustaining vs. disruptive distinction.

Sustaining technology is an innovation that makes good technology better. A disruptive technology is an innovation that brings something worse to the market. It sounds kind of simple, but it turns out to have made a big difference in what we studied. This idea came from the disk drive study. We built a database of every model of disk drive ever introduced by any company anywhere in the world between 1975 and the present. We had about 5,000 models.

For each of the models we got data on the components used to build the drives. That allowed us to stick our finger right on the spot in the industry’s history where each new technology first got used. We could then trace the patterns by which the technology did or didn’t diffuse through the industry, see who the leaders and the laggards were, and then study whether there was a correlation between being a leader or a laggard and what subsequently happened to them in the market.

Through that process, we figured that there were about 116 new technologies of one sort or another used in the market. Of the 116, 111 were sustaining technologies, meaning they enabled innovators to bring a better product to market. Some of those sustaining technologies were, in fact, simple incremental year-to-year engineering improvements. But a remarkable number of them were dramatic leapfrogs ahead of the competition, way up the curve. It didn’t matter how technologically difficult it was. In all 111 of the cases where the innovation sustained the trajectory of technical progress, the companies that had led the industry in the prior technology led in the new technology. Again, it didn’t matter how tough it was; as long as it enabled them to make a better product that they could sell for more money to their best customers, they figured out a way to get it done. The track record in moving up market was impeccable.
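The tabulation described above, classifying each new technology as sustaining or disruptive and checking whether the prior leaders led the transition, can be sketched in a few lines. The records below are invented for illustration only; they are not the actual disk drive data, which covered roughly 116 technologies across some 5,000 drive models.

```python
from collections import Counter

# Hypothetical records: (technology_type, prior_leader_led_new_technology).
# Invented for illustration; the real study found incumbents led in all
# 111 sustaining cases and lost in all 5 disruptive cases.
records = [
    ("sustaining", True), ("sustaining", True), ("sustaining", True),
    ("disruptive", False), ("disruptive", False),
]

# Tally leader outcomes by technology type.
outcomes = Counter((kind, led) for kind, led in records)

print(outcomes[("sustaining", True)])   # sustaining transitions led by incumbents
print(outcomes[("disruptive", False)])  # disruptive transitions lost by incumbents
```

On the hypothetical data above, the counts come out 3 and 2; on the study's actual pattern they would be 111 and 5.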

Of the 116, there were five that actually brought something worse to the market. In each of those five cases, the leading companies missed the mark and got killed. I called them disruptive technologies because they were so much worse than the products that had historically been available that none of the existing customers could use them. Yet they had other attributes that enabled them to be used in a different context. And because the pace of technological progress outstrips the ability of customers to utilize it, each disruptive technology improved at such a rapid rate that it blew the leaders out of the water.

Intel has used this way of thinking. I was in a management meeting in 1997 with Andy Grove, working through this mechanism with a chart similar to this one, and I saw that he had a very puzzled look on his face. All of a sudden it was as if a light bulb turned on; he raised his hand and said, “I figured out what you did wrong on your chart.” He came up, pointed to the word “technology” under disruptive, and said, “If I understand the concept, you’re going to mislead the world if you call it ‘disruptive technology.’” Unfortunately, the book had just been published, and it was locked in. But he said, “If I understand it, it would be more accurate to describe it as trivial technology that disrupts the business model of the leading companies.”

He went on to give his view of what happened at Digital. As he described it, minicomputers were complex and expensive. To sell them you had to sell directly to the customer. That process involved a lot of service, training, and support, and you had to carry those costs in the business just to play the game. Given that cost structure, Digital had to make about 45 percent gross margins to be acceptably profitable. In that environment, somebody walks in to senior management with a proposal to make the best minicomputer that’s ever been made, one that can reach up into the highest tiers of their markets and start to sell to customers who historically had to buy much more expensive mainframes. What did those business plans look like? They had gross margins of 60 percent. Sell the machines for $150,000 and up. Then somebody walks in and says that the future is with the personal computer, and that’s where the company should invest its money. What did those business plans look like? In the very best years they had 35 percent margins, and they were headed toward 20 percent fast. You could sell the machines for two or three thousand dollars apiece.

Grove asked a very interesting question: “What would you have done if you were running Digital Equipment? Would you have invested your resources in a product that your best customers needed and that would improve your profit margins, or would you have focused those resources on a product that none of your customers could use and that would wreck your profit margins?” That’s the real dilemma: doing what makes sense in the context of the business can prove to be your undoing when this particular disruptive phenomenon emerges. And something as illogical as the personal computer was to Digital can actually turn out to be very important.

Where else in our economy have you seen a company come in at the low end of the market and clean it out? Motorcycles. That’s exactly what Honda did with its little SuperCub when it came up against Harley. Volkswagen did the same thing, and then the Japanese car companies. Toyota made a lot of money in the 1960s, until it was joined at the low end by nondescript Japanese imports called Datsun and Honda. Because the imports killed the pricing at the low end, Toyota had no alternative but to migrate to the high end of the market.

I am asked quite often whether the Internet is a disruptive technology. In many ways, the answer is relative. To illustrate, think about Dell, one of the most widely admired companies in our economy today, admired because it is probably the largest online merchant. What was Dell before the Internet? It had tried to establish retail distribution and failed miserably. It evolved to a model where it took orders over the phone and customized the attributes of the computers, to some degree, to the specific needs of its customers. This meant it had to have highly trained telephone salespeople who could enter data manually. Now bring the Internet to Dell. What does it look like to them? It’s a sustaining technology. It helps them make more money in the way they’re structured to make money. It made most of their processes actually work better. Dell’s achievements as an online retailer are impressive, but we would expect Dell to succeed at this: the odds that the leaders will end up on top in a sustaining technology battle are 100 percent. To Compaq, however, with its traditional retail distribution, the Internet is very disruptive.

The second innovation model came about as I sought to answer the following question: “If I were running a company that needed growth because I had already made the existing business as efficient as I could, how might I create new growth markets with the highest probability of success?”

Disruptive innovations typically have enabled a larger population of less skilled, less wealthy people to do things in a more convenient, lower cost setting, which historically could only be done by specialists in less convenient, centralized settings. Disruption has been one of the fundamental causal mechanisms through which the quality of our lives has improved. I had not thought this through when I wrote The Innovator’s Dilemma. Because I’m at the Harvard Business School and our view on the world historically has been a big company view, I wrote the book from the perspective of what kinds of technologies could kill leading companies. In retrospect, I should’ve done a much more balanced job, because on the flip side of every one of those murders was a tremendous entrepreneurial success story. If I want to grow a new business, there really are some hints from this data about how to do it. This idea—that I should do something that enables a larger population to do something that historically had been available to the skilled or wealthy—is one of them.

Think about computing. When I was at BYU learning how to program, I had to take my punch cards to the mainframe center, and an expert there had to run the job for me. When the personal computer came out, it wasn’t good enough to handle complex problems, but it put the unwashed masses in the business of computing for themselves, in the convenience of their homes and offices. We still had to refer the complex problems to the mainframe center for the experts to do, but we could handle the simple ones. As the PC got better, it helped the unwashed masses become more and more capable, until today there are very few esoteric problems that need to be solved on a mainframe. We consume infinitely more computing now because the personal computer enables a much larger population of less-skilled people to do things that, historically, only a specialist could do, and to do them in a convenient local setting, when historically they could only be done in a centralized location.

Now college students manage their own portfolios online, and nobody yearns to return to the days of the $300 commission, or to the corporate photocopy center. Almost always, disruptive innovations such as these have been ignored or opposed by the leading institutions in their industries, for perfectly rational reasons.

If you look at the list of companies on this chart, all of which have been profitable stock investments over the last twenty years, most of them had their roots as disruptive technology companies. For example, Cisco with the router was the disruptive technology against the circuit-switched equipment made by Nortel, Lucent, and Ericsson. Intel’s microprocessor was a disruptive technology against the hard-wired logic circuitry of large computers. EMC in storage technology did it to IBM. Every one of these companies is now being disrupted itself. If you look back in history, when the disruption came at the market from below, initially the leading company ignored it, because it wasn’t important. When it finally was clear that it was important, they framed it as a threat. In reality, because the disruption enables a much larger population of less-skilled people to play in the market, they were all poised on the brink of a huge growth opportunity.

Almost always, when this has happened, the leading companies have failed to seize the opportunity because they framed it as a threat, and the new entrants caught the growth. An example was when the transistor first emerged as a disruptive technology vs. the vacuum tube. It was disruptive because it couldn’t handle the power that was required in the existing market. The established market for consumer electronics at the time consisted of large floor-standing televisions and tabletop radios. Every one of the vacuum tube companies took a license to the transistor from Bell Labs, but they carried the license into their own laboratories and framed the problem as a technology problem: how do we make the transistor good enough that it can handle the power required in the market? They invested millions trying to cram the transistor into the market as it existed. The transistor actually made it to market because it had different attributes that enabled new applications to emerge and new classes of customers to afford products that, historically, they couldn’t afford.

The first application was a germanium transistor hearing aid, an application that valued the technology for the very attributes that made it unusable in the mainstream. The next market was Sony’s little pocket radio in 1955. It had lousy quality and couldn’t compete with the tabletop radios made with vacuum tubes. But the Sony transistor radio enabled a much larger population of people to conveniently do something that, historically, hadn’t been possible: teenagers could listen to rock and roll out of earshot of their parents. A big market began to coalesce. In about 1959, people wondered whether somebody could use this solid-state electronics technology to make a portable television. This case history taught me a lot about why established companies almost always frame it as a technology problem and go after the established market, and why entrant companies create the new growth markets.

GE was the second-largest television maker at the time. They saw this potential and hired Arthur D. Little (ADL), a management consulting firm, to study the potential of the portable television in the United States. In today’s consulting dollars, they paid ADL about $5 million. ADL did a great job. They interviewed hundreds of customers about whether, when, why, and how they would use a portable television. They interviewed distributors, service people, and retailers, and concluded there was no demand for portable televisions in the United States. A month after they delivered their report to General Electric, Sony introduced its first portable television and sold two million units in the first year. Sony had a policy never to do market research.

I don’t tell this story to ridicule General Electric or ADL, because the past is hard to understand and the future is impossible to see. But the interesting questions are why General Electric would pay ADL all that money to measure the size of a market that didn’t exist, and why ADL would go to such pains to interview customers about the attributes of a product they had never had the chance to think about before. The answer is that in established markets, that’s what you should do. Authors who assert that to be innovative you’ve got to be willing to risk failure are wrong. In an established market you never want to be wrong. The market is there; you can measure its size. The customers are there; you can understand them. The competition is there; the technology exists.

There’s a business model that helps you estimate costs, revenues, and return on investment. You have to make investment decisions carefully in established markets, because if you make a mistake, you can wreck a very good business. But when the same analytical process is used to evaluate whether to fund a project to create a market that doesn’t yet exist, the process paralyzes you. You can never muster convincing enough evidence that a market is going to grow from nothing when the proposal must compete for resources with products targeted at an established marketplace. One of the fundamental reasons why leading companies miss growth opportunities lies in the resource allocation process. Companies need one resource allocation process for established markets and a different process to evaluate whether to create new growth.

We also saw this happen in the early nineties. Remember the buzz about personal digital assistants, the little handheld computing devices? Every computer company saw this coming; they weren’t asleep at the switch. But they framed it as a technological deficiency: “How do we make that little device like a computer?” Apple was the most visible, investing $350 million to make its Newton. By framing it as a computer, Apple led customers to say, “Well, I could pay $1,200 for the Newton or $1,600 for a notebook computer. The Newton just can’t do it all.” Apple took enormous risks and made huge investments to try to make the technology good enough to be used in the established market, yet it is very difficult for a disruptive technology to be as good as the sustaining products it competes against.

Palm tried the same thing at the outset. When it was clear that wouldn’t work, they backed away from the established market and defined the little Pilot. At the outset it wasn’t a computer, and they didn’t market it as a computer. But little by little, it’s getting more and more like a computer. Interestingly, the established computer companies are all reacting the same way: they’re moving into higher-performance machines. Dell’s big thrust is to get out of the personal computer and into workstations and servers. New companies are capturing the growth. Why? At its root, the inability of established companies to create new markets lies in the resource allocation processes they use to make decisions in new product development.

When I wrote The Innovator’s Dilemma, I decided that rather than writing the summary chapter as a rehash of the first eight chapters, I would use the disruptive technologies model to examine the toughest innovation problem I knew of: the electric vehicle. The question is whether the electric vehicle will ever disrupt the folks in Detroit who make gas-powered vehicles, or whether it is just a pipe dream of the Sierra Club. It turns out that there’s some really good data on this. In the early 1990s, California mandated that by 1998 nobody could sell gas-powered vehicles in California unless 2 percent of their volume was electric. The mandate set off a mad race to make electric vehicles good enough to be sold in California. By 1996, it was clear that the battery technology just couldn’t cut it. The automakers petitioned the California government to relent and give them more time. The government put off the deadline to 2003. But the deadline is still there.

Did it make sense for Chrysler to design a purpose-built vehicle for electric power, or should they just take one of the existing platforms and equip it with electric power? Because of their minivan volume, it was a no-brainer: the minivan was a much more economical vehicle for this purpose. They set out to make an electric minivan. To get it to cruise as far as possible, they had to load 1,600 pounds of lead-acid batteries into the back. Even then, it would only cruise eighty miles, which wouldn’t even meet the minimum the market required. With all that weight in the back, the minivan would accelerate from zero to sixty in nineteen seconds, not nine. It took four times longer to stop the electric minivan than the gas minivan, and it cost $100,000. So they went back to California and ran more consumer panels, asking consumers whether they would prefer a car that cruised eighty miles between refueling stops, went from zero to sixty in nineteen seconds, took four times longer to stop, and cost $100,000, or whether they preferred the new gas-powered Voyager for $23,000. Which do you want? Nobody outside the Sierra Club preferred the electric vehicle. So they went back to the government, petitioned them to back off, and the government did.

I had a brilliant student who wrote a paper on this. He said it was the right answer to the wrong question. The question to ask is whether there is a market that would proactively value, not just tolerate, a car that wouldn’t cruise far or accelerate fast. Asking that question gives you some interesting ideas. How about the parents of teenagers? America’s teenagers drive overpowered cars; every time they get in, you say, “I wish it wouldn’t accelerate that fast.” Teenagers primarily go to high school and visit their friends. If the electric vehicle targeted teenagers and was designed as a fashion statement, where the critical trajectory of technological progress was the sound system, not the motor, there might be a market. Or how about other possible markets, like retirement communities, the golf cart industry, or the crowded streets of Bangkok?

Will the electric vehicle disrupt cars? It’s not a technological question; it’s a marketing problem. It will succeed if somebody creates a market that enables a new population of people to be conveniently mobile. For teenagers, it would be a $5,000 car that you could put on your Master Charge. Then, from those humble beginnings, all of the technological questions—such as, “What kind of battery will it be? Will it be fuel cells? Will photovoltaics ever work?”—will get sorted out by the folks that are in the market trying to move toward Detroit. All the while, Detroit will be working in the laboratory on what, in all probability, will be the wrong technology. Again, because they frame it as a threat rather than an opportunity, they end up asking the wrong questions and missing the growth.

The third innovation model relates to how integrated companies give way to whole populations of focused companies. I read an article about four years ago that ridiculed IBM’s management. In the early 1980s, IBM had better microprocessor technology than Intel and better operating system technology than Microsoft, yet it chose to outsource to Intel and Microsoft, and in the process put into business the two companies that subsequently made all the profit in the industry. IBM held on to a slice of value-added, designing and assembling a computer, where nobody could make any money. The reason the article bugged me so much was that at the time IBM made those decisions, everybody knew it was a wise thing to do. With all the graduates of our schools running around the world today as consultants, advising everybody to outsource everything that’s not their core competency, it really makes you wonder how many of today’s outsourcing decisions history will judge as having been similarly flawed.

It turned out that this disruptive technology model was a good way to frame this new problem. During the era in an industry’s history when the available technology is not good enough for what customers need, the architecture of the product tends to be integral in character. Visualize the mainframe computer industry. You could not have existed in the early days as an independent supplier of operating systems or core memory or logic circuitry, because all the subsystems had to be interactively designed together. There were no standards by which the pieces of the system fit together. They were interactively designed because the product wasn’t good enough: the engineers were always compelled to put the pieces of the system together in new, untested, and more efficient ways, because they were always trying to wring the maximum performance possible out of the technology available.

Because of that, you had to be an integrated company in order to play. Even today, if you go up to the high end of most industries, where they’re pushing the bleeding edge of performance, you still find nonstandard, technologically integral architectures made by integrated companies. A Hewlett-Packard mission-critical enterprise server, for example, has a custom version of HP’s UNIX carefully married to a custom-designed HP RISC chip, all made inside an integrated company, because they’re still trying to wring the most performance possible out of available technology. In this phase of an industry’s history, being integrated is a big advantage; the way you compete is to make better products. So the computing industry was dominated by IBM and Digital, automobiles by General Motors and Ford, photography by Kodak, telecommunications by AT&T, and aluminum by Alcoa. Again and again, integration was a key competitive advantage when available products were not good enough.

How do you compete, however, for the business of customers in those tiers of the market that are overserved by the available functionality? Typically, the existing players continue to pursue profit, as they perceive it, up at the high end. The disruptive innovators who come from below begin to compete in a very different way. Speed to market now matters a lot. On the left side of the diagram, speed to market didn’t matter, because people were willing to wait for a better product. But when products are good enough, time to market begins to accelerate, and the ability to customize the features and functions of the product to the needs of customers in smaller and smaller niches of the market becomes a mechanism by which companies compete. Innovations that improve speed to market, convenience as a supplier, and the ability to customize are the kinds of innovations that get traction in the marketplace. Better products no longer cut it. To enable companies to compete on that basis, to be fast and flexible, the architecture of the product tends to migrate from integrality to modularity. In a modular world, interfaces among the pieces of the system become well defined. That enables independent companies to provide pieces of the system, and another company to assemble those pieces.

If you peeled apart an old IBM mainframe, everything inside was made by IBM, because it had to be made by IBM. If you peel the cover off a Compaq or a Dell machine, every piece is made by a different company. When disruption happens, the integrated company gets displaced by a whole population of specialized companies, and the industry disintegrates.

There’s one last piece from this model that has been helpful. Before disruption, the people who make the money in the industry are, of course, the big integrated players. The slice of value added they own is the design and assembly of the product that the customer uses. The people who were suppliers to the integrated companies got hammered. If you were a supplier to IBM or General Motors, you lived a miserable, profit-free existence year after year after year, because they held all of the power. After the disruption, the people who design and assemble the product that the customer uses have a very hard time making money, and the ones that provide the subsystems that really drive the performance of the product are the ones that make the money. The ability to make money in computing migrated back to Intel, Microsoft, Sharp, Applied Materials, which themselves provide technologically integral subsystems. If you happen to be an engineer working for Compaq, and your boss tells you to design a better computer than Dell, what are you going to do? Put in a faster microprocessor? A higher-capacity disk drive? More DRAM? When you’re assembling a modular product, it’s very hard to differentiate what you do on the basis of performance or cost.

Think about what the car companies did. They did exactly what IBM did; they sold Visteon and Delphi. They had to disintegrate, but they sold off the wrong pieces of the business, because in the future the money will be made by the companies that supply the chassis, the braking systems, and the electrical systems. The design and assembly of the car itself will become something much more automated, mechanical, and copyable as the industry moves to a modular architecture. This is what’s happening to Microsoft. I think it will happen independently of anything the Justice Department does. Microsoft’s products are integral architectures. We owe a great debt to Microsoft, because in the early years you simply couldn’t do what they’ve enabled us to do without an integral architecture. But now they’ve overshot what people can absorb. What you see coming up underneath them is Internet-based computing. In particular, Internet protocols as operating systems and the Java programming language are consummately modular. Java is not nearly good enough to be used in the applications where Microsoft makes its money, but on the Internet it has other attributes that actually make it more valuable. So you have a single integrated company at the high end fighting it out against tens of thousands of small Java programmers at the low end. The industry will disintegrate. Microsoft, in my view, will always be the dominant player in that high end of the business, just like IBM continues to dominate mainframes. However, its place of dominance will become progressively less relevant to where the center of gravity is in the computing world.

I offer you this set of innovation models as a way of structuring the way you think about how technology and markets interact and how this intersection of technological progress with what customers need actually precipitates a change in the way you have to compete. At one stage, competing with better products mattered a lot. In a later stage, innovations that enable a larger population of less-skilled people to do things with products that are more convenient are the kinds of innovations that matter. I hope these models will constitute signposts along the road so you know what to watch for as your competitors move with you into the future.

Disruptive innovations typically have enabled a larger population of less-skilled or less-wealthy people to do things in a more convenient, lower-cost setting, things which historically could only be done by specialists in less convenient, centralized settings. Disruption has been one of the fundamental causal mechanisms through which our lives have improved.

Clayton Christensen, a professor at the Harvard Business School, is the author of the best-selling The Innovator’s Dilemma. This article is based on his address to the BYU Marriott School of Management’s Sixth Annual Conference.