If you're in Stockholm this summer, be sure to check it out:
Since Mike Speiser stopped blogging, Fred Wilson has provided the best blog connection to the venture capital world. Fred is a thoughtful guy, and has put tons of work into building a wonderful community through his blog. Check out how many comments he responds to. Unlike so many other bloggers, he's genuinely interested in what his readers think, and he reads and responds to many of their comments.
Which is why I was a bit disappointed with his latest post. As I see it, he's pandering a bit to his younger readers here. His post is about how you don't need a college degree to be an entrepreneur.
Precocity attracts attention. We all know the stories about how Bill Gates dropped out of college. But for every Bill Gates there are probably thousands of college dropouts whose new businesses never really got off the ground. Many of them would have been better off taking the time to learn what was already known and written down in books. Some young people have a hard time with that. Yet that is what I believe we should encourage young people to do.
The Kauffman Foundation published an interesting study last year that provides some empirical basis for this view:
Challenging the perception of American technology entrepreneurs as 20-something wunderkinds launching businesses from college dorm rooms, a new study by the Ewing Marion Kauffman Foundation and researchers at Duke and Harvard universities reveals most U.S.-born technology and engineering company founders are middle-aged, well-educated and hold degrees from a wide assortment of universities.
I know it's true that you learn some things from trying to start a business that you cannot learn in school. But you can always start a business after you graduate. It's harder to go back to school.
The larger point is that innovations -- including new businesses -- require cooperation between a creator and a group of people who recognize the creator's work as a contribution. Young people are better creators because of their naivete. But older people are better at listening, watching, and meeting needs.
This op-ed by successful academic, inventor, and entrepreneur Stephen Quake is a landmark piece. The current system of technology transfer must evolve for the United States to maintain its competitive advantage as a producer of new technology:
Licensing is often a protracted process, and licensing officers are so paralyzed with fear about making a mistake and not maximizing licensing revenues that they discourage all but the most persistent licensees. Because universities are non-profit institutions, the true measurement of technology transfer success should not be the total amount of licensing revenue, but rather the successes in helping faculty members patent inventions, in forming new ventures that create jobs, and in facilitating the commercialization of technologies that in many cases will help improve our society.
The best way for universities to achieve this would be to make the same decision the federal government did, and relinquish their control over licensing. Since in most cases faculty know the context of their invention and how it can be best commercialized, they should drive the licensing process, and the OTL should play a supporting role. The university deserves to receive some compensation, but this should be fixed by a simple formula and limited — bearing in mind that the vast majority of research funding that leads to inventions has been obtained by the faculty through grants, and that the university has already taxed a fair bit of that to support its facilities.
Please read the whole thing. There is almost nobody with more first-hand experience than Quake. He's telling us what's wrong with the system.
Commercial entities (from venture capital funds to public corporations) and universities face a prisoner's dilemma when it comes to IP rights. Cooperation offers benefits to both; but the incentive in individual transactions is to defect. Quake is calling for inventors to have more control over the process, and that will help. But even better would be for technology transfer offices to recognize a bigger role for private equity funds in assisting with the process. An intermediary with equity incentives will outperform even the best organized and well-meaning of in-house technology transfer offices.
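The payoff structure of that dilemma can be sketched in a few lines of Python. The numbers below are invented purely for illustration -- only their ordering matters -- but they show why individual licensing negotiations tend toward mutual defection even when cooperation would leave both sides better off:

```python
# Illustrative payoffs: (university, firm) for each strategy pair.
# The specific numbers are assumptions; only their ordering matters.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # flexible license, shared upside
    ("cooperate", "defect"):    (0, 5),  # firm free-rides on open terms
    ("defect",    "cooperate"): (5, 0),  # university holds out for maximum fees
    ("defect",    "defect"):    (1, 1),  # deal stalls; most value is lost
}

def best_response(opponent_strategy, player_index):
    """Return the strategy that maximizes this player's payoff,
    holding the opponent's strategy fixed."""
    def payoff(mine):
        key = ((mine, opponent_strategy) if player_index == 0
               else (opponent_strategy, mine))
        return PAYOFFS[key][player_index]
    return max(("cooperate", "defect"), key=payoff)

# In a one-shot deal, defection is each side's dominant strategy...
for other in ("cooperate", "defect"):
    assert best_response(other, 0) == "defect"
    assert best_response(other, 1) == "defect"

# ...even though mutual cooperation beats mutual defection for both sides.
assert PAYOFFS[("cooperate", "cooperate")] > PAYOFFS[("defect", "defect")]
```

An equity-incentivized intermediary, on this view, is a way of turning the one-shot game into a repeated one, where cooperation can be sustained.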
Read the comments on the NYT page too. See how many in industry are frustrated with the technology transfer offices?
Economists care about preference satisfaction, but where do these preferences come from? Evolutionary psychology offers what is arguably the most comprehensive explanation: we want things that helped our ancestors succeed at leaving surviving offspring in the environments in which the human mind was shaped by natural selection (Barkow, Cosmides, & Tooby, 1992). Our love of sweet and fatty foods, even when we know that we now eat too much of them; the desire for prestige and our concern for the opinions of others, even when we wish not to care; the desperate passion to protect our own children and the rapidly declining concern we show for more distantly related children; all of these human preferences flow readily from an analysis of the preferences that led early hunter-gatherers to succeed as individuals. David Buss (2000) has even offered a catalog of evolutionarily-informed methods for increasing human happiness in the modern environments we now inhabit. But nearly all analyses of happiness from evolutionary psychology, like those from economics, focus on individuals and their preferences. Might there be group-level preferences too? Might individuals be happiest when their groups are doing things that led, over eons of evolution, to group success?
The Fama/French forum just gets more and more puzzling to read. These guys really seem to like the zen koans: (My comments in brackets.)
EFF: This is a market efficiency question. If firms facing extreme financial difficulty are properly priced to take account of the risks they face, there is no reason to avoid them, unless you don't like the risks. [A tautology. If a stock is properly priced to take account of risks, then there is no reason not to buy -- unless the stock is not properly priced. Isn't it better to buy when it isn't properly priced?]
KRF: Whenever you think about a proposition like this, you should ask yourself, "What do you know that the market doesn't?" Does the market know the firms are facing extreme difficulty? If so, your best bet is that the price is right. This does not mean that the price is always right, or even that the market always incorporates all publicly available information. Sure the price of a distressed firm may be too high, but it is equally likely it is too low. To decide how the market has erred in a specific case, you have to know more than the market or you need a better model than the market. Although most investors seem to think they have the expertise to beat the market, an enormous amount of empirical evidence says this is a very high bar. [No argument here. I'd just add that it's not an intelligence bar, but more of an emotional one.]
Their model seems to confirm Granovetter's "weak ties" hypothesis because cities have a higher per capita number of such weak ties.
From Gary Becker:
The Chinese government holds most of its more than $2 trillion in official reserves in US Treasury securities. China gets a bad deal from selling goods made by Chinese labor and capital in exchange for large amounts of paper assets that yield low returns. China has accumulated far more reserves in the form of these assets than can be justified as a buffer against fluctuations in its imports and exports, or than is wise given its low standard of living. The US seems to have made the better bargain by exchanging low interest paper assets for a rich variety of consumer and producer goods.
I hope we still feel that way ten years from now. But somehow I doubt that will be the case. My twin brother is a fluent Mandarin speaker. I studied Homeric Greek. What was I thinking?
Any readers have experience or knowledge here?
A time series of journal entries records sales and costs of sales. Later these are summarized into the income statement, balance sheet, and statement of cash flows. Lost in these financial statements is the information about how often each unit of sales or cost of sales was earned or incurred. But the company obviously has to keep track of the rate, or frequency, of sales and costs, because the two have to stay synchronized to keep profits increasing.
How is this done now?
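I don't know of a standard report that preserves this information, but recovering it from the raw journal entries is straightforward. Here's a minimal sketch; the entries, field layout, and figures are all invented for illustration:

```python
from collections import Counter
from datetime import date

# Hypothetical journal entries: (date, account, amount).
entries = [
    (date(2009, 1, 5),  "sales", 100),
    (date(2009, 1, 20), "sales", 100),
    (date(2009, 2, 3),  "sales", 100),
    (date(2009, 1, 8),  "cogs",   60),
    (date(2009, 2, 10), "cogs",   60),
]

def frequency_by_month(entries, account):
    """Count how often an account was hit in each month --
    exactly the information a summary income statement throws away."""
    counts = Counter()
    for entry_date, acct, _amount in entries:
        if acct == account:
            counts[(entry_date.year, entry_date.month)] += 1
    return dict(counts)

sales_freq = frequency_by_month(entries, "sales")
cogs_freq = frequency_by_month(entries, "cogs")
# sales_freq -> {(2009, 1): 2, (2009, 2): 1}
# Comparing the two series month by month shows whether sales and
# costs are staying synchronized.
```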
Check out the many feedback loops in this Figure. Notice in particular how the loops connect both adjacent and non-adjacent stages in the cycle, wrapping the pipeline back on itself several times:
Now guess when the Figure in this article was published. Leave your answer in the comments below, and click through here to see if you're right.
A recent post over at Growthology has got me to thinking about evolution and economics again. Long-time readers will recall that this has been a theme on Broken Symmetry for a while. See an earlier post here.
The current crisis presents us with an unprecedented opportunity to redesign our model for public and private institutions. That's an imposing problem, and we're not going to come up with a perfect solution. But the thing to do when faced with a messy problem is to go for the low-hanging fruit. Identify the worst parts of the older model, fix those, and repeat.
The low-hanging fruit here is the rational hypothesis of economic theory. Economists, lawyers, and politicians make claims about the market, the legal system, and government. If they're any good at their jobs, they offer reasons in support of their claims. But more often than not, their warrant -- i.e., the logic that permits the reasons to support their claim -- is the rational hypothesis.
The flaws in the rational hypothesis are too well-documented elsewhere to be rehearsed here. The attack on the rational hypothesis has been ongoing for decades. The rational hypothesis persists because nobody has come up with something to fill the vacuum that would be created were it to be retired.
What could fill that vacuum? How could you get more general than the rational hypothesis? All the rational hypothesis says is that people seek to maximize their utility subject to a static preference function.
For a patent lawyer like me, there is an obvious answer to the question of how to make a claim broader -- take something out. Just for kicks, let's start taking things out of the rational hypothesis. What to start with? That people seek to maximize some function? I hesitate there because extremization of functions turns out to be fundamental to our understanding of the universe. See, for example, Feynman's lecture on minimization principles in the second volume of his lectures.
How about the assumption that the preference function is static? Let the preference function evolve over time, so that a given configuration of preferences -- both for individuals and groups -- is well-defined only at a particular moment in time.
This sounds promising, but there are difficulties. First, if we don't assume preferences are static, we have to find some way to measure utility other than by the principle of revealed preference, which relies on that assumption. How can one measure utility with respect to preferences that are not static?
How about this strategy: Assume that everybody gets about the same amount of utility from engaging in the same activity, with "activity" being defined as a particular form of individual behavior. Everybody gets the same enjoyment out of eating an apple a day, swinging a golf club every month, sleeping eight hours a night, and so on. Instead of using revealed preferences to measure utility, we could simply count how often each person engages in the given activity within a certain window of time. More frequent activity means higher utility. Rather than assuming a person's preferences to be static and triangulating her preference function from her revealed choice to forego one activity in favor of another, we would rank her preferences within a certain window of time according to the frequency with which she engaged in each activity. Breathing comes first, then eating, then sleeping, and so on.
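Here's a minimal sketch of that counting procedure. The activity log is invented for illustration; a real measurement window would of course contain far richer data:

```python
from collections import Counter

# Hypothetical log of one person's activities within a measurement window.
# Under the assumption in the text -- equal utility per instance of a given
# activity -- preference rank is simply frequency rank within the window.
activity_log = [
    "breathe", "breathe", "breathe", "breathe", "breathe",
    "eat", "eat", "eat",
    "sleep", "sleep",
    "golf",
]

def rank_preferences(log):
    """Rank activities by how often they occur in the window:
    more frequent means higher inferred utility."""
    counts = Counter(log)
    return [activity for activity, _count in counts.most_common()]

ranking = rank_preferences(activity_log)
# ranking -> ['breathe', 'eat', 'sleep', 'golf']
```

Note that nothing here requires triangulating a static preference function from pairwise choices; the ranking is valid only for this window, and a later window may rank differently.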
Notice that because we don't have to assume that people do what they do because they prefer it to something else, we are free to come up with other theories about why people did what they did within a particular window of time. When preferences are stable within a time frame much longer than the measurement window, the rational hypothesis is probably a pretty good one for modeling behavior. Outside that limit, however, the cycles of activity will require a different or more detailed explanation.
Having the freedom to explore other hypotheses about behavior could lead in many interesting directions. For example, perhaps the reason that some people engage in certain activities is because of a biological drive to discover and create knowledge. Recently, Kenneth Arrow has acknowledged that the rational hypothesis does not adequately address that possibility, which is plausibly explained by evolutionary psychology.
Perhaps the reason why people engage in activities that require cooperation with others is because of a perceived need to win in a competition with another group. If that were the case, then it might be in our best interest as a nation to make our groups as large and inclusive as possible, and our enemies more abstract and ideological. Maybe it's better if our war is on terror rather than terrorists, since we are prone instinctively to label other people.
This last possibility is the most intriguing to me. If competition fosters growth because it encourages people to cooperate with their clan, then at the same time it fosters conflict between clans. What we want to design is institutions that use competitions to foster growth without at the same time fomenting conflict. There are some organizations that seem to cooperate as if in conflict with another organization, but they are few and far between. The "Stage 5" organizations described in Tribal Leadership may be examples. Certain religious organizations seem also to provide examples.
Putting aside the question of whether a perfect competition could ever be designed, the prescription is nonetheless clear that our bias should be toward competitions at the largest temporal and spatial scales, scales that require cooperation at the smaller and shorter scales below them. In other words, people are more likely to cooperate within a firm of 5,000 when they're competing with another firm of 5,000 than when they're competing with a bunch of firms of 50.
Of course as a patent lawyer I'm biased, but I think that's a great argument in favor of the patent system and patent pools. The patent system extends the temporal scale of competition among firms to decades. Rather than peddling patent medicines, drug companies spend billions of dollars on R&D paid for with medicine patents. We pay more in the short term for the newest drugs and treatments, but we're much richer over the long-term as the results of the competitive game move off patent and become part of the soil for the next generation of drugs and treatments to grow.
Similarly, patent pools are a way for an industry to declare a truce in the battle for market power that would make the life of most consumer goods manufacturers nasty, brutish, and short. With proper checks and balances in place, an industry can channel the competitive instincts of its workers into activities that are both more enjoyable for the workers and more productive for society as a whole. Both Peter Drucker and Mihaly Csikszentmihalyi seem to have had similar ideas in mind.
And the progress we make can be measured by the increased frequency with which various institutions are able to achieve a given activity. Why should two people compete to see who can make a nail faster when two thousand people could compete as groups of one thousand each, make millions of nails, and get the same enjoyment out of the competition? All else equal, if two firms are equally profitable but one manages to produce its goods or services at a higher frequency, then the higher-frequency firm will be a more enjoyable place to work, and society as a whole will be richer (because of the increased availability of that firm's workers' time).
Why not measure and compete in units of profit per unit time? How else can we evolve? The success or failure of a firm should not depend on whether profits match quarterly forecasts alone, but on how much human capital (in units of time) was required to make those profits. With profits alone, in an extreme scenario, we might work ourselves to death.
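A toy version of that metric makes the point concrete. All figures below are invented for illustration:

```python
# Compare two equally profitable firms by profit per unit of labor time.

def profit_per_hour(profit, labor_hours):
    """Profit divided by the human capital (in hours) used to earn it."""
    return profit / labor_hours

# Both firms earn $1M; firm B does it with half the labor hours.
firm_a = profit_per_hour(profit=1_000_000, labor_hours=200_000)  # 5.0
firm_b = profit_per_hour(profit=1_000_000, labor_hours=100_000)  # 10.0

# On a profits-only scorecard the firms tie; on profit per unit time,
# firm B wins -- and frees up 100,000 hours of its workers' time.
assert firm_b > firm_a
```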
Mahoney envisions multiple revenue sources for I-Cubed, including selling IP and taking a percentage, negotiating royalty agreements with public partners, buying ownership stakes in companies that receive funding, and charging a percentage for managing the investment fund. The clearinghouse would employ a sales and marketing team to market Idaho technology globally. Most licensing revenues would be returned to the source company or institution. Although all three state universities have tech transfer departments, they must “go back to the legislature every year for more [funding], which doesn’t breed consistency,” says Brian Dickens, administrator of the Idaho Department of Commerce’s Commercial Innovation Division. Universities have been thus far receptive to I-Cubed, whose business model “seems to supplement efforts that are already underway here and other places,” says Mary Givens, TTO director at Boise State University. “Once it is up and running, Boise State will look for ways we can work together.”
Sounds like a great idea to me. I'd say that even more established tech transfer offices would be interested in that sort of partnership if I didn't know better from personal experience. But the real story here is the innovative source of funding -- a great example of creative entrepreneurship:
I-Cubed is seeking foreign investment under the federal EB-5 immigration program, which grants visas to foreigners in exchange for committing at least $500,000 to rural or high-unemployment areas and $1 million to urban areas for at least two years.
This is a terrific way to foster the growth of human capital here in the U.S.
The FTC web page with agenda, panelists, presentations and transcript of the December hearing is here.
The second hearing was held last week. Aron Levko presented preliminary data from the 2009 PwC survey of patent litigation. Bryan Lord from AmberWave gave a presentation with an interesting model (from Karl Ulrich at Wharton) of how technology and markets need to develop concurrently for economic growth to obtain.
The next hearing is scheduled for March 18 and 19 and is free and open to the public. See earlier post about the first hearing here.
Back in October I read and blogged about Tribal Leadership by Logan, King, and Fischer-Wright. The thesis of Tribal Leadership is that the health of an organization correlates with growth over time in the average number of connections between its members. If you haven't read Tribal Leadership, you should check it out. That book, unlike this one, includes a useful cheat sheet summarizing its most important points. It's the better place to start, and should probably be read in conjunction with The Three Laws of Performance.
Both of these books promote what Mihaly Csikszentmihalyi would call a systems model for explaining individual and group performance. According to the systems model, no particular habit or activity guarantees better performance or more creativity to the individual who practices it. The reason is that although an individual produces the performance or creative work, the work must then be evaluated and accepted by the field of experts for the symbolic domain of the performance before it can become part of that domain. In other words, transcendent performances (including creative works and the accomplishment of challenging business goals) require collaboration between individuals and groups. Growth (or decay) results as groups and the individuals who constitute them evolve to tackle successively more complex goals.
The Three Laws of Performance provides a list of rules (and stories explaining how they work in practice) that promise to help individuals in leadership roles facilitate the group coherence and cohesion that are necessary to bring about transcendent performance. The book is based on a wealth of experience from decades of applying the ideas it explains. If you're interested in this kind of theory, the book is worth a read. It was disappointing, however, that the authors didn't make more of an effort to connect their ideas up with similar work done by others in the past. I'll explain with reference to the three laws themselves.
The first law of performance holds that how people perform correlates to how situations "occur" to them. This odd choice of word raises many questions. Fortunately, the authors were merciful enough to address the confusion upfront:
So what exactly does occur mean? We mean something beyond perception and subjective experience. We mean the reality that arises within and from your perspective on the situation. In fact, your perspective is itself part of the way in which the world occurs to you. "How a situation occurs" includes your view of the past (why things are the way they are) and the future (where all this is going).
Now without a systems model in mind, this is nearly incomprehensible to me. Again, the best way that I have of understanding it is in terms of a systems model that includes feedback loops between individuals, the field that includes those individuals, and the domain of symbolic knowledge that the field operates within. If you want a picture, consider this diagram from Roger Penrose's Road to Reality:
In this diagram we see the feedback loop that connects the physical world, our mental picture of it (our consciousness), and our models of it in a given symbolic domain (such as language or mathematics). We can associate each stage of the loop with different people who work within a particular symbolic domain. For a domain to be cohesive, each individual within the group has to have a mental picture of the entire domain. For it to be coherent, those pictures have to be consistent with the rules of the domain set by a group of recognized experts. So the bottom left corner would correspond to individuals within the group. The top corner corresponds to the domain as it is defined by the group of people who are considered experts in the domain. Note that although the rules of the domain are determined by the group of experts, the domain exists independent of any one of the experts -- making it possible for individuals in the bottom left corner to contribute, and for the domain to outlive any particular expert.
Finally, in the bottom right corner we have the physical world in which everything is embedded. It is through this corner that new information (including experimental observations or discoveries) enters the loop.
Having a systems model in mind makes the first law more comprehensible, at least to me. With the systems model in mind, we can reinterpret the first law as "how individuals perform correlates to their memory and forecast for how the group and the world itself have and will evolve over time." (Or something pretty close to that.) So far this might seem fascinating, but also useless. So what? Should seeing things this way make any difference in how we act?
The answer is yes because as Werner Erhard knew well, it is possible to change the loop by convincing individuals to change how they understand their past and forecast their future, especially as it relates to others within the group. As the Author's Note makes clear, part of the intellectual history of this book is derived from the knowledge and experience of people who worked with Erhard in the est seminars popular several decades ago. Incidentally, that inheritance might also explain the willingness to leave some terminology inscrutable. Part of the ethos of est seems to have involved challenging the audience to engage with the teaching intellectually -- a practice best exemplified by Socrates and worst exemplified by Heidegger, both of whom might also be considered antecedents to this sort of philosophy.
Back to the book. The remaining two laws basically spell out a prescription for how to implement transformational change within an organization given that the systems model applies. So the second law holds that "how a situation occurs arises in language." This is the Sapir-Whorf hypothesis, which is now an accepted fact at least among cognitive psychologists. The Sapir-Whorf hypothesis explains how corners of the loop are linked.
The third law holds that "[f]uture-based language transforms how situations occur to people." In other words, when enough individuals working within a domain decide to change the domain, then the domain and hence the activities of the individuals and groups that constitute it will shift into a different mode of operating. What is perhaps less intuitive is the fact that such shifts may occur voluntarily when enough individuals agree to make them happen. That's another reason why this book should be read in conjunction with Tribal Leadership.
I like the vision that the authors have in this book, and believe their prescriptions are sound advice for carrying out transformational change within an organization. There is so much theory compacted into a short book, however, that it is a bit difficult to digest, and is probably best used in conjunction with training by somebody familiar with the theory. (The authors are also professional consultants with an impressive track record of helping organizations achieve transformational change.)
As a final note, I was reminded by this book of this post on how Thomas Schelling's concept of focal points is useful in facilitating institutional change, and this book review on how the culture of reformed Protestantism may have influenced the Founding Fathers. There is more to be said about institutional design than is captured by the static mental models of economic equilibrium taught in law and business schools, and this book is a good contribution to the evolution of these mental models.
UPDATE: If you're interested in systems theory in business, then you might also like some of the books and papers published by Prof. John Sterman at MIT.
FULL DISCLOSURE: Per new FTC rules, I want to disclose that I received a free copy of this book before providing the review here. I guess they found me because of my earlier review of Tribal Leadership and thought I'd be interested. I probably would have written a nice review even if I didn't feel myself pulled in that direction by my desire to get more free books in the future!
I think government is broken for entrepreneurs and venture capitalists. Sarbanes Oxley costs a fortune, so great growth companies can't afford to go public. FAS 157 forces mark to market for venture capitalists, who then have to report short term pricing on long term assets, so the investors get scared away. 409A makes it so that companies have to report income for stock options. All of these regulations were written for very large companies. Small business creates all the net new jobs in our country and they are being strangled out of existence by regulators who are focused on big businesses.
I would add only that it's broken for inventors too.
Here's Milton Friedman:
Let the apparent immediate determinant of business behavior be anything at all -- habitual reaction, random chance, or what not. Whenever this determinant happens to lead to behavior consistent with rational and informed maximization of returns, the business will prosper and acquire resources with which to expand; whenever it does not the business will tend to lose resources and can be kept in existence only by the addition of resources from outside. The process of natural selection helps to validate the hypothesis [of profit maximization] -- or, rather, given natural selection, acceptance of the hypothesis can be based largely on the judgment that it summarizes appropriately the conditions for survival.
Here's Tjalling Koopmans picking that apart:
Here a postulate about individual behavior is made more plausible by reference to the adverse effects of, and hence the penalty for, departures from the postulated behavior. The reality of the penalty is documented by technological and institutional facts, such as the reproducibility of production processes and the operation of accounting procedures in bankruptcy laws. . . . But if this is the basis for our belief in profit maximization, then we should postulate that basis itself and not the profit maximization which it implies in certain circumstances.
Friedman's argument is incomplete for at least two reasons. First, we need an additional explanation of how or why profit maximization is the relevant selection criterion; there is no a priori reason for this to be true of any institution. Second, we need to know at what level the selection criterion is applied -- i.e., to individual or group performance. The level at which competitive pressure applies will determine the amount of in-group cooperation below that level.
The greatest invention of the nineteenth century was the invention of the method of invention.

From Science and the Modern World. Why was the method of invention perfected in the nineteenth century? Might it not have had something to do with the Patent Act of 1836? Or was it not reinvented in the U.S. in the nineteenth century?
IPKat reports on a landmark decision in the U.K.:
Only thirty-one years after compensation for employee inventors was introduced into UK patent law, and another four years after the compensation provisions were amended in order to give inventors a chance of getting anything, the Patents Court for England and Wales has just made its first compensation order. The case in question is Kelly and Chiu v GE Healthcare Ltd [2009] EWHC 181 (Pat), a ruling of Mr Justice Floyd.
Similar cases have been decided in Japan. Although in most cases inventors in the U.S. have no similar right, there are provisions under the Bayh-Dole Act that require entities receiving federal funding to share royalties with inventors. But no private right of action has been recognized under that portion of the statute. See the Broken Symmetry overview of the law of patent ownership here.
[F]or every percentage point rise in the share of immigrant college graduates in the U.S. population, the total per capita number of patents for the entire population should increase by an astonishing 6%. Even more important, Hunt and Gauthier-Loiselle found that natives are not crowded out by immigrants, and that "immigrants do have positive spillovers, resulting in an increase in patents per capita of 9%-18% in response to a one-percentage-point increase in immigrant college graduates."
We have noted for some time that the Bayh-Dole effect in the USA itself has withered away, with a relative decline of university patenting since 2000. However, since our indicators were not sufficiently robust, we have not previously published these results. More recently, Wong & Singh (2007) published data about the numbers of US patents by leading universities as a percentage of all patents in the database of the US Patent and Trade Office (USPTO). This data, and the data made available by the Association of University Technology Managers (AUTM, 2007) in their yearly Surveys of US Licensing Activity, corresponded so well with our previous results that we investigated the noted decline of university patenting further.
At the global level university patenting is still gaining momentum, but in the most advanced economies the effects of the Bayh-Dole Act of 1980 seem to have faded away since the turn of the millennium. In our opinion, the reason for this is structural. More universities are nowadays increasingly ranked in terms of their knowledge output, and patents or spin-offs are usually not part of this ranking (e.g., THES, 2008). The nature of the competition among universities is changing, and the incentive to patent has thus withered. International collaborations and coauthorships, for example, have become more important in research assessment exercises than university-industry relations (Glänzel, 2001; Leydesdorff & Sun, 2009; Persson et al., 2004; Wagner, 2008).
Readers with a longer memory of the market may recall that the late '90s saw a burst of enthusiasm for IP as an asset. According to the timeline presented here, university investment in patenting R&D leveled off during the dot-com crash and never picked up again with the rest of economic activity thereafter.
Although the authors seem to have a different "structural" reason in mind (U.S. News?), another conjecture as to why university patenting leveled off is that a good chunk of private investment shifted from technology to real estate, credit, and derivatives following the dot-com crash. Quite a bit of this funding comes from university endowments. None of these asset classes is closely linked to new technology, much less the early-stage kind that universities have a comparative advantage in producing post-Bayh-Dole. Thus an alternative explanation for the leveling off of investment in patents is that less private money was invested over that period of time. Less public money was also available for R&D at universities under the Bush administration than under the Clinton administration. Finally, changes in the substantive rules of patent law and a general shift in public sentiment against the patent system over the same time period probably didn't help attract new investment either. It is possible that most university endowments maintained their pre-dot-com level of allocation to venture capital investment during this time period, but I doubt it. Only a handful invest with such long time horizons.
Putting aside the question of whether patenting R&D should be part of a university's mission (as regular readers know, I believe it should be), one can ask how the trend of private investment in university research and patenting could be rehabilitated given the recent history. The answer is that universities, like any institutions, will produce whatever they give their people incentives to produce. For there to be more industry joint ventures or other commercial development of IP, there have to be more and clearer incentives to researchers and tech transfer office employees to cooperate with private actors.
There are political reasons to be concerned about this decline in patenting. We in the U.S. like to congratulate ourselves for having the best universities and inventors in the world. But how are we going to pay for those universities and inventors in the future if we don't protect and promote their work?
There are always glimmers of hope. For example, I would doubt that patenting has stayed level at the University of Rochester since its new subscription model was implemented.
What happens when the normal flow of demand gets delayed?
When the economy contracts by as much as it did in the fall, it means that consumers and businesses are forgoing spending that they might otherwise see as necessary purchases.
In a normal year, for example, about 5% of the cars, pickups and other light vehicles on the road are sold for scrap. With roughly 250 million light vehicles in the country, that means that just to keep up with the scrap rate, about a million new vehicles need to be sold each month. The last time more than a million light vehicles were sold in the U.S. was August, according to the Commerce Department. In January, just 655,200 vehicles were sold -- the lowest number in the 33 years of the government data.
Mike Darrah, owner of Darrah's Automotive & Recycling in York, Pa., said the number of cars people have brought to him to scrap has fallen by 40% to 50% since September.
From today's WSJ.
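The scrap-rate arithmetic in the excerpt is easy to check. A quick sketch (the 5% scrap rate and 250 million vehicle figures are the article's):

```python
fleet = 250_000_000   # light vehicles on the road (from the article)
scrap_rate = 0.05     # fraction scrapped in a normal year

# Annual replacement demand just to offset scrappage, spread over 12 months
monthly_replacement = fleet * scrap_rate / 12
print(round(monthly_replacement))  # roughly a million vehicles per month
```

So January's 655,200 sales fell well short of even the break-even replacement rate, meaning the fleet on the road is actually shrinking.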
The Vox EU blog carries an article summary of how the collective use of inappropriate market information and flawed models led to systemic financial problems.
Among the authors' prescriptions for improvement to existing risk models is an increased diversity of methodologies for estimating risk. The authors note, however, that such diversity is difficult so long as the data is relatively inaccessible.
Besides its disciplining effect on attempted fraud, transparency into government or market activity has this added bonus -- it permits a diversity of experiments.
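To make "diversity of methodologies" concrete, here is a toy sketch of my own (not from the article) comparing two standard ways of estimating one-day value-at-risk on the same return series. When the models disagree, that disagreement is itself useful information:

```python
import random
import statistics

random.seed(0)
# Simulated daily returns standing in for (inaccessible) market data
returns = [random.gauss(0.0005, 0.02) for _ in range(1000)]

# Method 1: historical simulation -- empirical 1% tail loss, no model assumed
losses = sorted(-r for r in returns)
var_hist = losses[int(0.99 * len(losses))]

# Method 2: parametric (variance-covariance) -- assumes returns are normal
mu = statistics.mean(returns)
sigma = statistics.stdev(returns)
var_param = -(mu + statistics.NormalDist().inv_cdf(0.01) * sigma)
```

On well-behaved data the two estimates converge; on fat-tailed data they diverge, which is exactly the signal a monoculture of identical models never produces.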
An evolutionary approach underscores the importance of maintaining variety in the economic system. Competition policy authorities as well as other agencies must be concerned with protecting economic diversity and meaningful variety in organizational forms. The focus need not be a particular market -- it should be broader, as what's outside the market tends to be amongst the best candidates for Schumpeterian entry and radical innovation.
More generally, the presumption that more competitors is always better is overturned -- once the goal is not just lowering price but also protecting innovation. Barriers to entry may need to be examined over a longer time period and must be examined at the firm level.
The role of supporting structures and government funding for research also affect entry conditions. They may purely reflect capabilities that incumbents have developed that newcomers shouldn’t expect to possess. Capabilities are likely to reflect the search for unique advantages. Their possession drives competition.
Subsequent research has established that firms exhibit more stability in their capabilities than in their products. In this sense, capabilities are easier to analyze than products. Capabilities are a proxy for those interrelated and interdependent aspects of the enterprise that govern its competitive significance. They are arguably a better proxy for competitive position than (downstream) market share.
Here's the paper. And here's the website for the Law & Economics of Innovation Conference at George Mason University, where the paper was presented last year.
In a pleasant turn of events, the most provocative remarks from day one of the IP & Antitrust Conference were provided by neither a practitioner nor an academic, but by FTC Commissioner Tom Rosch. A PDF of Commissioner Rosch's remarks is available from the FTC here.
Commissioner Rosch took as his topic the question of whether and to what extent restraints affecting pure innovation markets should be challenged by regulators. In considering the question, he undertook a review of academic literature, the history of regulation of such markets, the practical problems encountered throughout that history, and the prominent legal issues raised in that history.
His review of the academic literature recounted Schumpeter's arguments in favor of concentration in innovation markets and Arrow's arguments in favor of more competition in such markets. His purpose was not to deal with the nuances of these authors' work, but rather to point out that views are divided within the academy as to what level of concentration or competition is best to foster innovation. His remarks called to my attention the extraordinary remarks of Justice Scalia in the 2004 Trinko decision:
The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices–at least for a short period–is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.
Commissioner Rosch insisted that this was dicta in the Trinko decision. Nonetheless it is a clear statement of what would be called a Schumpeterian view of competition in innovation markets.
Next came a summary of the last few decades of the history of regulation of innovation markets. Commissioner Rosch is an eyewitness to much of this history. He was working at the FTC, for example, when Xerox was investigated in 1974 on the theory that its patent portfolio and joint venture with another company were anticompetitive. His conclusion, however, was that through this history, there was not a single instance in which the agency had made a successful challenge to allegedly anticompetitive conduct in an innovation market.
Commissioner Rosch left off his survey of the regulation of innovation markets with the provocative observation that innovation is "necessarily dynamic and evolving" and that static market definitions (such as the definitions required under Philadelphia Bank or the two-year rule under the guidelines for review of horizontal restraints) make no sense in such markets.
Regular readers of Broken Symmetry will appreciate how these words struck a chord with me. Only a few weeks ago we enjoyed hearing from guest blogger Michael A. Gollin about the importance of a dynamic equilibrium approach to understanding IP. Without competition regulatory authorities onboard with any industry plan to plant a new redwood forest, it is only too likely that industry agreements to conserve or preserve certain resources will be challenged as anticompetitive conduct ("You logging companies can't agree to stop other companies from cutting down those trees!").
The challenge has now been laid down by Commissioner Rosch. It is up to this generation of academics, practitioners, and regulators to rethink how we regulate markets to accommodate growth that occurs on a week-to-week basis rather than a year-to-year or decade-to-decade basis. What new standards can be applied to harmonize the goals of antitrust and IP law? How can academics make a dynamic theory of innovation easy enough for everybody involved -- business people, politicians, and the public -- to agree and buy in to a new model?
At the meeting, Commissioner Rosch specifically implicated the market definition rule as an unnecessary distraction to determining whether anticompetitive conduct has occurred in an innovation market. How can such markets be well-defined when they are so rapidly changing? He pointed to an interchange between Judges Douglas Ginsburg and Diane Wood on this question, although I didn't quite catch whether this was on the record for them. (They sit on different courts, but both teach at the University of Chicago.)
Here are a few of my own thoughts on these matters: besides market definition, even market share is not a meaningful figure if such markets are considered at only a single moment in time. Moving from a static to a dynamic picture of IP and antitrust will require us to take multiple snapshots in time. Unfortunately, two things work against us in setting up a new model: the dominant mental models of economic equilibrium are time-independent, and the accounting theory behind our financial statements rests on a static model of firm value in liquidation.
In the future, time-dependent measures, such as cash flow per unit time, will provide a more reliable way to measure the effect of a given activity within an innovation market. The effect of given activities might even be deconvolved from a time series of cash flows -- i.e., by looking at how the growth curves of accounts inside various firms changed (or didn't change) as the result of a questionable activity, we might be able to tell whether that activity had an anticompetitive effect.
So for example, it might make more sense to gather statistics on the average internal rate of return for successful and unsuccessful startups in a given market. Perhaps only startups over a certain minimum IRR and annual revenue should be subject to antitrust scrutiny. Growth below that threshold (which probably is technology dependent; consumer internet IRRs are right now much higher than semiconductor IRRs) is not subject to review because it is unlikely to have been due to collusion.
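For concreteness, here is a sketch of the kind of screen I have in mind, with entirely made-up cash flows. The internal rate of return is found by bisection on the net present value:

```python
def npv(rate, cashflows):
    """Net present value of annual cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Bisection search; assumes one sign change in NPV over [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical startup: $1M invested, returns doubling each year
flows = [-1_000_000, 200_000, 400_000, 800_000, 1_600_000]
rate = irr(flows)  # roughly 0.43, i.e. ~43% annualized
```

Under a hypothetical 30% IRR screening threshold, this firm's growth would merit a closer look; slower-growing firms would be presumptively outside scrutiny because their growth is unlikely to have come from collusion.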
Moreover, competition regulatory authorities should take into account how macroeconomic factors (such as the economic inaccessibility of IPOs for companies with under several hundred million dollars in revenue) can influence competition. In fact, if the FTC really wanted to promote competition, then the first thing it should do is sit down with the SEC and identify exactly who each agency seeks to protect, and what rules are designed to do that. As Hume would have said, whatever remains should be cast to the flames. Technological change has happened fast enough so that incremental changes to the regulatory regimes within these agencies will probably not get the job done.
Mallun Yen of Cisco kicked off the conference with her moderation of a panel on the aforementioned topic, which included two academics and two practitioners: David Djavaherian from Tessera, Joseph Farrell and Richard Gilbert from Berkeley, and Henry Su from Howrey.
The topic for this panel squarely raised what is in this writer's view the most important issue in patent and competition law today: when many people contribute to the development of a new technology -- especially a platform technology, or other technology that must be standardized to maximize its benefit to consumers -- how should the profits be split up?
Before we had a patent system, there was a de facto (albeit not optimal) answer: the biggest firm (or group of firms, or "trust" of firms in the late 19th Century) wins. Nobody liked that might-makes-right rule.
So when smaller companies and individuals figured out at the turn of the 20th Century (think Edison) that patents could give them a foothold in the game with larger entities, they got a lot of them.
Naturally, the large companies weren't going to stand for that -- two can play at that game -- and soon thereafter began accumulating their own large portfolios. Aside from partially solving the problem of threats from new entrants (blocking patents), this widespread accumulation had an unanticipated added bonus for the incumbents: patent pools are a handy way to punish (and hence enforce) cartel agreements among horizontal competitors.
But antitrust authorities and courts alike got wise to this game and started breaking up patent pools in the mid-20th Century, and that is where the lawmaking left off until very recently.
What happened? Well, an explosion of platform technologies (ahem, the Internet and its many subtechnologies) has recently made the value (and hence potential profit) of cooperating to develop standards clearer than ever. There are more standards-setting organizations (SSOs), playing more important roles, than ever. We only notice this as consumers when SSOs fail (see Blu-ray vs. HD DVD).
But patents, at least until two or three years ago, had not gotten weaker. Hence, many newer companies (without any institutional knowledge of the antitrust perils of patent pools) have been patenting the heck out of industry standards. And who can blame them for trying, right? What a business model: no overhead for commercializing technology; just R&D costs plus legal expenses to tap into future profits.
Naturally, the companies that do make and sell things have gotten much more interested in patent pooling again. Nobody likes to work hard for years making a commercial hit, and then get hit with a tab for the supposedly shared standard technology.
But the core problem here is that the best way (maybe the only way) that these problems can be solved without litigation is early in time when the SSO does its work. But at that time there is no data about demand for a particular technology, much less for a bunch of alternative technologies, to settle debates. Everybody's idea is worth a million dollars.
Did the panel get to the meat of this? Some. But there could have been a more detailed discussion. We are still talking about whether patents are good or bad for competition. We need to deal with one problem at a time to understand the system and make improvements. In this context, we should assume that some R&D wouldn't get done without patents, so we just have to deal with the fragmentation of patent rights if we want to have new technology to standardize.
Any answer to this problem is going to have to be narrowly tailored to the details of a particular technology and market. Thus, on my view, the ultimate answer to this problem will not hinge on what but who. That is, not what terms (royalty percent, exclusivity, etc.), but who gets to set them.
The who should be the inventors working at an SSO at the point in time when the standards are set. Licensing of patents should be a clear prerequisite (clear as in enforceable by contract; see the Federal Circuit's 2003 Rambus decision) to participation in an SSO, and part of the work done by the SSO should include an ex ante agreement as to how profits will be split up. Are CEOs going to go along with that? Yes, IF we get their buy-in at THAT moment in time. Of course they're going to lawyer up and fight if they had no idea that that was part of the deal.
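What might such an ex ante agreement look like? One candidate rule from cooperative game theory is the Shapley value, which splits the standard's surplus according to each member's average marginal contribution. A sketch with made-up numbers (the firms A, B, C and the dollar values are entirely hypothetical):

```python
from itertools import permutations

# Hypothetical value (in $M) the standard yields when a given set of
# firms' patents are licensed into it -- invented for illustration only.
value = {
    frozenset(): 0,
    frozenset("A"): 4, frozenset("B"): 3, frozenset("C"): 1,
    frozenset("AB"): 9, frozenset("AC"): 6, frozenset("BC"): 5,
    frozenset("ABC"): 12,
}

def shapley(players, v):
    """Average each player's marginal contribution over all join orders."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            shares[p] += v[coalition | {p}] - v[coalition]
            coalition |= {p}
    return {p: s / len(orders) for p, s in shares.items()}

shares = shapley("ABC", value)  # A: 5.5, B: 4.5, C: 2.0 -- sums to 12
```

Members would commit contractually to a division like this (or a simpler pro-rata rule) as a condition of participation, before anyone knows which firm's technology ends up in the winning standard.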
How's that for a workable solution?