S & T Information Policy in the Context of
a Diffusion Oriented National Technology Policy
Lewis M. Branscomb
92-01 January 1992
CITATION AND REPRODUCTION
This document appears as Discussion Paper 92-01 of the Center for Science and International Affairs. CSIA Discussion papers are works in progress. Comments are welcome and may be directed to the author in care of the Center.
This paper may be cited as: Lewis M. Branscomb, Kennedy School of Government, Harvard University. "S & T Information Policy in the Context of a Diffusion Oriented National Technology Policy." CSIA Discussion Paper 92-01, Kennedy School of Government, Harvard University, January 1992.
The views expressed in this paper are those of the author, and publication does not imply their endorsement by CSIA or Harvard University. This paper may be reproduced for personal and classroom use. Any other reproduction is not permitted without written permission of the Center for Science and International Affairs, Publications, 79 JFK Street, Cambridge, MA 02138, telephone (617) 495-3745 or telefax (617) 495-5776.
S & T Information Policy in the Context of
a Diffusion Oriented National Technology Policy.¹
Lewis M. Branscomb
Director, Science, Technology and Public Policy Program
Center for Science and International Affairs
Harvard University
Most scientists who follow scientific and technical information (STI) policy in the U.S. date the beginning of serious attention to this issue at the level of the White House back to the 1963 Report of the President's Science Advisory Committee (PSAC), known as the Weinberg Report.² In fact, it really began with the crash attention given to U.S. science and technology stimulated by Sputnik.³ Sen. Hubert Humphrey, who supported the rapid buildup in science research (which grew at some 15 percent a year in those days), led Congressional concerns about the efficiency of management of those R&D resources and the outpouring of research results they produced. This was the birth of serious U.S. interest in STI policy -- at least in the post-war period.
Ed Wenk, initially on Humphrey's staff and then in Legislative Reference at the Library of Congress, helped Humphrey formulate a policy for reducing unintentional duplication of effort and for enhancing the accessibility and utilization of research results. The Federal Council for Science and Technology (FCST, forerunner of FCCSET) was created, and the Committee on Scientific and Technical Information (COSATI) was made part of the OST staff. Ed Wenk moved to the White House to head the FCST, and Col. Andrew Aines -- the much beloved Don Quixote of STI⁴ -- ran COSATI. The federal agencies -- mostly the same ones in CENDI today -- were well advanced in their policy work on STI when it was appreciated that the active support of the scientific community was needed. What better way to elicit that support than a report by the distinguished scientists and engineers on the PSAC? So Al Weinberg, Bill Baker, and others wrote the famous report that made the case for a social contract between science and the public welfare. Their report was addressed more to the scientific community than to the federal agencies, which were already seriously engaged with STI issues.
The Unfulfilled Social Contract
This social contract proposed that in return for a high level of scientific autonomy -- scientists could choose their own research priorities and would ensure quality work through peer review -- the scientists would accept the obligation to make their work available in useful form to technical practitioners who, by putting it to good use, would see that the science served the public interest.
It is my personal view that 100 percent of the academic scientists bought the first half of that deal -- their own autonomy -- but only a small minority of public-spirited scientists ever bought into the second half. Among them are all those who wrote the classic texts, who wrote and edited review journals,⁵ who created data analysis centers and built the National Standard Reference Data System,⁶ and who made a life's work of providing authoritative recommendations on large collections of data to all those who needed them.⁷ Thus I feel the scientific community must share with the politicians the blame for the long decline in attention paid to STI issues over the last 20 years.
Winds of Change for STI
Today, there is new reason to believe a resurgence of serious attention to STI may be in the wind. In part, information technology is responsible: the power of computers and communications has so increased the level of services and so reduced their cost that new capabilities are springing up on every hand. The enactment on December 9, 1991, of the High Performance Computing Act is a serious commitment to the needed physical infrastructure.
But the primary driver of policy change is the American confrontation with a dramatically new economic, political, and national security situation. Although the highest levels of the Administration are still very reluctant to come to grips with the debate about the decline in U.S. commercial technological performance in relation to competitors abroad, there is serious interest in the Congress, the business community, the universities, and in the working levels of all of the federal technical agencies.
The search for a political and economic middle ground between a laissez-faire economic policy and a full-blown industrial policy made little progress until quite recently. A new approach, which appears to have the makings of a consensus, urges the development of a U.S. "technology policy," in which the federal government helps develop and provide access to the technical knowledge on which the competitiveness of commercial enterprises and the productivity of all U.S. institutions depend. Among the advocates of an explicit technology policy are science and technology policy scholars, civilian high-tech industry executives (including members of the private Council on Competitiveness), some microeconomists, and influential technology advocates within the Bush administration, including Assistant to the President for Science and Technology Allan Bromley, and leaders in many technology-intensive agencies and departments: specifically the group of agencies represented here today.
Goals for a Diffusion Oriented U.S. Technology Policy
Allan Bromley, speaking for the administration, made himself the leader of this middle-ground approach by sending to the Congress in September 1990 a formal document entitled "The U.S. Technology Policy." Washington wags said that the most important thing about this little-publicized report was its title page. But a team headed by James Ling, staffed from Bromley's Office of Science and Technology Policy (OSTP) and Richard Darman's Office of Management and Budget (OMB), spent 14 months crafting the policy and gaining its acceptance. Building a consensus in the White House for any document with the words "technology policy" in the title was no small achievement.
But this document is more important for another reason: beyond establishing the political legitimacy of technology policy, it advances a technology policy that -- at least in its principles -- represents an important departure from the policies of the forty years since World War II. This policy has the hallmarks of a "diffusion-oriented" policy.
The five primary goals of U.S. technology policy, as formulated in this document, are:
- a quality work force that is educated, trained, and flexible in adapting to technological and competitive change;
- a financial environment that is conducive to longer-term investment in technology;
- the translation of technology into timely, cost-competitive, high-quality manufactured products;
- an efficient technological infrastructure, especially in the transfer of information; and
- a legal and regulatory environment that provides stability for innovation and does not contain unnecessary barriers to private investments in R&D and domestic production.
The first, third, and fourth of these goals are focused on increasing productivity through enhanced diffusion, with the fourth specifically relating to NREN. The policy statement also calls for collaboration of government, industry, and academia in three diffusion-related areas of opportunity:
- technology transfer and research cooperation, particularly involving small and midsized companies;
- building upon state and regional technology initiatives; and
- mutually beneficial international cooperation in science and technology.
Although scholars describe these kinds of policies as "diffusion-oriented," the term "capability-enhancing" is perhaps more descriptive. They are not so much distributive in their objectives⁸ as they are aimed at enhanced power to absorb and employ technologies productively. Capability-enhancing policies are designed to prepare workers for an increasingly sophisticated work environment and develop their problem-solving abilities, to accelerate the commercialization of innovative ideas, to increase the productivity and lower the cost of industrial production, and to increase the capacity of all firms, large and small, to use technology to improve their competitiveness. The net effect of a capability-enhancing policy is to diffuse economic benefits and increase competition not by "picking winners" but by increasing innovative capacity.
Persistence of a Mission-Oriented Technology Strategy
While OSTP laid out these principles, and enactment of the High Performance Computing Act gives substance to the trend, the actuality is that the U.S., as Henry Ergas has said, still pursues a dominantly "mission-oriented" technology strategy. Ergas contrasts British and French policy with German, Swiss, and Swedish policy, which he calls "diffusion-oriented."⁹
A mission strategy is designed to galvanize public attention to aggressive technical goals, to drive selected segments of industry to reach beyond incremental improvements in technology, to increase the scale and scope of national technical activity, and to force into existence technologies, some of which can be adapted by commercial industry for world markets.
Megaprojects are the hallmark of mission-oriented technology policy. Linda Cohen and Roger Noll, in The Technology Pork Barrel, present six case studies of megaprojects aimed at commercialization of a technology prototyped by the federal government.¹⁰ They were the SST, coal gasification, photovoltaics, the breeder reactor, the space shuttle, and applications satellites. Not all were unsuccessful, but the record is very mixed. The authors also note that, although President Carter created many of the largest projects in pursuit of energy independence, and the Reagan Administration terminated many of them, there seems to be little correlation between economic ideology and the political appeal of megaprojects with intended commercial value. Reagan continued the Clinch River breeder and initiated the National Aerospace Plane. Indeed, the case can be made that the mission-oriented policy of the last forty years is based on "picking winners" among projects and industries serving federal agency missions.
The indirect technical benefit of such projects is based on the assumption that "spin-off" is a free, automatic, and effective mechanism for diffusing benefits to the commercial economy.¹¹ While there is reality to spin-off, especially at the level of basic and exploratory research, tools, facilities, and technical data, spin-off is not free, automatic, or efficient.
If the nation finds itself with inadequate resources (as it does) or is strongly challenged competitively (as it is), technology diffusion must be undertaken as a deliberate goal of public policy, as the Germans, Swiss, and Swedes have done. While the case for this position may be clear to CENDI, it has obviously not won the day with either the Administration or the scientific community. I believe there are at least eight different reasons why STI should be given more serious attention as an element of national competitiveness strategy.
Eight lines of argument that lead to priority for S&T information policy:
1) A knowledge-intensive, post-industrial society
A decade ago the U.S. economy was popularly conceived to be on an evolutionary track from its early agricultural base to a manufacturing base, and most recently to services. We began to envision the "post-industrial" society, in which "soft" technologies (services) were dominant and in which information-intensive processes encroached on energy-intensive ones.¹² Philosophically this was -- and still is -- an appealing vision. It accommodates three quality-of-life-improving values: reduced emphasis on consumption and materialism, reduced pressure on the environment, and increased emphasis on education, intelligence, organization, and aesthetics. So long as manufacturing and agricultural exports counterbalanced increasing U.S. dependency on raw-material imports, this vision seemed to be on its way to reality. It certainly calls for an information-intensive strategy.
2) The fraction of knowledge that is codified is growing; by its nature, codified knowledge is much more susceptible to diffusion.
Historically, most technical information was experiential, gained through trial and error and passed on to others through apprenticeship. One of the most striking changes in the modern scientific era is the steady progression of knowledge from "embedded" to "encoded." First the physical sciences achieved a strong base in theory, although at the end of WWII only for the simplest physical systems. Then chemistry, followed by biology, developed the theoretical structure that allows modeling and simulation of natural processes. Today it is manufacturing that is still dominated by "embedded" knowledge, but an international project proposed by the Japanese (the Intelligent Manufacturing Systems project) seeks to change that situation.
This codification of engineering knowledge is what allows product cycles to be shortened, permits concurrent initiation of high-tech production in plants all over the world, and allows automated design and product and process simulation. Indeed, one of the best definitions of "high tech" is any technology or process that can be mathematically simulated.
Under these circumstances, any nation whose information diffusion strategy is based on the assumption that knowledge is embedded will suffer a serious comparative disadvantage, even if that nation enjoys a very sophisticated scientific mastery of encoded knowledge.
3) New paradigms for innovation in a competitive world economy: R&D as the driving force for growth.
Economists have historically measured factor productivity by examining capital investment and labor usage, allocating to R&D and education the residual productivity growth not otherwise accounted for. Today the most advanced firms -- especially in Japan -- are investing more in R&D than in factory labor or capital. U.S. firms, and especially U.S. government officials, more often see R&D as the source of new products, flowing sequentially to development, and then to the factory, hence the "pipeline" metaphor for innovation.
But the Japanese are not only outspending comparable firms in the U.S.; they spend the money differently. R&D is a critical input to manufacturing processes and quality management. Technological improvements follow from many incremental advances, with research information entering the process at every stage, not just the "front end." Information flows increasingly must cross cultural barriers (for example, from research to process engineering), calling for increased adaptation to user needs. In short, the production function has become information intensive.
4) Leveraging the U.S. lead in research.
The United States has a prodigious capacity for creating useful information. The national expenditure of about $150 billion on R&D exceeds the sum of that invested by America's primary competitors in the world economy. Half of this R&D -- $77 billion this year -- is financed by federal agencies, and much of it is carried out in their laboratories. Few doubt that information generated as the result of both government and private activities makes a strong contribution both to innovation and to productivity in the world economy.
But serious doubts attend America's ability to gain comparative advantage from these efforts, since other strong free-market economies seem to be more effective at finding, acquiring, and using knowledge commercially. The only way the U.S. economy can be differentially advantaged by its strength in knowledge creation is to have a superior capacity for the creation, selection, distribution, adaptation, and absorption of information. This aspect of national economic policy has been an active concern since Sputnik, but it has never received the level of serious attention it deserves. In fact, there is evidence that in many areas the lack of accessibility, quality, and responsiveness of information systems constitutes a serious handicap to the U.S. economy. This issue is especially important because of the strategy of decentralized resource allocation for research and the commitment to scientific autonomy.
5) Leveraging heavy U.S. commitments to military and space research:
The U.S. is competitive with Japan, Germany, and others in national R&D/GNP; all are 2.8-2.9%. But U.S. civil R&D/GNP is only 1.8%. That means that either (a) the U.S. has gained a huge commercial advantage from its military R&D -- which very few believe -- or (b) it is seriously underinvesting in commercially relevant R&D, and the nation must improve the institutions and facilities for technology diffusion from its military and space investments.
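The arithmetic implicit in this comparison is worth spelling out; as a back-of-the-envelope illustration using the round figures above:

\[
\underbrace{2.8\%}_{\text{total R\&D/GNP}} \;-\; \underbrace{1.8\%}_{\text{civil R\&D/GNP}} \;\approx\; 1.0\% \text{ of GNP devoted to defense and space R\&D,}
\]

roughly a third of the national R&D effort -- a share that, if the totals of Japan and Germany are almost entirely civil, those competitors devote instead to commercially relevant work.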
The assumption that spin-off is free and automatic has prevented more serious attention from being paid to the issue (which is one reason the assumption appeals to libertarian conservatives and neo-classical economists). In fact, with the collapse of the USSR and declining U.S. defense budgets, the military will have to turn to "spin-on" technology. They will have to find a way to induce commercial firms to apply their leading-edge technology to a shrinking and less attractive military market. Thus the U.S. will be forced to move to a single, unified technology base instead of enjoying the luxury of a defense industrial base and a commercial base, each isolated from the other.
6) Leveraging existing knowledge through diffusion-oriented strategies: industrial extension vs. tech transfer:
There is a great deal of evidence that U.S. firms -- especially small to medium-sized firms, where much of our technology-creating capacity is located -- do not adopt available productive technology to the extent the Germans and Japanese do. U.S. federal STI policy traditionally looks to the dissemination of new knowledge; access to the body of extant knowledge is left to libraries and engineering handbooks. But state governments have begun to explore the value of industrial extension services. Unimpeded by the federal allergy to "industrial policy," governors are experimenting with information services to small and medium firms to improve their manufacturing productivity. The Congress has assigned to NIST the responsibility to foster this process from the federal level. But the Japanese and Germans have extensive, well-established networks of institutions engaged in this kind of information support to industry, and the U.S. has much to learn about how to do it successfully.
7) Enhancing the productivity of R&D and interdisciplinary linkages: shifting from a supply-side to a demand-side strategy in knowledge production.
During the decades of surplus federal revenues, the U.S. was able to establish world-leading science by a strategy that depended on optimizing creativity, not efficiency: multiple sources of funding, decentralized priority setting, and a strong commitment to research autonomy. Linkages between disciplines were left to chance, as were the reliability of research outputs and their adaptation for use. This is still the best way to finance and manage basic research -- with two exceptions: (a) special provision must be made to institutionalize the capacity for interdisciplinary work, which is hard to peer review and runs against conventional disciplinary traditions; and (b) information must be user-adapted (as well as adapted to specific uses by the user) if it is to be used efficiently.
As competitiveness concerns draw increased attention to "generic, pre-competitive" research, the importance of user interests, both in defining the objects of research and in influencing the form in which the results are disseminated, grows. The using disciplines must have means for influencing resource allocation priorities. This unavoidably requires some limitations on the autonomy of science researchers, to the extent they are expected to be influenced by extrinsic criteria in the choice of research objectives. This limitation will no doubt be resisted by many, especially in the academic community, but to the extent that knowledge generation is perceived to satisfy the needs of application practitioners, it may become easier to justify high levels of investment in research. This is what is meant by a "demand-side" system of research management.
8) The need for increased R&D productivity.
The U.S. finds itself heavily resource-constrained at a time when there are at least four sources of new pressure for more scientific and engineering research output:
a) Need to strengthen science-based process technology. (Ed Mansfield found that U.S. firms allocate 30% of R&D to processes and 70% to product development; Japanese firms have the reverse ratio, and it shows up in superior quality and lower costs.)
b) Need to strengthen and modernize engineering education and research. Increased support for research on design methodology and for production engineering to achieve high quality and low costs should accompany new multidisciplinary engineering curricula aimed at training for careers in science-intensive design and production.
c) Need to exploit science technologically -- especially in the life sciences. Biotechnology appears to offer rapidly growing economic opportunity as the result of massive investments in basic biochemistry and molecular biology, largely by the NIH. Rather than reduce the basic research that created the opportunity, additional resources (such as the Genome project, fermentation research, etc.) need to be added to accelerate the pace of commercialization.
d) Competition for funds that otherwise might go to science research from demands for education reform. The NSF, for example, is being pressed to increase rapidly its investments in K-12 math and science education. Unless these funds can be new and additional, scientific research will suffer.
Given the competition for research funds now flowing to investigator-initiated basic research that these new sources of demand represent, it is easy to understand why the basic science community is nervous about "demand-side" management and resource allocation strategies, and even about substantial funding for STI activities.
Prescriptive versus Enabling Strategies¹³
Government strategies for improving the performance of the economy can take two forms: prescriptive and enabling. Prescriptive, or targeted, approaches aim at short-term effects in specific areas of industry. Targeting is the hallmark of "industrial policy." Targeted projects in pursuit of agency missions are, nevertheless, popular politically precisely because the intended beneficiaries (industries and technologies) are usually explicitly identified.¹⁴ Enabling strategies focus on infrastructure and public goods, are sometimes capital-intensive, and may have structural side-effects ignored in planning. They may have great long-range value, but that value is often hard to quantify. Because their beneficiaries are indeterminate, enabling strategies are more difficult to support politically, even though they may be superior policies from an economic perspective.
Much of today's government technology-transfer policy, such as the Stevenson-Wydler Act in both its 1980 and 1986 versions, seeks to promote specific relationships, forcing information flows into prescribed and sometimes inappropriate channels. In this sense, much of the government's technology transfer policy is prescriptive.
Public investment in information infrastructure, however, has several advantages over investments in prescriptive megaprojects and technology transfer programs that target specific industries. Infrastructure investment does not predetermine the relative emphasis to be given to any particular sector, whether it be manufacturing, agriculture, resource extraction or services. Information infrastructure allows information flows to follow demand, rather than requiring demand to be predicted. In that sense information infrastructure is enabling.
NREN as Information Infrastructure
The Internet and its proposed successor, the National Research and Education Network (NREN),¹⁵ are examples of "information infrastructure."¹⁶ Such a network incorporates both public and private components, emphasizes the diffusion of useful knowledge, and contributes to both vertical and horizontal integration of intellectual and economic activity. Its structure and pricing policies can explicitly provide for equity in access to services, for example by subsidizing academic connectivity. To the extent that it embodies new technology (for example, the gigabit backbone network), NREN anticipates new applications.¹⁷ By testing such technologies in a research network, NREN will lower the economic risk of their introduction into commercial service.
The optimal utility of a new service is determined by the experience of users.¹⁸ New information services respond to application innovations (such as telecollaboration, distance learning, vertical integration, and consensus management) without requiring government decision-makers to understand the pace of change brought on by information technology. In contrast to the prescriptive technology-transfer policies described above, NREN does not specify the relationships it seeks to promote; it is in this sense that NREN services are considered "enabling."
Cross-subsidization is common in the publicly supported component of infrastructure, reflecting government concern for equity as well as efficiency and efforts to compensate for market imperfections. Elements of infrastructure may be public (such as libraries), private (such as communications carriers), or mixed (such as NSFNET). They may be capital intensive (like NSF supercomputer centers) or labor intensive (like the U.S. Post Office). They may be subsidized (as are university research laboratories), private but not-for-profit (such as ANS¹⁹), or profit-seeking (like Mead Data Central's Lexis/Nexis service). Finding the correct balance of public and private investment and of user charges to ensure the cooperative interplay of such diverse activities is a major challenge for public policy.
Because an essential characteristic of infrastructure is its accessibility, standards also are important issues of public policy. Standards usually evolve through a publicly accountable process. In the case of NREN, three sets of standards must be harmonized: the FCC's telecommunications standards for the carriers, the OMB's Federal Information Processing Standards (FIPS) for the government-run services, and ANSI voluntary standards for commercial components and services.
Information infrastructure depends equally on "hard" and "soft" technologies -- that is, both on the physical network and on the arrangements for locating, adapting, accessing, and using information, supported by software and manual services. Government regulations, or at least policies, must seek to coordinate these hard and soft technologies so that access and connectivity are enhanced throughout the information infrastructure.
These issues are made more important and more complex by the rapid shift of access to government-generated information from paper and voice to electronic distribution through digital networks. Electronic information searching can be both more selective and more comprehensive than searching on paper; it can certainly enjoy superior economies of scale. Importantly, on-line systems permit user feedback which, if correctly used by those managing the collection, organization, and quality of the information, can dramatically increase its value to users. But this capacity raises its own issue: how to preserve the privacy of information users while making information services more responsive to their needs.
The Next Steps
Although demand-side technology policy has a long and honored history in agriculture -- new tools and techniques brought to farmers by agricultural extension agents made U.S. agriculture the most productive in the world -- the dominant U.S. STI policy has been a supply-side policy. Contemporary political accommodation to the idea of a more demand-side "technology policy" began with President Reagan's reorganization of the Commerce Department and acceptance of the Omnibus Trade and Competitiveness Act of 1988, whose technology policies were designed by Senator Ernest Hollings of South Carolina and widely supported by both Republicans and Democrats in the Congress.
The Act gave a new name -- the National Institute of Standards and Technology -- and a new mission to the venerable National Bureau of Standards. NIST's new mission includes three programs, all viewed with some suspicion by economic conservatives: the Advanced Technology Program, to finance "pre-competitive generic" research in commercial firms; an experimental technology-extension program to help smaller manufacturers improve their productivity; and the establishment of manufacturing technology centers in cooperation with the states. White House skepticism, however, has restricted these three NIST programs to less than three percent of DARPA's R&D budget, despite a generous congressional authorization. Thus the three Commerce programs must be regarded as very tentative experiments in capability-enhancing technology policy.
In 1989, Senator Jeff Bingaman and the Senate Armed Services Committee began asking first the Department of Defense, and more recently OSTP, to identify for the Congress a list of "critical technologies" deserving of federal investment. Meanwhile, Department of Commerce officials developed such a list of their own. Actually, the construction of "critical technology" lists has become a small industry, for such lists have also been published by the U.S. Council on Competitiveness, by the Computer Systems Policy Project, by the Aerospace Industries Association, and by the Japanese and the European Community. All of the lists are virtually identical -- suggesting the merits of investing in technologies other than those on everyone else's list. In any case, a list of technologies alone provides no guidance on what governments should specifically do about them.
Unfortunately, even these steps, while diffusion-oriented in that they seek to accelerate commercialization of government-funded technology and improve manufacturing productivity, still do not address an enabling strategy based on a more demand-oriented information policy.
So what might the next steps be in implementing a capability-enhancing strategy for the U.S.? The Department of Commerce, the defense establishment, the specialized technology agencies (the Department of Energy and NASA), the education and training agencies (the Departments of Education and Labor, the National Science Foundation, and the National Institutes of Health), the White House, and state governments must all rethink the roles they play in supporting the development of American scientific and technological capabilities.
An Acquisitive Information Policy
The starting point is a changed attitude toward the technical achievements of others. In comparison with Japanese companies, Americans suffer extensively from the "not invented here" syndrome. This shortcoming -- the byproduct of a technology strategy focused on maintaining national prestige -- is costly in both time and dollars. A good diffusion strategy, by contrast, gives as much emphasis to importing knowledge and adapting it for use as it does to accessing home-grown knowledge. Funding to collect and evaluate information from abroad and the acquisition of new technologies through joint projects with the Japanese and the European Community can help achieve this goal.
Creation of a COSATI under the FCCSET
To provide better access to science and technology information, the federal government should capture the benefits from its $70 billion annual R&D investment by reversing the downward trend in support for quality control, user adaptation, and dissemination of R&D results. The OSTP needs to coordinate efforts across all the agencies -- as it did twenty years ago through its Committee on Scientific and Technical Information -- by mandating that agencies serve information users through centers for data evaluation, compilation, and dissemination as well as through commissioned review papers and the consolidation of technological knowledge in engineering handbooks. The appropriate mechanism for this would be a standing Committee on Science and Technology Information Policy of the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET) -- in short, the revival of COSATI.
Under OSTP's leadership, this COSATI should work closely with OMB to reexamine the guidelines for agency science and technology information policies, which are embodied in OMB Circular A-130, soon to be reissued. This document sets policy for the obligations of agencies to distribute information to the public. With scientific and engineering professional societies beginning to experiment with electronic journals, it is essential that policies that encourage dissemination of reliable information be adopted.
Renewing the STI Social Contract with Science
A strong federal initiative supporting a diffusion-oriented technology policy, one that strikes a balance between knowledge creation and enabling knowledge utilization, is necessary to induce a reluctant scientific community to take seriously its obligations to the users of its work. A place to start is the revitalization of the National Standard Reference Data System as a national project, building on both public and private resources. The reason I accord such importance to this one element of STI policy is that it focuses on the content and quality of scientific work as communicated to potential users. This emphasis can bring productivity increases to the economic, national security, and environmental goals of the nation; without it, STI services are limited to improving disciplinary productivity only.
Federal Contribution to Enabling NREN Information Services
The government's investment in the National Research and Education Network (NREN) -- a central part of the strategy to develop the nation's information infrastructure -- will make expanded STI services accessible to thousands of laboratories in universities, industry, and government. By aggregating a national market for such services, it can attract investment by private information vendors as well as justify increased government efforts in STI. Agencies should now be planning investments in the data collection, evaluation, user adaptation, and search strategies needed to support the rising demand for information services over the network.
Building a Knowledge Base for Dual Use Technology: Industrial Support for Security and Competitiveness
NREN will also contribute to building a unified industrial technology base of dual-use technologies -- products and techniques that meet both civilian and military needs. Today we support two weakly connected economies: defense draws its technology from government funding, while commercial companies remain largely dependent on their own investments. As the defense budget declines, the government will become more dependent on access to an increasingly sophisticated commercial high-tech industry. This suggests that commercial and defense programs will need to share a common technological base. Toward this end, OSTP and the National Security Council should work together, as recommended by the Carnegie Commission, to coordinate the technology strategies of military and civil agencies.
The increased focus on critical, dual-use technologies means that R&D projects will have to be broadly applicable, producing generic or enabling technologies that have the potential for broad use in many sectors of industry. A new class of "public good" technologies -- new tools, test methods, processes, and materials -- will thus emerge. Such infrastructural research may not be as glamorous as path-breaking discoveries leading to new industries, but it contributes directly to the capability of today's laboratories and plants to achieve the lowest-cost, highest-quality, and quickest response to market signals. NIST's new Advanced Technology Program is in the early stages of just such a program of infrastructural investment.
Extending STI Policy to Downstream Innovation Phases
Federally-funded R&D and associated STI services should begin to focus on the "downstream" phases of the innovation cycle. Most government agencies, primarily interested in research to create new capabilities, contribute little to process or manufacturing technology. But quality of products can only be assured if production processes are themselves innovative and continuously improved. It will be particularly unfortunate if the Department of Commerce's ATP program emulates DARPA and other mission agencies and fails to focus attention on "downstream" technical challenges.
NIST is, however, experimenting with other ways to enhance "downstream" performance, notably through provision of industrial extension services to help smaller companies identify and take advantage of technological opportunities to improve their manufacturing performance. These services are offered through a growing array of state-initiated programs that promote innovation and productivity growth. Taken together, the states are spending over a billion dollars on such programs. But many of them enjoy a few years of exceptional success, only to die when a political change in state government accompanies a recession year, as happened recently in Massachusetts. The federal government should help stabilize what is otherwise a very innovative set of state initiatives by matching the funds spent on these programs.
Improving the quality of STI services related to production processes, quality management, and related downstream technical matters will be difficult because most government-assisted STI services are managed by disciplinary professional societies or by federal agencies funding "upstream" research. Much of the downstream information is created in industry and is poorly documented even when it is not proprietary. Helping bring together the engineering societies, private information vendors, and industrial extension services at the state and federal levels is a natural role for NIST. The international IMS project provides one vehicle for making a beginning.
Conclusion
Japan and Germany have insatiable appetites for technology; both run deficits in their balance of payments for intellectual property. The U.S. and Britain, on the other hand, enjoy large (although declining) positive balances in patent licenses and royalties. The cure for the American and British problems is not the diversion of science investments to diffusion, but investments in both. This will energize the economy not only to demand more science but to use it more effectively.
Nor can U.S. policy be conducted in isolation from the rapid globalization of the world economy and the mobility of technology and capital. We have seen only a beginning of the trend to acquisitions, joint ventures, and strategic alliances between firms in different nations. Manufactured goods will increasingly contain components of multinational origin. It will be harder and harder to know what an "American firm" or a "foreign product" really means. Under these circumstances, the duty of our government is to focus its attention on making the United States a most attractive place for the generation and use of high-quality, innovative technology. In short, the government's role is to increase the comparative advantage of Americans and their institutions.
Given the many advantages to be derived from a state-of-the-art information infrastructure, why is the U.S. government so slow to respond to the need for information services in the interest of greater efficiency in the creation and use of knowledge? Until recently "technology policy" has been held hostage to fears of "industrial policy." The nation is comfortable and on safe ground with its science policy. Politicians of both parties support the financing of basic research in our colleges and universities. They are also comfortable with a broad range of projects, some hugely expensive, serving legislatively established government missions, such as the manned exploration of space. But the system of knowledge generation, diffusion, and use has, like commerce, been largely left to the "play of the marketplace." The administration's technology policy, as promulgated in the Sept. 1990 U.S. Technology Policy statement, takes a significant step beyond this politically safe ground, at least in its expression of policy goals.
ENDNOTES
1 This work is expanded from a presentation to CENDI (Commerce, Energy, NASA, NLM, and Defense Information) at the National Library of Medicine, Dec. 5, 1991. It is based on work supported at the John F. Kennedy School of Government by the Alfred P. Sloan Foundation as part of a program of study of the United States Science and Technology System in its Global Context. It has been submitted to Government Publications Review.
2 Alvin Weinberg, of Oak Ridge National Laboratory, made so many seminal contributions to science policy over the years that it is appropriate that the report he chaired occupies this position in history.
3 William Baker, President of Bell Telephone Laboratories and panel chairman of a rather informal 1958 PSAC report, first pointed out that stronger institutional arrangements were needed for dealing with the large volume of STI reports. See Charles R. McClure and Peter Hernon, United States Scientific and Technical Information Policies, Norwood, New Jersey: Ablex Publishing, 1989, p. 10.
4 A.A. Aines, "A visit to the wasteland of Federal scientific and technical information policy," Journal of the American Society for Information Science, vol. 35, May 1984, pp. 179-184. Followers of the long decline of federal interest in STI after about 1972 will know his home-produced Infoscope, lovingly prepared and sent to anyone who still cares.
5 I was editor of the Reviews of Modern Physics, succeeding Edward U. Condon. The journal's Associate Editors set very high standards of reliability and relevance and worked tirelessly to persuade the most competent scientists to fulfill the obligation we felt all should feel. It was -- and still is -- an uphill battle.
6 Dr. David Lide, who recently retired from NIST, and Dr. Edward Brady before him were pioneers on behalf of data evaluation as a serious matter of scientific scholarship.
7 Dr. Charlotte Moore Sitterly, who for years led the pack in the Science Citation Index, compiled the Tables of Atomic Energy Levels at NBS and spent her life cajoling spectroscopists, astronomers, and others to provide better data. Dr. Kay Way did the same for nuclear data.
8 I find that lay people, particularly conservatives, associate "diffusion" with active policies to transfer information assets from the haves to the have-nots. This is, of course, not how diffusion works, either for molecules or for information. It is a random process in which receptors may filter what they need from that which is flowing freely in the public domain.
9 Henry Ergas, "Does Technology Policy Matter?" in Bruce R. Guile and Harvey Brooks, eds., Technology and Global Industry: Companies and Nations in the World Economy, Washington, DC: National Academy Press, 1987, p. 192.
10 Linda Cohen and Roger Noll, The Technology Pork Barrel, Washington, DC: The Brookings Institution, 1991.
11 This argument is advanced at some length in a forthcoming book: J. Alic, L.M. Branscomb, H. Brooks, A. Carter, and G. Epstein, Beyond Spinoff: Military and Civilian Technologies in a Changing World, Boston: Harvard Business School Press, to appear in early 1992.
12 Daniel Bell, "The Post Industrial Society: a conceptual schema," in A.E. Cawkell, ed., Evolution of an Information Society, London: Aslib, the Association for Information Management, 1987, pp. 60-75.
13 The following discussion draws on the author's chapter in Information Infrastructure for the 1990s, edited by Brian Kahin, to be published by McGraw-Hill in Dec. 1991 or Jan. 1992.
14 DARPA investments in dual-use technology, in SEMATECH (the Semiconductor Manufacturing TECHnology consortium), and in high temperature superconductivity consortia are current examples.
15 Executive Office of the President, Office of Science and Technology Policy, The Federal High Performance Computing Program, Sept. 8, 1989. For a convenient summary see U.S. Congress, Office of Technology Assessment, High Performance Computing and Networking for Science -- Background Paper, OTA-BP-CIT-59, Washington, DC: U.S. Government Printing Office, Sept. 1989.
16 The following definition of "information infrastructure" is used in this discussion: "Information infrastructure is comprised of those facilities and services whose shared use by individuals and institutions, both public and private, enables more efficient and effective creation, adaptation, and diffusion of useful information."
17 Richard and Paulette Mandelbaum, "The Strategic Future of the Mid-level Networks," in Brian Kahin, ed., Building Information Infrastructure: Issues in the Development of the National Research and Education Network, New York: McGraw-Hill, to appear Dec. 1991.
18 The ARPANET was originally conceived as a means for load balancing among mainframe computers, taking advantage of three time zones. Users discovered the attractions of electronic mail, of file transfers among remote collaborators, and of other applications.
19 ANS is the Advanced Network and Services Co., a not-for-profit joint venture of IBM and MCI which, working for Merit, Inc., provides the backbone NSFNET service for the Internet.