
May 16, 2005

Comments

Bob Alexander

Nice reading your thoughts.

One question: while the benefits of open standards and open development are quite obvious, I am trying to focus on one idea. Can this open-standards, open-development, interconnected world actually hinder truly disruptive innovation? In the past, really revolutionary ideas have very often sprung from "mad scientists" working on their obsessions, often regarded as insane by the majority. If everything is disclosed and has to work with everything else, we might end up in some "total entropy" state of ideas ... ??? Just wondering. :)

Irving Wladawsky-Berger

Bob, you raise a very important point. Let me try to paraphrase your question: could the collaborative innovation model make it more difficult for truly disruptive innovations to occur, because the community would reject far-out ideas that are very different from what they are working on? I think that this is a real possibility, and that is why the community innovation model is not the only one. Something totally new may very well emerge, perhaps usually emerges, from one very creative individual. The two approaches, community and individual, co-exist, and each has its place. I would also expect that ideas that start out as the brainchild of one individual eventually have to build a larger and larger community around them, working on and improving the original idea, if they are to have an impact on science and society. Hopefully, at the same time, other individuals are inventing the next set of far-out ideas.

Bob Alexander

Irving, your command of the English language is much better than mine :) You have summarized my thoughts exceptionally well.

Could we agree on a working hypothesis that "paradigm-changing" discoveries and inventions often tend to stem from the serendipity of one person, or of small groups working in relative isolation, but that in the current epoch, transforming them into innovation (which implies broad acceptance, refinement, consensus and "marketing" in a very broad sense) relies more and more on the ideas being developed in open discussion? I know that trying to find models that fully explain nature, especially human nature, is often a sterile task, but at the same time having one seems irresistible for some of us :)

Nice reading. Keep up this interesting work.

Bob Alexander

Irving Wladawsky-Berger

Bob, I totally agree with your hypothesis about how paradigm-changing discoveries usually occur. In fact, one way of looking at collaborative innovation in the age of the Internet is that it accelerates the previously much slower process by which innovators build communities and followings around their ideas.

Rob Russell

Someone, somewhere will raise this point, if I don't do it first ;-) You write that "our technologies are way ahead of our ability to apply them" and write of the concomitant and beneficial changes to society wrought by innovation. But what safeguards should society employ, if any, to ensure that the rate of innovation doesn't exceed society's ability to absorb change? Should society - or someone - consider the manifold impacts of innovation, both good and bad? Should "they" be empowered to do something if the "societal balance" is wrong?

To flip that around, what can "we" do (as innovators) to help "society" feel comfortable with both the type and rate of change - and the effects of such change? Should the innovator(s) take responsibility in some way, or is society's role here to act as the safety net? Do we let innovation run its course and see what happens?

I'm interested in this ethically... the 'IT Revolution', like the Industrial Revolution before it, has arguably already delivered massive improvements in health, wealth and quality of life... but it has also resulted in major adjustments, if not direct costs. Who accounts for the cost of increased rates of obesity and diabetes, to name just two modern 'diseases' of human technology? What of entire trades made redundant, or the effect on learning or family structures of new work and communication paradigms?

Do we as a society just accept that changes will happen, and possibly happen even quicker in the future, without considering and adjusting for the social cost? Or is it self-correcting? Can we be sure?

Irving Wladawsky-Berger

Rob, you raise very important points. There are no easy or short answers. I will offer a short comment now, and come back to this point in a future weblog posting.

In the early days of a technology, all the attention is focused on the technology itself, making sure it works, and delivers the right performance, price and quality, so it can be brought to the marketplace. The bulk of the innovation is around the technology itself.

Eventually, the focus of innovation shifts, especially for those technologies that become successful and ubiquitous in the marketplace. The biggest issues then lie less with the technology itself than with the applications of the technology to business, society, and/or our personal lives. A number of accomplished economists have written about the lifecycle of technologies; one of the most interesting is Professor Carlota Perez (http://www.carlotaperez.org/) of Cambridge University and the University of Sussex in the UK.

I believe that information technologies are going through this transition, precisely because the underlying components of IT are now so powerful, inexpensive, reliable and standardized. However, much remains to be done in understanding how to apply IT most effectively as a problem solving tool, especially given the scope and complexity of the problems we are able to tackle. We must work equally hard, at many levels, to understand the implications of becoming an increasingly information-based society. It is these areas that I believe need the most focus, research . . . and innovation.

You raise the question of who has responsibility for understanding the implications of our technologies, especially the potentially negative ones. The short answer is all of us. First of all, our universities and research labs need to conduct the right investigations to anticipate the consequences of our technologies, and to recommend the right actions, tools and policies to lessen the negative impacts. We already see universities rising to this challenge, as in the work of Professor Perez and many others.

Governments should encourage and fund this research, and should be on guard to make sure that the technologies are not misused by unscrupulous, or even by well-meaning but misguided, people and businesses. A balance must be reached between trusting the marketplace to self-correct when there is a problem and taking appropriate government action when needed. Privacy and spam are examples of areas where governments have stepped in to make sure that IT is not abused.

Finally, the marketplace is indeed self-correcting. If people are unhappy with a technology or fearful of it and its application, they will invariably reject it, as they did with nuclear energy in the US. The force of the marketplace is even more important today, given how fast information travels.

As I said, there are no easy answers to such fundamental questions and there are no unmixed blessings. We need to work as a community at many levels to make sure that we benefit from our innovations, while doing everything possible to lessen their negative impacts.
