Book Review: The Big Switch – Rewiring the World, from Edison to Google by Nicholas Carr


Prologue:

There is no doubt that the computing infrastructure is going through another phase – cloud computing has captured the imagination and mindshare of the IT industry [http://www.nytimes.com/2008/05/25/technology/25proto.html]. Some even describe this movement as the fourth wave – dial-up, client-server, browser-based and now cloud. While the concepts have been around in different forms, the industry was set in motion first by Amazon (and their AWS – Amazon Web Services, especially EC2, the Elastic Compute Cloud); Google followed with their AppEngine, Microsoft with their Live Mesh, IBM with their Blue Cloud and Sun with their various initiatives.

It is with this backdrop that I began reading Nicholas Carr’s “The Big Switch – Rewiring the World, from Edison to Google”. Nic Carr is no stranger to the IT industry [http://www.roughtype.com/]; his book “Does IT Matter?” created a stir in many quarters. In his latest book he writes about the parallels between the development of the power grid and the emergence of cloud computing. The cloud computing industry has the potential to change the way IT hardware and software are bought, deployed and used; of course, there will be winners and losers. Naturally the computer hardware and software industry should learn from history – the development of the power grid!

How accurate are his observations? Let me answer it this way – while I was writing this review, Nic posted an entry about Bill Gates’ final speech to developers at Microsoft’s TechEd 2008 [http://www.microsoft.com/events/teched2008/developer/default.mspx], in which Bill describes “the design of massive data centers” as “one of the key areas of innovation in computing today”.

Without wasting any time let me get to the review.

The Review:

Nic starts the book, reminiscent of Michael Crichton, with his visit to a mysterious data center, “a hulking gray building at the end of a litter-strewn street”, in Boston. Actually, Nic would have been proud of the latest data center, SuperNAP, located, of all places, in Vegas! [http://www.theregister.co.uk/2008/05/24/switch_switchnap_rob_roy/] He then dives into different aspects of the Internet age and offers commentary and observations from three angles: economic, social and political.

Nic’s observations on the locus and trajectory of technology and its relationship to economic forces (p.22) are a must-read. He correctly identifies technological progress as the consequence of economic forces beyond our control! The progression of the Edison Electric Light Company to GE (p.32) is an illuminating story (pardon the pun!). Nic, on p.36, with an uncanny ability to present historical details at the right level, describes how factories driven by pulleys and chains transformed into ones driven by electric power. Visit your data center and you can see the same “hopeless confusion” – agreed, these are neat rows of boxes, but you just have to see a management console, a deployment process through dev-stage-prod, or an application dependency diagram (if the IT folks have one) to understand the complexities. The parallel is exact – maybe even more complex, because in the world of pulleys and chains you can see them, and if something is broken, it is visible. Many times in the IT world, one has to go through a myriad of log files and debugging sessions before one can even grasp what went wrong.

The core of his argument in this book (p.57), as well as in “Does IT Matter?”, is that all datacenters use “similar software running on similar equipment and employing similar kinds of workers”. In “Does IT Matter?” he said IT is a commodity, and in this book he talks about computing power being generated centrally and distributed like electric power. He is right. What it also enables (which Nic touches upon only lightly) is that it frees companies to focus on their core competence, i.e. their business systems, and liberates them from the “digital millwork”! IT is a necessary component but not a sufficient one for a competitive advantage.

Nic correctly identifies computing as a GPT (General Purpose Technology). There is a good discussion (p.15) of the agonies (no standards, no broad distribution) and the ecstasies (can be applied broadly, offers huge economies of scale, enables innovative things) of being a GPT.

His observations on the effect of the Internet on the newspaper industry are excellent – the effect on journalism (p.155), and how editors won’t pay for in-depth, well-researched stories once there is no subscription revenue, rings very true. Another important observation: “more choice doesn’t necessarily mean better choices”.

I really like the way Nic explains the many forms of the digital divide (p.145) – contrary to popular belief, the Internet widens the divide: the “interplay between technology and economic forces rarely produces the results we at first expect” and results in plutonomy – “economic growth powered by and consumed by the wealthy few”. Correct observation, but I am not sure it is a characteristic of the “universal computing grid”; it is more a feature of the Internet in general. Another form of digital divide is polarization “by forces that can amplify slight and seemingly harmless personal preferences into dramatic and troubling consequences”. He also talks about ideological amplification – by reading only the RSS feeds and blogs we subscribe to, we narrow our span of interest.

There is a good chapter (9: Fighting the Net) on the darker side of the Internet – spam, botnets, terrorism networks et al. While it is a good overview, there is nothing new in it and no new observations. Maybe I am spoiled by the earlier chapters, but this chapter is kind of weak considering the depth of analysis in the others. It wouldn’t be a surprise to any of the /. readers.

There is a chapter (10: A Spider’s Web) on anonymity (or the lack thereof) and privacy issues. Carr describes, in a very interesting way, how folks figured out who searcher 4417749 was from AOL’s published keyword searches! Carr (on p.188) says “One of the essential characteristics of the computer grid is the interconnection of stores of information” – I tend to disagree with this. Yes, we can download Amazon wish lists, AOL search metadata (if they publish it) and other sources, but the grid itself does not interconnect stores of different data, and performing analytics based solely on them is not (yet!) possible. Google just announced online health records (www.google.com/health)!

Nic is right in saying “computer systems in general and internet in particular puts enormous power in the hands of individuals, but even more power in the hands of companies” (p.191). The discussion of computers being used as a control plane is very interesting – from IBM statisticians and data-mining experts modeling IBM consultants, to Google modeling its employees and matching them with achievements, based on their answers, to help recruit people.

Nic argues that while we all think corporate control is the most pronounced use of these technologies, the most effective use is the control of consumer behavior. I agree – with techniques like search engine optimization, targeted ads, personal product placement and social networks (which are fertile ground for the impressionable minds of youngsters), it is not only possible but already happening!

Nic’s discussion of neuromarketers (who work by uncovering mental triggers!) is slightly disconcerting. I thought the last chapter (11: iGod) went overboard with topics like wireless brain appliances, physical neural interfaces et al. As Nic quotes Bill Gates, I also prefer “the computer to be over there and myself over here”. The final discussion of the Semantic Web is also a little overboard – the questions Nic asks are very valid and relevant, but the metaphors are a little off.

There are a few inconsistencies – for example, p.75 presents the remote answering machine as an example of virtualization and defines virtualization as the use of software to simulate hardware, which is not fully right. In subsequent pages he gets his bearings right and talks about virtualization, multi-tenancy et al., which are more to the point.

Epilogue:

Nic Carr does an excellent job of “weaving history, economics and technology in an engaging way”. “Utility computing is a new and disruptive force, but it’s not without precedents,” says Carr (p.110), and he explains it from the demand side – usually we hear the story from the supply side, i.e. the technologies. So it is refreshing to read not only about the forces but also about the historical similarities.

The book has some great quotes – for example, “Here is our poetry, for we have pulled down the stars to our will” – Ezra Pound in 1910, gazing upon Manhattan’s night skyline! Whenever I am in New York, I try to catch a glimpse of the Manhattan skyline by night.

The stories Nic tells are very relevant and contextual – on p.90, he talks about an early industry, the distribution of ice stored as huge ice sheets harvested from lakes during winter, which just melted away due to electric cooling! What industry will melt away because of cloud computing?

The quote (p.219) from Alexander Solzhenitsyn’s Cancer Ward, about the threads we create as we go through life, is so true! Social networks are trying to create these threads and capitalize on the popularity of intertwining social threads!

In short, a well-thought-out book. The last couple of chapters lack the depth, wisdom, deliberation and inquiry of the earlier ones. Maybe I am biased; I am eager to hear your impressions after you get a chance to read the book.

References:
[1] NYT article on cloud computing: http://www.nytimes.com/2008/05/25/technology/25proto.html
[2] Nicholas Carr's blog: http://www.roughtype.com/
[3] SuperNAP story: http://www.theregister.co.uk/2008/05/24/switch_switchnap_rob_roy/
[4] Microsoft TechEd 2008 North America: http://www.microsoft.com/events/teched2008/developer/default.mspx
[5] NYT article on AOL searcher 4417749: http://www.nytimes.com/2006/06/14/technology/14search.html

Another interesting book


Founders at Work

Guy Kawasaki has an excellent review of the book here. I will add my comments after I get a chance to read and internalize it! Actually, Guy has done an excellent job of introducing the book – I am not sure I can add more … let us see …

Book Review: Multi-Core Programming – Increasing Performance through Software Multi-threading


I had written this review for /. but it never got published; basically, I never got any reply. So I thought I would post it here, in case it is useful for readers.

Multi-Core Programming – Increasing Performance through Software Multi-threading by Shameem Akhter & Jason Roberts, Intel Press

Context

A funny thing happened on the microprocessors’ race to faster and faster clock speeds: somewhere along the way, people stopped caring about clock speed and started looking at overall system/application performance, and that led to a different, disruptive solution – hyper-threading and now multi-core CPUs. As a result, parallel programming and parallel computing, which were once the realm of a few esoteric applications, are slowly becoming mainstream. How important is this, enquiring minds want to know? A glimpse of the relevance can be had by glancing at PCWorld’s 100 best products of 2006 – #1 and #2 were the Intel Core Duo and the AMD dual-core Athlon 64! Even the iPod came later in the list! (For the same enquiring minds – the iPod came 4th after Craigslist.org, Google Earth was 6th and YouTube.com was 9th.) In the world of game consoles, multiple execution threads are the norm: the Xbox 360 has three cores, and the PS3’s Cell processor has 6-9 vector units attached to a PPC core.

The building blocks of the domain consist of software concepts and mechanisms like parallel programming, parallel computing and multi-threaded programming, combined with hardware paradigms like SMP, hyper-threading and multi-core. All of these are intersecting concepts, but they are not quite the same – and that is where this book stands out: conceptual depth in the fundamentals, against the backdrop of multi-core, hyper-threaded microprocessors.

Not that every programmer needs to be intimately familiar with multi-core parallel programming. The majority will write to the higher-order threading models of Java and C# without worrying about the underlying microprocessor architecture. But kernel writers and folks who write C/C++ code that leverages the microprocessor at the bare-metal level (for example for imaging, visualization, digital media applications and games) need a good working knowledge of the microprocessor primitives as well as an understanding of the patterns. For example, the recent Ubuntu summit in Paris (which was where I was writing this review) had a couple of BOF discussions on various aspects of dual-core as applicable to Ubuntu kernel internals. The discussions centered around a parallel loader with hierarchical parallel tasks, using dual-core for computing the symbol-table hash, and making C routines like strcpy() and malloc() “core-aware”.
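
To make the “bare metal” point concrete, here is a minimal sketch of my own (not from the book) using the POSIX threads API the book covers: two threads each sum half of an array, and since each thread writes only to its own slice, no locking is needed.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    static double data[N];

    struct slice { int begin, end; double sum; };

    /* Each thread sums its own slice; there is no shared mutable
       state, so no locks are required. */
    static void *partial_sum(void *arg)
    {
        struct slice *s = (struct slice *)arg;
        s->sum = 0.0;
        for (int i = s->begin; i < s->end; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        struct slice s1 = { 0, N / 2, 0.0 };
        struct slice s2 = { N / 2, N, 0.0 };

        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        pthread_create(&t1, NULL, partial_sum, &s1);
        pthread_create(&t2, NULL, partial_sum, &s2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("total = %.0f\n", s1.sum + s2.sum); /* expect 1000000 */
        return 0;
    }

On a dual-core machine the two partial sums genuinely run at the same time; on a single-core one they are merely interleaved – the same code, but a very different execution model.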

Organization

The organization is straightforward – 11 chapters structured incrementally as follows:

  • Chapters 1 – 3 cover Introduction to Multi-Core Architecture, Overview of Threading, and Fundamentals of Parallel Programming. While these chapters start out from the basics, they race from 0 to 1000 mph in three chapters!
  • Chapter 4 covers the parallel programming constructs, while the next two chapters (5 & 6) delve into the threading APIs – Windows, POSIX and OpenMP.
  • I found Chapter 7 (Solutions to Common Parallel Programming Problems) and Chapter 8 (Multi-threaded Debugging Techniques) the most useful and informative.
  • The next two chapters cover the hardware aspects – Chapter 9 is a short chapter on the single processor, and Chapter 10 is a detailed discussion of threading on Intel multi-core processors. Both chapters are good.
  • Chapter 11 covers Intel® Software Development Products, and is OK.

It is interesting to note that the book has a unique (I assume so) serial number, and you can register the book at Intel’s website. But the website is very anemic – it offers no big value and is not intuitive either. They have the code samples in a zip file – that is all. Even though the book cover and other materials highlight “immediately usable code”, I don’t think the code is all that useful.

There are two related books from Intel Press, viz. Programming with Hyper-Threading Technology and the just-released second edition of The Software Optimization Cookbook. Another book worth looking into is Patterns for Parallel Programming. I plan to review all three in the near future …

Gory Details …

Architecture

Normal microprocessors have one thread of execution, called the hardware thread (we will revisit this later), which consists of the architectural state (control registers, interrupt logic et al.), execution units and cache. What the engineers found is that because the execution units are 100 or more times faster than a memory fetch, they sit idle while values are being fetched into the registers.

One solution is to add two architectural-state lanes to one execution-unit/cache set. This is called Hyper-Threading Technology, and the performance increase is around 30%. Hyper-threading is logical – to the software it looks like there are two cores, but actually the two threads are interleaved onto the same execution unit!

The next idea was to actually have two execution-unit/cache pairs in a chip, each with its own architectural state. This is the dual-core architecture. In this case the actual processing power is doubled, and the performance boost is almost double (minus the overhead). The idea after that was to combine both, thus giving four hardware execution threads!

But high performance is not just a matter of adding hardware threads; the software also needs to be written to take advantage of the logical and physical parallelism. High performance = multi-core hardware + hyper-threading + a multi-core-aware scheduler + parallel programming algorithmics.

Software & hardware threads

There is a correlation between software and hardware threads. In some quarters there is a firm belief that there is no advantage in having more software threads than can be mapped to hardware threads; in fact, the performance impact might be negative, as the extra threads will invalidate each other’s caches and spend their time thrashing in and out.
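
A reasonable rule of thumb follows from this: size the worker pool to the number of hardware threads instead of picking an arbitrary count. A small sketch of mine, assuming Linux/glibc (the _SC_NPROCESSORS_ONLN query is widely supported, though not strictly mandated by POSIX):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Logical processors currently online; this counts hyper-threads
           as well as physical cores. */
        long hw_threads = sysconf(_SC_NPROCESSORS_ONLN);
        if (hw_threads < 1)
            hw_threads = 1; /* fall back if the query is unsupported */

        /* For compute-bound work, running more software threads than this
           tends to hurt: the extras evict each other's cache lines and
           pay context-switch costs. */
        printf("sizing worker pool to %ld threads\n", hw_threads);
        return 0;
    }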

But there is another dimension – compute-intensive tasks vs. I/O-bound interactive tasks. In order for an OS to be sensitive to users and appear responsive, the interactive tasks need to have higher priority and should be given preference when they are runnable – usually they spend their time waiting for user input, for example on a keyboard.

Going back to hardware threads: until the advent of HT and the dual-core philosophy, there was only one hardware thread in normal microprocessors. But now the picture has changed – in an HT microprocessor there are two threads; a dual-core one with HT has four hardware threads; and moving on, the Xbox 360 has 3 to 6 threads and IBM’s Cell processor (which is in the PS3) has 6-9 hardware threads! Even with one hardware thread, Linux, for example, has a very sophisticated scheduler. Now, with 2-9 hardware threads, the opportunities are endless. Many new algorithmics also need to be employed, in areas like rendering, visualization and graphics.

Parallel Algorithmics

As the book says, multi-threaded applications are inherently much more difficult to write than single-threaded applications, and proper software engineering principles need to be followed. For example, one main difference between hyper-threading and multi-core is that while in single-core or hyper-threading technology the tasks are interleaved, the multi-core architecture actually runs two tasks at the same time! This can surface bugs – for example, if you achieve sequencing through different priorities, a lower-priority thread will run, whenever it is runnable, while the high-priority one is running on another core! I have made this assumption a few times; now I know it should be avoided.
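
The robust alternative is to make the ordering explicit instead of inferring it from priorities. A minimal sketch of my own (not the book’s code) using a POSIX condition variable, so the second stage cannot start before the first has finished, no matter how many cores are available:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int first_done = 0;

    static void *first_stage(void *arg)
    {
        (void)arg;
        printf("first stage runs\n");
        pthread_mutex_lock(&lock);
        first_done = 1;             /* publish the ordering fact ... */
        pthread_cond_signal(&cond); /* ... and wake the waiter */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void *second_stage(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!first_done) /* loop guards against spurious wakeups */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        printf("second stage runs, strictly after the first\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t2, NULL, second_stage, NULL);
        pthread_create(&t1, NULL, first_stage, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }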

The authors have done a great job of pointing out good parallel programming and debugging techniques that avoid synchronization bugs (i.e. multi-threading errors involving race conditions and lock contention through priority inversions/ceilings) without impeding performance (i.e. without introducing performance bugs through the overuse of locks and mutexes).
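
The classic race in question is easy to reproduce: two threads incrementing a shared counter without a lock will lose updates, because the load-increment-store sequence is not atomic. A sketch of mine showing the mutex fix (deliberately heavy-handed – locking per iteration is exactly the kind of overuse that creates the performance bugs mentioned above; real code would accumulate locally and lock once):

    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000
    static long counter = 0;
    static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERATIONS; i++) {
            /* Without this lock, two cores can interleave the
               load/add/store steps of counter++ and lose increments. */
            pthread_mutex_lock(&counter_lock);
            counter++;
            pthread_mutex_unlock(&counter_lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expect %d)\n", counter, 2 * ITERATIONS);
        return 0;
    }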

On the next level, one does have to think through and understand the parallel programming patterns. One advantage is that the art of parallel programming has a long history, with lots of good material to refer to and apply in the multi-core domain. The major forms of decomposition – task decomposition, data decomposition and data flow – and the associated patterns – task-level parallelism, divide and conquer, geometric decomposition, pipeline and wavefront – are introduced in this book. But for a detailed discussion of the specific patterns, the patterns book would be better, even though it is slightly older and most probably does not cover multi-core-specific patterns.
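
Data decomposition is the easiest of these to show in code. With OpenMP (one of the APIs the book covers), a loop over independent elements is split across cores with a single pragma. This sketch is my own, not from the book; compile with -fopenmp on GCC:

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N], b[N], c[N];

        for (int i = 0; i < N; i++) {
            a[i] = i;
            b[i] = 2.0 * i;
        }

        /* Data decomposition: the runtime chops the iteration space into
           chunks, one per hardware thread. Each c[i] depends only on a[i]
           and b[i], so no synchronization is needed. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[%d] = %.0f, computed on up to %d threads\n",
               N - 1, c[N - 1], omp_get_max_threads());
        return 0;
    }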

In Short …

In short, this is a book with conceptual depth that touches all the essential elements of a very complex and emerging domain (one that will gain more momentum), and it does that very well. The Slashdot crowd will like the book. The writing style is very dry and matter-of-fact, but it doesn’t get in the way of understanding – I was able to cover the last few chapters during my SFO-CDG-SFO flight, in between Sudoku, which is where I am writing this review! And of course, I wanted to be the first to review this book, and that was a good motivation … ;o)

As a footnote for future exploration, dual-core is a domain much broader than just Intel microprocessors. In fact, many consider AMD to be the leader in this space. It will be informative to see how AMD handles multi-core technology in terms of interfaces, programming mechanics et al. As a comparison, the Cell processor and the Xenon in the Xbox 360 need to be specifically programmed. But in the case of Intel and AMD dual-cores, I am sure a programmer who sticks to the Java/C# APIs will not even notice the difference; one using the Windows API, the POSIX thread model or OpenMP should not have to code any differently either. Still, it would be instructive to see the difference in the designs and the resultant execution models between the two … The difference would be felt by the kernel writers and the folks who write optimized code for device drivers and high-performance digital media and game applications.

Reviewer: Krishna Sankar