Marc Andreessen with Charlie Rose – Innovation, Mobility, Social Media & Viral Platforms

A very informative interview – Charlie asked interesting questions and Marc had equally insightful answers and discussions.

Video and full text at

For the attention-challenged, my bullet notes:

  1. Future of newspapers

    • Two words – kill it! Stop printing newspapers!
    • A fundamental structural change is happening in the newspaper business. It is happening in all branches of the media industry, but the newspaper is at the front
    • Investors have seen through the transition, but the industry is still trying to survive.
    • An interesting analogy: chronic pain vs. acute pain – how many years of chronic pain vs. one year of acute pain for the transition? ;o)
    • The acute pain will be acute but inevitable; the industry needs to build for the future.
    • Marc wrote a blog post, the New York Times deathwatch! “What is with you and the NY Times?” Charlie asks. The blog post is no longer there
  2. Social Media Industry

    • Facebook: 175 million users, half of whom use it every day; many use it 50 times a day. On its way to 500 million users! Marc is on the board of directors. The 135 million active users would make it the 6th most populous country in the world!
      They are taking a more organic growth model; had they taken the normal advertising route, they would have made over a billion dollars in ad revenue. Facebook has tremendous potential – for example, it could monetize the home page; it is just a question of how they choose to extract the value. They want to build a long-term business, eyes way out on the horizon, with a big vision: to connect everybody on the planet (what about beyond?)
    • Ning: Ning has crossed 20 million users, adding 2 million users a month. There are a million social networks on Ning!
    • YouTube, Facebook et al – under-monetized assets
    • Twitter as a real-time electronic nervous system – Marc says you could twitter when a plane lands in the water. Maybe people did, but I wouldn’t be twittering if my plane crash-landed on water ;o)
      The story of Twitter: Evan Williams had a podcasting company; raised ~3.5 million; it didn’t succeed and he returned it all (Evan made up the difference!). Twitter was a side project at the time, and it took off. So they changed focus, closed the podcast operation and concentrated on Twitter
    • Social networking is here to stay and its potential is just beginning. Marc is big on “viral” applications
    • The Obama campaign employed the social networking approach and philosophy as the engine for fund raising and volunteer coordination
    • Viacom suing YouTube is the wrong strategy – they should be using it to distribute their videos! Every time there is a Viacom video on YouTube, there should be a buy button! It is a distribution channel that brings traffic to their properties!
      Napster – 20 million people showed up. If the music industry had had a buy button, they would have been successful. When people line up, find a way to monetize it
  3. Innovation

    • More opportunity than ever before – Cascading effect – every new layer of technology makes another layer of innovation possible and that keeps rolling
      There is an interesting discussion of Intel’s transformation from a memory chip maker to a microprocessor maker around 1985; it was not an obvious bet to make, but they had to do it to escape the overhang of the Japanese memory makers who were crushing Intel.
    • Innovation cycle: Silicon Graphics was put out of business by Intel’s microprocessors, and that freed up engineers to work at nVidia/ATI, which in turn are posing challenges to Intel in the video and graphics business
  4. Mobility, iPhone & the new landscape

    • Usually people talk about a new idea for a long time; finally the technology comes together and the thing takes off – the internet in ’95 is an example, and mobile is at that stage now
    • The iPhone is a template every other vendor will copy. For the first time a phone has a real OS, an SDK and an application delivery infrastructure – and for the first time all of these sit on top of a fast network
    • The iPhone itself is fantastic – beamed from the future, as Marc characterizes it – and has inspired a lot more creative thinking around it
    • He mentioned an investment of his, Qik [], where any phone can be the source of live streaming video to any other device or phone; it will be very effective as phones gain HiDef video capability in 2 years
  5. The Magic Business

    • Bill Joy once said: some products have the “it works” feature!
    • There were more than thirty-five search ventures before Google; but Google search really worked in terms of the core technology, plus they unlocked the ad business model.
    • Marc characterizes this as the “Magic Business,” which happens once in 10 years or so – Cisco was a magic business, Intel was one, so were Microsoft and even Amazon. With a magic business, one goes for scale and size. People had written Amazon off in 2002, but Bezos had the fortitude and foresight to stick with the long vision
  6. New form factor

    • Marc believes Kindle is the new form factor along with iPhone and netbook; each with a different but effective purpose.
    • The Kindle is the web-pad, a 7” form factor, the next opportunistic screen size, which people will use for video, telephony and conferencing.
    • Most probably the next new product from Apple will be this 7” e-book, conferencing and web appliance!
  7. New VC Firm with a slightly different focus

    • Marc is starting a new VC fund with Ben Horowitz. Over the last 3 years they have invested their own money in 36 companies
      They focus on smaller investments – 100K-200K, maybe 500K to a million. Marc is of the opinion that a whole generation of startups does not need very much money (“very much” defined as 200K – 1.5 million)
    • His new VC firm’s name – Andreessen Horowitz – could be a law firm or a VC firm! It abbreviates to A to Z and will get listed first in the yellow pages – could be a good name for a tow truck business as well!
  8. Impact of the recent economy related challenges

    • During the 2001 recession, we were the nose of the dog; this time we are the tail. Companies in the valley do not generally run on debt financing and so are affected the least. But the big recession will impact sales. Silicon Valley will be the tragic beneficiary of damage in other industries – like banking et al
    • <KS>
      • I thought the discussion on new types of banks was a little asymptotic, but the concept of just-in-time credit scoring and credit provisioning by Bill Me Later is interesting.
      • In a tangential discussion, Marc referred to “Good Banks, Bad Banks and Ugly Assets” and ideas by Paul Romer
    • </KS>
    • Innovation will continue – tons of innovation will be bottled up in the next 5 years. Companies like Google, YouTube and Facebook developed through the last bust. Look for returns in 7-10 years from today’s funding.



A Berkeley View Of Cloud Computing : An Analysis – the good, the bad and the ugly

I read through the technical report from UC Berkeley, Above the Clouds: A Berkeley View of Cloud Computing, with interest. My analysis:


  • As undergrad work on cloud computing, the paper gets an A+. But as a position paper from eminent academics, I can only give it a C-. Granted, it correctly identifies many of the trends and obstacles. But that material is widely available!
  • With the title “A Berkeley view of cloud computing,” the report misses the point. “A Berkeley observation…” is more like it – a view requires original thinking and interpolation, which the report lacks.



  • The authors got some of the essentials of Cloud Computing right viz: infinite capacity, no up-front commitment and pay as you go.
  • The three classes, viz. the Amazon, Microsoft and Google models, are interesting. But there are more in between.
  • They have some good points on the cost advantage of power et al and leveraging that aspect by building datacenters at the appropriate locations.
  • The new application models viz. analytics, parallel batch processing, compute-intensive desktop applications and so forth are excellent observations.
  • They have done some good work in characterizing elasticity. Pages 10 and 11 are a good read – the models are very simplistic, though.
  • They also have done a good job in showing the economies of scale that can be achieved by a cloud computing infrastructure.
  • I like their assertion that there is “no fundamental obstacle to making cloud-computing environments secure as well as compliant” with data and processing rules. Declarative policy, and enforcement thereof, is my answer.
  • They have correctly identified scalable storage as one of the bottlenecks. BigTable (Google), Dynamo (Amazon) and Cassandra (Facebook) are all solutions to this challenge.
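The elasticity tradeoff the paper models on pages 10 and 11 can be sketched in a few lines. The break-even form below is my paraphrase of the paper's simple model, and all the rates are invented for illustration:

```python
# Rough sketch of the elasticity tradeoff: paying per user-hour in the cloud
# vs. owning peak-provisioned capacity that sits partly idle.
def profit_cloud(user_hours, revenue_per_hour, cloud_cost_per_hour):
    # In the cloud you pay only for the hours you actually serve.
    return user_hours * (revenue_per_hour - cloud_cost_per_hour)

def profit_datacenter(user_hours, revenue_per_hour, dc_cost_per_hour, utilization):
    # Owned gear is provisioned for peak, so the effective cost per *served*
    # hour is the raw cost divided by average utilization.
    return user_hours * (revenue_per_hour - dc_cost_per_hour / utilization)

# Illustrative numbers (assumptions, not the paper's):
revenue, cloud_cost, dc_cost = 0.50, 0.12, 0.08
for util in (0.2, 0.4, 0.8):
    cloud = profit_cloud(1000, revenue, cloud_cost)
    dc = profit_datacenter(1000, revenue, dc_cost, util)
    winner = "cloud" if cloud > dc else "datacenter"
    print(f"utilization {util:.0%}: cloud={cloud:.0f} dc={dc:.0f} -> {winner}")
```

At low utilization the cloud wins even at a higher hourly rate; at high, steady utilization owned capacity wins – which is exactly why the models feel simplistic once real workloads enter the picture.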



  • But they got the model wrong! The essence of utility computing is the consumption model, not the payment model. No doubt the pay-as-you-go model is attractive to startups, but the payment model is a second-order effect. For enterprises and other organizations, the value proposition is the elasticity and the just-in-time availability of resources. Even for startups, pay as you go is attractive, but elasticity is much more important.
  • Their argument about the increase in performance and the resultant cost reduction is just Moore’s law, and it is achievable within IT environments as well as in a cloud computing space. Computers are typically on a 5-year amortization and depreciation schedule, and a refresh can be done – with the associated efficiency – whether one is a cloud provider or an IT shop.
  • I think the major disconnect in the paper is the basic definition of a cloud as public. The artificial separation of public/private clouds and the focus on payment are the two areas where their definition has gone awry. A cloud is an architectural artifact and a business model of computing. But clouds are clouds – internal or external, public or private. The internal vs. external distinction is only a spatial artifact – which side of the firewall – and not worth a demarcation when we talk about the domain of cloud computing. Which side of the internet (firewall) the cloud infrastructure lies on should not be the criterion. By their definition, they have disenfranchised the whole set of clouds inside organizations. The internal-external cloud integration across the data, management, policy and compute planes is an important topic which this model conveniently skips. Also, as I mentioned earlier, utility is a consumption model, not a payment model. A big organization can have a cloud computing infrastructure and its business units can leverage the elasticity – no need for a credit card; a charge-back model will do.
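A charge-back model of the kind the last bullet describes is trivial to sketch. The class and the rate below are hypothetical, just to show that elasticity inside an organization needs metering, not a credit card:

```python
from collections import defaultdict

# Minimal internal charge-back meter: business units draw elastic capacity
# from a shared internal cloud; finance settles via charge-back.
class ChargebackMeter:
    def __init__(self, rate_per_instance_hour):
        self.rate = rate_per_instance_hour
        self.usage = defaultdict(float)  # business unit -> instance-hours

    def record(self, unit, instances, hours):
        self.usage[unit] += instances * hours

    def invoice(self):
        # Periodic settlement per business unit, in whatever currency.
        return {unit: round(hours * self.rate, 2)
                for unit, hours in self.usage.items()}

meter = ChargebackMeter(rate_per_instance_hour=0.10)
meter.record("analytics", instances=50, hours=4)   # burst for a nightly job
meter.record("web", instances=10, hours=24)        # steady front-end load
print(meter.invoice())  # {'analytics': 20.0, 'web': 24.0}
```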



  1. I really didn’t get the “statistical multiplexing” they mention a few times. What exactly is it, and what is its relevance? Just a buzzword to jazz up the paper?
  2. I literally got lost in their characterization of a DDoS attack and the cost models thereof on p. 15. It is really convoluted, and it does not change for traditional vs. cloud. They found a break-even point for a DDoS attack based on very slim assumptions.
  3. I do not think the data transfer bottleneck, as described in the paper (p. 16), is an issue. Who is going to transfer 10 TB of data routinely for cloud processing? It looks like a force fit for some calculations somebody had done.
  4. The report has no useful tables or equations. Equations 1 and 2 (which are the same, btw) are not right – in the sense that the datacenter cost already includes the utilization, and I do not think we need to accommodate for it additionally.
  5. I am sorry to say all the cost models and equations look forced and very unnatural. Even the assertion of a 1/5 to 1/7 cost advantage for a datacenter is at best questionable. No value whatsoever – sorry, folks.



  1. Good comments. Thanks, folks.
  2. James Urquhart has an excellent blog on the subject. Thanks, James. He is more generous than I am ;o)
  3. [Feb 19, 2009] Blog post from GoGrid – good analysis

Cloud Standards – Putting the cart before the horse ?

I have been following some of the chatter in the “Cloud Standards” space. It looks to me like we have more talk about standardization work than actual cloud work. Maybe I am moving in the wrong circles ;o)

Anyway, the reason for this blog is Bob Sutor’s notes from a Cloud Standards gathering. To be fair, I did not attend the meeting, and I will be careful to separate the message from the messenger – Bob’s ideas vs. what he is reporting from the gathering.

A few things that caught my eye – some I agree, a lot I disagree:

  • There could be potentially 100s of standards
  • To quote Scooby-Doo, “Yikes.” I think this is the wrong approach. The domain (cloud computing) is barely born and we are already talking about hundreds of standards – new or old! This will definitely slow down progress, or (most probably) everyone will ignore the “standards” and do whatever is necessary to run the business. We should strive for simplification. I posit that over-standardization was one of the reasons for the demise of SOA and Web Services. (Yep, I know lots of people will disagree. But before you start throwing stones, at least read through Ann’s blog.)
  • Need “marketecture” & speed of standardization is important
  • This is exactly where I think we have the cart before the horse. Usually standardization happens based on a few running instances, when practitioners realize from experience that some parts need interoperability – let me say again, actual experience. Because of running code, working protocols, data fields et al, the participants know what to standardize and how. Without extensive implementation experience to guide us, how are we going to standardize based on an imaginary marketecture? A marketecture is just that, and it should never be confused with actual hands-on experience.
  • And we should never rush standards – usually standards are a little deliberate and systemic. Agreed, once we understand a domain and come across opportunities, we shouldn’t take infinite time. But I have never heard of standardization based on a “marketecture” even before a domain has been developed! (Until now, of course.)
  • Need to understand interchangeable parts vs. interoperability
  • I think we are over analyzing here. To tell the truth, I have no clue what it means.
  • “It is not acceptable to delay standardization until a particular provider establishes lock-in or a monopoly”
  • Eh? ;o) Very strange. A monopoly or duopoly or … comes via business relevance – not by standardization. Here also I am lost – are we saying that all the hoopla about standardization is against a known or unknown cloud provider?
  • “Cloud computing is here today, but we are very, very early”
  • Agreed. Of course, this clashes with the other statements implying we already have a monopoly
  • “We should not waste time having an official cloud computing definition of ‘interoperable.’”
  • Agreed. Instead, maybe we should define “interchangeable” and move on o;) (Sorry, couldn’t resist.) Again, this is right – the domain is so new that we really do not know what it is. Methinks we should not smother and suffocate it with too much standardization. What says thee?

Book Review – Programming Erlang, Software for a Concurrent World By Joe Armstrong

Note: This is the blog version of my review, slated to be published in an upcoming issue of the Journal of Functional Programming, Volume 19, Issue 2, March 2009


A funny thing happened on the microprocessors’ race to faster and faster clock speeds; somewhere along the way, the speeds and feeds hit a brick wall and the only way out was lateral – more cores rather than faster CPUs. It started with hyperthreading and progressed to dual-core and then multi-core CPUs. While this paradigm shift solved the hardware challenges, a new bottleneck arose – in the software! Multi-threaded programming, which was mainly for leveraging I/O waits and SMPs, couldn’t scale to this moderately massive parallelism – while the domain complexity of concurrent programming itself is interesting, the additional accidental complexity of threads, mutexes and semaphores in traditional programming just doesn’t cut it.

The answer, it seems, is “Pluralitas non est ponenda sine necessitate,” i.e. the singularity of the immutability and statelessness of functional languages rather than the pluralities (of mutexes, semaphores, spin locks and code locking) – Ockham’s Razor!

As a result, more and more folks are looking to functional programming as the silver bullet for solving software’s version of Moore’s law! In fact, a timely article in Dr. Dobb’s Journal [1] wonders if “functional programming is on the verge of becoming a must-have skill”! And Erlang is at the forefront of this revolution – an article in Queue [2] is of the opinion that “… designed for concurrency from the ground up, the Erlang language can be a valuable tool to help solve concurrent problems.”

In short, Joe Armstrong’s book “Programming Erlang: Software for a Concurrent World” is in the right place at the right time! As the title implies, the book’s focus is more on concurrency than functional programming, but essential functional programming is the backdrop for the required functionality.


The book is pretty thick – 20 chapters and 5 appendices, coming out at around 500 pages. The subject is deep, but the style makes it interesting and easy to comprehend. I felt that the book ended very fast! In my opinion, this is not a traditional programming book – I consider it a systems book. It is difficult to separate Erlang the language from Erlang the system. The book follows the architecture of the Erlang system – which makes sense, especially as the author is one of the originators of the language.

Gory details

The book has a good logical progression and is logically divided into four parts:

  • Part 1 – The important concepts of the language with enough syntax ending with compiling and running Erlang Programs
  • Part 2 – Dives into bigger topics like concurrency and distributed programming plus files and sockets
  • Part 3 – Is where the Erlang system concepts are detailed including OTP and databases.
  • The book ends with Part 4, which talks about multi-core programming.

On the whole, the book is laid out well. I felt that this is not a book for the casual reader – the depth of the subject as well as the breadth of understanding required to “grok” Erlang is slightly more than for other languages. Erlang is slightly difficult to get around for OO programmers. I have a Cobol and Pascal background, so it was easier for me, though I was still corrupted by years of OO programming ;o)

One should read the book fully to get the benefit – some of the concepts become very clear only in the later chapters, an artifact of the language-as-a-system characteristic of Erlang. There is another Erlang book coming from O’Reilly, and I am looking forward to seeing how the authors handle this in that book. Maybe I will write a review.

Halfway through the book, I found it helpful to read through the Erlang language reference – just to get a feel for the full syntax. Erlang is a very small language – that is the beauty of it. Another must-read is the History of Erlang paper [6] as well as the presentations [7]. The history of Erlang is fascinating and a good read before or while reading the book.

A few observations from my reading of the book:

  1. Joe explains the bit syntax very well. It would have helped me with a time sync protocol implementation (IEEE 1588v2) and its 48-byte manipulations
  2. The database is called Mnesia – interesting that it was originally called Amnesia – a strange name for a database!
  3. No surprise – the chapters on concurrency are the strongest
  4. One area it doesn’t cover is cloud computing, in which Erlang is very strong – for example, Amazon’s SimpleDB is supposed to be written in Erlang, and the new cloud infrastructure project Vertebra is in Erlang
  5. The chapter on the ets/dets key-value stores is timely, as they are very common in internet applications
  6. Finally, I want to mention two reviews of this book on the O’Reilly site [4].
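To give a flavor of the bit-syntax point (item 1) without the book at hand: Erlang destructures packed binary protocol fields declaratively with a single pattern like `<<Type:16, Len:16, Seq:32>>`; Python’s `struct` does the same job far less elegantly. The field layout below is invented for illustration and is not the IEEE 1588 wire format:

```python
import struct

# Unpack an invented 8-byte header: a 2-byte message type, a 2-byte length,
# and a 4-byte sequence id, all big-endian -- the kind of field carving
# Erlang's bit syntax does in one pattern match.
def parse_header(raw):
    msg_type, length, seq = struct.unpack(">HHI", raw[:8])
    return {"type": msg_type, "length": length, "seq": seq}

packet = struct.pack(">HHI", 0x0001, 44, 123456)
print(parse_header(packet))  # {'type': 1, 'length': 44, 'seq': 123456}
```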


Overall, an exceptionally good book on a very relevant language/system. I would suggest pre-reading a couple of papers – the Erlang syntax reference as well as the history of Erlang – before reading this book. Then lots of things will make sense, and the reader will come away with a clear understanding.
Looking forward to Erlang Programming by Francesco Cesarini and Simon Thompson [5] and how they treat the introductions. Maybe I will follow up with a review!

One interesting artifact of the immutability of data in Erlang is the handling of traditional data structures like linked lists. A companion book that would be of interest to readers of this book is “Purely Functional Data Structures.” In addition to Erlang, the other FP languages under consideration include Scala, F# and Clojure. In fact, I am looking forward to reviewing the book “Programming Scala: Tackle Multi-Core Complexity on the Java Virtual Machine,” which follows the concurrency paradigm covered in the Erlang book. Scala is a mixture of OO and functional programming and has many of the Erlang patterns, like gen_server. Clojure might be of interest to readers because of its Lisp origins.

There are concerns that functional programming – pure or part of an OO model – itself might not solve the massive concurrency problem; more than 16 cores might hit the memory wall [3]! Let us not worry about that now; most probably it is a topic for yet another article!

It is apt to quote the epilogue, which says it all: “… large monolithic programs are becoming dinosaurs … the future has multiple cores and the future has distributed processors …”


Think Way Outside the box – Microsoft’s Azure into cloud computing

Finally, Microsoft has formally entered the cloud business with Windows Azure. These are my notes from MS PDC 2008.

Executive summary:

Windows Azure is not an OS but an offering. It consists of:

  • A hosting environment to deploy your services (you define the rules & provide your code, the platform will take care of the rest!) for a spectrum of users – from hobbyists to enterprise developers
  • Automated service management (abstracts hardware, load balancing and a host of other similar functions, based on the service model you create, which has things like service topology, size, health constraints and so on)
  • Scalable storage
  • A rich developer experience (This is where Microsoft has leverage – Azure fits seamlessly into their development environment. You can write the usual code, test it in their cloud simulation environment, debug the code and then deploy it to the cloud. So current development skills are fully transferable! The deployment is so easy “even a CEO or a VP can do it!”)
  • Windows Azure as a services layer with .NET services (service bus, access control and workflow services)
  • They also offer SQL Services, SharePoint services and CRM on this platform

Their perspective of a cloud is very simple – “a set of connected servers; on which install and run services; and store and retrieve data” – and their offerings reflect that view of the world.

Their view of the business requirements on a cloud infrastructure is:

  • Interoperability and Business Processes
  • Identity and Security
  • Data management & Compliance
  • Services Management

Some quick thoughts

  • In terms of impact, Bob Muglia compares this year to PDC ’92, when Microsoft announced Windows NT
  • Microsoft characterizes this as 5th generation of computing
    • Monolithic (70s), client-server (80s), web (90s), SOA (now), services (2009+)
  • They are not embracing the term Cloud, but are calling it services exposed via web protocols!
  • Their motto is software + services -> The power of choice
  • They see Azure as helping to evolve existing paradigms to work with hybrid architectures
  • Also, suddenly there are fabrics everywhere – the Azure service management fabric, the development fabric, … <KS>Neiman Marcus, anyone?</KS>
  • Not all features are currently available; they are “exposing functionality in a staged manner”
  • Introducing the cloud, they show a little arrogance, and one would come away with the feeling that Microsoft had been at it for years! Ray did tip his hat to Amazon and Jeff, though.
  • From a business perspective, Microsoft is formally in the infrastructure business

Gory Details:

Day 1

Ray Ozzie introduced the concepts and Azure, followed by Amitabh Srivastava, Bob Muglia and David Thompson detailing different parts of the offering.

  • Current applications have enterprise as the scope and with cloud the scope has expanded – Cloud is the externalization of IT
  • The separate roles of software developer and operations are intertwined with the cloud computing paradigm
  • While some companies have the resources required for the operation discipline to run a global infrastructure,many find it a disproportionate burden
  • Some challenges to be solved by cloud computing
    • Meeting customer expectation of interactive, participatory web systems
    • Operating across peaks and valleys
    • Continuity Issues
    • Loosely coupled architecture, data replication strategies, data partition strategies
    • Ray calls clouds “overdraft protection for your web site”
  • This “high scale internet infrastructure” is a new tier
    • Desktop Tier – scope of a single machine
    • Enterprise Systems – scope of the enterprise
    • Web Tier – scope of the web
  • Windows Azure is not an OS but the kernel of a cloud platform. “Kernels do not demo very well,” so they showed demos of a few apps
  • Business View of Azure
    • Scalable infrastructure
    • Ability to manage a large global datacenter infrastructure
    • Federated DataCenter (This is one of their key themes)
    • Automated Service Management
    • Applications and OS managed separately
    • Fabric Controller manages lifecycle
  • The service model is one of the key concepts
    • That is how one defines a service declaratively. The model includes roles, channels, adaptors, interfaces, configuration settings et al
    • The service model is an XML file
  • The service bus is another important piece because it securely and seamlessly connects on-prem and cloud, even through firewalls
  • They have a good identity substrate with connectors and gateways to bridge between on-prem and cloud. <KS>This, I thought, was a good value proposition; seamless identity across enterprise and cloud is difficult. I also saw that they now support OpenID. Good move</KS>
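Based on what was presented, the declarative service model (an XML file covering topology, roles, channels and health constraints) might look something like the sketch below. The element and attribute names here are my guesses for illustration, not Microsoft’s actual schema:

```xml
<!-- Illustrative only: a declarative service model the fabric controller
     could use to manage topology, sizing and health. -->
<ServiceModel name="orderService">
  <Roles>
    <WebRole name="frontend" instances="4" size="small" />
    <WorkerRole name="billing" instances="2" size="medium" />
  </Roles>
  <Channels>
    <Channel from="frontend" to="billing" kind="queue" />
  </Channels>
  <HealthConstraints maxUnhealthyInstances="1" />
  <ConfigurationSettings>
    <Setting name="storageAccount" value="..." />
  </ConfigurationSettings>
</ServiceModel>
```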

Odds & Ends

  • An interesting URL!
  • Azure white paper
  • Yep, I know – Azure is not a verb. Maybe it should be ;o)

Cloud Computing, Grids and Paczkis – Part Deux

<epilogue  – the end as the beginning or conclusions first>

  • Both the papers are well written and I thank the authors for the details as well as the e-mails. It is worth the cloud community’s time to deliberate and debate the concepts.
  • IMHO, grids and clouds are conceptually distinct beasts; but if one squints long enough, or abstracts up to the stratosphere, they might look the same. Let us call them Paczkis (pronounced “punchkeys”) on Paczki day, Grids on grid days and Clouds the rest of the days ;o) And as a cloud expert once said, “that which we call a cloud, by any other name would smell as sweet.”
  • Whether the grid domain is successful or not, I leave to knowledgeable folks like Ian to decide. Assuming grids have a large accidental complexity (which acts as barrier to entry), it is reasonable to conclude that, may be, clouds can simplify them – of course, now all grids will morph into cloud infrastructures, anyway ;o) And that is good – the cloud community also gets the experts and their experiences and thinking.

Parting question: Is the Hadoop infrastructure a cloud or a grid? By the same extension, would Google Apps be grids rather than clouds? ;o)

<The gory details>
Now that we have covered the basics in the prologue, let us double-click a couple of times. Jha, Merzky and Fox, in their paper “Using Clouds to Provide Grids Higher-Levels of Abstraction and Explicit Support for Usage Modes,” talk about the challenges with grids and address complexity, interoperability and deployment support. They have done a good job of explaining a lot of the concepts.

I do disagree with a few of their statements, viz. “in some ways Clouds are the evolution of Grids” and “Cloud systems (or just Clouds) are, in some respects, narrow Grids,” and agree with one statement, “Cloud computing is a catch-all term for better contextualization, virtualization and most importantly simplicity of use,” which they cite from the “Future of the TeraGrid” papers. Their definition of virtualization is a little weak, IMHO. But they understand the canonical cloud: “A distinguishing feature of Cloud interfaces are that instead of exposing the largest possible amount of semantics for a specific domain, Clouds tend to expose a minimal amount of semantics to the end-user, while still being useful” – excellente! While various cloud infrastructure offerings do exhibit the characteristics of “affinity” as described by the authors, I am not sure “affinity” is a cloud artifact – layers of “affinity,” maybe. All three of their observations are correct, but I can’t agree with the semantic ordering (Fig. 2).
But the fundamental premise that grids can be saved by a cloud wrapper is questionable. I think this could be a hog in a tuxedo (I will leave the dissertation on pigs and makeup to politicians!). Remember, there is domain complexity and there is accidental complexity. I think the grid community has done a good job on the accidental complexity with various frameworks like Globus (even I had the occasional contribution to the grid world). I think this is what Ian Foster is saying in his blog. And there are certain classes of problems addressed by grids, and per Ian’s blog they are doing well. No need for a cloud-based tuxedo for grids, thank you ;o)

Maybe over-reliance on WS-* rather than a REST architecture is the cause; maybe the breadth and depth of the interfaces are complex; maybe … this discussion I leave alone … it is for folks like Ian to debate and conclude.

Another interesting paper, “Grids Challenged by a Web 2.0 and Multicore Sandwich” by Fox and Pierce, explains the cloud as a “broad grid” (as opposed to the “narrow grid” of Jha et al). They believe that grid “infrastructure is ripe for significant revisions.” I tend to agree with most of their observations, viz. “the problem of the next generation of computing will be an abundance (rather than a scarcity) of computing power for many problems” and “these clouds address ‘commodity usage’ rather than the high performance simulations and data transport that are characteristic of Grids,” and with their insightful discussions on the trajectory and locus of massively multi-core systems (BTW, I didn’t see two of my favorite topics, Erlang and functional programming, mentioned anywhere – CSP is mentioned, in passing). You see, grids came from the world of high performance and parallel computing, which is more related to the multi-core paradigms than to virtualization and clouds.

On the notion of grids vs. clouds, my insights are in my previous blog Cloud Computing and Grids.

</The gory details>

<prologue – the beginning as the end>
The question “Is a cloud a grid?” has been discussed in detail in the cloud computing forums. The current crop of discussions swirls around Ian’s blog on the paper by Jha, Merzky and Fox. Geoffrey pointed out another paper by Fox and Pierce in the same zip code. I did read both of them in some detail (I need to think more, as the papers have good depth), and here are some quick thoughts:
I agree that the discussion is somewhat orthogonal. Whether you call them grids or clouds or paczkis, providers will provide what makes sense for them, and so is the case with cloud consumers. But, as I said in one of my earlier blogs, our view of a domain has influence on our solutions – form follows function. So this is not an academic exercise but has pragmatic relevance. And for a domain to grow, we need systemic, disciplined definitions, interfaces and programming models.


[Geoffrey07] Grids Challenged by a Web 2.0 and Multicore Sandwich

[IanBlog] A critique of “Using Clouds to Provide Grids…”

[Jha08] Using Clouds to Provide Grids…

[KrishnaBlog] Cloud Computing and Grids

[Paczkis] Pronounced “punchkeys”