An excursion into ranking the NBA with Elo


P.S: A copy of my blog on LinkedIn

Ranking and odds making are among the oldest professions, probably dating to around AD 69 – the Romans were already applying inference to predict the outcomes of gladiatorial shows! Fast forward: the recent NBA Finals have become more interesting (from a Data Science perspective, of course) after the Cavaliers’ win!

Update (for those who were here before): The Game 4 win by GSW (see the Update section at the end) shows how Elo adjusts for a larger margin of victory without oscillation!

One interesting algorithm is the Elo rating system, which has seen application in chess, computer games, the NFL, the NBA and Facebook! In the movie The Social Network, Eduardo Saverin writes the Elo formula on the glass, responding to Mark Zuckerberg’s call for the algorithm – the picture says it all!

An Introduction to Elo:

Leaving Eduardo and Mark Zuckerberg aside for a moment and moving on to the world of LeBron James and Steph Curry: Elo ranks teams or individuals in chess, basketball, computer games et al. The rating goes up or down as one wins or loses.

If a team is expected to win and it wins, its Elo rating goes up by a small amount; the gains are larger when a lower-ranked team beats a stronger one, with adjustments made for the margin of victory.

After every season, the rating reverts toward a norm of 1505 (for basketball). But because basketball teams are stable year to year, the folks at FiveThirtyEight use a 75% carry-over with a 25% reversion to the norm – we won’t deal with this now, but I did check it in my R program.
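The original post showed the Elo formula as an image, which doesn’t reproduce here. Below is a minimal sketch in R of a FiveThirtyEight-style NBA Elo update; the constants (K = 20, a 100-point home-court bonus, ~28 Elo points per point of spread, and the margin-of-victory multiplier) follow 538’s published methodology, but treat the code as an illustration rather than their exact implementation.

```r
# Expected win probability from an Elo difference (diff includes any home bonus)
elo_expected <- function(diff) 1 / (1 + 10 ^ (-diff / 400))

# One game's rating update. home_bonus: +100 if the winner was at home,
# -100 if the winner was on the road, 0 for a neutral court.
elo_update <- function(elo_w, elo_l, mov, home_bonus = 0, k = 20) {
  diff  <- elo_w - elo_l + home_bonus
  # Margin-of-victory multiplier: rewards big wins, but damps blowouts by
  # heavy favorites so the ratings don't oscillate
  mult  <- ((mov + 3) ^ 0.8) / (7.5 + 0.006 * diff)
  delta <- k * mult * (1 - elo_expected(diff))
  c(winner = elo_w + delta, loser = elo_l - delta)
}

# Predicted point spread: ~28 Elo points per point, +100 for home court
point_spread <- function(elo_home, elo_away) (elo_home - elo_away + 100) / 28

# Season carry-over: 75% of last season's rating, 25% reversion to the 1505 norm
season_reset <- function(elo) 0.75 * elo + 0.25 * 1505

point_spread(1802, 1712)                           # Game 1: GSW by ~6.79
elo_update(1802, 1712, mov = 8, home_bonus = 100)  # -> ~1806 / ~1708
```

The two sample calls reproduce the Game 1 numbers in the game-by-game trace below, which is a decent sanity check on the constants.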

Back to the main feature … NBA

The current NBA Finals are a dream series for Elo – you can watch the thrills and chills of Elo in action: a good matchup in which the seemingly stronger team wins the 1st game as expected; and boom, Games 2 & 3 are won by the (not so) weaker team!

You can see (below) Elo at its best, viz. capturing the transition and giving credit (and a higher rating) where it is due.

I had done Elo for the NFL, and wasn’t going to try the NBA after Game 1, but now it looks like a good exercise in data algorithmics …

Fortunately, Nate Silver & his team have curated the basketball data from 1946 onward and explained their methodology. Thanks, guys.

I downloaded the data and did some R programming.

An ugly graph plotting Elo rating for the 2015 season for GSW (black) & CLE (blue).
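The graph itself doesn’t reproduce here, but a base-R sketch of how to draw it follows, assuming FiveThirtyEight’s curated nbaallelo.csv with columns year_id, team_id and the post-game rating elo_n (verify the column names against your download):

```r
# Plot the 2015-season Elo trajectories for GSW (black) and CLE (blue)
elo <- read.csv("nbaallelo.csv", stringsAsFactors = FALSE)

gsw <- subset(elo, team_id == "GSW" & year_id == 2015)
cle <- subset(elo, team_id == "CLE" & year_id == 2015)

plot(gsw$elo_n, type = "l", col = "black",
     ylim = range(c(gsw$elo_n, cle$elo_n)),
     xlab = "Game number (2015 season)", ylab = "Elo rating")
lines(cle$elo_n, col = "blue")
legend("topleft", legend = c("GSW", "CLE"),
       col = c("black", "blue"), lty = 1)
```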

We can definitely see that GSW is the stronger team, but CLE (the Cavaliers) has been getting stronger lately – especially as it beats stronger teams.

Let us trace the stats summary, i.e. the Elo ratings of the teams, the point-spread predictions, the actual scores and the response from the algorithm …

Stuff that brings tears to the eyes of a Data Scientist !

  • Going into Game 1, Elo said – GSW : 1802; CLE : 1712; Point spread : GSW by 6.78 points. Actual – GSW by 8 points
  • Nothing fancy; GSW’s Elo rating goes up a little, CLE’s goes down a little
  • Going into Game 2, Elo said – GSW : 1806; CLE : 1708; Point spread : GSW by 7.07 points. Actual – CLE by 2 points
  • Now Elo kicks in! CLE gains more Elo (because they beat a stronger team); GSW loses more
  • Going into Game 3, Elo said – GSW : 1798; CLE : 1716; Point spread : GSW by 2.92 points. Actual – CLE by 5 points
  • GSW’s Elo goes down; CLE’s future brightens; GSW still holds a slim lead
  • Going into Game 4, Elo says – GSW : 1791; CLE : 1723; Point spread : GSW by 2.3 points! <- We are here (June 10, 2015)

I will update with more Elo stats after Games 4, 5, 6 & 7 … (I am sure it has the possibility to go to 7!)

6/11/15 : See Updates below

Incidentally Nate Silver’s tweets have an unintended consequence ! They are motivating Steph ! I am hoping this is the beginning of GSW’s path to a title …

Updates:

  • [Update 6/11/15 10:31 PM] Actual : GSW by 21 points!
  • Nate’s tweets worked!
  • It is instructive to see the Elo graph. Even though the margin of victory (21 points) is much larger (than the 2- and 5-point margins of the earlier games), the Elo rating doesn’t jump by a huge amount. This is good: we don’t want Elo to oscillate, but it should still account for a larger-than-normal margin. The margin-of-victory multiplier adjusts for that. Interesting to see in the graph below; as Nate puts it, in one game GSW regained its old position.
  • Going into Game 5, Elo says – GSW : 1810; CLE : 1704; Point spread : GSW by 7.35 points (with home-court advantage – refer to the formula above for details)! <- Now we are here (June 11, 2015)
  • [Update 6/14/15] Game 5 : GSW by 13!
  • Going into Game 6, Elo says – GSW : 1814; CLE : 1701; Point spread : GSW by 4.04 points (without home-court advantage – refer to the formula above for details)! <- Now we are here (June 14, 2015)

Reference:

  1. http://blogs.mercurynews.com/kawakami/2015/06/09/steph-curry-awakens-in-game-3-its-too-late-its-not-enough-but-its-exactly-what-the-warriors-need-the-rest-of-the-nba-finals/
  2. https://doubleclix.wordpress.com/2015/01/20/the-art-of-nfl-ranking-the-elo-algorithm-and-fivethirtyeight/

Of Byzantine Failures, Unintended Consequences & Architecture Heuristics


P.S: A copy of my blog on LinkedIn

Way …. back in 2007, I gave a talk on Architecture Heuristics – we talked about Byzantine failures, systems with strong bones and the politics of systems architectures.

One would think that all this is way behind us! Apparently not so! There is a software bug in the 787’s Generator Control Units (GCUs)! The root cause – yep, you guessed it – integer overflow!

The plane’s electrical generators fall into a failsafe mode if kept continuously powered on for 248 days. The 787 has four such main generator-control units that, if powered on at the same time, could fail simultaneously and cause a complete electrical shutdown.
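The 248-day figure is consistent with a signed 32-bit counter ticking in hundredths of a second – a quick back-of-the-envelope check (the centisecond tick is my assumption; the directive only reports the 248 days):

```r
# A signed 32-bit counter incremented every 10 ms (assumed tick rate)
# overflows after .Machine$integer.max = 2^31 - 1 ticks
ticks_per_day <- 100 * 60 * 60 * 24      # 100 ticks per second
.Machine$integer.max / ticks_per_day     # ~248.55 days
```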

And a self-parking car hits pedestrians because …

Keeping the car safe is included as a standard feature, but keeping pedestrians safe isn’t. …

Interesting … whatever happened to the prime directive? And pedestrian recognition – an option in self-parking cars? What next? The steering wheel as an option?

And we keep on building machines that are software-intensive! The Ford GT has more code than a 787!

Back to Architecture Heuristics …

  1. Select technologies that you can dance with & Be flexible in scaling as you grow
  2. Embrace Failure & Influence Scalability
  3. Build systems with good bones (my slides from 2007 still look relevant!)
  4. Solve the right problems
  5. While we build complex AI systems, remember that our ingenuity is hard to beat – even by the smart machines that we build !
  6. And, those who don’t learn from history should read these recommendations – they are still valid!
  7. … Of course, pay that extra $3,000 and buy the Pedestrian Detection option – you might drive the car in this world (where we humans reside – at least for now), not on Mars!

Take Care of the Ball, Value every Possession & Protect the Rim


P.S: A copy of my blog on LinkedIn

Was watching the NBA Western Conference Finals; the Warriors team, Coach Kerr & Stephen Curry are all an inspiration, not only for sports but also for the startup world.

I picked up a few insightful quotations from the post-game conference … will let you fill in the inferences & lessons, to keep this blog short …

Agility & Nimbleness : What I like most about the Warriors is the way they morph & rise to the occasion. They find ways to reorganize & adapt against different teams … time will tell how they will do against LeBron and the Cleveland gang … but for now, they are very effective …

[Update 5/24/15] Interestingly, today Tim Kawakami expressed the same sentiment in his blog at San Jose Mercury News !

“Take Care of the ball, value every possession & protect the rim” – Steve Kerr. Lot of truth in this statement … for life and business …

Steve Kerr on Harden : “He sees every angle and we try to close as many of them as we can …”. That is all we need to do in business to get ahead. The talented will make the shots, under any circumstances, like Kobe says … (er, tweets)

So be comfortable in taking those difficult shots !

Lesson for the Rockets : “Don’t play around the edges, play in the paint” echoed by Kevin McHale “Win the paint & win the board” … So true in sports and in startup business …

Curry Flurry : “Stephen Curry is very patient & will let the offense come to him! Then he starts!” – In Game 3 he had 40 points but only one in the 1st quarter! Once the offense came to him, he executed flawlessly, with his characteristic “confidence & smoothness of the shots” …

In short, “Steph”, Kerr said “was Steph” !

BTW, don’t count the Houston Rockets out yet! Against all odds, they won against the LA Clippers; Harden & Kevin McHale have a way with adversity …

And on another note, I need to update my NFL/Elo blog to apply Elo to basketball …

Reference for material & pictures:

  1. http://www.nytimes.com/2015/05/24/sports/basketball/stephen-curry-with-a-little-help-tramples-the-rockets.html
  2. http://scores.espn.go.com/nba/recap?gameId=400796357
  3. http://scores.espn.go.com/blog/statsinfo/post/_/id/105884/harden-swoons-curry-soars-in-game-3
  4. http://uproxx.com/dimemag/2015/05/kobe-bryant-stephen-curry-defense-twitter/
  5. http://diys.didiroesmana.com/trends/golden-state-warriors
  6. http://blogs.mercurynews.com/kawakami/2015/05/23/when-the-warriors-figure-out-an-opponent-this-happens-a-game-3-houston-obliteration-by-the-playoff-monster/

Data Science is the new Electronics


P.S: This is a copy of my blog on LinkedIn.


A good friend of mine asked me “What exactly is this Data Science”?

That got me thinking – we have tons of blogs on “Who or What is a Data Scientist” including mine.

One can explain the intuition behind Data Science, the pragmas of the profession, but not the essence !

Then I remembered an engineer on a flight to Tokyo; he was in 61G, I was in 61H. It was years ago, probably more years back than many (or most) readers would remember. I asked him what he was doing, and his answer was “Helping companies to embed electronics in their products!”. I remember when autos had no electrical circuits except for the lights. Then came ignition electronics, engine electronics and now powerful computers that control almost all functions – except, of course, rolling, where we still need old-fashioned wheels & tires!

We are at that stage with Data Science, where the three amigos of Data Science (Intelligence, Inference and Interface) can be embedded in enterprise systems, increasing their capabilities far beyond the current ones!

We can really build adaptive systems … not descriptive, not reactive, but truly adaptive – systems with malleable intelligence instead of brittle Newtonian rules!

As Sonny Elliot would say – Exactically!

Exactically similar to electronics some years ago! Now is the time to think of Data Science as embeddable modules, with Intelligence/Inference at the systems level and interesting Interfaces for the users …

And that, probably, is the mission of Data Scientists …

If they choose to accept … This blog could self-destruct in 5 seconds …5…4…3…2

Data Science with Spark on the Databricks Cloud – Training at SparkSummit (East)


We had a good Data Science training session at the Sheraton, Times Square, NY, on the second day of Spark Summit (East). It was my privilege to co-author and lead the Data Science track, along with Reza, Paco, Andy, Hossein, TD, Joseph and Xiangrui. I have shared the slide set on SlideShare as well as at the Databricks site.

[Update 4/12/15] : The video is posted on YouTube (5 hrs!)

This was the second time I was involved with a training fully based on the Databricks Cloud, and it worked out very well! The Databricks Cloud was very robust and resilient. Unfortunately we had problems with the wireless at the Sheraton Hotel!
The training was a mixture of hands-on and lecture. We started out with a dataset of 30 records, then moved on to the Titanic dataset (900), the MovieLens medium dataset (1,000,000) and finally the RecSys Challenge dataset (33,000,000!). What a progression in a day!

You can see the details in the slides. Ping me if you have any questions.

Data wrangling over the RecSys Challenge 2015 data captures the essence of the Databricks Cloud. I will quickly cover the RecSys Challenge dataset as an illustration.

The training data consists of 33,003,944 clicks and 1,150,753 buys. Our mission, should we choose to accept it, is to predict the session-items bought from a test dataset of 8,251,791 clicks.

A quick data exploration workflow (the notebook screenshots are in the slides):
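Since the screenshots don’t reproduce here, a plain-R sketch of that kind of first-pass exploration follows (the training itself ran on Spark in Databricks notebooks; the file and column names below are from the public RecSys Challenge 2015 release, so verify them against your copy):

```r
# Load the ~33M click events: sessionID, timestamp, itemID, category
clicks <- read.csv("yoochoose-clicks.dat", header = FALSE,
                   col.names = c("sessionID", "timestamp", "itemID", "category"),
                   stringsAsFactors = FALSE)

nrow(clicks)                          # ~33,003,944 click events
length(unique(clicks$sessionID))      # distinct sessions

# Clicks per session -- a natural first feature for predicting buys
clicks_per_session <- table(clicks$sessionID)
summary(as.numeric(clicks_per_session))
```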

All at scale, in an elastic cloud, seamlessly moving between dev, model, stage and prod ! The magic of Databricks Cloud !

BTW, we also explored the State of the Union speeches from Washington, Lincoln, FDR, Clinton, Bush & Obama. The graphs below show a succinct view of the mood of the nation in each period …

[Graphs: the mood of the nation across the State of the Union speeches]
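As a stand-in for the missing graphs, here is a hedged sketch of the idea in R with the syuzhet package (an assumption – the class did this on the Databricks Cloud, not with this package; the file name is a placeholder):

```r
# Rough "mood arc" of one State of the Union speech
library(syuzhet)

speech    <- readLines("sotu_speech.txt")            # placeholder file name
sentiment <- get_sentiment(speech, method = "bing")  # per-line sentiment score

plot(sentiment, type = "l",
     xlab = "Position in speech", ylab = "Sentiment score",
     main = "Mood arc of a State of the Union speech")
```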

And finally, 100 slides later …!


The Best Of the Worst – AntiPatterns & Antidotes in Big Data


Recently I was a panelist at Big Data Tech Con, discussing the best of the worst practices in Big Data. We had an interesting and lively discussion, with pragmatic, probing questions from the audience. I will post the video if they publish it.

Here are my top 5 (actually a baker’s 5 ;o)) anti-patterns – and, of course, the antidotes …

1. Data Swamp

Typical case of “ungoverned data stores addressing a limited data science audience“.

The scene reads like so :

The company has proudly crossed the chasm to the big data world with a shiny new Hadoop infrastructure. Now everyone starts putting their data into this “lake”. After a few months, the disks are full; Hadoop is replicating 3 copies; some bytes are even falling off the wires onto the floor – but no one has any clue what data is in there, how consistent it is, or whether it is semantically coherent.

Larry in IT starts the data import process every week, sometime on Friday night – they don’t want to load the DW with a Hadoop feed during customer hours. Sometimes Larry forgets, so you have no clue whether the customer data is updated every week or every 14 days; there could be duplicate or missing record sets …

Gartner has an interesting piece about data lakes turning into data swamps – The Data Lake Fallacy: All Water and Little Substance … a must-read …


The antidote is Data Curation. It needn’t be very process-heavy – have a consistent schema & publish it on a wiki page. Of course, if you are part of a big organization (say retail) and have petabytes of data, that naturally calls for a streamlined process.

Data quality, data lineage, “descriptive metadata and an underlying mechanism to maintain it”, all are part of the data curation layer …

Semantic consistency across diverse, multi-structured, multi-temporal transactions requires a level of data curation and discipline.



2. Technology Stampede & Vendor Pollution

  • You start with a few machines, install Apache Hadoop and start loading data. Marketing folks catch up on your big data project and now you have a few apps running. A few projects come onboard and you have a decent-sized cluster
  • A famous cloudy Hadoop vendor approaches the management and before you know it, the company has an enterprise license and you are installing the vendor’s Hadoop Manager & MPP database named after a GM car.
  • Naturally the management expects the organization to morph into a data-driven, buzz-compliant organization with this transition. And, naturally, a Hadoop infrastructure alone is not going to suddenly turn the organization into a big data analytics poster child … vendor assertions aside …
  • And behold, the Pivotal guys approach another management layer and inevitably an enterprise license deal follows … their engineers come in and revamp the architecture, data formats, flows,…
    • Now you have Apache, the cloudy Hadoop vendor and this Pivotal thing – all Hadoop, all MPP databases, but of course, subtle differences & proprietary formats prevent you from doing anything substantial across the products …
      • In fact, even though their offices are a few miles apart on US-101, their products look like they were developed by people on different planets!
  • While all is going on, the web store front analytics folks have installed Cassandra & are doing interesting transformations in the NOSQL layer …
  • Of course the brick and mortar side of the business use HBase; now there is no easy way to combine inferences from the web & store-walk-ins
  • One or two applications (may be more) are using MongoDB and they also draw data from the HDFS store
  • And, the recent water cooler conversations indicate that another analytics vendor from Silicon Valley is having top level discussions with the CIO (who by now is frustrated with all the vendors and technology layers) … You might as well order another set of machines and brace for the next vendor wave …

The antidote is called Architecture & a Roadmap (I mean real architecture & roadmap based on business requirements)

Understand that no vendor has the silver bullet and that all their pitches are at best 30% true … One will need products from different vendors, but choose a few (and reject the rest) wisely ! 



3. Big Data to Nowhere

Unfortunately this is a very common scenario – IT sees an opportunity and starts building the infrastructure, sometimes massive, and puts petabytes of data in the Big Data hub or lake or pool or … but with no relevant business-facing apps. A conversation goes like this …

Business : I heard that we have a big data infrastructure, cool. When can I show a demo to our customers ? 

IT : We have petabytes of data and I can show the Hadoop admin console. We even have the Spark UI !

Business : … (unprintable)


The Antidote is very simple. Build the full stack, i.e. bits to business … do the data management, i.e. collect-store-transform, as well as a few apps that span model-reason-predict-infer

Build incremental Decision Data Science & Product Data Science layers, as appropriate … for example the following conversation is a lot better …

Business : I heard that we have a big data infrastructure, cool. When can I show a demo to our customers ? 

IT : Actually we don’t have all the data. But from the transaction logs and customer data, we can infer that males between 34-36 buy a lot of stuff from us between 11:00 PM & 2:00 AM !

Business : That is interesting … Show me a graph. BTW, do you know what the revenue and the profit margin from these buys are ?

IT : Graph is no problem. We have a shiny app with the dynamic model over the web logs.

IT: With the data we have, we only know that they comprise ~30% of our volume by transaction. But we do not have the order data in our Hadoop yet. We can … let me send out a budget request …

Now, business got an app, not fully production ready, still useful. IT has proved the value of the platform and can ask for more $ !
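For concreteness, here is a sketch in R of the kind of aggregation behind IT’s answer; the file and column names (transactions.csv, gender, age, timestamp) are hypothetical, since the story doesn’t show a schema:

```r
# Hypothetical illustration of the "males 34-36, 11 PM - 2 AM" inference
logs <- read.csv("transactions.csv", stringsAsFactors = FALSE)
logs$hour <- as.integer(format(as.POSIXct(logs$timestamp), "%H"))

late_night <- subset(logs,
                     gender == "M" & age >= 34 & age <= 36 &
                     (hour >= 23 | hour < 2))

# Share of volume by transaction (~30% in the conversation above)
nrow(late_night) / nrow(logs)
```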



4. A Data too far

This is more of a technology challenge. When disparate systems start writing to a common store/file system, the resulting formats and elements would be different. You might get a few .gz files, a few .csv files and of course, parquet files. Some will have IDs, some names, some aggregated by week, some aggregated by day and others pure transactional. The challenge is that we have the data, but there is no easy way to combine them for interesting inferences …
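A small R sketch of that pain, with hypothetical files and columns – two feeds of “the same” sales data that agree on nothing until keys and grains are reconciled:

```r
library(arrow)   # for parquet support; an assumed tool choice

daily  <- read.csv("store_sales_daily.csv")          # store_id, day, sales
weekly <- read_parquet("web_sales_weekly.parquet")   # store_name, week, sales

# Neither the keys (IDs vs. names) nor the grain (day vs. week) line up,
# so we need a lookup table and a roll-up before any join
lookup <- read.csv("store_lookup.csv")               # store_id, store_name

daily$week <- format(as.Date(daily$day), "%Y-%U")
daily_wk   <- aggregate(sales ~ store_id + week, data = daily, sum)

combined <- merge(merge(daily_wk, lookup, by = "store_id"),
                  weekly, by = c("store_name", "week"),
                  suffixes = c("_store", "_web"))
```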



5. Technology Maze

This is the flip side of the data format challenge (#4 above). This one stems from the technology stampede and the resulting incompatible systems in which data is locked.

Antidote : Integrated workflows.

I came across an interesting article by Jay Kreps. A couple of quotes:

” .. being able to apply arbitrary amounts of computational power to this data using frameworks like Hadoop let us build really cool products. But adopting these systems was also painful. Each system had to be populated with data, and the value it could provide was really only possible if that data was fresh and accurate.”

“…The current state of this kind of data flow is a scary thing. It is often built out of csv file dumps, rsync, and duct tape, and it’s in no way ready to scale with the big distributed systems that are increasingly available for companies to adopt.”

“..The most creative things that happen with data are less about sophisticated algorithms and vast computation (though those are nice) than it is about putting together different pieces of data that were previously locked up in different silos.”

The solution : a pub-sub infrastructure (Kafka) + “surrounding code that made up this realtime data pipeline—code that handled the structure of data, copied data from system to system, monitored data flows and ensured the correctness and integrity of data that was delivered, or derived analytics on these streams”



6. Where is the Tofu ?

This is succinctly articulated by Xavier Amatriain when he talked about the Netflix Prize:

It is very simple to produce “reasonable” recommendations and extremely difficult to improve them to become “great”. But there is a huge difference in business value between reasonable and great …

The Antidote : The insights and the algorithms should be relevant and scalable … There is a huge gap between Model-Reason and Deploy …



What sayest thou? Please add comments about the anti-patterns that you have observed and lived through!



Updates:

[Jan 3, 15] Interesting blog in Forbes “Four Common Mistakes That Can Make For A Toxic Data Lake” says :

  1. Your big data strategy ends at the data lake

  2. The data in your data lake is abstruse

  3. The data in your data lake is entity-unaware

  4. The data in your data lake is not auditable



Human Ingenuity puts a Fiat 500C in front of VW HQ in Google Street View


Image

Interesting Story at Zagg Blog

Fiat’s workers spotted the Google Street View car – which captures panoramic views of locations around the world with a roof camera – driving past their offices. They quickly drove a Fiat 500C to Volkswagen’s head office in Södertälje, 45 minutes away, and parked it there – just in time for the Street View camera to snap the picture!

Now the Google Pegman shows the Fiat in front of VW HQ!

An ode to the Easter Eggs, Ecstasies & Agonies of a GoogleIO Ticket


Chronicles of my failed attempt at procuring a GoogleIO ticket … Google Wallet ate my GoogleIO 2013 ticket!

It was the night before GoogleIO … Excitement was in the air … Tweets were in order …

Image

The order of the day was to find all the Easter eggs on the page …

Image

I clicked and clicked and clicked … and got thru all the Easter eggs …

[A dozen screenshots of the Easter eggs]

And I slept …

It was early AM when I woke up … still 15 minutes before the GoogleIO ticket store opened …

Image

The wait was agonizing, but all for a good cause, so I thought …

I was there when the GoogleIO Ticket store opened …

Image

I was not disappointed when my first try failed after 6 minutes …

Image

And my optimism paid off when it eventually found me a precious little ticket …

Image

I reviewed the purchase … and gave it to Google Wallet … little did I know that …

Image

But the screen stayed there and the time ticked down ….

By now the verdict was clear – The Google Wallet is going to eat my lucky GoogleIO Ticket ….

And it did …

Image

And soon after the registration ended …. The cold hand of fate …

Image

Can I find a kind soul at Google to help me, or should I wait for GoogleIO 2014? …