Adventures in Machine Learning

Posted by Anonymous on 26 Jul 2013 at 23:07

Lately I have been thinking about how to recommend movies to movie watchers, purchases to shoppers, artists to music lovers. In general, if you have a bunch of items and a bunch of users, how do you figure out which items to recommend to which users?

There are many solutions to this problem. One extreme solution is to ask a movie connoisseur to learn the movie watcher's tastes, habits, lifestyle, and more. Then the movie connoisseur carefully picks out a few movies he thinks the movie watcher would enjoy. While this solution can produce incredibly personalized results, it is very time-consuming and does not scale. The solution on the opposite end of the spectrum is collaborative filtering. In collaborative filtering, you take all the existing data on which movie watchers like...
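
To make the collaborative-filtering idea concrete, here is a minimal sketch of the user-based flavor on a toy ratings matrix. Everything in it -- the movie titles, the users, the ratings, and the helper names -- is invented for illustration; a real recommender would work from far more data and a more careful similarity measure.

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are movies, 0 means "not rated".
# All titles and numbers are made up for the example.
movies = ["Alien", "Amelie", "Brazil", "Casablanca", "Dune"]
ratings = np.array([
    [5, 0, 4, 0, 3],   # user 0
    [4, 0, 5, 1, 0],   # user 1
    [0, 5, 0, 4, 0],   # user 2
    [1, 4, 0, 5, 0],   # user 3
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity over the movies both users have rated."""
    both = (a > 0) & (b > 0)
    if not both.any():
        return 0.0
    a, b = a[both], b[both]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user_idx, ratings, top_n=2):
    """Score each unrated movie as a similarity-weighted average of other users' ratings."""
    target = ratings[user_idx]
    sims = np.array([cosine_similarity(target, other) for other in ratings])
    sims[user_idx] = 0.0  # do not let the user vote for themselves
    scores = {}
    for m in range(ratings.shape[1]):
        if target[m] > 0:
            continue  # already watched and rated
        rated_by = ratings[:, m] > 0
        weight = sims[rated_by].sum()
        if weight > 0:
            scores[m] = float(sims[rated_by] @ ratings[rated_by, m] / weight)
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_n]
    return [movies[m] for m in ranked]

print(recommend(0, ratings))  # unseen movies favored by users similar to user 0
```

The same weighting trick works for shoppers and purchases or listeners and artists; only the rows and columns of the matrix change.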

Everything about Everyone, in one random access table

Posted by Lawrence Sinclair on 18 Sep 2009 at 16:27

HBASE - Sumit Khanna pointed out this element of the data processing space. I like to think about it as enabling one big table, big enough for every fact about everyone who ever lived.

HBase is the Hadoop database. Use it when you need random, realtime read/write access to your Big Data.

HBase is an open-source, distributed, column-oriented store modeled after the Google paper, Bigtable: A Distributed Storage System for Structured Data, by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop. HBase's goal is the hosting of very large tables -- billions of rows by millions of columns -- atop clusters of commodity hardware.
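
As a small sketch of what that "random, realtime read/write access" looks like from client code, the snippet below uses the happybase Python client against a hypothetical facts table laid out one row per person, one column per fact. It assumes an HBase Thrift gateway is reachable on localhost and that the table and its cf column family already exist; every name in it is made up for the example.

```python
import happybase

# Connect through the HBase Thrift gateway (assumed to be running on localhost).
connection = happybase.Connection('localhost')

# 'facts' and its column family 'cf' are hypothetical; they could be created
# beforehand with: connection.create_table('facts', {'cf': dict()})
table = connection.table('facts')

# One row per person, one sparse column per fact -- the "everything about
# everyone" layout. Row keys and column qualifiers are just bytes.
table.put(b'person:ada_lovelace', {
    b'cf:born': b'1815',
    b'cf:occupation': b'mathematician',
})

# Random, real-time read of a single row by key.
row = table.row(b'person:ada_lovelace')
print(row[b'cf:occupation'])  # b'mathematician'
```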

Data Processing Performance Options

Posted by Lawrence Sinclair on 14 Sep 2009 at 04:19

Here are a few of my thoughts on technologies and approaches for achieving better data processing performance in the current technology landscape.

RDBMS
Using MySQL or another RDBMS, performance might be addressed with better indexing or by partitioning the data.
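
As a quick sketch of those two options, assuming the mysql-connector-python driver and a hypothetical events table (the credentials, column names, and partition boundaries are all placeholders):

```python
import mysql.connector

# Placeholder connection details for the example.
conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="analytics")
cur = conn.cursor()

# Better indexing: speed up lookups and joins on a frequently filtered column.
cur.execute("CREATE INDEX idx_events_user_id ON events (user_id)")

# Partitioning: split a large table by year so queries scan fewer rows.
# (MySQL requires the partitioning column to appear in every unique key.)
cur.execute("""
    ALTER TABLE events
    PARTITION BY RANGE (YEAR(event_date)) (
        PARTITION p2008 VALUES LESS THAN (2009),
        PARTITION p2009 VALUES LESS THAN (2010),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    )
""")

cur.close()
conn.close()
```
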
MAP-REDUCE NON-RELATIONAL SYSTEMS
A non-relational approach might be to use Hadoop or one of its distributions (such as Cloudera). This would allow processing to be distributed across anywhere from 3 local machines to a virtually unlimited number of machines (hundreds or more) in the cloud (such as Amazon EC2). But this is best suited for analytic and data processing tasks that can take several minutes or hours.
THE BEST OF BOTH WORLDS?!
Somewhere in between these two systems is HadoopDB by Daniel Abadi of Yale. It uses the Hadoop...

Facebook Social Data

Posted by Lawrence Sinclair on 01 Aug 2009 at 02:54

Socialscore is a dashboard for Facebook users' social networks. The concept is to present meaningful metrics, information, and search capabilities to enable Facebook users to understand their social networks and social influence. The idea (IMHO) was rather cool, but we ended up with under a hundred users and the site was never very successful from a business perspective. However, it does remain a good demonstration of some of our programming capabilities.

I present Socialscore in the following video:

Hadoop: free scalable data processing

Posted by Lawrence Sinclair on 10 Jan 2009 at 04:53

Hadoop -- If you're a startup and think you have a lot of data, then the cool solution to your data processing problems is to use this technology. Hadoop is an open source distributed system for reading and transforming ("map") then sorting and summarizing ("reduce") raw text data on an arbitrarily large network of cheap computers.
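
As a sketch of that map-then-reduce flow, here is the classic word count written in the Hadoop Streaming style in Python. The script name and the way it would be wired into a cluster are assumptions; locally the framework's sort phase can be emulated with a pipe through sort.

```python
#!/usr/bin/env python3
"""Word count in the Hadoop Streaming style.

The "map" step reads raw text and emits word<TAB>1 lines, Hadoop sorts them by
key, and the "reduce" step summarizes each key. To see the idea locally:

    cat input.txt | python3 wordcount.py map | sort | python3 wordcount.py reduce

On a cluster the same two commands would be passed to the hadoop-streaming jar
as the mapper and reducer (exact paths and flags depend on the installation).
"""
import sys

def mapper():
    # Transform: one (word, 1) pair per word of raw input text.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer():
    # Summarize: input arrives sorted by word, so counts accumulate per run of keys.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```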

In some specialized cases, Hadoop is becoming a competitor to commercial ETL ("Extract-Transform-Load") tools such as Informatica and SAS. Hadoop is free and far more scalable than the commercial alternatives. However, it is less flexible, less user-friendly, has no built-in reporting or analytic capabilities, and has no database-loading capabilities, leaving data in the same flat-file form in which it arrives. In the right applications,...