How to get started with Big Data Analysis

Question:

I’ve been a long-time user of R and have recently started working with Python. Using a conventional RDBMS for data warehousing, and R/Python for number-crunching, I now feel the need to get my hands dirty with Big Data Analysis.

I’d like to know how to get started with Big Data crunching.
  • How to start simple with Map/Reduce and the use of Hadoop
  • How can I leverage my skills in R and Python to get started with Big Data analysis? Using the Python Disco project, for example.
  • Using the RHIPE package and finding toy datasets and problem areas
  • Finding the right information to let me decide whether I need to move from RDBMS-type databases to NoSQL

All in all, I’d like to know how to start small and gradually build up my skills and know-how in Big Data Analysis.

Thank you for your suggestions and recommendations.
I apologize for the generic nature of this query, but I’m looking to gain more perspective regarding this topic.

  • Harsh
Asked By: harshsinghal


Answers:

Using the Python Disco project, for example.

Good. Play with that.
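If a concrete starting point helps, below is a minimal word-count sketch in the style of Disco’s own tutorial. It assumes a Disco master is installed and running, and the input URL is a placeholder for your own text file.

    # Minimal Disco word count, modeled on the project's tutorial.
    # Assumes a running Disco cluster; the input URL is a placeholder.
    from disco.core import Job, result_iterator

    def map(line, params):
        # Emit (word, 1) for every word in the input line.
        for word in line.split():
            yield word, 1

    def reduce(iter, params):
        # Group the sorted (word, count) pairs and sum counts per word.
        from disco.util import kvgroup
        for word, counts in kvgroup(sorted(iter)):
            yield word, sum(counts)

    if __name__ == '__main__':
        job = Job().run(input=["http://example.com/some-text-file.txt"],
                        map=map, reduce=reduce)
        for word, count in result_iterator(job.wait(show=True)):
            print(word, count)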

Using the RHIPE package and finding toy datasets and problem areas.

Fine. Play with that, too.

Don’t sweat finding “big” datasets. Even small datasets present very interesting problems. Indeed, any dataset is a jumping-off point.

I once built a small star schema to analyze the $60M budget of an organization. The source data was in spreadsheets and essentially incomprehensible, so I unloaded it into a star schema and wrote several analytical programs in Python to produce simplified reports of the relevant numbers.
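To give a sense of the shape of such a schema, here is a minimal sketch using Python’s built-in sqlite3 module. The tables, columns, and figures are invented for illustration; they are not the actual budget schema described above.

    # A toy star schema in SQLite: dimension tables describe the
    # "who/what/when" of each fact; the fact table holds the measures.
    # All names and numbers here are invented for illustration.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE dim_department (dept_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE dim_period (period_id INTEGER PRIMARY KEY, year INT, quarter INT);
        CREATE TABLE fact_budget (
            dept_id INTEGER REFERENCES dim_department,
            period_id INTEGER REFERENCES dim_period,
            amount REAL
        );
    """)
    con.executemany("INSERT INTO dim_department VALUES (?, ?)",
                    [(1, "Engineering"), (2, "Operations")])
    con.executemany("INSERT INTO dim_period VALUES (?, ?, ?)",
                    [(1, 2010, 1), (2, 2010, 2)])
    con.executemany("INSERT INTO fact_budget VALUES (?, ?, ?)",
                    [(1, 1, 1.2e6), (1, 2, 1.5e6), (2, 1, 0.8e6)])

    # A typical "simplified report": total spend by department.
    for row in con.execute("""
            SELECT d.name, SUM(f.amount)
            FROM fact_budget f JOIN dim_department d USING (dept_id)
            GROUP BY d.name"""):
        print(row)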

Finding the right information to let me decide whether I need to move from RDBMS-type databases to NoSQL

This is easy.

First, get a book on data warehousing (Ralph Kimball’s The Data Warehouse Toolkit, for example).

Second, study the “Star Schema” carefully, particularly all the variants and special cases that Kimball explains in depth.

Third, realize the following: SQL is for Updates and Transactions.

When doing “analytical” processing (big or small), there’s almost no updating of any kind, so SQL (and the related normalization) doesn’t really matter much anymore.

Kimball’s point (and others’, too) is that most of your data warehouse is not in SQL; it’s in simple flat files. A data mart (for ad hoc, slice-and-dice analysis) may be in a relational database to permit easy, flexible processing with SQL.
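To make the flat-file point concrete, here is a sketch of read-only analysis over a flat file in plain Python, with no database involved at all. The file name and column layout are assumptions for illustration.

    # Read-only analysis over a flat file: no database, no updates.
    # The file name and columns (department, quarter, amount) are assumed.
    import csv
    from collections import defaultdict

    totals = defaultdict(float)
    with open("budget_extract.csv", newline="") as f:
        for row in csv.DictReader(f):
            totals[row["department"]] += float(row["amount"])

    for dept, amount in sorted(totals.items()):
        print(f"{dept:20s} {amount:12,.2f}")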

So the “decision” is trivial. If it’s transactional (“OLTP”), it must be in a relational or OO database. If it’s analytical (“OLAP”), it doesn’t require SQL except for slice-and-dice analytics; and even then, the DB is loaded from the official files as needed.
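And when you do want slice-and-dice SQL, loading the official files into a throwaway database on demand is straightforward. A sketch of that pattern, with the same assumed file layout as above:

    # "Load the DB from the official files as needed": build a throwaway
    # in-memory SQLite mart from a flat file for one analysis session.
    import csv
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE spend (department TEXT, quarter TEXT, amount REAL)")
    with open("budget_extract.csv", newline="") as f:
        rows = ((r["department"], r["quarter"], float(r["amount"]))
                for r in csv.DictReader(f))
        con.executemany("INSERT INTO spend VALUES (?, ?, ?)", rows)

    # Slice and dice freely; when the session is over, throw the DB away.
    for row in con.execute(
            "SELECT quarter, SUM(amount) FROM spend GROUP BY quarter"):
        print(row)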

Answered By: S.Lott

One thing you can consider is the DMelt (http://jwork.org/dmelt/) data analysis program. One notable feature is that it comes with hundreds of examples in the Python language, plus a few books about it. The reason I was using it is that it runs on Windows 10 (it uses the Java VM), and it has very good 2D/3D graphics that can be exported to vector-graphics formats.

Answered By: Elia