Impala – Use Incremental stats instead of Full Table stats

If you have a table that is partitioned on a column, then running COMPUTE STATS TABLE_NAME will execute on all partitions. Internally, COMPUTE STATS runs the NDV() function on each column to gather the numbers. Although NDV() works faster than a plain count(COLUMN), it still runs for every partition, which may be unnecessary when you are only working on/updating/modifying values … Read more
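
As a quick illustration of the idea in the excerpt, here is a minimal sketch (run via impala-shell; the table name and partition spec are hypothetical) contrasting a full recompute with the incremental variant, which only touches partitions that lack stats:

```bash
# Full recompute: runs NDV() for every column on every partition.
impala-shell -q "COMPUTE STATS sales;"

# Incremental: computes stats only for partitions that do not have them yet.
impala-shell -q "COMPUTE INCREMENTAL STATS sales;"

# Or target one freshly loaded partition explicitly (hypothetical spec).
impala-shell -q "COMPUTE INCREMENTAL STATS sales PARTITION (year=2017, month=6);"
```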

Impala – Optimise query when using to_utc_timestamp() function

From 40 minutes to just 4 minutes. Impala's to_utc_timestamp() function is used to convert a date/timestamp from a given timezone to UTC, but it is very slow. Even if a table holds relatively little data, you can easily notice its poor performance. I faced a similar issue and noticed that this function alone was taking around 40 minutes to complete … Read more
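
The teaser does not reveal the article's actual fix, so the sketch below is only an assumption, not necessarily the author's approach: since to_utc_timestamp() is evaluated once per row, converting just the distinct timestamps and joining the result back can cut the number of calls dramatically (the table events, column event_ts, and the timezone are hypothetical):

```bash
# Hedged sketch: convert only DISTINCT timestamps, then join back,
# instead of invoking to_utc_timestamp() on every row.
impala-shell -q "
WITH distinct_ts AS (
  SELECT event_ts,
         to_utc_timestamp(event_ts, 'America/New_York') AS utc_ts
  FROM (SELECT DISTINCT event_ts FROM events) t
)
SELECT e.*, d.utc_ts
FROM events e
JOIN distinct_ts d ON e.event_ts = d.event_ts;"
```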

Sqoop – Optimise Import

Importing data using Sqoop is one of the most time-consuming tasks in a BigData environment. Sqoop is a powerful yet simple tool to import data from different RDBMSs into HDFS. While importing data, the following 2 points should be considered with high priority to reduce time: Number of mappers: mappers provide parallelism while importing … Read more
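
As a rough illustration of the mapper tuning the excerpt mentions, here is a minimal sketch of an import with explicit parallelism settings (the JDBC URL, credentials, table, and split column are placeholders):

```bash
# --num-mappers sets the degree of parallelism (Sqoop's default is 4);
# --split-by names the column used to divide the work among the mappers.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table orders \
  --split-by order_id \
  --num-mappers 8 \
  --target-dir /data/sales/orders
```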

Sqoop – How to import data into HDFS

Sqoop is a tool to transfer data between Hadoop and relational databases. It uses MapReduce to import and export the data, which provides parallel operation as well as fault tolerance. The basic syntax to run Sqoop is: sqoop tool-name [Generic-Args] [Tool-arguments]. All generic arguments should precede any tool arguments. All generic Hadoop arguments are preceded by a … Read more
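
To make that ordering concrete, here is a minimal sketch of an import where a generic Hadoop argument (-D, passed with a single dash) comes before the tool-specific, double-dash arguments (the connection details, queue, and table name are placeholders):

```bash
# The generic Hadoop argument (-D ...) must appear before any tool arguments.
sqoop import \
  -D mapreduce.job.queuename=etl \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table customers \
  --target-dir /data/sales/customers
```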