Snowflake – Performance Tuning and Best Practices

Note: This article is a compilation of multiple performance tuning methodologies in Snowflake. Some text/images in the following article have been referenced from various interesting articles and books, details of which are captured under “References”. Introduction to Snowflake: Snowflake is a SaaS-based Data Warehouse platform built on AWS (and other cloud) infrastructure. One of the … Read more

Impala – Create Table AS Select * FROM Table – is SLOW

The query below seems like the simplest way to create a replica of a table, but that simplicity comes at a cost. The query will: NOT create the partitions that exist on TABLE_NAME_2, and run very slowly. Instead, we should follow this two-step approach:  CREATE TABLE TABLE_NAME LIKE TABLE_NAME_2;  -- … Read more
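
A minimal sketch of the two-step approach, with the second step assumed since the excerpt is truncated (table and partition column names are placeholders):

    -- Step 1: copy the table definition, including partition columns
    CREATE TABLE TABLE_NAME LIKE TABLE_NAME_2;

    -- Step 2: load the data; naming the partition column makes Impala
    -- write each row into its matching partition (dynamic partitioning)
    INSERT OVERWRITE TABLE_NAME PARTITION (part_col)
    SELECT * FROM TABLE_NAME_2;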

HDFS – Data Movement across clusters

You can move data between HDFS clusters using the distcp command. distcp uses 10 mappers by default to pull data from the source system. While doing a data movement I encountered a problem in which the copy was failing because of a checksum mismatch: whenever any block's checksum did not match, the complete data block was discarded. Checksum is … Read more
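
As a sketch (cluster addresses and paths are placeholders), -m raises the mapper count, and -skipcrccheck, which is only valid together with -update, skips the source/target checksum comparison when a mismatch is expected, e.g. between clusters with different checksum configurations:

    # copy with 20 mappers instead of the default
    hadoop distcp -m 20 hdfs://source-nn:8020/data/events hdfs://dest-nn:8020/data/events

    # skip the CRC comparison so differing checksums do not fail the copy
    hadoop distcp -update -skipcrccheck hdfs://source-nn:8020/data/events hdfs://dest-nn:8020/data/events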

Impala – Use Incremental stats instead of Full Table stats

If you have a table which is partitioned on a column, then COMPUTE STATS TABLE_NAME will execute on all partitions. Internally, compute stats runs the NDV function on each column to get the numbers. Although the NDV function works faster than count(COLUMN), it will still run for each partition, which may be irrelevant when you are working on/updating/modifying values … Read more
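
A minimal sketch of the incremental variant (table and partition names are placeholders); it computes stats only for partitions that do not already have them, or for one explicitly named partition:

    -- compute stats only for partitions that lack them
    COMPUTE INCREMENTAL STATS my_table;

    -- or restrict the work to a single partition
    COMPUTE INCREMENTAL STATS my_table PARTITION (load_date = '2020-01-01');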

Impala – Optimise query when using to_utc_timestamp() function

From 40 minutes to just 4 minutes. The Impala to_utc_timestamp() function is used to convert a date/timestamp from a given timezone to UTC, but it works very slowly. Even if you have little data in the table, you can easily notice its slow performance. I faced a similar issue and noticed this function alone was taking around 40 minutes to complete … Read more
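
The excerpt does not show the article's actual fix; one common workaround, assuming the source timezone has a fixed UTC offset, is to replace the per-row timezone lookup with plain interval arithmetic (table and column names are placeholders; minutes_sub is an Impala built-in):

    -- slow: per-row timezone conversion
    SELECT to_utc_timestamp(event_ts, 'Asia/Kolkata') FROM events;

    -- faster: Asia/Kolkata is a fixed UTC+5:30, so subtract the offset directly
    SELECT minutes_sub(event_ts, 330) FROM events;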

Sqoop – Optimise Import

Importing data using Sqoop is one of the most time-consuming tasks in a BigData environment. Sqoop is a powerful yet simple tool to import data from different RDBMSs into HDFS. But while importing data, the following 2 points should be considered with high priority to reduce time. Number of Mappers: mappers provide parallelism while importing … Read more
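
A minimal sketch of tuning the mapper count (connection string, credentials, and table/column names are placeholders); --num-mappers sets the parallelism, and --split-by names the column used to divide the rows among mappers:

    # 8 parallel map tasks instead of the default 4, split on a numeric key
    sqoop import \
      --connect jdbc:mysql://db-host/sales \
      --username etl_user -P \
      --table orders \
      --split-by order_id \
      --num-mappers 8 \
      --target-dir /data/orders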

Real Numbers representation in Impala

Many times we face a challenge in keeping the precision/scale of real numbers in a database after applying complex mathematical functions. When the dataset is small, a small variation from the actual number may not worry us a lot. But when the dataset is huge, as when dealing with BigData, a small variation in one number can lead … Read more
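
The excerpt is truncated before the fix; a standard approach in Impala, offered here as an assumption, is to compute in DECIMAL with an explicit precision and scale instead of FLOAT/DOUBLE, since binary floating point accumulates rounding error (table and column names are placeholders):

    -- DOUBLE accumulates binary rounding error across many rows
    SELECT SUM(amount_double) FROM payments;

    -- DECIMAL(18,6) keeps an exact decimal representation
    SELECT SUM(CAST(amount_double AS DECIMAL(18,6))) FROM payments;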

Impala – Optimisation at partition level

We all know that these 3 strategies are the most common ways to optimise our queries: a partitioned table, bucketing, and collecting stats. But sometimes a simple query will run on ALL partitions instead of one: you may expect your query to touch one partition, yet it runs on all of them. Let me show you an … Read more
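
The excerpt cuts off before the example; one typical case, assumed here for illustration, is a predicate that cannot be evaluated from partition metadata at planning time, so no partitions can be pruned (table and column names are placeholders):

    -- prunes: the partition column is compared with a constant
    SELECT count(*) FROM logs WHERE log_date = '2020-01-01';

    -- scans all partitions: the partition column is compared with
    -- another column, whose value differs per row
    SELECT count(*) FROM logs WHERE log_date = event_date;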

Sqoop – Handle NULL values

By default Sqoop imports a NULL as the string "null"; if you want to change this default configuration you can use the following arguments. While importing data: --null-string, --null-non-string. While exporting data: --input-null-string, --input-null-non-string. Check the example below for more clarification: the --null-string argument represents what should be written in HDFS whenever a NULL is identified in … Read more
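
A minimal sketch (connection details and table name are placeholders); here NULLs from both string and non-string columns are written to HDFS as \N, the convention Hive recognises:

    # write \N for database NULLs in string and non-string columns
    sqoop import \
      --connect jdbc:mysql://db-host/sales \
      --username etl_user -P \
      --table customers \
      --null-string '\\N' \
      --null-non-string '\\N' \
      --target-dir /data/customers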