Snowflake – Performance Tuning and Best Practices


Snowflake’s cloud-native architecture makes it incredibly easy to get started — but running it efficiently at scale is a whole different game. If you’ve ever faced slow queries, ballooning credit consumption, or unpredictable performance, you’re not alone. Tuning Snowflake workloads requires more than just adjusting warehouse sizes — it involves understanding how Snowflake stores data, … Read more

Apache Spark – Performance Tuning and Best Practices


Apache Spark has revolutionized the way we process large-scale data — delivering unparalleled speed, scalability, and flexibility. But as many engineers discover, achieving optimal performance in Spark is far from automatic. Your job runs — but takes longer than expected. The cluster scales — but the costs rise disproportionately. Memory errors appear out of nowhere. … Read more

Data Serialisation – Avro vs Protocol Buffers


Background: file formats evolution. Why not use CSV/XML/JSON? Metadata is either repeated per record or missing entirely; the files are not splittable, so they cannot be used in a map-reduce environment; and schema definition and evolution support is missing or limited. You can leverage "JsonSchema" to maintain a schema separately for JSON, but it may still require a transformation based on that schema, so why not consider Avro/Proto? … Read more
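As a rough illustration of the "repeated metadata" point, here is a plain-Python sketch (standing in for a real Avro/Protobuf encoder; the `user_id`/`score` records are made up) comparing newline-delimited JSON, which repeats every field name in every row, with a schema-once binary layout:

```python
import json
import struct

# Hypothetical sample records; in JSON the field names recur in every row.
records = [{"user_id": i, "score": i * 2} for i in range(1000)]

json_bytes = "\n".join(json.dumps(r) for r in records).encode("utf-8")

# A schema-aware binary layout (the idea behind Avro/Protobuf) writes the
# field names once, then packs only the values for each record.
schema = json.dumps({"fields": ["user_id:int", "score:int"]}).encode("utf-8")
binary_bytes = schema + b"".join(
    struct.pack("<ii", r["user_id"], r["score"]) for r in records
)

print(len(json_bytes), len(binary_bytes))  # the binary payload is far smaller
```

Real Avro and Protobuf encodings add varint packing, framing, and schema evolution rules on top, but the saving from not repeating metadata per record is the same in spirit.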

Count(*) – Explaining different behaviour in Joins

Observations: Count(1) or Count(*) is never expanded to individual columns, so it works perfectly fine on the complete data; Count(1) is also more optimized than Count(*). Count(source.*), where source represents the left table of a Left Outer Join, is evaluated as Count(source.col1, source.col2, …, source.colN), so if any column has NULL, then the complete row … Read more
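The NULL-skipping rule behind this behaviour can be sketched with SQLite from Python (illustrative only: SQLite does not expand Count(source.*), so COUNT(t.val) stands in for the expanded form, and the `source`/`target` tables are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE source(id INTEGER, name TEXT);
CREATE TABLE target(id INTEGER, val TEXT);
INSERT INTO source VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO target VALUES (1, 'x');   -- ids 2 and 3 have no match
""")

sql = """
SELECT COUNT(*),      -- counts every joined row, NULL columns included
       COUNT(t.val)   -- skips the rows where t.val is NULL
FROM source s LEFT JOIN target t ON s.id = t.id
"""
count_star, count_col = con.execute(sql).fetchone()
print(count_star, count_col)  # 3 1
```

COUNT(*) sees all three joined rows, while COUNT over a column (or an expanded column list) drops every row where the counted value is NULL, which is exactly why Count(source.*) undercounts after an outer join.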

Impala – Create Table AS Select * FROM Table – is SLOW

The query below seems like the simplest way to create a replica of a table. But that simplicity comes with a cost: the query will NOT create partitions if there are any on TABLE_NAME_2, and it will run very slowly. Instead, we should follow this two-step approach: CREATE TABLE TABLE_NAME LIKE TABLE_NAME_2; -- … Read more

HDFS – Data Movement across clusters

You can move data between HDFS clusters using the distcp command. distcp uses 10 mappers by default to copy data from the source system. While doing a data movement I ran into failures caused by checksum mismatches: whenever any block's checksum mismatched, the complete data block was discarded. Checksum is … Read more
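The discard-on-mismatch behaviour can be pictured with a small Python sketch (MD5 is used here purely for illustration, HDFS itself uses CRC-based block checksums, and `block_ok` is a made-up helper, not a distcp API):

```python
import hashlib

def block_ok(data: bytes, expected_hex: str) -> bool:
    # Compare the copied block's checksum with the source's recorded one;
    # on any mismatch the whole block is rejected, not just the bad bytes.
    return hashlib.md5(data).hexdigest() == expected_hex

block = b"some block payload"
good = hashlib.md5(block).hexdigest()

print(block_ok(block, good))         # True  - checksums agree
print(block_ok(block + b"x", good))  # False - mismatched block is discarded
```

This all-or-nothing check is what makes a single corrupted or differently-checksummed block fail the transfer of the whole block during distcp.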

Impala – Use Incremental stats instead of Full Table stats

If you have a table that is partitioned on a column, then running Compute stats TABLE_NAME will execute on all partitions. Internally, compute stats runs the NDV function on each column to get the numbers. Although the NDV function is faster than count(COLUMN), it still runs for every partition, which may be irrelevant when you are working/updating/modifying values … Read more

Impala – Optimise query when using to_utc_timestamp() function

From 40 minutes to just 4 minutes. Impala's to_utc_timestamp() function is used to convert a date/timestamp from a given timezone to UTC, but it works very slowly; even with little data in a table you can easily notice the slow performance. I faced a similar issue and noticed that this function alone was taking around 40 minutes to complete … Read more
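What the function computes can be sketched in plain Python with `zoneinfo` (a semantics illustration only, not the Impala optimization; the sample timestamp and timezone are made up):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# to_utc_timestamp('2021-06-01 10:00:00', 'America/New_York') in Impala
# reinterprets the wall-clock value in the given zone and shifts it to UTC.
local = datetime(2021, 6, 1, 10, 0, 0, tzinfo=ZoneInfo("America/New_York"))
utc = local.astimezone(ZoneInfo("UTC"))

print(utc.strftime("%Y-%m-%d %H:%M:%S"))  # 2021-06-01 14:00:00 (EDT is UTC-4)
```

The conversion itself is cheap; the cost in Impala comes from applying a timezone lookup per row, which is why reworking how and where the conversion runs can cut the runtime so dramatically.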

Sqoop – Optimise Import

Importing data using Sqoop is one of the most time-consuming tasks in a BigData environment. Sqoop is a powerful yet simple tool for importing data from different RDBMSs into HDFS, but while importing data the following 2 points should be considered with high priority to reduce the run time: Number of Mappers: mappers provide parallelism while importing … Read more

Real Numbers representation in Impala

We often face challenges in keeping the precision and scale of real numbers in a database after applying complex mathematical functions. When the dataset is small, a small variation in an actual number may not be worrying. But when the dataset is huge, as when dealing in BigData, a small variation in one number can lead … Read more
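A minimal Python sketch of the drift (the 0.1-summing example is illustrative, and `prec = 38` is a made-up setting mirroring a wide DECIMAL column rather than anything from the post):

```python
from decimal import Decimal, getcontext

# Binary floats drift under repeated arithmetic: summing 0.1 ten times
# does not give exactly 1.0, because 0.1 has no exact base-2 form.
float_sum = sum(0.1 for _ in range(10))
print(float_sum)  # 0.9999999999999999

# Fixed-point decimals (the idea behind a database DECIMAL(p, s) type)
# carry the value in base 10 and keep the scale exact.
getcontext().prec = 38
dec_sum = sum(Decimal("0.1") for _ in range(10))
print(dec_sum)  # 1.0
```

One row off by 1e-16 is invisible; the same drift accumulated across billions of rows of aggregation is not, which is why the choice of numeric type matters at BigData scale.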