Tunable PGA analogue in Hadoop

Assume your query needs to do sorting. If the sort work area (derived from pga_aggregate_target) is not big enough to complete the whole sort in memory, Oracle will spill to disk, and the sorting process will go through multiple passes to sort the whole data set.

The PGA is a private memory area allocated exclusively to a process. Memory chunks in it are categorized as tunable and untunable. The latter is the memory allocated for PL/SQL collections, VARCHAR variables and the like. The former can be controlled by adjusting the corresponding parameters: when pga_aggregate_target is set to a non-zero value, Oracle derives certain internal (undocumented) parameters from it, which are then used when deciding how much memory to allocate to certain operations. The work area used for hash joins is one of those derived values; another is the memory used for sorting.
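If you want to check whether your work areas are actually spilling, the standard dynamic performance views expose this directly. A minimal sketch in SQL*Plus, assuming you have the privileges to query v$pgastat and v$sql_workarea_histogram:

-- Current target and overall PGA health
SHOW PARAMETER pga_aggregate_target

SELECT name, value
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'cache hit percentage');

-- How many work areas completed in memory (optimal) versus one-pass or multi-pass on disk
SELECT low_optimal_size, high_optimal_size,
       optimal_executions, onepass_executions, multipasses_executions
FROM   v$sql_workarea_histogram
ORDER  BY low_optimal_size;

A growing multipasses_executions count is the Oracle-side symptom that the tunable PGA is too small for the sorts you are running.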

Regarding Hadoop: MapReduce works in two phases. The map phase prepares the data by converting it into a collection of key-value pairs, which are then consumed by the reducers. This phase sorts the data by key before passing it on, and the sort may spill to disk if the sort buffer is not big enough to complete the sort entirely in memory. So, to speed up the job, one may want to consider increasing the mapreduce.task.io.sort.mb parameter (defaulting to 100 MB) to give the sort extra memory. This is a potential tuning trick if you know that the slowness of the job really comes from the map phase spilling to disk multiple times before it can complete. In that sense, mapreduce.task.io.sort.mb in Hadoop is nothing but _smm_max_size in Oracle.
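You can confirm the spilling itself from the job counters (compare Spilled Records with Map output records), and the parameter can be raised per job rather than cluster-wide. A minimal sketch of a driver doing this with the standard Configuration API; the class name and the 256 MB value are just placeholders for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SortBufferTuning {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Give the map-side sort buffer more room (the default is 100 MB)
        conf.setInt("mapreduce.task.io.sort.mb", 256);
        // Fraction of the buffer at which spilling starts (0.80 is the default)
        conf.setFloat("mapreduce.map.sort.spill.percent", 0.80f);

        Job job = Job.getInstance(conf, "sort-buffer-tuning-example");
        // ... set mapper, reducer, input and output paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

If the driver goes through ToolRunner, the same thing can be done from the command line with -D mapreduce.task.io.sort.mb=256, which is handy when experimenting.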

More Hadoop tricks in the next posts…

