FAQ: Which execution engine is used to run profiling jobs pushed down to a Hadoop cluster from Informatica DEI?
Answer

Informatica Data Engineering Integration (DEI), formerly known as 'Big Data Management' (BDM), supports running mappings in the Hadoop environment. To run mappings in the Hadoop environment, one of the following execution engines can be chosen:

  1. Spark Engine
  2. Blaze Engine
  3. Hive Engine ('MapReduce' or 'Tez' mode) (available only in versions earlier than Informatica 10.2.2; removed in Informatica 10.2.2 and later)

Profiling jobs can be pushed down to the Hadoop environment using a 'Hadoop Connection' from Informatica Developer, the Analyst tool, or Enterprise Data Catalog (EDC).


In Informatica versions earlier than 10.4.0, profiling jobs pushed down to the Hadoop cluster are executed using the 'Blaze' engine. Starting with Informatica 10.4.0, the 'Spark' engine can also be used to run profiling jobs in the Hadoop environment, in addition to the 'Blaze' engine.
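The version rules above can be summarized in a small sketch. This helper is purely illustrative (the function name and version-tuple convention are assumptions, not part of any Informatica API); it encodes which engines the article says are available for a given version and job type:

```python
# Illustrative helper only; not an Informatica API.
def available_engines(version, job_type="mapping"):
    """Return the execution engines available for Hadoop pushdown
    in a given Informatica version, passed as a tuple, e.g. (10, 2, 2)."""
    if job_type == "profiling":
        # Profiling pushdown runs on Blaze; Spark becomes an
        # additional option starting with Informatica 10.4.0.
        if version < (10, 4, 0):
            return ["Blaze"]
        return ["Blaze", "Spark"]
    engines = ["Blaze", "Spark"]
    # The Hive engine (MapReduce/Tez modes) exists only before 10.2.2.
    if version < (10, 2, 2):
        engines.append("Hive")
    return engines
```

For example, `available_engines((10, 2, 1), "profiling")` yields only `["Blaze"]`, while `available_engines((10, 4, 0), "profiling")` yields `["Blaze", "Spark"]`.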

Applies To
Product: Data Engineering Integration (Big Data Management); Enterprise Data Preparation; Enterprise Data Catalog; Data Engineering Quality (Big Data Quality)
Problem Type: Configuration; Product Feature
User Type: Administrator; Architect; Data Analyst
Project Phase: Configure; Onboard
Product Version: Informatica 10.1.1; HotFix; Informatica 10.2; Informatica 10.2.1; Informatica 10.2.1 Service Pack 1; Informatica 10.2.2; Informatica 10.4

Last Modified Date: 3/30/2020 11:33 PM
ID: 527454