When do we use an unconnected vs. a connected lookup? How does each affect the performance of a mapping?
Answer Posted / sweta kedia
A. Connected Lookup
Receives input values directly from the pipeline.
Can use a dynamic or static cache.
Supports user-defined default values.
B. Unconnected Lookup
Receives input values from the result of a :LKP expression in another transformation (syntax sketch below).
Can use only a static cache.
Does not support user-defined default values.
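For illustration, a minimal sketch of calling an unconnected lookup from an output port of an Expression transformation, written in the Informatica expression language; the lookup name lkp_cust_name and the port CUST_ID are hypothetical, not taken from the answer above:

-- output port expression: passes CUST_ID to the unconnected lookup
-- and receives the lookup's single return port as the result
:LKP.lkp_cust_name(CUST_ID)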
Comparing the two: a connected lookup can return multiple columns to the pipeline, whereas an unconnected lookup returns only one value (its return port). A connected lookup sits in the same pipeline as the source and supports dynamic caching; an unconnected lookup does not have that facility, but in some cases it is the better choice. For example, if the output of one lookup is fed as the input of another lookup, unconnected lookups are favourable.
Moreover, if the mapping requires multiple lookups with the same lookup condition, it is better to define one unconnected lookup and call it wherever required, as in the sketch below. This also keeps the mapping from becoming complex.
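As a sketch of that reuse, the same hypothetical unconnected lookup can be called from several expressions, or only when a condition holds, without adding another Lookup transformation to the pipeline (IIF and ISNULL are standard Informatica expression functions; the port names are assumptions):

-- call the lookup only for non-null keys
IIF(ISNULL(CUST_ID), 'UNKNOWN', :LKP.lkp_cust_name(CUST_ID))
-- the same lookup reused with a different key in another port
:LKP.lkp_cust_name(SHIP_TO_CUST_ID)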
Where are the source flat files kept before running the session?
What is an event, and what are the tasks related to it?
Describe the scenarios where we use a Joiner transformation instead of a Source Qualifier transformation.
Explain the different types of transformations available in Informatica.
What are the issues you have faced in your project? How did you overcome those issues?
How to extract SAP data using Informatica?
My source is a delimited flat file. The flat file data is: H|Date, D1|ravi|bangalore, D2|raju|pune, T|4. The data should be sent to the target only if the following two conditions are satisfied: 1. The Date column of the first (header) row equals SYSDATE. 2. The second port of the last (trailer) record equals the number of data records. How can this be achieved?
Debugger: what are the modules, what options can you specify when using the debugger, and can you change the expression condition dynamically while the debugger is running?
Which transformation is needed when using COBOL sources as source definitions?
If the source has duplicate records with id and name columns (values: 1 a, 1 b, 1 c, 2 a, 2 b), and the target should be loaded as 1 a+b+c or 1 a||b||c, which transformations should be used?
How many dimensions are there in Informatica?
Explain the aggregator transformation?
How can you increase the performance in joiner transformation?
What is domain and gateway node?