What is the main data object present in between the source and
target? I answered Mapping, Transformation, etc., but that is
not the answer. So please give me an apt answer. Thanks in
advance.
Answers were sorted based on users' feedback
Answer / santoshi
The main data object present in between the source and target is
the staging layer. The staging layer eliminates the
inconsistent data and gives the resulting data object.
| Is This Answer Correct ? | 15 Yes | 1 No |
Answer / rajupatel
Data object: the intermediate table, if we are using one, or the
stage table (temp table), which is the resulting data object
after applying the transformations.
| Is This Answer Correct ? | 1 Yes | 0 No |
Answer / anupama
Source Qualifier is the correct answer, because without the
Source Qualifier you cannot do anything.
| Is This Answer Correct ? | 2 Yes | 1 No |
Answer / manohar
The answer is Source Qualifier.
Since the source may be anything, Informatica must be able to
understand the data types correctly; only then can the data be
passed through to the target. You can see that when you drag in
the source, the Source Qualifier (SQ) is created
automatically.
Why not the other transformations?
Because the other transformations are not capable of what the SQ
transformation does.
Correct me if I am wrong.
| Is This Answer Correct ? | 5 Yes | 4 No |
Answer / bsgsr
Hi,
I believe the ODS cannot be called a data object; it is a temp
database for validating the data and applying business logic
to it.
It should be a transformation. The answer could probably be
"repository object", since a transformation is a repository
object.
| Is This Answer Correct ? | 1 Yes | 1 No |
Answer / pratibha
It may be the links. Between the source and target, there are links joining the ports.
| Is This Answer Correct ? | 0 Yes | 0 No |
I want my deployment group to refer to an external configuration file when I deploy in the production environment. How can I achieve this?
How do you convert a flat file into an XML file? How do you tune a Joiner?
What is a galaxy repository?
Can anyone suggest good free Talend data integration training online?
Where are the source flat files kept before running the session?
I have a flat file whose records look like this: ****%%^^@@@G**@#A@#$N*&^E%^S@#h@@@##$$ (an unlimited number of special characters before, after, and in between the string data). The output should be GANESH. How do you handle this type of record, given that you don't know where or what the special characters are within the name?
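One simple approach, sketched here in Python (the function name `extract_name` is ours; inside an Informatica Expression transformation the same idea would be a regular-expression replace such as REG_REPLACE with the pattern `[^A-Za-z]`): strip every character that is not a letter, wherever it appears, then normalize the case.

```python
import re

def extract_name(raw: str) -> str:
    # Keep only alphabetic characters, dropping every special
    # character regardless of position, then upper-case the result.
    return re.sub(r"[^A-Za-z]", "", raw).upper()

print(extract_name("****%%^^@@@G**@#A@#$N*&^E%^S@#h@@@##$$"))  # GANESH
```

Because the pattern removes everything that is not a letter, it works no matter how many special characters surround or interrupt the name.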
I have an Aggregator in my mapping with no group-by on any port, and I am passing 100 rows through the Aggregator. How many rows will I get as output from the Aggregator?
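For reference, an Aggregator with no group-by ports treats the entire input as a single group and returns exactly one output row: the last row's values for pass-through ports, and aggregates computed over all rows. A toy Python simulation of that behaviour (the column names `id` and `amount` are made up for illustration):

```python
# 100 input rows with made-up columns "id" and "amount".
rows = [{"id": i, "amount": i * 10} for i in range(1, 101)]

# No group-by ports: the whole input collapses into one group.
group = rows
output = [{
    "id": group[-1]["id"],                           # last row wins for pass-through ports
    "total_amount": sum(r["amount"] for r in group)  # aggregate over all rows
}]

print(len(output))  # 1 output row from 100 input rows
```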
What is the benefit of session partitioning?
1. Why do we need to use an unconnected transformation? 2. Where can we use a static cache and a dynamic cache?
What are the conditions needed to improve the performance of informatica aggregator transformation?
There are 2 files, Master and User. We need to compare the 2 files and prepare an output log file that lists the missing Rolename for each UserName between the Master and User files. Please find the sample data:

MASTER.csv
----------
Org|Tmp_UsrID|ShortMark|Rolename
---|---------|---------|------------
AUS|0_ABC_PW |ABC PW   |ABC Admin PW
AUS|0_ABC_PW |ABC PW   |MT Deny all
GBR|0_EDT_SEC|CR Edit  |Editor
GBR|0_EDT_SEC|CR Edit  |SEC MT103
GBR|0_EDT_SEC|CR Edit  |AB User

USER.csv
--------
Org|UserName|ShortMark|Rolename
---|--------|---------|------------
AUS|charls  |ABC PW   |ABC Admin PW
AUS|amudha  |ABC PW   |MT Deny all
GBR|sandya  |CR Edit  |Editor
GBR|sandya  |CR Edit  |SEC MT103
GBR|sandya  |CR Edit  |AB User
GBR|sarkar  |CR Edit  |Editor
GBR|sarkar  |CR Edit  |SEC MT103

Required output file:
---------------------
Org|Tmp_UsrID|UserName|Rolename    |Code
---|---------|--------|------------|--------
AUS|0_ABC_PW |charls  |ABC Admin PW|MATCH
AUS|0_ABC_PW |charls  |MT Deny all |MISSING
AUS|0_ABC_PW |amudha  |ABC Admin PW|MISSING
AUS|0_ABC_PW |amudha  |MT Deny all |MATCH
GBR|0_EDT_SEC|sandya  |Editor      |MATCH
GBR|0_EDT_SEC|sandya  |SEC MT103   |MATCH
GBR|0_EDT_SEC|sandya  |AB User     |MATCH
GBR|0_EDT_SEC|sarkar  |Editor      |MATCH
GBR|0_EDT_SEC|sarkar  |SEC MT103   |MATCH
GBR|0_EDT_SEC|sarkar  |AB User     |MISSING

Both files are matched on Org and ShortMark. So, based on each Org and ShortMark, for each UserName from USER.csv, we need to find the matching and missing Rolename values. I am able to bring the matching records into the output, but I can't find any concept or logic to produce the "MISSING" records, i.e. those present in the Master file but not in USER.csv for each UserName. Please help out, guys, and let me know if you need any more information. Note: in the USER.csv file there are n Organizations, under each of which come n ShortMarks, each of which has n UserNames.
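The MISSING rows fall out naturally if you drive the comparison from the Master side: for each user, iterate over every master role for that user's (Org, ShortMark) and flag it MATCH or MISSING. A hedged Python sketch of that logic (the function and field names are ours; in an Informatica mapping the same effect would typically come from joining USER to MASTER on Org and ShortMark and then flagging the roles the user lacks):

```python
from collections import defaultdict

def build_report(master_rows, user_rows):
    """For each user, list every master role for the user's (Org, ShortMark)
    and flag it MATCH if the user has it, MISSING otherwise."""
    # (Org, ShortMark) -> (Tmp_UsrID, [master roles in file order])
    master = {}
    for r in master_rows:
        key = (r["Org"], r["ShortMark"])
        master.setdefault(key, (r["Tmp_UsrID"], []))[1].append(r["Rolename"])

    # (Org, ShortMark, UserName) -> set of roles the user actually has
    user_roles = defaultdict(set)
    order = []  # users in first-seen order
    for r in user_rows:
        key = (r["Org"], r["ShortMark"], r["UserName"])
        if key not in user_roles:
            order.append(key)
        user_roles[key].add(r["Rolename"])

    report = []
    for org, mark, name in order:
        tmp_id, roles = master[(org, mark)]
        for role in roles:
            code = "MATCH" if role in user_roles[(org, mark, name)] else "MISSING"
            report.append((org, tmp_id, name, role, code))
    return report
```

Because the inner loop runs over the *master* role list rather than the user's rows, a role the user never had still produces an output row, which is exactly the MISSING case the question asks about.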
How many repositories can be created in Informatica?