Why do we use the level option in the Normalizer transformation?
Answer / lokesh
If you want to see results broken down by quarter along with other information such as location and branch, you can use a Normalizer transformation.
For the above example, the Normalizer transformation definition looks like this:
Field     Level   Occurs   Description
Location  1       0        Top-level field; appears once per record
Branch    2       2        Branch details occur two times in each record
Result    3       4        Nested under Branch; each branch has four quarterly results, so per record: 2 (Branch occurs) x 4 (Result occurs) = 8 occurrences
The Normalizer automatically creates input/output ports based on the levels and occurs you define. It also generates key output ports (the generated key, GK, and generated column ID, GCID). You can find more detail on the key ports in the help material.
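To make the pivot concrete, here is a minimal Python sketch of what the Normalizer does for the example above: one denormalized input record (a location with 2 branches, each carrying 4 quarterly results) is expanded into 8 output rows, with a quarter index playing the role of the GCID port. The field names and sample values are assumptions for illustration, not Informatica APIs.

```python
# One denormalized input record: Location at level 1,
# Branch occurs 2 times, Result occurs 4 times under each Branch.
record = {
    "location": "Hyderabad",
    "branches": [
        {"branch": "B1", "results": [10, 20, 30, 40]},
        {"branch": "B2", "results": [15, 25, 35, 45]},
    ],
}

def normalize(rec):
    """Pivot one denormalized record into normalized rows."""
    rows = []
    for branch in rec["branches"]:                      # occurs = 2
        for q_id, result in enumerate(branch["results"], start=1):  # occurs = 4
            rows.append({
                "location": rec["location"],
                "branch": branch["branch"],
                "quarter": q_id,   # analogous to the GCID generated-column-ID port
                "result": result,
            })
    return rows

rows = normalize(record)
print(len(rows))  # 2 branches x 4 quarters = 8 rows
```

This mirrors the arithmetic in the table: the occurs counts multiply, so one input record yields 2 x 4 = 8 normalized rows.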
Can anybody explain unit test cases (UTC) in Informatica, with examples?
What is the logic to load a lower level of granularity of data into a fact table?
How do we retrieve records updated in the last two days?
What is meant by LDAP users?
How can we create index after completion of load process?
Can anyone give some input on "Additional Concurrent Pipelines for Lookup Cache Creation"? I know that this property is used to build caches in a mapping concurrently, but what value should I set it to (i.e., 1, 2, 3, or something else) for concurrent cache building?
What are factless fact tables? And in which scenarios would you use such fact tables?
For example: the source has 10 records with a column Sal. If you use a Filter transformation with the condition Sal = TRUE and connect it to the target, what will happen?
Is a snowflake or star schema used? If a star schema, why?
How many repositories can we create in Informatica?
My source data is in flat files. What tests do I have to perform as the next step? Can anybody help me?
I am unable to load a fixed-width flat file into the target. What is the reason? Please help me.