Please explain in detail, with examples:
1. Conformed Dimension
2. Junk Dimension
3. Degenerate Dimension
4. Slowly Changing Dimensions
Answers were sorted based on users' feedback.
Answer / raveendra g
A conformed dimension is a dimension that is shared by more than one cube or fact table. For example, a Date dimension used by both a Sales fact and an Inventory fact is conformed.
A junk dimension collects unrelated, low-cardinality attributes (miscellaneous flags and indicators) into a single dimension table, instead of scattering them across the fact table.
Degenerate Dimension (DD): a dimension value that is stored directly in the fact table and has no dimension table of its own, such as an invoice or order number.
Slowly Changing Dimension (SCD): a dimension whose attribute values change only rarely over time. There are three common types:
SCD Type 1: keep only the current value (overwrite the old one).
SCD Type 2: keep the current value plus full history, by adding rows.
SCD Type 3: keep the current value plus the most recent previous value, by adding columns.
| Is This Answer Correct ? | 4 Yes | 0 No |
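The three SCD types described above can be sketched in plain Python. This is a minimal illustration, not any specific ETL tool's API; the customer/city column names and key values are hypothetical.

```python
from datetime import date

# Hypothetical current dimension row: a customer whose city will change
# from "Pune" to "Delhi".
row = {"customer_key": 1, "customer_id": "C100", "city": "Pune"}

# SCD Type 1: overwrite in place -- only the current value survives.
def scd_type1(row, new_city):
    updated = dict(row)
    updated["city"] = new_city
    return updated

# SCD Type 2: expire the old row and add a new row -- full history is kept.
def scd_type2(rows, new_city, next_key, today):
    expired = dict(rows[-1], end_date=today, is_current=False)
    new = {"customer_key": next_key,
           "customer_id": expired["customer_id"],
           "city": new_city,
           "start_date": today, "end_date": None, "is_current": True}
    return rows[:-1] + [expired, new]

# SCD Type 3: add a column holding the previous value -- one step of history.
def scd_type3(row, new_city):
    updated = dict(row)
    updated["previous_city"] = updated["city"]
    updated["city"] = new_city
    return updated
```

Note how Type 2 grows the table by rows (each change is a new row with its own surrogate key and validity dates), while Type 3 grows it by columns (one `previous_city` column, so only the last change is remembered).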
Answer / usha
1. Conformed Dimension: a dimension that is built once and can be reused across multiple fact tables.
2. Junk Dimension: a dimension used to collect unrelated junk attributes (flags and indicators).
3. Degenerate Dimension: a dimension whose value is stored in the fact table itself.
4. Slowly Changing Dimensions: dimensions that change slowly over a period of time.
Types:
SCD Type 1
SCD Type 2
SCD Type 3
| Is This Answer Correct ? | 0 Yes | 0 No |
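A junk dimension and a degenerate dimension can be illustrated together with a small Python sketch. The flag names, key values, and the order number below are hypothetical examples, not from any real schema.

```python
from itertools import product

# Hypothetical low-cardinality flags that would otherwise each need
# a tiny dimension table of their own.
payment_types = ["CASH", "CARD"]
gift_wrap = [True, False]
rush_order = [True, False]

# Junk dimension: one row per combination of the unrelated flags,
# so the fact table needs only a single junk_key column.
junk_dim = [
    {"junk_key": i, "payment_type": p, "gift_wrap": g, "rush_order": r}
    for i, (p, g, r) in enumerate(
        product(payment_types, gift_wrap, rush_order), start=1)
]

# Fact row: "order_number" is a degenerate dimension -- it sits directly
# in the fact table with no dimension table behind it.
fact_row = {"order_number": "ORD-1001",  # degenerate dimension
            "junk_key": 3,               # foreign key into the junk dimension
            "amount": 250.0}
```

With 2 x 2 x 2 flag values, the junk dimension has just 8 rows, which is why collecting such flags into one table is cheap and keeps the fact table narrow.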
Answer / vasu
Check all previous questions and answers on Allinterview.com.
| Is This Answer Correct ? | 1 Yes | 9 No |