When the load type is set to Bulk or Normal at the session level, what happens internally? Please explain with an example.
Answers were Sorted based on User's Feedback
Answer / sandeep desai
When a session runs with the load type set to Bulk, the Integration Service bypasses the database log: no log entries are written, so the load is faster but the data cannot be rolled back.
In Normal mode the Integration Service writes to the database log, so the loaded data can be rolled back.
| Is This Answer Correct ? | 7 Yes | 0 No |
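The rollback behaviour of Normal mode can be illustrated with a small, hypothetical SQLite script (Informatica targets are normally Oracle, DB2, etc., and SQLite has no bulk/direct-path mode, so this only mimics the logged, recoverable side of the comparison):

```python
import sqlite3

# In-memory database standing in for the target table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, name TEXT)")
conn.commit()

# Normal-mode analogy: rows are written inside a logged transaction,
# so a failure before commit can be rolled back.
try:
    conn.execute("INSERT INTO target VALUES (1, 'a')")
    conn.execute("INSERT INTO target VALUES (2, 'b')")
    raise RuntimeError("simulated session failure")
except RuntimeError:
    conn.rollback()  # possible only because the changes were logged

count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # 0 -- the partial load was rolled back
```

In a Bulk-mode load there would be no such log to roll back from, which is exactly why it is faster and why a failed bulk session cannot be recovered.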
For history loads, the data is generally bulk loaded because the cost of maintaining indexes on the target table is avoided: indexes are not allowed during a bulk load, so they are dropped first and recreated once the data is loaded. This loads the data into the target much faster. Bulk load also does not write rows to the rollback segment, so the time taken to write to the rollback segment is saved as well. Both factors improve performance.
| Is This Answer Correct ? | 2 Yes | 0 No |
A: When you run a session with Normal load, the Integration Service commits data into the target table and also commits the row ID into a log table.
For example: suppose there are 10,000 records in the source and the session fails partway through the load. Because row IDs were logged, there is scope for recovery. After fixing the issue, rerun the job, making sure the session still uses Normal load; instead of loading from scratch, the Integration Service connects to the Repository Service, which connects to the repository database, checks the last row ID committed in the log table, and resumes loading from max(rowid) + 1 onwards.
Checkpoint at session level: set the recovery strategy to "Resume from last checkpoint".
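The resume logic described above can be sketched as a toy Python simulation (all names here, such as load_with_recovery and log_table, are made up for illustration; the real recovery is handled internally by the Integration Service):

```python
# Toy simulation of resume-from-last-checkpoint recovery.
source = [f"row-{i}" for i in range(1, 10001)]   # 10,000 source records
target = []                                      # stands in for the target table
log_table = {"last_rowid": 0}                    # last committed row ID

def load_with_recovery(fail_at=None):
    """Load rows, committing the row ID after each one; optionally fail."""
    start = log_table["last_rowid"]              # resume from max(rowid) + 1
    for rowid in range(start + 1, len(source) + 1):
        if fail_at is not None and rowid == fail_at:
            raise RuntimeError(f"session failed at row {rowid}")
        target.append(source[rowid - 1])
        log_table["last_rowid"] = rowid          # checkpoint commit

# First run fails at row 6,000; rows 1..5999 are committed.
try:
    load_with_recovery(fail_at=6000)
except RuntimeError:
    pass

# Rerun: starts at row 6,000 instead of loading from scratch.
load_with_recovery()
print(len(target))  # 10000 -- no rows lost or duplicated
```

The key point is that the rerun reads the checkpoint first and skips the already-committed rows, which is what the log table buys you in Normal mode.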
Bulk load: when you run a session with Bulk load, the Integration Service loads the data into the target table without committing any row IDs to the log table. Performance is better because only one operation happens, but if the session fails at any point there is no scope for recovery.
Note: with Bulk load you need to ensure there are no indexes on the target table; if indexes exist, the session fails.
Q: I want to use Bulk load, but I have indexes on the target table. Can I still do it?
A: Yes, by dropping the indexes before the load and recreating them afterwards:
Pre-SQL: DROP INDEX index_name;
Post-SQL: CREATE INDEX index_name ON table_name(col_name);
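This drop-then-recreate pattern can be exercised end to end with SQLite via Python (illustrative only; the table, column, and index names here are placeholders, and executemany stands in for the actual bulk load):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target_tbl (col_name INTEGER)")
conn.execute("CREATE INDEX idx_col ON target_tbl(col_name)")

# Pre-SQL: drop the index so the load is not slowed by index maintenance.
conn.execute("DROP INDEX idx_col")

# Bulk load: one multi-row operation instead of row-by-row inserts.
conn.executemany("INSERT INTO target_tbl VALUES (?)",
                 [(i,) for i in range(1000)])
conn.commit()

# Post-SQL: recreate the index once the data is in place.
conn.execute("CREATE INDEX idx_col ON target_tbl(col_name)")

rows = conn.execute("SELECT COUNT(*) FROM target_tbl").fetchone()[0]
indexes = [r[1] for r in conn.execute("PRAGMA index_list('target_tbl')")]
print(rows, indexes)  # 1000 ['idx_col']
```

Building the index once over the full table is cheaper than updating it on every inserted row, which is the whole point of the Pre-SQL/Post-SQL trick.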
| Is This Answer Correct ? | 1 Yes | 0 No |