Problem Description: We have a production database and need to configure GoldenGate uni-directional replication in real time, without downtime. Is there a best practice, white paper, or knowledge document that would help us in designing the transition? We can start with a pair of databases already in a Data Guard physical standby configuration, or start with a brand-new target database. Generic steps provided by Metalink:
Implementing Oracle GoldenGate


0) Install OGG on the source and target systems
1) Establish what tables are to be replicated from the source and what tables they will be mapped to on the target.
2) Run defgen against the source tables and ftp the resulting definitions file (call it defs.def, for example) to the
dirdef subdirectory on the target install.
3) Create an ongoing capture extract to capture changes from the redo / archive logs.
4) Add the extract via ggsci.
5) Add the exttrail via ggsci.
6) Create a datapump to read the local trail and write a remote trail to the target machine.
7) Start the local capture and datapump
8) Create an ongoing replicat. Do not start it. (A sketch of steps 2 through 8 with example parameter files follows this list.)
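
As a rough sketch of steps 2 through 8 (all names here are illustrative, not part of the note: schema hr, OGG user ggadmin, group names exthr / pmphr / rephr, target host tgthost; classic capture assumed), the pieces might look like this:

  -- defgen parameter file (dirprm/defgen.prm) on the source
  DEFSFILE ./dirdef/defs.def
  USERID ggadmin, PASSWORD <password>
  TABLE hr.*;

  -- run defgen from the OGG home, then copy defs.def to the target's dirdef directory
  $ defgen paramfile dirprm/defgen.prm

  -- GGSCI on the source: ongoing capture extract plus its local trail
  GGSCI> ADD EXTRACT exthr, TRANLOG, BEGIN NOW
  GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT exthr

  -- capture parameter file (dirprm/exthr.prm)
  EXTRACT exthr
  USERID ggadmin, PASSWORD <password>
  EXTTRAIL ./dirdat/lt
  TABLE hr.*;

  -- data pump reading the local trail and writing a remote trail on the target
  GGSCI> ADD EXTRACT pmphr, EXTTRAILSOURCE ./dirdat/lt
  GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pmphr

  -- pump parameter file (dirprm/pmphr.prm)
  EXTRACT pmphr
  PASSTHRU
  RMTHOST tgthost, MGRPORT 7809
  RMTTRAIL ./dirdat/rt
  TABLE hr.*;

  GGSCI> START EXTRACT exthr
  GGSCI> START EXTRACT pmphr

  -- GGSCI on the target: ongoing replicat, added but not started yet
  GGSCI> DBLOGIN USERID ggadmin, PASSWORD <password>
  GGSCI> ADD CHECKPOINTTABLE ggadmin.ckpt
  GGSCI> ADD REPLICAT rephr, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ggadmin.ckpt

  -- replicat parameter file (dirprm/rephr.prm)
  REPLICAT rephr
  USERID ggadmin, PASSWORD <password>
  SOURCEDEFS ./dirdef/defs.def
  MAP hr.*, TARGET hr.*;

PASSTHRU keeps the pump from doing any mapping or conversion; leave it out if the pump needs to filter or transform data.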


9) Initial-load the target database.
This can be done with native or other local utilities,
or with GoldenGate, as follows:

Begin Initial Load:
Much of what you will do depends on having enough resources and space.
Divide up your tables so that you can create several (say 10) extracts and their datapumps.
Create a sourceisfile extract for each grouping of tables. They can be run in parallel if you have the horsepower.
Create an initial load replicat for each extract/datapump thread.
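
For example (table names, group names, and file names below are invented for illustration), two of the groupings might look like the following. The note calls these "sourceisfile" extracts; here they are added with the SOURCEISTABLE option. Whether each one writes with EXTFILE or RMTFILE depends on which of the three methods described next you choose:

  -- GGSCI on the source: one initial-load extract per table group
  GGSCI> ADD EXTRACT iload1, SOURCEISTABLE
  GGSCI> ADD EXTRACT iload2, SOURCEISTABLE

  -- dirprm/iload1.prm  (group 1 of the divided tables)
  EXTRACT iload1
  USERID ggadmin, PASSWORD <password>
  EXTFILE ./dirdat/i1
  TABLE hr.employees;
  TABLE hr.departments;

  -- dirprm/iload2.prm is identical apart from its output file (./dirdat/i2)
  -- and its own TABLE list

  -- each group gets a matching initial-load replicat on the target, e.g.
  -- dirprm/rload1.prm
  REPLICAT rload1
  USERID ggadmin, PASSWORD <password>
  SOURCEDEFS ./dirdef/defs.def
  MAP hr.employees, TARGET hr.employees;
  MAP hr.departments, TARGET hr.departments;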


There are three ways I recommend to do the initial load with OGG.
First, do it in the normal prescribed manner, writing to the target side over TCP,
or
Second, write to local disks and ftp the written trails to the target side.
Run the initial load replicats against the trail files. The ftp method is faster.
Or third, if you have disks connected to both the source and target databases, there is a faster method still.
This method is especially useful for very large databases.
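
In this sketch, the only real difference between the three methods is where the initial-load extract writes its files (host and path names are again illustrative):

  -- Method 1: write directly to the target host over TCP
  RMTHOST tgthost, MGRPORT 7809
  RMTFILE ./dirdat/i1

  -- Method 2: write to local disk on the source, then ftp/scp the files to
  -- the target's dirdat directory and run the initial-load replicats there
  EXTFILE ./dirdat/i1

  -- Method 3: write to storage mounted on both source and target hosts;
  -- no copy step, and the replicats can read while the extracts are writing
  EXTFILE /shared/ogg/dirdat/i1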


Begin initial load (shared-disk method)
Have your sourceisfile extracts write their trails to shared disks.
You can start the initial load replicats while their trails are still being written. This reduces overall time.
You are loading the target at the same time you are unloading the source.
When all initial load replicats are finished, they can be stopped.
End Initial Load
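
A minimal sketch of the shared-disk flow, assuming a shared mount at /shared/ogg visible from both hosts and the illustrative group names used above:

  -- source-side initial-load extract writes to the shared mount
  -- (in dirprm/iload1.prm)
  EXTFILE /shared/ogg/dirdat/i1

  -- on the target, point the initial-load replicat at the same files
  GGSCI> ADD REPLICAT rload1, EXTTRAIL /shared/ogg/dirdat/i1, CHECKPOINTTABLE ggadmin.ckpt

  GGSCI> START EXTRACT iload1      -- on the source
  GGSCI> START REPLICAT rload1     -- on the target, while iload1 is still running
  GGSCI> INFO REPLICAT rload1      -- watch progress
  -- when iload1 has finished and rload1 has read to the end of the files:
  GGSCI> STOP REPLICAT rload1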




Start the ongoing replicat(s) with HANDLECOLLISIONS turned on. When the replicats catch up, stop them,
remove HANDLECOLLISIONS from their parameter files, and restart them.
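
Continuing the illustrative names from above, the catch-up and cutover on the ongoing replicat might look like:

  -- dirprm/rephr.prm during catch-up
  REPLICAT rephr
  USERID ggadmin, PASSWORD <password>
  SOURCEDEFS ./dirdef/defs.def
  HANDLECOLLISIONS
  MAP hr.*, TARGET hr.*;

  GGSCI> START REPLICAT rephr
  GGSCI> LAG REPLICAT rephr        -- wait until the lag is effectively zero
  GGSCI> STOP REPLICAT rephr
  GGSCI> EDIT PARAMS rephr         -- delete the HANDLECOLLISIONS line
  GGSCI> START REPLICAT rephr

HANDLECOLLISIONS lets the replicat resolve the duplicate-row and missing-row errors caused by changes that are already present in the initial load; once the replicat is past that window, remove it so genuine errors are not silently absorbed.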
You should be migrated.
You can divide your tables into as many extracts as you have the memory and CPU power to run.
If you use the ftp method, you will need free space equal to at least 1.3 times the amount of data in your
database on both the source and the target for the intermediate trails.
If you use the shared disk method and start the initial load replicats while extraction is going on, you can
get by with a lot less space.

N.B.
Creating initial load trails for this much data requires a special technique.
Use the options MEGABYTES 2000, MAXFILES 900 for the extfile. Make the file name two characters.
This will cause the initial load trails to look just like normal trails.
Your initial load replicats will look just like regular replicats because they will be reading a trail of files, not a single file.
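
Concretely, with the illustrative names above, that means something like:

  -- in each initial-load extract parameter file
  EXTFILE /shared/ogg/dirdat/i1, MEGABYTES 2000, MAXFILES 900
  -- the output rolls over into a numbered series of 2000 MB files (up to 900 of
  -- them) built from the two-character prefix i1, so an initial-load replicat
  -- added with EXTTRAIL /shared/ogg/dirdat/i1 reads them like any other trail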