While running a Data Pump export on 11gR2 as the application (AP) account, the job took very long to finish. Checking the setup, PARALLEL was set to 8, yet only 2 dump files were produced (in effect the export ran with parallel 1).
Contents of the Data Pump parfile:
$ cat expdp_meta.par
userid=schema/password
tables=SCHEMA.METADATA:2022M02
directory=dmp
dumpfile=20220322_2022M02_%U.dmp
logfile=20220322_2022M02.log
parallel=8
CONTENT=DATA_ONLY
COMPRESSION=ALL
Result of the Data Pump run: only one large dump file was produced, and the job effectively ran with a single worker:
$ expdp parfile=expdp_meta.par

Export: Release 11.2.0.2.0 - Production on Tue Mar 22 10:31:01 2022
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SCHEMA"."SYS_EXPORT_TABLE_01":  schema/******** parfile=expdp_meta.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 235.1 GB
. . exported "SCHEMA"."METADATA":"2022M02"          40.63 GB 582748643 rows
Master table "SCHEMA"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCHEMA.SYS_EXPORT_TABLE_01 is:
  /ARCHIVE/dmp/20220322_2022M02_01.dmp
  /ARCHIVE/dmp/20220322_2022M02_02.dmp
Job "SCHEMA"."SYS_EXPORT_TABLE_01" successfully completed at 12:21:04
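One way to confirm the effective degree of parallelism while a job like this is still running (a sketch not shown in the original troubleshooting, using the standard DBA_DATAPUMP_SESSIONS and V$SESSION views; the job name here is taken from the log above):

```sql
-- Each SESSION_TYPE = 'WORKER' row is an active Data Pump worker process,
-- so the WORKER row count reveals the real parallel degree of the job.
SELECT s.sid, s.serial#, d.job_name, d.session_type
  FROM dba_datapump_sessions d
  JOIN v$session s ON s.saddr = d.saddr
 WHERE d.job_name = 'SYS_EXPORT_TABLE_01';
```

With the unprivileged account, such a check would have shown a single WORKER session despite parallel=8.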
Modified expdp_meta.par to use the SYSTEM account instead:
$ cat expdp_meta.par
userid=system/password
tables=SCHEMA.METADATA:2022M02
directory=dmp
dumpfile=20220322_2022M02_%U.dmp
logfile=20220322_2022M02.log
parallel=8
CONTENT=DATA_ONLY
COMPRESSION=ALL
This time the PARALLEL setting took effect:
$ expdp parfile=expdp_meta.par

Export: Release 11.2.0.2.0 - Production on Wed Mar 23 10:37:04 2022
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_TABLE_03":  system/******** parfile=expdp_meta.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 235.1 GB
. . exported "SCHEMA"."METADATA":"2022M02"          40.63 GB 582748643 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_03" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_03 is:
  /ARCHIVE/dmp/20220323_2022M02_01.dmp
  /ARCHIVE/dmp/20220323_2022M02_02.dmp
  /ARCHIVE/dmp/20220323_2022M02_03.dmp
  /ARCHIVE/dmp/20220323_2022M02_04.dmp
  /ARCHIVE/dmp/20220323_2022M02_05.dmp
  /ARCHIVE/dmp/20220323_2022M02_06.dmp
  /ARCHIVE/dmp/20220323_2022M02_07.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_03" successfully completed at 10:58:43
Compared the privileges of SYSTEM against those of the ordinary user:
SQL> select granted_role from DBA_ROLE_PRIVS where grantee = upper('system');

GRANTED_ROLE
------------------------------------------------------------
MGMT_USER
DBA
JAVAUSERPRIV
AQ_ADMINISTRATOR_ROLE

SQL> select granted_role from DBA_ROLE_PRIVS where grantee = upper('DBA');

GRANTED_ROLE
------------------------------------------------------------
DATAPUMP_IMP_FULL_DATABASE
EXECUTE_CATALOG_ROLE
DELETE_CATALOG_ROLE
WM_ADMIN_ROLE
IMP_FULL_DATABASE
DATAPUMP_EXP_FULL_DATABASE
GATHER_SYSTEM_STATISTICS
SELECT_CATALOG_ROLE
SCHEDULER_ADMIN
JAVA_DEPLOY
PLUSTRACE

GRANTED_ROLE
------------------------------------------------------------
JAVA_ADMIN
EXP_FULL_DATABASE

SQL> select granted_role from DBA_ROLE_PRIVS where grantee = upper('schema');

GRANTED_ROLE
------------------------------------------------------------
PLUSTRACE
RESOURCE
CONNECT
EXECUTE_CATALOG_ROLE
Granted DATAPUMP_EXP_FULL_DATABASE to the ordinary user:
SQL> grant DATAPUMP_EXP_FULL_DATABASE to schema;

Grant succeeded.

SQL> select granted_role from DBA_ROLE_PRIVS where grantee = upper('schema');

GRANTED_ROLE
------------------------------------------------------------
DATAPUMP_EXP_FULL_DATABASE
PLUSTRACE
RESOURCE
CONNECT
EXECUTE_CATALOG_ROLE
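One detail worth remembering: a role grant only applies to sessions opened after the grant. A quick sanity check (a sketch, not part of the original session) is to reconnect as the schema user and confirm the role is enabled in the new session:

```sql
-- Run as the schema user in a fresh connection; SESSION_ROLES lists
-- only the roles currently enabled for this session.
SELECT role FROM session_roles
 WHERE role = 'DATAPUMP_EXP_FULL_DATABASE';
```

If the query returns no rows, the expdp run would still fall back to single-worker behavior.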
Re-ran expdp as the ordinary user; this time the parallel setting worked as expected:
$ expdp parfile=expdp_meta.par

Export: Release 11.2.0.2.0 - Production on Wed Mar 23 12:00:30 2022
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SCHEMA"."SYS_EXPORT_TABLE_01":  schema/******** parfile=expdp_meta.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 235.1 GB
. . exported "SCHEMA"."METADATA":"2022M02"          40.63 GB 582748643 rows
Master table "SCHEMA"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCHEMA.SYS_EXPORT_TABLE_01 is:
  /ARCHIVE/dmp/20220323_2022M02_01.dmp
  /ARCHIVE/dmp/20220323_2022M02_02.dmp
  /ARCHIVE/dmp/20220323_2022M02_03.dmp
  /ARCHIVE/dmp/20220323_2022M02_04.dmp
  /ARCHIVE/dmp/20220323_2022M02_05.dmp
  /ARCHIVE/dmp/20220323_2022M02_06.dmp
  /ARCHIVE/dmp/20220323_2022M02_07.dmp
Job "SCHEMA"."SYS_EXPORT_TABLE_01" successfully completed at 12:22:51
Postscript:
Afterwards, a search of My Oracle Support turned up this note:
DataPump Not Using Parallel For Non-privileged Users (Doc ID 1511264.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]

CAUSE
Bug 14463113 was opened to explore the behavior. Our developers have responded in the bug with the following:

"We don't allow unprivileged users to do load/unload a table or table partition in parallel (using PQ slaves) because there are problems with getting the PQ slaves to share in the data pump environment for unprivileged users."

The following restriction statements have been added to the description of the Data Pump PARALLEL parameter in the Oracle documentation for 11.2 and 12.1:

"To export a table or table partition in parallel (using PQ slaves), you must have the DATAPUMP_EXP_FULL_DATABASE role."
"To import a table or table partition in parallel (using PQ slaves), you must have the DATAPUMP_IMP_FULL_DATABASE role."

SOLUTION
Use a privileged user for parallel operations. The privileges required are contained in export_full_database or import_full_database roles.
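Given that restriction, a pre-flight check before launching any large parallel export saves hours of single-worker unloading. This sketch (my own addition, not from the MOS note) lets the connecting user verify their own grants without needing access to the DBA_* views:

```sql
-- Run as the account that will execute expdp/impdp; USER_ROLE_PRIVS
-- shows roles granted directly to the current user.
SELECT granted_role
  FROM user_role_privs
 WHERE granted_role LIKE 'DATAPUMP%FULL_DATABASE';
```

An empty result means table or partition data will be unloaded by a single worker regardless of the PARALLEL setting.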