OGG-01224 Bad file number

Posted by yxwkaifa


Today while going through the OGG logs, I noticed the OGG-01224 Bad file number error shown below. After looking it up I learned that the port was unavailable. I checked the mgr parameters and found that DYNAMICPORTLIST (dynamic ports) was already configured, so why were the ports still unavailable? Here is what MOS says about it:

OGG GoldenGate Extract | Pump Abends with: "TCP/IP Error 9 (Bad File Number)" (Doc ID 1359087.1)

In this Document

  Symptoms
  Cause
  Solution
  References

 

APPLIES TO:

Oracle GoldenGate – Version 11.1.1.1.0 and later
Information in this document applies to any platform.
***Checked for relevance on 24-Mar-2013***

SYMPTOMS

Extract or pump abends with "TCP/IP error 9 (Bad file number)" when starting.
The exact error number may vary:

e.g.,
1. Version 11.1.1.1 12771498 
 

ERROR OGG-01224 TCP/IP error 9 (Bad file number).
ERROR OGG-01668 PROCESS ABENDING.

2. Pre-GA release version: v11_1_1_1_024
 

GGS ERROR 150 TCP/IP error 9 (Bad file number). 
GGS ERROR 190 PROCESS ABENDING.

Let's say you only have 4 Extract pump processes communicating with this target system, and have specified 21 dynamic ports on the target system. Why can't your Extract connect to the remote system?

CAUSE

The cause is that all of the ports listed in DYNAMICPORTLIST in the Manager parameter file on the downstream (target) server are in use.

This error message is usually due to a port allocation failure, or to orphaned collector processes on the target preventing new collectors from starting.

In the GoldenGate environment on the target system, check the ports that Manager is using with this command:

 

GGSCI (ggdb1) 21> send mgr getportinfo detail

Sending GETPORTINFO request to MANAGER ...

Dynamic Port List
Starting Index 21
Reassign Delay 3 seconds

Entry Port Error Process Assigned Program
----- ----- ----- ---------- ------------------- -------
0 7810 98
1 7811 98
2 7812 0
3 7813 0 18713 2011/07/28 20:39:12 Server
4 7814 0
5 7815 0 3662 2011/07/29 22:11:02 Server
6 7816 0 27070 2011/07/30 02:16:11 Server
7 7817 0 7789 2011/07/31 16:56:10 Server
8 7818 0 14116 2011/07/31 17:36:18 Server
9 7819 0 10900 2011/07/31 17:59:59 Server
10 7820 0 28045 2011/08/01 04:26:01 Server
11 7821 98
12 7822 0 31379 2011/08/01 05:29:31 Server
13 7823 0 23538 2011/08/02 10:21:54 Server
14 7824 0 23593 2011/08/02 10:22:01 Server
15 7825 0 6687 2011/08/03 07:17:50 Server
16 7826 0 17339 2011/08/03 07:22:01 Server
17 7827 0 24905 2011/08/03 08:51:51 Server
18 7828 98
19 7829 0 1881 2011/08/03 08:55:45 Server
20 7830 0 5884 2011/08/03 08:57:03 Server

It is reporting error 98 on ports in the 7810-7828 range (7810, 7811, 7821, and 7828). Error 98 is "address in use".
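For context, error 9 and error 98 here correspond to the standard Linux errno values EBADF ("Bad file descriptor", which GoldenGate reports as "Bad file number") and EADDRINUSE ("Address already in use"). Assuming a Python interpreter is available on the target host, a quick way to confirm the mapping is:

python -c 'import errno, os; print("%s - %s" % (errno.errorcode[9], os.strerror(9))); print("%s - %s" % (errno.errorcode[98], os.strerror(98)))'

# Expected output on Linux (errno values differ between platforms):
# EBADF - Bad file descriptor
# EADDRINUSE - Address already in use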

You can also check the Manager report file on the target system (dirrpt/MGR.rpt) for an error message.
 

GGS INFO 301 Command received from EXTRACT on host 10.30.113.28 (START SERVER CPU -1 PRI -1 PARAMS ).
GGS INFO 302 No Dynamic Ports Available.

This confirms that the Manager cannot allocate any more dynamic ports. Since there should be plenty of ports available, this indicates that there may be "orphaned" server collector processes.

SOLUTION

The workaround is to add more ports to DYNAMICPORTLIST in mgr.prm.

Bounce the Manager afterwards.
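
As an illustrative sketch only (the port range below is an example, not taken from the system above), the relevant part of mgr.prm might then look like this, giving Manager a larger pool of dynamic ports to assign to server collectors:

PORT 7809
-- Example range: widen the pool of ports Manager may hand out to
-- server (collector) processes; use ports that are actually free
-- and open in any firewall between source and target.
DYNAMICPORTLIST 7810-7860
-- Optional: seconds Manager waits before reusing a freed port
-- (matches the "Reassign Delay 3 seconds" shown by GETPORTINFO)
DYNAMICPORTREASSIGNDELAY 3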

If the source Extract dies without communicating with the target server collector, that server process is orphaned and must be killed. Development plans to improve this behavior in a future release (tracked via enhancement Bug 10430342), but until then such orphans must be handled manually in this manner.

You can determine which processes are orphans by stopping the upstream pumps, checking which server processes are still running, and killing those. Once these servers are killed, you should be able to restart all the pumps.

In ggsci on the target system:
 

send mgr childstatus debug

This retrieves status information about the processes started by Manager, along with the port numbers Manager has allocated to them.

 

GGSCI (ggdb1) 23> send mgr childstatus debug

Sending CHILDSTATUS request to MANAGER ...

Child Process Status - 14 Entries

ID Group Process Retry Retry Time Start Time Port
---- -------- ---------- ----- ------------------ ----------- ----
0 PEESIS 27760 0 None 2011/08/04 09:30:45 7843
1 PPESIS 27767 0 None 2011/08/04 09:30:45 7845
2 SRESIP 31149 1 2011/08/04 09:44:32 2011/08/04 09:30:45 8003
3 PEASPIS 3177 2 2011/08/04 10:12:07 2011/08/04 09:30:45 8002
4 PPASPIS 27784 0 None 2011/08/04 09:30:45 7854
5 SRASPIP 27792 0 None 2011/08/04 09:30:45 7860
6 PEIPATIS 27798 0 None 2011/08/04 09:30:46 7861
8 SRIPATIP 27800 0 None 2011/08/04 09:30:46 8000
9 PEDTIS 28879 0 None 2011/08/04 09:30:53 8001
13 PETIBCOS 28959 0 None 2011/08/04 09:30:58 8004
19 SRDTIP 29051 0 None 2011/08/04 09:31:07 8005
20 SREEXIP 29060 0 None 2011/08/04 09:31:08 8006
21 SREXCIP 29097 0 None 2011/08/04 09:31:09 8007
22 SRIPIP 29098 0 None 2011/08/04 09:31:10 8008

You can also use this command to determine what server collector processes are running:

 

ps -ef | grep server

{ggate}sintegoradb1.aeso.ca:/usr/ggate/product/10.4.0/ggs >ps -ef | grep server
ggate 1881 7278 0 08:55 ? 00:00:02 ./server -p 7829 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 3556 1 0 Jul25 ? 00:01:57 ./server -p 7877 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 3558 1 0 Jul20 ? 00:03:51 ./server -p 7828 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 5884 7278 0 08:57 ? 00:00:02 ./server -p 7830 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
root 6312 1 0 Jul05 ? 00:00:00 /usr/bin/hidd --server
ggate 6687 7278 0 07:17 ? 00:00:04 ./server -p 7825 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 9121 1 0 Jul06 ? 00:07:34 ./server -p 7953 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
root 10924 1 0 Jul05 ? 00:00:00 /usr/libexec/gam_server
ggate 13060 1 0 Jul22 ? 00:02:35 ./server -p 7867 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 14345 1382 0 13:32 pts/4 00:00:00 grep server
ggate 17339 7278 0 07:22 ? 00:00:03 ./server -p 7826 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 18528 1 0 Jul06 ? 00:05:54 ./server -p 7967 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 23233 1 0 Jul06 ? 00:05:55 ./server -p 7965 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 23538 7278 0 Aug02 ? 00:00:14 ./server -p 7823 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 23593 7278 0 Aug02 ? 00:00:15 ./server -p 7824 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 24905 7278 0 08:51 ? 00:00:02 ./server -p 7827 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log
ggate 31379 7278 0 Aug01 ? 00:00:28 ./server -p 7822 -k -l /usr/ggate/product/10.4.0/ggs/ggserr.log

In summary:

1) Stop the Manager on the target system, and all of the upstream GoldenGate pump Extract processes.
2) Kill any remaining server processes (see the sketch below).
3) Restart the Manager, and then restart the Extracts.
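
As a rough sketch of step 2, assuming a Linux target and the install path shown in the ps output above, the remaining collector processes can be listed and then killed once you have confirmed that all of them are orphans:

# Run as the GoldenGate OS user on the target host, only AFTER the
# Manager and all upstream pump Extracts have been stopped.

# List the remaining GoldenGate collector processes; the [.] keeps
# grep from matching itself.
ps -ef | grep '[.]/server -p'

# Review the list and kill the orphaned collectors by PID, or, if
# every remaining ./server is confirmed to be an orphan:
pkill -f './server -p'

Unrelated processes whose command line merely contains the word "server" (such as the hidd and gam_server entries above) must of course be left alone.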

REFERENCES