DBI connect failed: FATAL: sorry, too many clients already
【Posted】: 2014-09-30 06:41:23
【Question】: I am running a crontab as shown below:
* 1 * * * /var/fdp/reportingscript/an_outgoing_tps_report.pl
* 1 * * * /var/fdp/reportingscript/an_processed_rule_report.pl
* 1 * * * /var/fdp/reportingscript/sdp_incoming_traffic_tps_report.pl
* 1 * * * /var/fdp/reportingscript/en_outgoing_tps_report.pl
* 1 * * * /var/fdp/reportingscript/en_processed_rule_report.pl
* 1 * * * /var/fdp/reportingscript/rs_incoming_traffic_report.pl
* 1 * * * /var/fdp/reportingscript/an_summary_report.pl
* 1 * * * /var/fdp/reportingscript/en_summary_report.pl
* 1 * * * /var/fdp/reportingscript/user_report.pl
and I get an error (the error is the same for every script):
DBI connect('dbname=scs;host=192.168.18.23;port=5432','postgres',...) failed: FATAL: sorry, too many clients already at /var/fdp/reportingscript/sdp_incoming_traffic_tps_report.pl line 38.
Also, if I run the scripts manually one at a time, no error appears.
For reference, I have also attached one of the scripts that produces the above error:
#!/usr/bin/perl
use strict;
use FindBin;
use lib $FindBin::Bin;
use Time::Local;
use warnings;
use DBI;
use File::Basename;
use CONFIG;
use Getopt::Long;
use Data::Dumper;
my $channel;
my $circle;
my $daysbefore;
my $dbh;
my $processed;
my $discarded;
my $db_name = "scs";
my $db_vip = "192.168.18.23";
my $db_port = "5432";
my $db_user = "postgres";
my $db_password = "postgres";
#### code to redirect all console output in log file
my ( $seco_, $minu_, $hrr_, $moday_, $mont_, $years_ ) = localtime(time);
$years_ += 1900;
$mont_ += 1;
my $timestamp = sprintf( "%d%02d%02d", $years_, $mont_, $moday_ );
$timestamp .= "_" . $hrr_ . "_" . $minu_ . "_" . $seco_;
print "timestamp is $timestamp \n";
my $logfile = "/var/fdp/log/reportlog/sdp_incoming_report_$timestamp";
print "\n output files is " . $logfile . "\n";
open( STDOUT, ">", $logfile ) or die("$0:dup:$!");
open STDERR, ">&STDOUT" or die "$0: dup: $!";
my ( $sec_, $min_, $hr_, $mday_, $mon_, $year_ ) = localtime(time);
$dbh = DBI->connect( "DBI:Pg:dbname=$db_name;host=$db_vip;port=$db_port",
    "$db_user", "$db_password", { RaiseError => 1 } );
print "\n Dumper is " . $dbh . "\n";
my $sthcircle = $dbh->prepare("select id,name from circle");
$sthcircle->execute();
while ( my $refcircle = $sthcircle->fetchrow_hashref() ) {
    print "\n dumper for circle is " . Dumper($refcircle);
    my $namecircle = uc( $refcircle->{name} );
    my $idcircle   = $refcircle->{id};
    $circle->{$namecircle} = $idcircle;
    print "\n circle name : " . $namecircle . "id is " . $idcircle;
}
sub getDate {
    my $daysago = shift;
    $daysago = 0 unless ($daysago);
    my @months = qw(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec);
    my ( $sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst )
        = localtime( time - ( 86400 * $daysago ) );
    # YYYYMMDD, e.g. 20060126
    $year_ = $year + 1900;
    $mday_ = $mday;
    $mon_  = $mon + 1;
    return sprintf( "%d-%02d-%02d", $year + 1900, $mon + 1, $mday );
}
GetOptions( "d=i" => \$daysbefore );
my $filedate = getDate($daysbefore);
print "\n filedate is $filedate \n";
my @basedir = CONFIG::getBASEDIR();
print "\n array has basedir" . Dumper(@basedir);
$mon_ = "0" . $mon_ if ( defined $mon_ && $mon_ <= 9 );
$mday_ = "0" . $mday_ if ( defined $mday_ && $mday_ <= 9 );
foreach (@basedir) {
    my $both = $_;
    print "\n dir is $both \n";
    for ( keys %$circle ) {
        my $path     = $both;
        my $circleid = $_;
        print "\n circle is $circleid \n";
        my $circleidvalue = $circle->{$_};
        my $file_csv_path = "/opt/offline/reports/$circleid";
        my %sdp_hash      = ();
        print "\n file is $file_csv_path csv file \n";
        unless ( -d "$file_csv_path" ) {
            mkdir( "$file_csv_path", 0755 );
        }
        my $csv_new_file
            = $file_csv_path
            . "/FDP_"
            . $circleid
            . "_SDPINCOMINGTPSREPORT_"
            . $mday_ . "_"
            . $mon_ . "_"
            . $year_ . ".csv";
        print "\n file is $csv_new_file \n";
        print "\n date:$year_-$mon_-$mday_ \n";
        open( DATA, ">>", $csv_new_file );
        $path = $path . $circleid . "/Reporting/EN/Sdp";
        print "\n *****path is $path \n";
        my @filess = glob("$path/*");
        foreach my $file (@filess) {
            print "\n Filedate ---------> $filedate file is $file \n";
            if ( $file =~ /.*_sdp.log.$filedate-*/ ) {
                print "\n found file for $circleid \n";
                my $x;
                my $log       = $file;
                my @a         = split( "-", $file );
                my $starttime = $a[3];
                my $endtime   = $starttime;
                my $sdpid;
                my $sdpid_value;
                $starttime = "$filedate $starttime:00:00";
                $endtime   = "$filedate $endtime:59:59";
                open( FH, "<", "$log" ) or die "cannot open < $log: $!";
                while (<FH>) {
                    my $line = $_;
                    print "\n line is $line \n";
                    chomp($line);
                    $line =~ s/\s+$//;
                    my @a = split( ";", $line );
                    $sdpid = $a[4];
                    my $stat = $a[3];
                    $x->{$sdpid}{$stat}++;
                }
                close(FH);
                print "\n Dumper is x:" . Dumper($x) . "\n";
                foreach my $sdpidvalue ( keys %$x ) {
                    print "\n sdpvalue us: $sdpidvalue \n";
                    if ( exists( $x->{$sdpidvalue}{processed} ) ) {
                        $processed = $x->{$sdpidvalue}{processed};
                    }
                    else {
                        $processed = 0;
                    }
                    if ( exists( $x->{$sdpidvalue}{discarded} ) ) {
                        $discarded = $x->{$sdpidvalue}{discarded};
                    }
                    else {
                        $discarded = 0;
                    }
                    my $sth_new1 = $dbh->prepare("select id from sdp_details where sdp_name='$sdpid' ");
                    print "\n sth new is " . Dumper($sth_new1);
                    $sth_new1->execute();
                    while ( my $row1 = $sth_new1->fetchrow_hashref ) {
                        $sdpid_value = $row1->{id};
                        print "\n in hash rowref from sdp_details table " . Dumper($sdpid_value);
                        my $sth_check = $dbh->prepare(
                            "select processed,discarded from sdp_incoming_tps where circle_id='$circleidvalue' and sdp_id='$sdpid_value' and start_time='$starttime' and end_time='$endtime'"
                        );
                        print "\n Dumper for bhdatabase statement is " . Dumper($sth_check);
                        $sth_check->execute();
                        my $duplicate_row = 0;
                        my ( $success_, $failure_ );
                        while ( my $row_dup = $sth_check->fetchrow_hashref ) {
                            print "\n row_dup is " . Dumper($row_dup);
                            $duplicate_row = 1;
                            $success_ += $row_dup->{processed};
                            $failure_ += $row_dup->{discarded};
                        }
                        if ( $duplicate_row == 0 ) {
                            my $sth = $dbh->prepare(
                                "insert into sdp_incoming_tps (id,circle_id,start_time,end_time,processed,discarded,sdp_id) select nextval('sdp_incoming_tps_id'),'$circleidvalue','$starttime','$endtime','$processed','$discarded','$sdpid_value' "
                            );
                            $sth->execute();
                        }
                        else {
                            $success_ += $processed;
                            $failure_ += $discarded;
                            my $sth = $dbh->prepare(
                                "update sdp_incoming_tps set processed=$success_,discarded=$failure_ where circle_id='$circleidvalue' and sdp_id='$sdpid_value' and start_time='$starttime' and end_time='$endtime'"
                            );
                            $sth->execute();
                        }
                    }
                }
                # my $file_csv_path = "/opt/offline/reports/$circleid";
                # my %sdp_hash = ();
                # unless ( -d "$file_csv_path" ) { mkdir( "$file_csv_path", 0755 ); }
                # my $csv_new_file = $file_csv_path . "/FDP_" . $circleid . "_SDPINCOMINGTPSREPORT_" . $mday_ . "_" . $mon_ . "_" . $year_ . ".csv";
                print "\n file is $csv_new_file \n";
                print "\n date:$year_-$mon_-$mday_ \n";
                close(DATA);
                open( DATA, ">>", $csv_new_file ) or die("cant open file : $! \n");
                print "\n csv new file is $csv_new_file \n";
                my $sth_new2 = $dbh->prepare("select * from sdp_details");
                $sth_new2->execute();
                while ( my $row1 = $sth_new2->fetchrow_hashref ) {
                    my $sdpid = $row1->{id};
                    $sdp_hash{$sdpid} = $row1->{sdp_name};
                }
                #print "\n resultant sdp hash".Dumper(%sdp_hash);
                #$mon_="0".$mon_;
                print "\n timestamp being matched is $year_-$mon_-$mday_ \n";
                print "\n circle id value is $circleidvalue \n";
                my $sth_new = $dbh->prepare(
                    "select * from sdp_incoming_tps where date_trunc('day',start_time)='$year_-$mon_-$mday_' and circle_id='$circleidvalue'"
                );
                $sth_new->execute();
                print "\n final db line is " . Dumper($sth_new);
                my $str     = $sth_new->{NAME};
                my @str_arr = @$str;
                shift(@str_arr);
                shift(@str_arr);
                my @upper = map { ucfirst($_) } @str_arr;
                $upper[4] = "Sdp-Name";
                my $st = join( ",", @upper );
                $st = $st . "\n";
                $st =~ s/_/-/g;
                #print $fh "sep=,"; print $fh "\n";
                print DATA $st;
                while ( my $row = $sth_new->fetchrow_hashref ) {
                    print "\n found matching row \n";
                    my $row_line
                        = $row->{start_time} . ","
                        . $row->{end_time} . ","
                        . $row->{processed} . ","
                        . $row->{discarded} . ","
                        . $sdp_hash{ $row->{sdp_id} } . "\n";
                    print "\n row line matched is " . $row_line . "\n";
                    print DATA $row_line;
                }
                close(DATA);
            }
            else {
                next;
            }
        }
    }
}
$dbh->disconnect;
Please help: how can I avoid this error? Thanks in advance for any suggestions.
【Comments】:
What value is max_connections set to in postgresql.conf??
@Winged Hi, I don't know how to check that.. could you show me how? Thanks.
Are you using pgAdmin or something else?? In psql, run: SHOW max_connections;
I use: psql -U postgres -h 192.168.18.23 -d scs
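The same check can also be done from Perl over DBI. A minimal sketch reusing the connection settings from the scripts above (SHOW max_connections and pg_stat_activity are standard PostgreSQL; the script itself is illustrative, not part of the original post):

#!/usr/bin/perl
# Sketch: print the server's connection limit and the number of
# connections currently open, using the same DSN as the report scripts.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "DBI:Pg:dbname=scs;host=192.168.18.23;port=5432",
    "postgres", "postgres", { RaiseError => 1 } );
my ($max)  = $dbh->selectrow_array("SHOW max_connections");
my ($used) = $dbh->selectrow_array("SELECT count(*) FROM pg_stat_activity");
print "max_connections=$max, currently in use=$used\n";
$dbh->disconnect;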
【Answer 1】:
As the error message indicates, the immediate problem is that running all of these scripts at once requires more database connections than the server allows. If they work fine individually, then running them separately will avoid the problem.
The underlying problem is that your crontab is wrong. * 1 * * * runs every one of the scripts once a minute, every day, from 0100 to 0159. If they take longer than a minute to complete, a new set starts before the previous set has finished, which takes an extra set of database connections and will chew through the available connection pool fairly quickly.

I assume you only intend your daily scripts to run once a day, not 60 times, so change that to 5 1 * * * to run them just once, at 0105.
If you still have problems after that, run each one at a different time (which is probably a good idea anyway):
5 1 * * * /var/fdp/reportingscript/an_outgoing_tps_report.pl
10 1 * * * /var/fdp/reportingscript/an_processed_rule_report.pl
15 1 * * * /var/fdp/reportingscript/sdp_incoming_traffic_tps_report.pl
20 1 * * * /var/fdp/reportingscript/en_outgoing_tps_report.pl
25 1 * * * /var/fdp/reportingscript/en_processed_rule_report.pl
30 1 * * * /var/fdp/reportingscript/rs_incoming_traffic_report.pl
35 1 * * * /var/fdp/reportingscript/an_summary_report.pl
40 1 * * * /var/fdp/reportingscript/en_summary_report.pl
45 1 * * * /var/fdp/reportingscript/user_report.pl
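Staggering the start times keeps the peak connection count down. If a script can still overrun its slot, a non-blocking lock at the top of each script stops overlapping runs from stacking up connections. A sketch (the lock-file path is illustrative, not from the original scripts):

#!/usr/bin/perl
# Sketch: exit immediately if a previous run of this report is still
# active, so overlapping cron runs never open extra database connections.
use strict;
use warnings;
use Fcntl qw(:flock);

open( my $lock, ">", "/var/run/sdp_incoming_report.lock" )
    or die "cannot open lock file: $!";
unless ( flock( $lock, LOCK_EX | LOCK_NB ) ) {
    print "previous run still active, exiting\n";
    exit 0;
}
# ... the rest of the report script goes here; the lock is released on exit.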
【Discussion】:
Or run them one after another: 1 1 * * * cd /var/dfp/reportingscript && ls ./*.pl | sh
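A variant of that idea that avoids piping ls into sh (a sketch, assuming the scripts are executable and their filenames contain no spaces):

1 1 * * * for f in /var/fdp/reportingscript/*.pl; do "$f"; done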