While connecting AWS Lambda with Redis, getting "Task timed out after 23.02 seconds" error


Posted: 2020-11-18 20:44:09

Question:

In my project I want to connect a Lambda function to a Redis store, but while making the connection I get a task-timed-out error, even though I have attached the private subnet to a NAT gateway.

Python code:

import json
import boto3
import math
import redis
# from sklearn.model_selection import train_test_split
# Note: this rebinds the module name `redis` to the client instance; a
# distinct name such as `redis_client` would be safer.
redis = redis.Redis(host='redisconnection.sxxqwc.ng.0001.use1.cache.amazonaws.com', port=6379)

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # bucket = event['Records'][0]['s3']['bucket']['name']   # if the bucket comes from the triggering event
    # key = event['Records'][0]['s3']['object']['key']       # if the key comes from the triggering event
    bucket = "aws-trigger1"
    key = "unigram1.csv"
    
    response = s3.head_object(Bucket=bucket, Key=key)
    fileSize = response['ContentLength']
    fileSize = fileSize / 1048576
    print("FileSize = " + str(fileSize) + " MB")
    # redis.rpush(fileSize)
    redis.ping()  # ping() must be called; a bare `redis.ping` does nothing
    redis.set('foo', 'bar')
    
    
    
    obj = s3.get_object(Bucket= bucket, Key=key)
    file_content = obj["Body"].read().decode("utf-8")
    
    
    
    # Calculate the chunk size.
    # Note: fileSize was converted to MB above, while MINBLOCKSIZE and the
    # messages below treat sizes as bytes; keep the units consistent.
    MAPPERNUMBER = 2
    MINBLOCKSIZE = 1024
    chunkSize = int(fileSize / MAPPERNUMBER)
    numberMappers = MAPPERNUMBER
    if chunkSize < MINBLOCKSIZE:
        print("chunk size too small (" + str(chunkSize) + " bytes), changing to " + str(MINBLOCKSIZE) + " bytes")
        chunkSize = MINBLOCKSIZE
        numberMappers = int(fileSize / chunkSize) + 1
    residualData = fileSize - (MAPPERNUMBER - 1) * chunkSize
    # print("numberMappers--",residualData)
    
    # Ensure that the chunk size is smaller than the Lambda function's usable memory
    MEMORY = 1536
    memoryLimit = 0.30
    secureMemorySize = int(MEMORY * memoryLimit)
    if chunkSize > secureMemorySize:
        print("chunk size too large (" + str(chunkSize) + " bytes), changing to " + str(secureMemorySize) + " bytes")
        chunkSize = secureMemorySize
        numberMappers = int(fileSize / chunkSize) + 1
    
    # print("Using chunk size of " + str(chunkSize) + " bytes, and " + str(numberMappers) + " nodes")
    
    #remove 1st row from the data
    file_content=file_content.split('\n', 1)[-1]
    # print("after removing column name")
    # X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.5, randomstate=42)
    train_pct_index = int(0.5 * len(file_content))  
    
    X_Map1, X_Map2 = file_content[:train_pct_index], file_content[train_pct_index:]
    # print("the size is--------------",X_Map1)
    # print("the size is--------------",X_Map2)

    
    linelen = file_content.find('\n')
    if linelen < 0:
        print(r"\n not found in mapper chunk")
        return
    extraRange = 2*(linelen+20)
    initRange = fileSize + 1
    limitRange = fileSize + extraRange
    
    # chunkRange = 'bytes=' + str(initRange) + '-' + str(limitRange)
    # print(chunkRange)
    

    # invoke mappers
    invokeLam = boto3.client("lambda", region_name="us-east-1")
    payload = X_Map1
    payload2 = X_Map2
    print(X_Map1)
    # resp = invokeLam.invoke(FunctionName = "map1", InvocationType="RequestResponse", Payload = json.dumps(payload))
    # resp2 = invokeLam.invoke(FunctionName = "map2", InvocationType="RequestResponse", Payload = json.dumps(payload2))
    
    return file_content
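As an aside, the chunk-sizing logic in the handler mixes megabytes (`fileSize` after the division by 1048576) with byte-denominated constants. A minimal sketch that keeps every quantity in bytes (the function name and default values are hypothetical, not part of the original code):

```python
def plan_chunks(file_size_bytes: int,
                mappers: int = 2,
                min_block: int = 1024,
                memory_mb: int = 1536,
                memory_fraction: float = 0.30) -> tuple:
    """Return (chunk_size, number_of_mappers), with every figure in bytes."""
    chunk = file_size_bytes // mappers
    n = mappers
    if chunk < min_block:
        # Too-small chunks waste mapper invocations; round up to the floor.
        chunk = min_block
        n = file_size_bytes // chunk + 1
    # Cap the chunk at a fraction of the function's memory (MB -> bytes).
    cap = int(memory_mb * 1024 * 1024 * memory_fraction)
    if chunk > cap:
        chunk = cap
        n = file_size_bytes // chunk + 1
    return chunk, n

print(plan_chunks(10_000))  # two equal chunks of 5000 bytes
print(plan_chunks(1_000))   # below the floor, so one 1024-byte chunk
```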

[Screenshot: VPC configuration of the Lambda function]


Answer 1:

The timeout may be happening when retrieving the object from S3. Check whether an Amazon S3 endpoint is configured in your VPC: https://docs.aws.amazon.com/glue/latest/dg/vpc-endpoints-s3.html

