Golang Stream Data to S3

  • Status: Closed
  • Prize: $1000
  • Entries received: 6
  • Winner: jorissuppers

Contest Brief

We are looking to contract several Golang developers to help build out our backend microservices. To do so, we are offering a contest to find the best Golang developers in the world. We welcome both independent freelancers and outsourcing companies to apply. This is your chance to show your skills and work on a very meaningful project.

Public Clarification Board

  • TwoHat
    Contest Holder
    • 4 years ago

    It should have only one writer per file. If you generate a GUID and append it to the filename, you'll know it is yours (see the sketch after this thread).

    1. yadavgajender087
      • 4 years ago

      Usage of ./s3_uploader:
      -acl="bucket-owner-full-control": ACL for new object
      -bucket="": S3 bucket name (required)
      -chunk_size=50MB: multipart upload chunk size (bytes, understands standard suffixes like "KB", "MB", "MiB", etc.)
      -expected_size=0: expected input size (fail if out of bounds)
      -key="": S3 key name (required; use / notation for folders)
      -mime_type="binary/octet-stream": Content-type (MIME type)
      -region="us-west-2": AWS S3 region
      -retries=4: number of retry attempts per chunk upload
      -sse=false: use server side encryption

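The GUID suggestion in the thread above can be shown with a minimal sketch. It is only an illustration, not part of any entry: it assumes the github.com/google/uuid package, and the prefix and base-name arguments are made up; the real tool takes its key from the -key flag.

package main

import (
    "fmt"
    "path"

    "github.com/google/uuid"
)

// uniqueKey appends a freshly generated UUID to the base name so that each
// writer produces its own object and can recognise it later.
func uniqueKey(prefix, base string) string {
    return path.Join(prefix, fmt.Sprintf("%s-%s.log", base, uuid.New().String()))
}

func main() {
    // Prints something like "logs/chat-messages-<uuid>.log".
    fmt.Println(uniqueKey("logs", "chat-messages"))
}
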
  • TwoHat
    Contest Holder
    • 4 years ago

    At least 10,000 QPS on a machine with at most 8 cores.

    1. yadavgajender087
      • 4 years ago

      Is this right? S3 has a maximum multipart count of 10,000, therefore the number of parts is total_input_size / chunk_size (see the sketch after this thread).

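To make the arithmetic above concrete, here is a minimal sketch assuming the only constraint of interest is the 10,000-part limit (S3 additionally enforces a 5 MiB minimum part size for all but the last part). The 500 GiB figure is purely illustrative.

package main

import "fmt"

const maxParts = 10000 // S3 allows at most 10,000 parts per multipart upload.

// minChunkSize returns the smallest chunk size (rounded up) that keeps the
// part count for totalInputSize within the 10,000-part limit.
func minChunkSize(totalInputSize int64) int64 {
    return (totalInputSize + maxParts - 1) / maxParts
}

func main() {
    var total int64 = 500 << 30 // e.g. a 500 GiB dump (illustrative)
    fmt.Printf("chunk size must be at least %d bytes\n", minChunkSize(total))
}
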
  • yadavgajender087
    • 4 years ago

    Stream to S3 from stdin using concurrent, multipart uploading.
    Intended for use with sources that stream data fairly slowly (like RDS dumps), where getting the initial data is the dominant bottleneck. It is also useful for uploading large files as quickly as possible using concurrent multipart uploads (see the sketch after this comment).

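One way to get the behaviour described above, sketched here only to illustrate the idea rather than as any entrant's solution, is the aws-sdk-go s3manager.Uploader, which reads from any io.Reader (stdin included) and uploads parts concurrently. The bucket, key, part size, and concurrency below are placeholders; in the real tool they would come from the flags shown earlier in this board.

package main

import (
    "log"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
    sess := session.Must(session.NewSession(&aws.Config{
        Region: aws.String("us-west-2"), // matches the -region default above
    }))

    // s3manager.Uploader performs concurrent multipart uploads and accepts any
    // io.Reader, so stdin can be streamed without buffering the whole input.
    uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
        u.PartSize = 50 * 1024 * 1024 // 50 MB parts, like the -chunk_size default
        u.Concurrency = 5             // parts uploaded in parallel
    })

    // Placeholder bucket and key; the real tool reads them from -bucket and -key.
    out, err := uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String("my-bucket"),
        Key:    aws.String("dumps/rds-dump.sql.gz"),
        Body:   os.Stdin,
    })
    if err != nil {
        log.Fatalf("upload failed: %v", err)
    }
    log.Printf("uploaded to %s", out.Location)
}
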
  • TwoHat
    Contest Holder
    • 4 years ago

    Contest closes tomorrow. Looking forward to all the great submissions.

  • ankurs13
    • 4 years ago

    Is there any QPS expectation for this service (under what constraints)? Also, what should happen if the file corresponding to the message already exists in S3 (when the program starts)? Do we overwrite the file or append to it? Will there be multiple writers to the same log file? Do we need to handle that situation?

