A Python example of multi-threaded downloading with resume support

Date: 2023-12-15


Below is a complete walkthrough of "a Python example of multi-threaded downloading with resume support".

Background

When downloading large files, multiple threads are often used to speed up the transfer. If the download is interrupted unexpectedly, however, the whole file would normally have to be fetched again from the beginning. With resume support ("断点续传", resuming from a breakpoint), a download can instead continue from the position where it was interrupted.
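Note that resuming only works when the server honors the HTTP `Range` request header, which servers usually advertise with an `Accept-Ranges: bytes` response header. A minimal pre-flight check might look like this (the function names here are illustrative, not part of the examples below):

```python
import urllib.request

def supports_resume(headers):
    """True if a response-header mapping advertises byte-range support."""
    return str(headers.get("Accept-Ranges", "")).lower() == "bytes"

def check_url(url):
    """Issue a HEAD request and test whether the server supports ranges."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as res:
        return supports_resume(res.headers)
```

If the server does not support ranges, the techniques below will still download the file, but any interruption forces a full restart.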

Example 1: resumable downloading with urllib

    import urllib.request
    import os

    class Download:
        def __init__(self, url):
            self.url = url
            self.downloaded = 0      # bytes already on disk
            self.total = 0           # total size reported by the server
            self.filename = url.split("/")[-1]
            self.headers = {
                "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299",
                "Range": "bytes=0-",
            }

        def get_total(self):
            """Ask the server for the total file size."""
            req = urllib.request.Request(self.url, headers=self.headers)
            res = urllib.request.urlopen(req)
            # A 206 response carries "Content-Range: bytes start-end/total".
            content_range = res.headers.get("Content-Range")
            if content_range:
                self.total = int(content_range.split("/")[-1])
            else:
                self.total = int(res.headers.get("Content-Length"))

        def download(self, start=0):
            """Download from byte offset `start` to the end of the file."""
            self.headers["Range"] = "bytes=%d-" % start
            req = urllib.request.Request(self.url, headers=self.headers)
            res = urllib.request.urlopen(req)
            # Append ("ab") so previously downloaded data is preserved.
            with open(self.filename, "ab") as f:
                while True:
                    chunk = res.read(1024 * 50)
                    if not chunk:
                        break
                    f.write(chunk)
                    self.downloaded += len(chunk)
                f.flush()

    def main():
        url = "http://speedtest.ftp.otenet.gr/files/test100Mb.db"
        d = Download(url)
        d.get_total()
        # If a partial file already exists, resume from its current size.
        if os.path.exists(d.filename):
            d.downloaded = os.path.getsize(d.filename)
        print("total:", d.total)
        print("downloaded:", d.downloaded)
        d.download(d.downloaded)

    if __name__ == "__main__":
        main()

This code uses the urllib library to resume an interrupted download. It first obtains the total file size, then sets the HTTP Range request header according to the number of bytes already downloaded, so the transfer continues from the breakpoint. Note that the data must be appended to the file ("ab" mode) rather than overwriting it.

Example 2: resumable downloading with requests

    import requests
    import os

    class Download:
        def __init__(self, url):
            self.url = url
            self.downloaded = 0      # bytes already on disk
            self.total = 0           # total size reported by the server
            self.filename = url.split("/")[-1]
            self.headers = {
                "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299",
                "Range": "bytes=0-",
            }

        def get_total(self):
            """Ask the server for the total file size via a HEAD request."""
            # requests.head() does not follow redirects by default, so
            # enable that to make sure the final response carries the size.
            res = requests.head(self.url, allow_redirects=True)
            content_range = res.headers.get("Content-Range")
            if content_range:
                self.total = int(content_range.split("/")[-1])
            else:
                self.total = int(res.headers.get("Content-Length"))

        def download(self, start=0):
            """Download from byte offset `start` to the end of the file."""
            self.headers["Range"] = "bytes=%d-" % start
            # stream=True avoids loading the whole response into memory.
            res = requests.get(self.url, headers=self.headers, stream=True)
            # Append ("ab") so previously downloaded data is preserved.
            with open(self.filename, "ab") as f:
                for chunk in res.iter_content(1024 * 50):
                    if chunk:
                        f.write(chunk)
                        self.downloaded += len(chunk)
                f.flush()

    def main():
        url = "http://speedtest.ftp.otenet.gr/files/test100Mb.db"
        d = Download(url)
        d.get_total()
        # If a partial file already exists, resume from its current size.
        if os.path.exists(d.filename):
            d.downloaded = os.path.getsize(d.filename)
        print("total:", d.total)
        print("downloaded:", d.downloaded)
        d.download(d.downloaded)

    if __name__ == "__main__":
        main()

This code achieves the same resumable download with the requests library. As before, it first obtains the total file size and sets the HTTP Range request header based on the bytes already downloaded. In addition, the stream=True parameter enables streaming, which avoids loading the entire file into memory at once. As with the first example, data must be appended to the file rather than overwriting it.

These are two ways to implement resumable downloads. When downloading, make sure the target filename exists and is unique; otherwise files may be overwritten or end up with corrupted contents. Download speed should also be configured sensibly to avoid placing excessive load on the server or the network.
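Both examples above download in a single thread; the multi-threaded speedup mentioned in the title combines the same Range header with several workers, each fetching and writing its own byte region. Below is a rough sketch of that idea using only the standard library (the helper names are illustrative and error handling is omitted):

```python
import threading
import urllib.request

def split_ranges(total, parts):
    """Split [0, total) into `parts` contiguous (start, end) byte ranges, ends inclusive."""
    step = total // parts
    return [(i * step, total - 1 if i == parts - 1 else (i + 1) * step - 1)
            for i in range(parts)]

def download_part(url, filename, start, end):
    """Fetch bytes start..end and write them at their offset in the file."""
    req = urllib.request.Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})
    res = urllib.request.urlopen(req)
    with open(filename, "r+b") as f:
        f.seek(start)
        while True:
            chunk = res.read(1024 * 50)
            if not chunk:
                break
            f.write(chunk)

def multi_download(url, filename, total, threads=4):
    """Download `url` with several threads, one byte range per thread."""
    with open(filename, "wb") as f:
        f.truncate(total)          # pre-allocate so each thread can seek to its region
    workers = [threading.Thread(target=download_part, args=(url, filename, s, e))
               for s, e in split_ranges(total, threads)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
```

Each thread opens the file in "r+b" mode and seeks to its own offset, so writes from different ranges do not collide. Resuming per segment would additionally require recording how much of each range has already been written.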
