When we need to crawl web pages efficiently in Python, we usually reach for multithreading so that several pages can be fetched at the same time, which greatly reduces the total time spent waiting on the network. Below is a walkthrough of multithreaded crawling in Python, with two worked examples:
import requests
import threading

url_list = [url1, url2, url3, ...]  # placeholder list of target URLs

def get_response(url):
    response = requests.get(url)
    # process the fetched response here
    ...

threads = []
for url in url_list:
    t = threading.Thread(target=get_response, args=(url,))
    threads.append(t)
    t.start()

for t in threads:
    t.join()
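The template above leaves the body of `get_response` open-ended. A common way to "process" responses is to collect them into a shared structure; guarding the shared list with a `threading.Lock` is the safe, explicit idiom when multiple threads write to it. A minimal sketch, where `fake_status` is a stand-in for real response handling (e.g. `requests.get(url).status_code`) so the example runs without network access:

```python
import threading

results = []
results_lock = threading.Lock()

def fake_status(url):
    # Stand-in for requests.get(url).status_code.
    return 200

def get_response(url):
    status = fake_status(url)
    with results_lock:  # serialize writes to the shared list
        results.append((url, status))

urls = ['https://www.baidu.com', 'https://www.google.com', 'https://www.bing.com']
threads = [threading.Thread(target=get_response, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```

After `join` returns for every thread, `results` holds one entry per URL regardless of the order in which the threads finished.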
Suppose we want to crawl three sites: https://www.baidu.com, https://www.google.com, and https://www.bing.com. We can proceed as follows:
import requests
import threading

url_list = ['https://www.baidu.com', 'https://www.google.com', 'https://www.bing.com']

def get_response(url):
    response = requests.get(url)
    print(response.text)

threads = []
for url in url_list:
    t = threading.Thread(target=get_response, args=(url,))
    threads.append(t)
    t.start()

for t in threads:
    t.join()
When run, the program sends requests to all three sites concurrently instead of one after another, so the total time is roughly that of the slowest single request rather than the sum of all three.
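The speedup comes from overlapping network waits. The following sketch simulates three slow requests with `time.sleep` (a stand-in for `requests.get`, so it runs without network access) and shows that the threaded version takes about as long as one request, not three:

```python
import threading
import time

def fake_request(url, delay=0.2):
    # Stand-in for requests.get(url): just block for `delay` seconds,
    # the way a real request blocks on network I/O.
    time.sleep(delay)

urls = ['https://www.baidu.com', 'https://www.google.com', 'https://www.bing.com']

start = time.perf_counter()
threads = [threading.Thread(target=fake_request, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# With three 0.2 s "requests" overlapped, elapsed is close to 0.2 s,
# not the 0.6 s a sequential loop would need.
print('elapsed: {:.2f}s'.format(elapsed))
```

This also illustrates why the GIL is not a problem here: the threads spend their time blocked on I/O, not computing.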
import os
import requests
import threading

img_url_list = [img_url1, img_url2, img_url3, ...]  # placeholder list of image URLs

def download_img(img_url):
    response = requests.get(img_url)
    # The last path segment (e.g. image1.jpg) already carries the extension,
    # so use it directly rather than appending another ".jpg".
    filename = img_url.split('/')[-1]
    with open(os.path.join('image', filename), 'wb') as f:
        f.write(response.content)

os.makedirs('image', exist_ok=True)  # make sure the target folder exists

threads = []
for img_url in img_url_list:
    t = threading.Thread(target=download_img, args=(img_url,))
    threads.append(t)
    t.start()

for t in threads:
    t.join()
Suppose we want to download a few images, say https://www.example.com/image1.jpg, https://www.example.com/image2.jpg, and https://www.example.com/image3.jpg. We can proceed as follows:
import os
import requests
import threading

img_url_list = ['https://www.example.com/image1.jpg', 'https://www.example.com/image2.jpg', 'https://www.example.com/image3.jpg']

def download_img(img_url):
    response = requests.get(img_url)
    # The last path segment already ends in .jpg; appending ".jpg" again
    # would produce names like image1.jpg.jpg.
    filename = img_url.split('/')[-1]
    with open(os.path.join('image', filename), 'wb') as f:
        f.write(response.content)

os.makedirs('image', exist_ok=True)  # make sure the target folder exists

threads = []
for img_url in img_url_list:
    t = threading.Thread(target=download_img, args=(img_url,))
    threads.append(t)
    t.start()

for t in threads:
    t.join()
After running, the three images have been saved into the image folder. The downloads also run concurrently, so the total download time is again roughly that of the slowest single file.
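Spawning one thread per URL does not scale to hundreds of images. A common refinement (not part of the walkthrough above) is `concurrent.futures.ThreadPoolExecutor`, which caps how many workers run at once. The sketch below stays runnable without network access by only deriving the local filename; in a real crawler the worker would call `requests.get` and write the file as in the example above:

```python
from concurrent.futures import ThreadPoolExecutor

def local_name(img_url):
    # Stand-in for the real download: in practice this would fetch the URL
    # and write the bytes to disk; here we only compute the target filename.
    return img_url.rsplit('/', 1)[-1]

img_url_list = ['https://www.example.com/image{}.jpg'.format(i) for i in range(1, 6)]

# At most 3 downloads run concurrently, however long the URL list gets;
# pool.map returns results in the same order as the input list.
with ThreadPoolExecutor(max_workers=3) as pool:
    names = list(pool.map(local_name, img_url_list))

print(names)
```

The `with` block also takes care of joining: it waits for all submitted tasks to finish before the program continues.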