Many sites defend against scraping by checking the request's Referer, Cookie, and User-Agent headers; it is a constant cat-and-mouse game, and for every such measure there is a countermeasure.
Recently a crawler application I maintain stopped returning data. The logs showed that the target site had added an anti-scraping check, and after some digging I found it was validating the Referer header. The fix is below.
In Java you can fetch a site's HTML content with HttpURLConnection, and HttpURLConnection lets you set an arbitrary Referer on the request. Forging the Referer this way is enough to get past this kind of anti-scraping check:
HttpURLConnection connection;
URL url = new URL(urlStr);
if (useProxy) {
    // Route the request through a proxy from the pool.
    Proxy proxy = ProxyServerUtil.getProxy();
    connection = (HttpURLConnection) url.openConnection(proxy);
} else {
    connection = (HttpURLConnection) url.openConnection();
}
// We are fetching a page's HTML, so GET is the appropriate method.
connection.setRequestMethod("GET");
// Forge the Referer header to pass the site's referer check.
connection.setRequestProperty("Referer", "http://xxxx.xxx.com");
// Rotate the User-Agent as well, in case the site also checks it.
connection.setRequestProperty("User-Agent", ProxyServerUtil.getUserAgent());
connection.setConnectTimeout(10000);
connection.setReadTimeout(10000);
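To see the forged header in action without hitting a real site, the snippet above can be sketched as a self-contained program. It starts a local JDK HttpServer as a stand-in for the target site, which echoes back whatever Referer it received; the class name RefererDemo, the helper fetchWithReferer, and the referer value are illustrative assumptions, and the proxy branch is omitted for brevity.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

public class RefererDemo {

    // Fetch a URL with a forged Referer header and return the response body.
    static String fetchWithReferer(String urlStr, String referer) throws Exception {
        URL url = new URL(urlStr);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        // The forged Referer the anti-scraping check expects (hypothetical value).
        connection.setRequestProperty("Referer", referer);
        connection.setConnectTimeout(10000);
        connection.setReadTimeout(10000);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining());
        }
    }

    public static void main(String[] args) throws Exception {
        // Local stand-in for the target site: it replies with the Referer it saw.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            String seen = exchange.getRequestHeaders().getFirst("Referer");
            byte[] body = ("referer=" + seen).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            String response = fetchWithReferer(
                    "http://127.0.0.1:" + port + "/", "http://xxxx.xxx.com");
            // The server saw exactly the Referer we forged.
            System.out.println(response);
        } finally {
            server.stop(0);
        }
    }
}
```

If the site also inspects the User-Agent, the same setRequestProperty call works for that header too.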