OK, so I am really new to the XPath queries used with HtmlAgilityPack.
Let's consider this page: http://health.yahoo.net/articles/healthcare/what-your-favorite-flavor-says-about-you. What I want is to extract just the page content and nothing else.
To do that, I first remove the script and style tags:
Document = new HtmlDocument();
Document.LoadHtml(page);
TempString = new StringBuilder();

// Remove all <style> and <script> nodes before extracting text.
foreach (HtmlNode style in Document.DocumentNode.Descendants("style").ToArray())
{
    style.Remove();
}
foreach (HtmlNode script in Document.DocumentNode.Descendants("script").ToArray())
{
    script.Remove();
}
After that, I am trying to use //text() to get all the text nodes:
foreach (HtmlTextNode node in Document.DocumentNode.SelectNodes("//text()"))
{
    TempString.AppendLine(node.InnerText);
}
However, not only am I not getting just the text, I am also getting numerous \r\n characters.
I would appreciate a little guidance in this regard.
If you consider that script and style nodes only have text nodes for children, you can use this XPath expression to get the text nodes that are not in script or style tags, so that you don't need to remove those nodes beforehand:
//*[not(self::script or self::style)]/text()
You can further exclude text nodes that are only whitespace using XPath's normalize-space():
//*[not(self::script or self::style)]/text()[not(normalize-space(.)="")]
Or, more briefly:
//*[not(self::script or self::style)]/text()[normalize-space()]
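These are plain XPath 1.0 expressions, so they can be sanity-checked outside HtmlAgilityPack. Here is a minimal sketch using Python's lxml as a stand-in engine (the tiny HTML page is made up for illustration; the expressions themselves are the ones above):

```python
# Quick engine-agnostic check of the two XPath expressions, using lxml
# instead of HtmlAgilityPack (same XPath 1.0 syntax).
from lxml import html

page = """
<html><head>
<style>body { color: red; }</style>
<script>var x = 1;</script>
</head><body>
<p>First paragraph.</p>
<div>   </div>
<p>Second <b>bold</b> text.</p>
</body></html>
"""

tree = html.fromstring(page)

# Text nodes outside <script>/<style>; whitespace-only nodes still included.
all_text = tree.xpath("//*[not(self::script or self::style)]/text()")

# Same, but whitespace-only text nodes filtered out by normalize-space().
non_blank = tree.xpath(
    "//*[not(self::script or self::style)]/text()[normalize-space()]"
)

print(non_blank)
```

The script body ("var x = 1;") and the style rules never appear in either result, and the whitespace-only node inside the div is dropped from the second one.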
But you will still get text nodes that may have leading or trailing whitespace. This can be handled in your application, as @aL3891 suggests.
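That post-processing step amounts to trimming each extracted text node and dropping the ones that end up empty (such as bare \r\n runs). A minimal sketch, again in Python with lxml as a stand-in and a made-up fragment:

```python
# Trim each extracted text node; discard anything left empty,
# e.g. text nodes that were only line-break characters.
from lxml import html

page = "<div><p>  hello  </p><p>\r\n</p><p>world</p></div>"
tree = html.fromstring(page)

lines = [t.strip() for t in tree.xpath("//text()")]
lines = [t for t in lines if t]  # drop now-empty strings
print(lines)
```

In the original C# loop, the equivalent would be calling Trim() on node.InnerText and only appending the result when it is non-empty.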