I have a DataTable that I want to convert to XML and then zip with DotNetZip, so that the user can download it from an ASP.NET web page. My code is below:
dt.TableName = "Declaration";
MemoryStream stream = new MemoryStream();
dt.WriteXml(stream);
ZipFile zipFile = new ZipFile();
zipFile.AddEntry("Report.xml", "", stream);
Response.ClearContent();
Response.ClearHeaders();
Response.AppendHeader("content-disposition", "attachment; filename=Report.zip");
zipFile.Save(Response.OutputStream);
//Response.Write(zipstream);
zipFile.Dispose();
The XML file inside the zip is empty.
Two things. First, if you keep the code design you have, you need to perform a Seek() on the MemoryStream before writing it into the entry:
dt.TableName = "Declaration";
MemoryStream stream = new MemoryStream();
dt.WriteXml(stream);
stream.Seek(0, SeekOrigin.Begin); // <-- must do this after writing the stream!
using (ZipFile zipFile = new ZipFile())
{
    zipFile.AddEntry("Report.xml", "", stream);
    Response.ClearContent();
    Response.ClearHeaders();
    Response.AppendHeader("content-disposition", "attachment; filename=Report.zip");
    zipFile.Save(Response.OutputStream);
}
Even if you keep this design, I would suggest a using() clause, as I have shown and as used in all the DotNetZip examples, in lieu of calling Dispose(). The using() clause is more reliable in the face of failures.
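For reference, a using() block compiles down to roughly the following try/finally, which is why Dispose() still runs even if Save() throws partway through writing the response:

ZipFile zipFile = new ZipFile();
try
{
    zipFile.AddEntry("Report.xml", "", stream);
    zipFile.Save(Response.OutputStream);
}
finally
{
    if (zipFile != null) zipFile.Dispose();   // runs even when Save() fails
}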
Now you may wonder: why is it necessary to seek in the MemoryStream before calling AddEntry()? The reason is that AddEntry() is designed to support callers who pass a stream where the position is significant. In that case, the caller wants the entry data to be read from the stream starting at its current position, and AddEntry() supports that. Therefore, set the position on the stream before calling AddEntry().
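As an illustration of that behavior (a standalone sketch, not from the original post; the entry name partial.txt and the output file partial.zip are made up for this example), anything before the stream's current position is simply not included in the entry:

using System.IO;
using System.Text;
using Ionic.Zip;

var bytes = Encoding.UTF8.GetBytes("HEADER|payload");
var ms = new MemoryStream(bytes);
ms.Position = 7;                         // skip "HEADER|"; the entry will contain only "payload"

using (var zip = new ZipFile())
{
    zip.AddEntry("partial.txt", "", ms); // same stream overload as above: reads from ms.Position onward
    zip.Save("partial.zip");
}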
But the better option is to modify your code to use the overload of AddEntry() that accepts a WriteDelegate. It was designed specifically for adding datasets into zip files. Your original code writes the DataTable into a MemoryStream, then seeks on the stream and writes the stream's contents into the zip. It is faster and easier to write the data just once, which is what the WriteDelegate allows you to do. The code looks like this:
dt.TableName = "Declaration";
Response.ClearContent();
Response.ClearHeaders();
Response.ContentType = "application/zip";
Response.AppendHeader("content-disposition", "attachment; filename=Report.zip");
using (Ionic.Zip.ZipFile zipFile = new Ionic.Zip.ZipFile())
{
    zipFile.AddEntry("Report.xml", (name, stream) => dt.WriteXml(stream));
    zipFile.Save(Response.OutputStream);
}
This writes the DataTable directly into the compressed stream in the zip file. Very efficient! There is no double-buffering. The anonymous delegate is invoked at the time of ZipFile.Save(), so only one write (plus compression) is performed.
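To see that ordering outside of ASP.NET, here is a minimal, self-contained console sketch (the table columns and the output path Report.zip are made up for illustration). The lambda does nothing at AddEntry() time; it only runs when Save() streams the archive:

using System.Data;
using Ionic.Zip;

class WriteDelegateDemo
{
    static void Main()
    {
        // Build a small DataTable; WriteXml() requires a TableName.
        var dt = new DataTable("Declaration");
        dt.Columns.Add("Id", typeof(int));
        dt.Columns.Add("Name", typeof(string));
        dt.Rows.Add(1, "example");

        using (var zipFile = new ZipFile())
        {
            // Only registers the delegate; nothing is read or compressed yet.
            zipFile.AddEntry("Report.xml", (name, stream) => dt.WriteXml(stream));

            // The delegate runs here, writing the XML straight into the compressed entry,
            // with no intermediate MemoryStream.
            zipFile.Save("Report.zip");
        }
    }
}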