      How to design proper release of a boost::asio socket or wrapper thereof

      Date: 2023-08-26

                Problem description

                I am making a few attempts at writing my own simple async TCP server using boost::asio, after not having touched it for several years.

                The latest example listing I can find is: http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html

                The problem I have with this example listing is that (I feel) it cheats, and it cheats big, by making the tcp_connection a shared_ptr so that it doesn't have to worry about the lifetime management of each connection. (I think) they do this for brevity, since it is a small tutorial, but that solution is not real world.

                What if you wanted to send a message to each client on a timer, or something similar? A collection of client connections is going to be necessary in any real-world, non-trivial server.

                I am worried about the lifetime management of each connection. I figure the natural thing to do would be to keep some collection of tcp_connection objects, or pointers to them, inside tcp_server, adding to that collection from the OnConnect callback and removing from it in the OnDisconnect callback.

                Note that OnDisconnect would most likely be called from an actual Disconnect method, which in turn would be called from the OnReceive callback or the OnSend callback in the case of an error.
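
                To make that concrete, here is a minimal sketch of the kind of design I mean. tcp_server, tcp_connection, OnConnect and OnDisconnect are hypothetical names from this question, not Boost.Asio APIs, and the ownership scheme is deliberately naive:

                #include <memory>
                #include <set>

                class tcp_connection { /* wraps a boost::asio::ip::tcp::socket */ };

                class tcp_server {
                public:
                    void OnConnect(std::unique_ptr<tcp_connection> c) {
                        connections_.insert(std::move(c));  // track the new client
                    }
                    void OnDisconnect(tcp_connection* c) {
                        // Erasing destroys the tcp_connection immediately --
                        // which is exactly what makes the call stack shown
                        // below so dangerous.
                        for (auto it = connections_.begin(); it != connections_.end(); ++it) {
                            if (it->get() == c) { connections_.erase(it); break; }
                        }
                    }
                private:
                    std::set<std::unique_ptr<tcp_connection>> connections_;
                };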

                Well, herein lies the problem.

                Consider that we'd have a call stack that looked something like this:

                tcp_connection::~tcp_connection
                tcp_server::OnDisconnect
                tcp_connection::OnDisconnect
                tcp_connection::Disconnect
                tcp_connection::OnReceive
                

                This would cause errors as the call stack unwinds, since we would be executing code in an object whose destructor has already been called... I think, right?

                I imagine everyone doing server programming comes across this scenario in some fashion. What is a strategy for handling it?

                I hope the explanation is good enough to follow. If not, let me know and I will create my own source listing, but it will be very large.

                Related

                Memory management in asynchronous C++ code

                IMO not an acceptable answer; it relies on cheating with a shared_ptr outstanding on receive calls and nothing more, and is not real world. What if the server wanted to say "Hi" to all clients every 5 minutes? A collection of some kind is necessary. What if you are calling io_service::run on multiple threads?

                I am also asking on the boost mailing list: http://boost.2283326.n4.nabble.com/How-to-design-proper-release-of-a-boost-asio-socket-or-wrapper-thereof-td4693442.html

                Recommended answer

                While others have answered similarly to the second half of this answer, the most complete answer I could find came from asking the same question on the Boost mailing list:

                http://boost.2283326.n4.nabble.com/How-to-design-proper-release-of-a-boost-asio-socket-or-wrapper-thereof-td4693442.html

                I will summarize here in order to assist those that arrive here from a search in the future.

                There are two options:

                1) Close the socket in order to cancel any outstanding io, then post a callback for the post-disconnection logic on the io_service, and let the server class be called back when the socket has been disconnected. It can then safely release the connection. As long as only one thread had called io_service::run, all other asynchronous operations will already have been resolved when that callback is made. However, if multiple threads had called io_service::run, this is not safe.
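
                A minimal sketch of option 1, assuming a single thread calls io_service::run(); the names Disconnect, OnDisconnectComplete and the members are hypothetical for this example, not Boost.Asio API:

                #include <boost/asio.hpp>

                class tcp_connection;

                class tcp_server {
                public:
                    // Removes the connection from the collection and frees it.
                    void OnDisconnectComplete(tcp_connection* c);
                };

                class tcp_connection {
                public:
                    tcp_connection(boost::asio::io_service& io, tcp_server& s)
                        : io_service_(io), socket_(io), server_(s) {}

                    void Disconnect() {
                        boost::system::error_code ec;
                        socket_.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
                        socket_.close(ec);  // cancels any outstanding async operations
                        // Defer the release: with a single io_service::run thread, the
                        // cancelled handlers have all been invoked (with
                        // operation_aborted) before this posted handler runs.
                        io_service_.post([this] { server_.OnDisconnectComplete(this); });
                    }

                private:
                    boost::asio::io_service& io_service_;
                    boost::asio::ip::tcp::socket socket_;
                    tcp_server& server_;
                };

                With multiple io_service::run threads, another thread may still be mid-handler when the posted callback fires, which is why this is unsafe in that case (serializing the connection's handlers through a boost::asio::io_service::strand is one common way to restore the guarantee).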

                2) As others have pointed out in their answers, using a shared_ptr to manage each connection's lifetime, with outstanding io operations keeping it alive, is viable. We can then keep a collection of weak_ptr to the connections in order to access them when we need to. The latter is the tidbit that had been omitted from the other posts on the topic, which is what confused me.
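
                A minimal sketch of option 2; Send and Broadcast are hypothetical names, and Broadcast also covers the earlier "say Hi to all clients every 5 minutes" case (a timer handler would simply call it):

                #include <boost/asio.hpp>
                #include <memory>
                #include <string>
                #include <vector>

                class tcp_connection : public std::enable_shared_from_this<tcp_connection> {
                public:
                    explicit tcp_connection(boost::asio::io_service& io) : socket_(io) {}

                    void Send(std::string msg) {
                        auto self = shared_from_this();  // outstanding op keeps us alive
                        auto data = std::make_shared<std::string>(std::move(msg));
                        boost::asio::async_write(socket_, boost::asio::buffer(*data),
                            [self, data](const boost::system::error_code&, std::size_t) {
                                // When the last outstanding handler releases `self`,
                                // the connection destroys itself naturally.
                            });
                    }

                    boost::asio::ip::tcp::socket socket_;
                };

                class tcp_server {
                public:
                    // The server holds only weak_ptrs, so it never extends a
                    // connection's lifetime; dead entries are pruned as we go.
                    void Broadcast(const std::string& msg) {
                        for (auto it = connections_.begin(); it != connections_.end(); ) {
                            if (auto c = it->lock()) { c->Send(msg); ++it; }
                            else                     { it = connections_.erase(it); }
                        }
                    }

                    std::vector<std::weak_ptr<tcp_connection>> connections_;
                };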
