I was wondering how to share some memory between different program modules. Let's say I have a main application (exe) and some modules (dll), all linked against the same static library. This static library has a manager that provides various services. What I would like to achieve is to have this manager shared between all application modules, and to do this transparently during library initialization. Between processes I could use shared memory, but I want this shared within the current process only. Can you think of a cross-platform way to do this? Possibly using the Boost libraries, if they provide facilities for it.
The only solution I can think of right now is to use a shared library of the respective OS, which all other modules link against at runtime, and to keep the manager there.
To clarify what I actually need:
I think you're going to need assistance from a shared library to do this in any portable fashion. It doesn't necessarily need to know anything about the objects being shared between modules, it just needs to provide some globally-accessible mapping from a key (probably a string) to a pointer.
However, if you're willing to call OS APIs, this is feasible, and I think you may only need two implementations of the OS-specific part (one for Windows DLLs and GetProcAddress, one for OSes which use dlopen).
As each module loads, it walks the list of previously loaded modules looking for any that export a specially named function. If it finds one (any one, it doesn't matter which, because the invariant is that all fully loaded modules are aware of the common object), it gets the address of the common object from the previously loaded module and increments its reference count. If it finds none, it allocates new data and initializes the reference count. During module unload, it decrements the reference count and frees the common object when the count reaches zero.
Of course it's necessary to use the OS allocator for the common object, because, although unlikely, it might be deallocated from a different library than the one which first allocated it. This also implies that the common object cannot contain virtual functions or any other sort of pointer into the segments of the different modules. All its resources must be dynamically allocated using the OS process-wide allocator. This is probably less of a burden on systems where libc++ is a shared library, but you said you're statically linking the CRT.
Functions needed in Win32 would include EnumProcessModules, GetProcAddress, HeapAlloc, HeapFree, GetProcessHeap, and GetCurrentProcess.
All things considered, I think I would stick to putting the common object in its own shared library, which leverages the loader's data structures to find it; otherwise you're reinventing the loader. This will work even when the CRT is statically linked into several modules, but I think you're setting yourself up for ODR violations. Be really particular about keeping the common data POD.