
[RFC] Skip bundle extraction when reloading bundle from cache.

Open PengZheng opened this issue 3 years ago • 14 comments

This PR is requesting for comments, and NOT for immediate merge.

The intention is to remove bundle zip files (or truncate them to size 0) after they have been installed into the cache. This could be very useful for highly storage-constrained embedded devices. In my day job, I keep the zip file but truncate it to size 0, so that when bundles are configured via config.properties, relative path resolution against CELIX_BUNDLES_PATH, which uses 'access' to test for existence, works as before.

Will this change conflict with OSGi specs or any existing use cases?

Next I will dive into the OSGi modular layer, and try to solve some remaining issues, e.g. #130 #301. Storage optimization and startup speed optimization are both on my list. Guidance will be highly appreciated!

PengZheng avatar Sep 01 '22 10:09 PengZheng

Interesting PR...

In the background I was also looking into how the bundle cache works in Celix and whether this should and can be improved.

The Celix bundle cache exists because we wanted to work with zip files like Java (a jar is a zip file, but with the MANIFEST file always as the first entry). With zip files you can store .so libraries and resource files in a single deployable unit.

If I read the OSGi spec correctly, Celix is currently not following it. The spec talks about bundle resources: https://docs.osgi.org/specification/osgi.core/8.0.0/framework.lifecycle.html#i3258563 And it talks about persistent storage: https://docs.osgi.org/specification/osgi.core/8.0.0/framework.lifecycle.html#i1236436

The current Celix bundle cache solution provides both. It has grown this way, and the primary reason is that you need to extract the bundle zip files before you can use the resources (especially the .so files; dlopen does not work inside a zip file). If I could start over, I would have handled bundles as (read-only) dirs instead of zip files.

I think the way forward is to first split the cache and the storage. If I am correct, Celix only uses/needs the storage in the Celix::http_admin bundle (to set up soft links to other bundle resources, because otherwise civetweb cannot serve resources from multiple bundles). For example, in the future celix_bundle_getEntry still returns the bundle cache entry, but a new function (celix_bundleContext_getDataFile(const char*)) returns the bundle storage (if enabled).

But I also think that we can refactor the bundle cache so that it is not managed per framework instance, but in a global Celix cache dir. Maybe something like ~/.cache/celix/bundles or ~/.celix/cache/bundles/<bundle-id>; maybe the ID can be a hash of the zip file. Note that this also needs some (file-based?) locking mechanism, and I do not have experience with this yet. This should really save disk space and improve startup times.

Another option would be to also support bundles as dirs and let the celix_bundle_getEntry function directly return the bundle dir (ideally as read-only).

WDYT?

pnoltes avatar Sep 04 '22 17:09 pnoltes

My gut feeling is that we should establish a natural transformation between Celix's modeling and the OSGi specs, one that addresses intrinsic language differences (like the Java class loader) as smoothly as possible.

That being said, the system-wide global Celix cache you suggested seems to me the right way to go. And indeed, file locking is needed to share the cache between multiple Celix framework instances. If we want to build cache management into the Celix framework instead of delegating it to a separate command like dpkg, then we have to deal with the annoying interaction between POSIX advisory locks and multi-threading: https://github.com/sqlite/sqlite/blob/e04c9f4b33521a99388ce27eb46a0947fda44a26/src/os_unix.c#L1027
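For concreteness, here is a minimal sketch of what file-based locking of the global cache could look like, using flock (all names and paths here are made up). Unlike POSIX fcntl locks, flock locks belong to the open file description, which sidesteps the multi-threading pitfall described in that SQLite comment:

// Sketch only: serialize access to the shared global bundle cache between
// framework instances (possibly in different processes) with flock(2).
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

static int celix_globalCache_lock(const char* cacheDir) {
    char lockPath[512];
    snprintf(lockPath, sizeof(lockPath), "%s/.lock", cacheDir);
    int fd = open(lockPath, O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        return -1;
    }
    // Block until no other holder remains. The lock is tied to this open file
    // description, so other fds opened on the same file elsewhere in the
    // process do not silently drop it (the classic fcntl problem).
    if (flock(fd, LOCK_EX) != 0) {
        close(fd);
        return -1;
    }
    return fd; // keep open while working in the cache; release with the helper below
}

static void celix_globalCache_unlock(int fd) {
    flock(fd, LOCK_UN);
    close(fd);
}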

As for the structure of the cache, I guess we can steal some ideas from conan's design? Something like ~/.celix/bundles/<bundle_name>/<bundle_version>/<hash> would permit multiple versions of the same bundle to live happily together. I suggest we keep a per-framework private cache to do some minimal bookkeeping, like which versions of which bundles are installed for the current framework instance. Of course, any unzipped bundle exists solely in the global Celix cache, and the private cache merely references into it for the real bundle resources. Maybe the private cache can also serve as private storage, which is both readable and writable, in contrast to the read-only global cache? Before we refine Celix's modeling to align it with the OSGi specs, I cannot answer this question.

With such a modeling in place, we can develop a very useful notion of a general APP for embedded Linux. Such an APP can run in one of two modes:

  • Standalone Mode: the bundles implementing the business logic and all their dependencies are contained in a Celix container, running in a separate Docker container or any lightweight homemade sandbox.
  • Plugin Mode: the app bundles and all their dependencies run together with other bundles in a monolithic process.

Using standalone mode, we can constrain each app's resource usage, while plugin mode will be valuable for highly resource-constrained environments. The user can switch between the two modes easily. Supporting multiple versions of the same bundle is essential for this usage, since APPs have very different release schedules and thus are likely to depend on different versions of the same bundle. The global cache guarantees that for any specific version of any specific bundle, there is only one unzipped copy system-wide. That means that although each app may carry all its dependencies, there will be no duplication system-wide.

Now, I really should find a place to sit down, and have a good reading of the specs before any prototyping attempt.

PengZheng avatar Sep 05 '22 07:09 PengZheng

As for the structure of the cache, I guess we can steal some ideas from conan's design? Something like ~/.celix/bundles/<bundle_name>/<bundle_version>/<hash> would permit multiple versions of the same bundle to live happily together.

I like the cache dir structure, although this does mean we need the bundle (symbolic) name, its version, and the hash of the zip file to store it correctly in the cache. Currently the bundle symbolic name and version can only be read from the zip, so some thought on how to solve this nicely (and without too much overhead) is needed.

pnoltes avatar Sep 06 '22 14:09 pnoltes

Supporting multiple versions of the same bundle is essential for this usage, since APPs have very different release schedules and thus are likely to depend on different versions of the same bundle.

Yes, and if I am correct, OSGi supports the same bundle in different versions within the same OSGi framework. For Celix this is currently not supported.

The challenge I see here is the same as with importing/exporting libraries from bundles: if there are libraries which are different but have the same SONAME header, a call to dlopen will reuse a (transitive) library with the same SONAME instead of loading a new lib. This is even true if RTLD_LOCAL is used. In practice this means that every bundle activator library (and ideally every private bundle library) should have a unique SONAME, and although in theory this is doable ("just configure the SONAME"), I think that in many cases this adds too much complexity.

I also did some experiments with dlmopen, but because this starts with a completely clean symbol space (so no symbols from the executable), IMO this is also not a real solution.

pnoltes avatar Sep 06 '22 14:09 pnoltes

In practice this means that every bundle activator library (and ideally every private bundle library) should have a unique SONAME, and although in theory this is doable ("just configure the SONAME"), I think that in many cases this adds too much complexity.

Bundle activator libraries can have the same SONAME, provided that we install them into different locations of the global cache and provide the full path to dlopen. man 3 dlopen says:

If filename contains a slash ("/"), then it is interpreted as a (relative or absolute) pathname. Otherwise, the dynamic linker searches for the object as follows (LD_LIBRARY_PATH thing)

I worked out a minimal example and tested it on my Ubuntu machine:

   # CMakeLists.txt
   cmake_minimum_required(VERSION 3.23)
   project(hello_solib C)
   
   set(CMAKE_C_STANDARD 99)
   
   
   add_library(hello_impl1 SHARED hello1.c)
   set_target_properties(hello_impl1 PROPERTIES LIBRARY_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/impl1)
   set_target_properties(hello_impl1 PROPERTIES OUTPUT_NAME hello)
   add_library(hello_impl2 SHARED hello2.c)
   set_target_properties(hello_impl2 PROPERTIES LIBRARY_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/impl2)
   set_target_properties(hello_impl2 PROPERTIES OUTPUT_NAME hello)
   
   add_executable(hello_solib main.c)
   target_compile_definitions(hello_solib PRIVATE HELLO1_LOC="$<TARGET_FILE:hello_impl1>")
   target_compile_definitions(hello_solib PRIVATE HELLO2_LOC="$<TARGET_FILE:hello_impl2>")
   target_link_libraries(hello_solib PRIVATE dl)
   
//main.c
#include "hello.h"
#include <assert.h>
#include <dlfcn.h>
#include <stdio.h>

int main() {
    int (*funcp1)(void);
    int (*funcp2)(void);
    void *handle1;
    void *handle2;
    printf("impl1=%s\n", HELLO1_LOC);
    handle1 = dlopen(HELLO1_LOC, RTLD_LAZY);
    assert(handle1 != NULL);
    printf("impl2=%s\n", HELLO2_LOC);
    handle2 = dlopen(HELLO2_LOC, RTLD_LAZY);
    assert(handle2 != NULL);
    *(void **)(&funcp2) = dlsym(handle2, "hello");
    funcp2();
    *(void **)(&funcp1) = dlsym(handle1, "hello");
    funcp1();
    dlclose(handle2);
    dlclose(handle1);
    return 0;
}
//hello.h
#ifndef HELLO_SOLIB_HELLO_H
#define HELLO_SOLIB_HELLO_H
#ifdef __cplusplus
extern "C" {
#endif

int hello(void);

#ifdef __cplusplus
}
#endif
#endif //HELLO_SOLIB_HELLO_H

//hello1.c
#include "hello.h"
#include <stdio.h>

int hello(void) {
    printf("hello1\n");
    return 0;
}
//hello2.c
#include "hello.h"
#include <stdio.h>

int hello(void) {
    printf("hello2\n");
    return 0;
}

Running the binary produces the following console output:

/home/peng/Downloads/hello_solib/cmake-build-debug/hello_solib
impl1=/home/peng/Downloads/hello_solib/cmake-build-debug/impl1/libhello.so
impl2=/home/peng/Downloads/hello_solib/cmake-build-debug/impl2/libhello.so
hello2
hello1

Process finished with exit code 0

The challenge I see here is the same as with importing/exporting libraries from bundles: if there are libraries which are different but have the same SONAME header, a call to dlopen will reuse a (transitive) library with the same SONAME instead of loading a new lib. This is even true if RTLD_LOCAL is used.

At first, I thought the importing/exporting problem touched an intrinsic difference between Java and C/C++, and thus was impossible to solve. It turns out that I was wrong.

Instead of dlopen/dlmopen, we should really manipulate ld.so. According to man ld.so:

When resolving shared object dependencies, the dynamic linker first inspects each dependency string to see if it contains a slash (this can occur if a shared object pathname containing slashes was specified at link time).

IIRC, a shared object's dependencies are encoded in the NEEDED entries of the ELF dynamic section:

peng@hackerlife:~/Downloads/hello_solib/cmake-build-debug/impl1$ readelf -d libhello.so

Dynamic section at offset 0x2e10 contains 25 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
 0x000000000000000e (SONAME)             Library soname: [libhello.so]

If we could modify the NEEDED entries of installed shared objects and use relative/absolute paths instead, ld.so would be guided to load whatever we want. And it turns out that we can do that!

https://github.com/NixOS/patchelf

$ patchelf --remove-needed libhello.so.1 hello
$ patchelf --add-needed ./libhello.so.1 hello

What we need to do is implement patchelf-like functionality inside the Celix framework. Then IMPORT/EXPORT in the manifest becomes the definitive source of shared object interdependence. The framework does the hard work of wiring shared objects together at the appropriate stage (e.g. when a bundle gets resolved?).

If we do implement this, I bet every C/C++ programmer will be shocked.

Currently SONAME + a strict semantic versioning scheme should work properly in most corporate working environments, though it may be difficult to enforce in the open source world. Besides, conan makes integration testing against various versions of dependencies relatively easy. I suggest we finish the cache work first.

WDYT?

PengZheng avatar Sep 07 '22 14:09 PengZheng

Currently the bundle symbolic name and version can only be read from the zip, so some thought on how to solve this nicely (and without too much overhead) is needed.

Suppose we have a file-lock-protected global cache at hand. Then we can first unzip the bundle into ~/.celix/staging temporarily, and then mv it to its final destination. Under the protection of our locking scheme, there is only one concurrent user of the global cache, and thus of the staging sub-directory. Given that 'mv' is fairly cheap within the same filesystem, the situation is nearly ideal.
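A sketch of that install flow, assuming the staging directory and the final destination are on the same filesystem so that rename(2) is a cheap atomic move (celix_unzipTo and celix_readManifest are hypothetical helpers, not existing API):

// Sketch: unzip into a staging dir, read the manifest to learn the symbolic
// name and version, then move the whole tree to its final cache location.
#include <stdio.h>
#include <unistd.h>

// Hypothetical helpers for this sketch.
extern int celix_unzipTo(const char* zipPath, const char* destDir);
extern int celix_readManifest(const char* bundleDir, char* name, char* version, size_t bufLen);

static int celix_globalCache_installBundle(const char* zipPath, const char* celixHome) {
    char staging[512];
    char finalDir[512];
    char name[128];
    char version[128];
    snprintf(staging, sizeof(staging), "%s/staging/install-%d", celixHome, (int)getpid());
    if (celix_unzipTo(zipPath, staging) != 0) {
        return -1;
    }
    if (celix_readManifest(staging, name, version, sizeof(name)) != 0) {
        return -1;
    }
    // Parent directories are assumed to exist (or be created) beforehand.
    snprintf(finalDir, sizeof(finalDir), "%s/bundles/%s/%s", celixHome, name, version);
    // Cheap and atomic as long as staging/ and bundles/ are on the same filesystem.
    return rename(staging, finalDir);
}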

PengZheng avatar Sep 07 '22 14:09 PengZheng

Bundle activator libraries can have the same SONAME, provided that we install them into different locations of the global cache and provide the full path to dlopen. [...] I worked out a minimal example and tested it on my Ubuntu machine.

Nice, this is new for me (even though it is documented in man dlopen ..). I am also going to try this on a Mac and then think about how we can use this.

Instead of dlopen/dlmopen, we should really manipulate ld.so. [...] What we need to do is implement patchelf-like functionality inside the Celix framework. Then IMPORT/EXPORT in the manifest becomes the definitive source of shared object interdependence. The framework does the hard work of wiring shared objects together at the appropriate stage (e.g. when a bundle gets resolved?).

Nice, yes this can work. Around 2011-2013 we did some experiments with updating the NEEDED and SONAME entries, and in those experiments it did work. But updating a NEEDED/SONAME entry is difficult if the string size needs to increase, so for the experiment we only updated NEEDED to something small (libfoo->ab1, libbar->ab2, etc). If I remember correctly we did not follow up on this approach, because we had bigger things to tackle. But I agree that if we can tackle this, we are creating something "magical" for the C/C++ world :).

This should indeed be done during the resolving of a bundle, and ideally be done by a resolver. There is a resolver in Celix, but it is outdated and not something I have touched (https://github.com/apache/celix/blame/master/libs/framework/src/resolver.c). But if we refactor this, it should be able to work out which libs are EXPORTED and IMPORTED and define a wire (e.g. how the NEEDED (and maybe SONAME) entries should be updated) between the exported and imported libs.

Also note that patchelf is GPL and, if I am correct, not usable for an ASF project.

pnoltes avatar Sep 08 '22 19:09 pnoltes

Currently the bundle symbolic name and version can only be read from the zip, so some thought on how to solve this nicely (and without too much overhead) is needed.

Suppose we have a file-lock-protected global cache at hand. Then we can first unzip the bundle into ~/.celix/staging temporarily, and then mv it to its final destination. Under the protection of our locking scheme, there is only one concurrent user of the global cache, and thus of the staging sub-directory. Given that 'mv' is fairly cheap within the same filesystem, the situation is nearly ideal.

Yes, for unpacking and then moving, a staging area will work. I think the challenge is more in how to decide that a bundle does not need to be extracted because it is already in the cache. This can be done with a hash and a hash index, but maybe a hash is already too expensive.

Another option could be to only extract the MANIFEST.MF file from a bundle and use that to see if the rest needs to be extracted. This can work efficiently (also memory-wise) if the bundles are created with the java jar command, because a jar is technically a zip file but ensures that the MANIFEST is the first entry in the zip file.
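A sketch of reading just the manifest, assuming libzip is used to access the archive (that is only an assumption for this sketch):

// Sketch: read only META-INF/MANIFEST.MF from the bundle zip to decide whether
// full extraction is needed. With jar-created archives the manifest is the
// first entry, so a streaming reader could even stop after entry 0.
#include <stdlib.h>
#include <zip.h>

static char* celix_readManifestFromZip(const char* zipPath) {
    int err = 0;
    zip_t* za = zip_open(zipPath, ZIP_RDONLY, &err);
    if (za == NULL) {
        return NULL;
    }
    char* manifest = NULL;
    zip_stat_t st;
    zip_int64_t idx = zip_name_locate(za, "META-INF/MANIFEST.MF", 0);
    if (idx >= 0 && zip_stat_index(za, (zip_uint64_t)idx, 0, &st) == 0) {
        zip_file_t* zf = zip_fopen_index(za, (zip_uint64_t)idx, 0);
        if (zf != NULL) {
            manifest = calloc(1, st.size + 1);
            if (manifest != NULL && zip_fread(zf, manifest, st.size) != (zip_int64_t)st.size) {
                free(manifest);
                manifest = NULL;
            }
            zip_fclose(zf);
        }
    }
    zip_close(za);
    // Caller frees; parse Bundle-SymbolicName/Bundle-Version to check the cache.
    return manifest;
}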

pnoltes avatar Sep 08 '22 19:09 pnoltes

Nice, yes this can work. Around 2011-2013 we did some experiments with updating the NEEDED and SONAME entries, and in those experiments it did work. But updating a NEEDED/SONAME entry is difficult if the string size needs to increase, so for the experiment we only updated NEEDED to something small (libfoo->ab1, libbar->ab2, etc). If I remember correctly we did not follow up on this approach, because we had bigger things to tackle. But I agree that if we can tackle this, we are creating something "magical" for the C/C++ world :).

Unlike chrpath, patchelf has no such limitation. For example:


Dynamic section at offset 0x2e10 contains 25 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
 0x000000000000000e (SONAME)             Library soname: [libhello.so]

peng@hackerlife:~/Downloads/hello_solib/cmake-build-debug/impl1$ patchelf --set-soname libhello.so.1.0.0 ./libhello.so
peng@hackerlife:~/Downloads/hello_solib/cmake-build-debug/impl1$ readelf -d libhello.so

Dynamic section at offset 0x2e10 contains 25 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
 0x000000000000000e (SONAME)             Library soname: [libhello.so.1.0.0]

I also did some experiments with dlmopen, but because this starts with a completely clean symbol space (so no symbols from the executable), IMO this is also not a real solution.

If accessing symbols from the executable is a requirement, there is a conflict here: any shared object (e.g. libssl) imported from a bundle cannot be used directly by the main executable or any of the executable's link-time dependencies (via -lssl), otherwise it will pollute the global link-map list (aka namespace) and will interfere with import/export. Unlike macOS, on Linux we don't have a two-level namespace, and thus have no way to reference a specific symbol from a specific shared object. Given that such an imported/exported library can only be used within the Celix framework, why not just export its functionality as a service in the first place?

I also checked nOSGi's implementation, which is described in the paper "nOSGi: A POSIX-Compliant Native OSGi Framework for C/C++". They use libelf to modify shared objects' DT_NEEDED entries in the bundle cache. This is unacceptable in our approach: several framework instances would mean several binary copies of the same shared object. The global cache should be read-only, and the per-framework cache should reference the unique shared objects in the global cache.

I think there is a way of implementing the perfect import/export mechanism without the above restrictions: use the dark Linux magic of man 7 rtld-audit:

   la_symbind*()

       The return value of la_symbind32() and la_symbind64() is the address to which control should be passed after the function returns. If the auditing library is simply monitoring symbol bindings, then it should return sym->st_value. A different value may be returned if the library wishes to direct control to an alternate location.

Theoretically, we can get complete control of dynamic symbol resolution and bind any symbol reference to wherever we want. The biggest advantage is that we don't have to use any external tools to modify binaries; each framework instance (assuming one instance per process) has complete control at runtime. But this feature is not available on macOS.
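A minimal sketch of such an audit library (built as its own shared object and activated by setting LD_AUDIT before the process starts); the actual redirection policy is left out:

// Sketch of an LD_AUDIT library: the dynamic linker calls these hooks, which
// lets us observe (and potentially redirect) library searches and symbol binds.
#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

unsigned int la_version(unsigned int version) {
    (void)version;
    return LAV_CURRENT; // mandatory: tells ld.so which audit ABI we speak
}

// Called for every library search; returning a different string would let the
// framework steer ld.so to a specific file in the (read-only) bundle store.
char* la_objsearch(const char* name, uintptr_t* cookie, unsigned int flag) {
    (void)cookie; (void)flag;
    fprintf(stderr, "audit: searching %s\n", name);
    return (char*)name; // no redirection in this sketch
}

unsigned int la_objopen(struct link_map* map, Lmid_t lmid, uintptr_t* cookie) {
    (void)lmid; (void)cookie;
    fprintf(stderr, "audit: loaded %s\n", map->l_name);
    // Ask ld.so to report symbol bindings to and from this object.
    return LA_FLG_BINDTO | LA_FLG_BINDFROM;
}

uintptr_t la_symbind64(Elf64_Sym* sym, unsigned int ndx, uintptr_t* refcook,
                       uintptr_t* defcook, unsigned int* flags, const char* symname) {
    (void)ndx; (void)refcook; (void)defcook; (void)flags;
    // Returning a different address here would redirect the binding.
    fprintf(stderr, "audit: bind %s\n", symname);
    return sym->st_value;
}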

PengZheng avatar Sep 09 '22 11:09 PengZheng

I think the challenge is more in how to decide that a bundle does not need to be extracted because it is already in the cache.

A .zip is just like a .deb package: once installed into our global cache, the zip file is of no use and can be safely removed from the system. Each framework instance has its own private bundle cache, which does nothing but bookkeeping, e.g. which bundles are installed for this framework instance. When celix_bundleContext_installBundle is invoked, it just installs in the private cache a reference to the real bundle directory in the global cache. Before a bundle can be installed into a framework instance's private cache, it must first be installed into the global cache via a separate tool like dpkg or a new Celix API celix_system_installBundle.

Maybe we should call this global cache the System Bundle Store instead. Each working bundle exists uniquely in this store in unzipped format.
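As a rough illustration of the referencing idea (paths and function names are invented for this sketch), the private cache entry could be as simple as a symlink into the store:

// Sketch: install a bundle into a framework's private cache by referencing the
// single unzipped copy in the read-only System Bundle Store.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int celix_privateCache_referenceBundle(const char* privateCacheDir, long bndId,
                                              const char* name, const char* version,
                                              const char* hash) {
    char storeDir[512];
    char linkPath[512];
    // Single extracted copy, shared by all framework instances (see the
    // ~/.celix/bundles/<name>/<version>/<hash> layout discussed above).
    snprintf(storeDir, sizeof(storeDir), "%s/.celix/bundles/%s/%s/%s",
             getenv("HOME"), name, version, hash);
    // Per-framework bookkeeping: a reference into the store, not a copy.
    snprintf(linkPath, sizeof(linkPath), "%s/bundle%ld", privateCacheDir, bndId);
    return symlink(storeDir, linkPath);
}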

PengZheng avatar Sep 09 '22 13:09 PengZheng

Theoretically, we can get complete control of dynamic symbol resolution, bind any symbol reference to wherever we want. The biggest advantage is that we don't have to use any external tools to modify the binary, each framework instance (assuming one instance per process) has complete control at runtime. But this feature is not available on MacOS.

If this is feasible for Linux, it is valuable enough to look into. If we can still provide a Celix version for macOS that does not support import/export of symbols, IMO that is acceptable.

pnoltes avatar Sep 22 '22 14:09 pnoltes

I think the challenge is more in how to decide that a bundle does not need to be extracted because it is already in the cache.

A .zip is just like a .deb package: once installed into our global cache, the zip file is of no use and can be safely removed from the system. Each framework instance has its own private bundle cache, which does nothing but bookkeeping, e.g. which bundles are installed for this framework instance. When celix_bundleContext_installBundle is invoked, it just installs in the private cache a reference to the real bundle directory in the global cache. Before a bundle can be installed into a framework instance's private cache, it must first be installed into the global cache via a separate tool like dpkg or a new Celix API celix_system_installBundle.

Maybe we should call this global cache the System Bundle Store instead. Each working bundle exists uniquely in this store in unzipped format.

Ah ok, now I understand the dpkg idea. Although technically good, I do worry whether we then also need to support some repository system, which can be used to download/install bundle zips. Because in the current setup bundles are installed as zips, and if they also need to be installed in the "System Bundle Store" - per user - this by default means some disk space overhead.

Maybe it is then also an idea to let the install_celix_bundle CMake command install unzipped bundles in an already "System Bundle Store"-compliant directory structure, and to allow a - per-user and system-wide - Celix config file to configure 0 or more "System Bundle Store" locations (something like podman's registries.conf). IMO this should be feasible, especially because the bundle store should be read-only.
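For illustration only, a hypothetical config.properties-style entry (the property name and format are invented here) listing zero or more read-only store locations:

# Hypothetical, not an existing Celix property: ordered list of read-only
# System Bundle Store locations, searched front to back.
CELIX_BUNDLE_STORES=/usr/share/celix/bundle-store:/home/user/.celix/bundle-store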

pnoltes avatar Sep 22 '22 14:09 pnoltes

Sorry for the late reaction, I was a bit flooded with work for my "day-time job".

pnoltes avatar Sep 22 '22 14:09 pnoltes

Concerning rtld-audit: this is new for me, but if I read - and hopefully understand - the functionality of la_objopen and la_symbind correctly, this indeed gives the flexibility to make export/import library functionality work while keeping a bundle store read-only.

The biggest downside I am seeing is that the LD_AUDIT environment variable must be set for this to function correctly and - I assume - it must be set before starting the executable.

pnoltes avatar Sep 22 '22 17:09 pnoltes

The biggest downside I am seeing is that the LD_AUDIT environment variable must be set for this to function correctly and - I assume - it must be set before starting the executable.

About the downside: I found the --audit AUDITLIB linker option interesting, which seems like a local switch. I need more experiments and a full understanding of the linker, the loader, and the Java class loader.

PengZheng avatar Dec 14 '22 09:12 PengZheng

This has already been implemented in #476.

PengZheng avatar Apr 15 '23 12:04 PengZheng