
Shares Jail PROPFIND returns different File IDs for the same item

felix-schwarz opened this issue 1 year ago

Describe the bug

A folder shared from one user to another is identified by different File IDs in the Share Jail, depending on the location targeted by the PROPFIND.

Steps to reproduce

  1. Share folder Universe from user A to user B
  2. As user B, perform a PROPFIND on the Shares Jail root folder.
  3. As user B, perform a PROPFIND on the Universe folder in the Shares Jail.
  4. Compare <oc:id>s returned for the Universe folder item.
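
For reference, the comparison can be scripted; here's a minimal sketch (server URL and user B's credentials are placeholders):

JAIL='a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668'

# PROPFIND on the Shares Jail root (step 2); note the <oc:id> reported for /Universe/
curl -sk -X PROPFIND -H 'Depth: 1' -u userB:password \
  "https://demo.owncloud.com/dav/spaces/$JAIL/" | grep -o '<oc:id>[^<]*</oc:id>'

# PROPFIND on the Universe folder itself (step 3); compare its <oc:id> with the one above
curl -sk -X PROPFIND -H 'Depth: 0' -u userB:password \
  "https://demo.owncloud.com/dav/spaces/$JAIL/Universe/" | grep -o '<oc:id>[^<]*</oc:id>'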

Expected behavior

Only one File ID is returned for an item in the Shares Jail, irrespective of the PROPFIND path and depth used.

Actual behavior

The File ID is different depending on the (Shares Jail) path targeted by the PROPFIND.

Setup

This was reproduced with demo.owncloud.com.

Examples

PROPFIND on / inside the Shares Jail with Depth: 1:

(request path: /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/)

Response:

<?xml version="1.0"?>
<d:multistatus xmlns:s="http://sabredav.org/ns" xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">
  <d:response>
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype>
          <d:collection/>
        </d:resourcetype>
        <d:getlastmodified>Wed, 28 Aug 2024 20:18:11 GMT</d:getlastmodified>
        <d:getetag>"abfb1a5bbbd08eb8f253e37c51fef1f7"</d:getetag>
        <oc:id>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!a0ca6a90-a365-4782-871e-d44447bbc668</oc:id>
        <oc:size>0</oc:size>
        <oc:permissions/>
        <oc:favorite>0</oc:favorite>
        <oc:owner-id>katherine</oc:owner-id>
        <oc:owner-display-name>Katherine Johnson</oc:owner-display-name>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype>
          <d:collection/>
        </d:resourcetype>
        <d:getlastmodified>Wed, 28 Aug 2024 20:18:11 GMT</d:getlastmodified>
        <d:getetag>"abfb1a5bbbd08eb8f253e37c51fef1f7"</d:getetag>
        <oc:id>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!166d1210-cdb9-50ab-9f1e-ecb9ef12a304:4c510ada-c86b-4815-8820-42cdf82c3d51:3ef59385-34ba-4055-b705-f4f05f1388a4</oc:id>
        <oc:size>0</oc:size>
        <oc:permissions>SDNVCK</oc:permissions>
        <oc:favorite>0</oc:favorite>
        <oc:share-types>
          <oc:share-type>0</oc:share-type>
        </oc:share-types>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Theory/</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype>
          <d:collection/>
        </d:resourcetype>
        <d:getlastmodified>Wed, 28 Aug 2024 20:10:22 GMT</d:getlastmodified>
        <d:getetag>"202a52840b4f000e6b7997c6d9e025a9"</d:getetag>
        <oc:id>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!166d1210-cdb9-50ab-9f1e-ecb9ef12a304:4c510ada-c86b-4815-8820-42cdf82c3d51:fdb56a22-9792-41b1-8538-ad66f31c67fb</oc:id>
        <oc:size>2567402</oc:size>
        <oc:permissions>SDNVCK</oc:permissions>
        <oc:favorite>0</oc:favorite>
        <oc:share-types>
          <oc:share-type>0</oc:share-type>
        </oc:share-types>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>

Notice how the response contains the following <oc:id> for /Universe/: a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!166d1210-cdb9-50ab-9f1e-ecb9ef12a304:4c510ada-c86b-4815-8820-42cdf82c3d51:3ef59385-34ba-4055-b705-f4f05f1388a4

PROPFIND on /Universe/ inside the Shares Jail with Depth: 1:

(request path: /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/)

Response:

<?xml version="1.0"?>
<d:multistatus xmlns:s="http://sabredav.org/ns" xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">
  <d:response>
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype>
          <d:collection/>
        </d:resourcetype>
        <d:getlastmodified>Wed, 28 Aug 2024 20:19:07 GMT</d:getlastmodified>
        <d:getetag>"78306138f800b46e89628999cf69d532"</d:getetag>
        <oc:id>166d1210-cdb9-50ab-9f1e-ecb9ef12a304$4c510ada-c86b-4815-8820-42cdf82c3d51!17634536-719e-47c4-bf8b-e8b97188259a</oc:id>
        <oc:size>993045</oc:size>
        <oc:permissions>SDNVCK</oc:permissions>
        <oc:favorite>0</oc:favorite>
        <oc:share-types>
          <oc:share-type>0</oc:share-type>
        </oc:share-types>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/IMG_0004.jpeg</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype/>
        <d:getlastmodified>Wed, 08 Aug 2012 21:29:49 GMT</d:getlastmodified>
        <d:getcontentlength>993045</d:getcontentlength>
        <d:getcontenttype>image/jpeg</d:getcontenttype>
        <d:getetag>"f64ca9ad4b5bbf0a20d091beee2c4473"</d:getetag>
        <oc:id>166d1210-cdb9-50ab-9f1e-ecb9ef12a304$4c510ada-c86b-4815-8820-42cdf82c3d51!d43e8715-6fd7-4736-9790-24f274987de7</oc:id>
        <oc:size>993045</oc:size>
        <oc:permissions>DNVW</oc:permissions>
        <oc:favorite>0</oc:favorite>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>

Notice how the response contains the following <oc:id> for /Universe/: 166d1210-cdb9-50ab-9f1e-ecb9ef12a304$4c510ada-c86b-4815-8820-42cdf82c3d51!17634536-719e-47c4-bf8b-e8b97188259a

Additional context

Different File IDs for the same item cause issues in the iOS app, which uses them to track the item's location and lifecycle:

  • disappearing folder contents: https://github.com/owncloud/enterprise/issues/6842
  • breaks the internal chain of item IDs, so items inside shared folders are erroneously seen as "root" shared items themselves, leading to side effects such as the wrong actions being shown to the user. F.ex. Unshare (which of course fails) is shown, but Delete (which would work and be useful) is not.

felix-schwarz avatar Aug 28 '24 21:08 felix-schwarz

As noticed by @jesmrec, this could be the same issue as #8455 (and, by extension, #8766).

felix-schwarz avatar Aug 29 '24 09:08 felix-schwarz

When listing the share jail root we currently return the resourceid of the mountpoint, which resides in the share jail. As a result, any resource in the share jail has a resource id starting with a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!.

When listing the content of a folder in the share jail we return the resource id of the shared resource and the contained files.

We could try returning the mount point resourceid when listing the mount point, but IIRC there are use cases where clients, maybe web, need to correlate the resourceid from the PROPFIND with resource ids from the sharedWithMe response.

Hm maybe that would also be cleaner ... @kulmann @JammingBen do we do some kind of resource id comparison magic there?

WebDAV currently has no way to show both the mount point resourceid AND the shared resource id. The Graph API makes this transparent: the driveItem for a mount point has an id and a remoteItem representing the shared resource with its own id.

I guess it would be semantically more correct to also return the mount point id for the shared folder when access happens via the share jail.

butonic avatar Sep 02 '24 08:09 butonic

Thanks for the details.

In the end, the only thing that matters to the iOS app (and presumably the Android app, too) is that:

  • every item in the Shares Jail has a consistent <oc:id>, regardless of the path targeted by the PROPFIND
  • no special handling of these File IDs is necessary (be it lookup/coordination with the drive list, or special processing of the ID)

felix-schwarz avatar Sep 03 '24 09:09 felix-schwarz

I agree ... just need to find the prio for this ....

butonic avatar Sep 04 '24 10:09 butonic

Let me take a step back. Why are you even using the share jail? The graph /me/drives endpoint and /me/drive/sharedWithMe both return a list of driveItems that contain the remoteItem with the original driveItem id.

Let me be clear: the share jail is a broken and deprecated way of accessing shares. It forces us to invent ids for resources that can be accessed in two ways:

  1. via the share jail PROPFIND /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe
  2. via the resource itself: PROPFIND /dav/spaces/166d1210-cdb9-50ab-9f1e-ecb9ef12a304$4c510ada-c86b-4815-8820-42cdf82c3d51!17634536-719e-47c4-bf8b-e8b97188259a

The latter not only works for the owner of the space but also for users that have been granted access by a share.
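
A hedged sketch of that second access path (host and credentials are placeholders):

# PROPFIND the shared resource directly by its full id (storageid$spaceid!nodeid);
# per the statement above, this works for the owner and for share recipients alike
curl -sk -X PROPFIND -H 'Depth: 0' -u userB:password \
  'https://demo.owncloud.com/dav/spaces/166d1210-cdb9-50ab-9f1e-ecb9ef12a304$4c510ada-c86b-4815-8820-42cdf82c3d51!17634536-719e-47c4-bf8b-e8b97188259a'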

Instead of making a PROPFIND against the share jail root, clients should use the /me/drives/sharedWithMe endpoint to get a list of all shares, their mount point name and the underlying remoteItem.
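
For illustration, such a lookup could look like the following (the /graph/v1beta1 path and credentials are assumptions; the name/id/remoteItem.id fields follow the driveItem shape described above):

curl -sk -u userB:password \
  'https://demo.owncloud.com/graph/v1beta1/me/drive/sharedWithMe' \
  | jq '.value[] | {name: .name, mountId: .id, remoteId: .remoteItem.id}'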

Question: what id is the server supposed to return in a PROPFIND /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/With/Sub/Sub/Sub/Folder? An id from the share jail, I suppose? Great, now every resource has two ids: one when accessing it via the share jail and one when accessing it by its actual space.

This makes it impossible to correlate shares with the fake ids from the sharejail.

So, why don't we return the actual resource id for direct children of the share jail?

Because that is a mountpoint. A different resource than the shared resource. With its own id. In the share jail.

It gets more complicated with WOPI, because it uses the fileid to access the resource. That is why we return the real id when stating a mountpoint.

We could try returning a share jail id for mountpoints, but I don't know offhand if that still works with WOPI.

Let me create a PR.

BTW, this has been part of the code for a long time:

	// FIXME when stating a share jail child we need to rewrite the id and use the share
	// jail space id as the mountpoint has a different id than the grant
	// but that might be problematic for eg. wopi because it needs the correct id? ...
	// ... but that should stat the grant anyway

	// FIXME when navigating via /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668 the web ui seems
	// to continue navigating based on the id of resources, causing the path to change. Is that related to WOPI?

butonic avatar Sep 04 '24 13:09 butonic

Semantically, the graph /me/drive/sharedWithMe and the webdav /dav/spaces/{sharejailid} endpoints should be equivalent: both give you a list of all shares and their mountpoint names. On the graph you also get the share state and the id of the actually shared remoteItem. We could even add a query filter to only list accepted shares.

Actually, the idea was that clients could query the /me/drives endpoint and would get not only spaces but also shares as spaces with type mountpoint. One endpoint to get a list of all spaces AND shares that the user has access to.
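
A sketch of that single lookup (same placeholder host and credentials as above; driveType distinguishes personal and project spaces from share mountpoints):

curl -sk -u userB:password 'https://demo.owncloud.com/graph/v1.0/me/drives' \
  | jq '.value[] | {name, driveType, id}'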

The share jail was only added as a stopgap so clients could use it until they had implemented the drive lookup properly. I may have failed to make this clear to all teams, but I think we should still go there...

butonic avatar Sep 04 '24 19:09 butonic

Thanks for the detailed answer and your perspective on this.

Due to time constraints I can't go into the same level of detail, but here are some of the problems using the drive list directly poses to the iOS client:

  • for mountpoint drives, the shared item is also the root item of the WebDAV endpoint, which breaks many underlying assumptions of the client (f.ex. "every file has a containing folder", "root folders don't have a name", …), making adoption a lot harder. If there were always a (virtual) root folder containing just the shared item, adoption would be a lot easier and more straightforward.
  • for the File Provider, which provides the VFS to the iOS system Files.app, 20 MB of memory usage is a hard limit: exceeding it leads to immediate termination without prior warning
    • parsing large JSON objects (like a drives list with a large number of shares) therefore inherently carries a higher risk of immediate termination than WebDAV XML, which is parsed incrementally as a stream
    • while combining items from different (WebDAV) sources is something the iOS app's SDK is prepared for, it also comes at a memory cost, which, again, increases the risk of immediate termination
    • for perspective: in the most recent iOS version, it's not even possible to debug the File Provider anymore because that makes the memory usage balloon past 20 MB the moment the process is started. Before the first line of the File Provider's code is even run, the linked (system) libraries already consume a large part of it.

In summary: moving away from the Shares Jail to a drive-endpoint-driven implementation in its current form will require a non-trivial amount of work and time. I'd therefore welcome it if, through a fix like https://github.com/cs3org/reva/pull/4835, the Shares Jail could provide consistent IDs so users can fully use the iOS app until we get there.

felix-schwarz avatar Sep 09 '24 15:09 felix-schwarz

Phew, I found the time to debug this in the context of WOPI. Things have gotten better and all Stat requests are using the internal resource id, be that accessing an office document as the owner, as a recipient via the share jail, or via a public link. We now always use the correct fileid.

Let me bump https://github.com/cs3org/reva/pull/4835 and run an ocis full CI as well. Then we can merge this.

butonic avatar Sep 26 '24 12:09 butonic

QA missing

tbsbdr avatar Sep 30 '24 09:09 tbsbdr

QA as in 'let clients test this'? @ScharfViktor and I went over the testsuite. One test scenario needs to be updated and we added it to the expected failures. The fix got merged to master ... backporting it to stable5 might solve some client issues.

butonic avatar Sep 30 '24 13:09 butonic

I tested with different dav paths and here's what I found:

[!NOTE] The old and new paths don't have the Shares space-id prefixed to the ids; the spaces path does.

Need confirmation if this is the expected behavior

  1. old:
  • root:
    curl -XPROPFIND 'https://localhost:9200/webdav/Shares' -udemo:demo -vk
    
  • folder:
    curl -XPROPFIND 'https://localhost:9200/webdav/Shares/Universe' -udemo:demo -vk
    
  • file inside a shared folder:
    curl -XPROPFIND 'https://localhost:9200/webdav/Shares/Universe/lorem.txt' -udemo:demo -vk
    
    same ids (NO Shares space-id prefix) :exclamation:
    <d:href>/webdav/Shares/Universe/</d:href>
    <d:propstat>
      <d:prop>
        <oc:id>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!cbc094dc-a5f8-415a-a4ee-bde36314fe0d</oc:id>
        <oc:fileid>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!cbc094dc-a5f8-415a-a4ee-bde36314fe0d</oc:fileid>
        <oc:spaceid>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:spaceid>
        <oc:file-parent>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:file-parent>
        <oc:name>Universe</oc:name>
    
  2. new:
  • root:
    curl -XPROPFIND 'https://localhost:9200/dav/files/demo/Shares' -udemo:demo -vk
    
  • folder:
    curl -XPROPFIND 'https://localhost:9200/dav/files/demo/Shares/Universe' -udemo:demo -vk
    
  • file inside a shared folder:
    curl -XPROPFIND 'https://localhost:9200/dav/files/demo/Shares/Universe/lorem.txt' -udemo:demo -vk
    
    same ids (NO Shares space-id prefix) :exclamation:
    <d:href>/dav/files/demo/Shares/Universe/</d:href>
    <d:propstat>
      <d:prop>
        <oc:id>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!cbc094dc-a5f8-415a-a4ee-bde36314fe0d</oc:id>
        <oc:fileid>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!cbc094dc-a5f8-415a-a4ee-bde36314fe0d</oc:fileid>
        <oc:spaceid>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:spaceid>
        <oc:file-parent>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:file-parent>
        <oc:name>Universe</oc:name>
    
  3. spaces:
  • root:
    curl -XPROPFIND 'https://localhost:9200/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668' -udemo:demo -vk
    
  • folder:
    curl -XPROPFIND 'https://localhost:9200/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe' -udemo:demo -vk
    
    same ids (Shares space-id prefix) :question:
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/</d:href>
    <d:propstat>
      <d:prop>
        <oc:id>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!1b4334ac-6184-4d9e-bf34-0d9f1d52ab27:8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0:580e67f7-941e-40e3-a2be-98b30131ecdf</oc:id>
        <oc:fileid>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!1b4334ac-6184-4d9e-bf34-0d9f1d52ab27:8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0:580e67f7-941e-40e3-a2be-98b30131ecdf</oc:fileid>
        <oc:spaceid>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668</oc:spaceid>
        <oc:file-parent>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:file-parent>
        <oc:name>Universe</oc:name>
    
  • file inside a shared folder:
    curl -XPROPFIND 'https://localhost:9200/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/lorem.txt' -udemo:demo -vk
    
    Original file-id :question:
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/lorem.txt</d:href>
    <d:propstat>
      <d:prop>
        <oc:id>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!15e03060-6ea0-4f15-a89e-fba2d82536e5</oc:id>
        <oc:fileid>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!15e03060-6ea0-4f15-a89e-fba2d82536e5</oc:fileid>
        <oc:spaceid>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:spaceid>
        <oc:file-parent>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!cbc094dc-a5f8-415a-a4ee-bde36314fe0d</oc:file-parent>
        <oc:name>Universe</oc:name>
    

saw-jan avatar Oct 01 '24 07:10 saw-jan

^ CC @butonic

saw-jan avatar Oct 02 '24 09:10 saw-jan

1. /webdav/Shares

  • is dead legacy
  • contains a sharejail at /Shares that does not even return correct ids
    • it should return the sharejail ids for /Shares and the direct children, aka mountpoints
    • it uses path based requests which are slow because they have to determine which space is responsible for the path
    • the storage registry matches the /Shares folder and directs the request to the share jail
    • the storage registry matches /Shares/{mountpoint} and directs the request to the storage provider of the shared resource - that is why the id is not from the sharejail.
  • the ids for this endpoint are correct
  • IMO we should get rid of it entirely, or at least mark it as deprecated in the changelog, or even log a warning if it is used

2. /dav/files/{username}

  • is legacy as well
  • uses the same path-based storage registry matching mechanism as the /webdav endpoint
  • the ids for this endpoint are correct
  • IMO we should get rid of it entirely, or at least mark it as deprecated in the changelog, or even log a warning if it is used

3. /dav/spaces/{spaceid}

  • should always be used when accessing a space via webdav
  • The share jail space root collection has id a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!a0ca6a90-a365-4782-871e-d44447bbc668
  • a mountpoint in the sharejail has an id in the form of a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!{shareid}, e.g. a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!storage-users-1:some-admin-user-id-0000-000000000000:da59350a-6840-4a35-b2c1-7643a5f56ccf
    • note that these ids are opaque to clients; the only thing you might test for now is that the id starts with the share jail id a0ca6a90-a365-4782-871e-d44447bbc668 (see the sketch after this list)
  • This PR aligns the fileid of a mountpoint in
    • a PROPFIND /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668 -H 'depth: 1' request, where the mountpoint is listed as a child of the sharejail and
    • a PROPFIND /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!{shareid} -H 'depth: 0' request where the mountpoint is listed as the root of the collection
  • the ids for this endpoint are correct
    • 🤦 well ... save for the oc:file-parent, which should point to the share jail ... not the parent of the shared resource ... I'll submit a PR ...
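
To illustrate the prefix test mentioned above (ids taken from the examples in this thread; everything after the prefix stays opaque):

SHARE_JAIL_ID='a0ca6a90-a365-4782-871e-d44447bbc668'
id='a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!storage-users-1:some-admin-user-id-0000-000000000000:da59350a-6840-4a35-b2c1-7643a5f56ccf'
case "$id" in
  "$SHARE_JAIL_ID"*) echo "id belongs to the share jail" ;;
  *)                 echo "id belongs to a regular space" ;;
esac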

@TheOneRing all currently released clients are using the graph api to look up spaces and then use /dav/spaces/{spaceid} as the webdav endpoint, right?

butonic avatar Oct 02 '24 14:10 butonic

fix landed in master with https://github.com/owncloud/ocis/pull/10236

butonic avatar Oct 04 '24 08:10 butonic

Looks like fix https://github.com/cs3org/reva/pull/4876 changed the parent-id to the share-jail root ONLY when doing a PROPFIND on the shared resource.

curl -XPROPFIND 'https://localhost:9200/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe' -udemo:demo -vk
<d:href>/remote.php/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/</d:href>
...
<oc:fileid>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!14cb34c2-9a97-40d4-9cb5-c0e221bbd9cb:58aed3df-1088-49b1-b1e6-7ab96a689ec8:df4de6f7-fa18-4721-a670-8265a86a7bb4</oc:fileid>
<oc:spaceid>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668</oc:spaceid>
<oc:name>Universe</oc:name>
...
<oc:file-parent>
  a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!a0ca6a90-a365-4782-871e-d44447bbc668
</oc:file-parent>

But NOT with a PROPFIND on the Shares space (the expected one? :question:)

curl -XPROPFIND 'https://localhost:9200/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668' -udemo:demo -vk
<d:href>/remote.php/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/</d:href>
...
<oc:fileid>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!14cb34c2-9a97-40d4-9cb5-c0e221bbd9cb:58aed3df-1088-49b1-b1e6-7ab96a689ec8:df4de6f7-fa18-4721-a670-8265a86a7bb4</oc:fileid>
<oc:spaceid>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668</oc:spaceid>
<oc:name>Universe</oc:name>
...
<oc:file-parent>
  14cb34c2-9a97-40d4-9cb5-c0e221bbd9cb$58aed3df-1088-49b1-b1e6-7ab96a689ec8!58aed3df-1088-49b1-b1e6-7ab96a689ec8
</oc:file-parent>

CC @butonic

saw-jan avatar Oct 04 '24 12:10 saw-jan

^ CC @butonic

saw-jan avatar Dec 03 '24 11:12 saw-jan

afaict in

curl -XPROPFIND 'https://localhost:9200/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668' -udemo:demo -vk

oc:file-parent of

<d:href>/remote.php/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/</d:href>

should be a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!a0ca6a90-a365-4782-871e-d44447bbc668

so this is a bug.

/me signing out 👋

butonic avatar Dec 04 '24 17:12 butonic

There is already an API test; remove it from the expected failures after implementation.

kobergj avatar Feb 26 '25 09:02 kobergj

Fix https://github.com/owncloud/ocis/pull/11101

2403905 avatar Mar 06 '25 18:03 2403905

Hi @felix-schwarz, I added the fix; please validate once you get a chance.

2403905 avatar Mar 10 '25 08:03 2403905

@2403905 Thanks! Is there any test instance I could use, or a docker compose file to set up a test server with the changes?

felix-schwarz avatar Mar 10 '25 19:03 felix-schwarz

You can use the ocis_full docker compose deployment setup: https://owncloud.dev/ocis/deployment/ocis_full/. If you use these ocis image envs while running the setup, you should get the latest changes:

OCIS_DOCKER_IMAGE=owncloud/ocis-rolling
OCIS_DOCKER_TAG=master
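
For example, assuming the compose setup picks these variables up from the environment (or your .env file), something like:

OCIS_DOCKER_IMAGE=owncloud/ocis-rolling OCIS_DOCKER_TAG=master docker compose up -d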

saw-jan avatar Mar 11 '25 04:03 saw-jan

@2403905 I just verified that it is now working correctly. The oc:id is now identical in the situation originally described above.

From my side, the issue can be closed.

felix-schwarz avatar Mar 12 '25 12:03 felix-schwarz

Thank you @felix-schwarz

2403905 avatar Mar 12 '25 12:03 2403905