bye-bye-feedly

Can't get all read later entries

Open · gengen1988 opened this issue 6 years ago · 1 comment

Hi michaelx,

I've been organizing my bookmarks recently, and this project is very useful for me. It retrieves my records going all the way back to the Google Reader days.

However, it can't get all my articles: it only returns 500 items, while my actual count is 1031.

The code uses count=10000, which is the maximum in the developer docs, so there must be some other limit.

Maybe using continuation would be a more reliable choice. I'm not very familiar with Python, so I'll just leave a JS example here.

// Axios client preconfigured with the Feedly base URL and OAuth header
function createClient(accessToken) {
  return require('axios').create({
    baseURL: 'https://cloud.feedly.com/v3/',
    headers: {
      Authorization: `OAuth ${accessToken}`
    }
  })
}

const feedly = createClient(token)

async function getReadLater(streamId) {
  const count = 100
  let continuation
  let entries = []
  do {
    // Pass the continuation token from the previous page (undefined on the first request)
    const doc = await feedly.get('streams/contents', { params: { streamId, continuation, count } })
    continuation = doc.data.continuation
    // Prepend each page, so older entries end up first
    entries = [...doc.data.items, ...entries]
  } while (continuation) // the last page carries no continuation token
  return entries
}

gengen1988 · Feb 25 '19 08:02

Thanks for your report!

The streams/contents endpoint seems to be limited to 1,000 entries. I'm no longer using Feedly, so I can't spend much time on this right now.

In the meantime, the following workaround should be possible:

  1. Run app.py.
  2. Rename the created data.json, so that it doesn’t get overwritten by step 5.
  3. Search for continuation in the data.json from step 2, and copy its value (string).
  4. Append the continuation string from step 3 to the request URL (line 14), e.g. …&count=1000&continuation=<string>.
  5. Run app.py.
  6. Merge the created json files.
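The manual steps above can also be sketched as a loop in Python. This is only a sketch of the continuation pattern, not code from this repo: `fetch_page` is a hypothetical callable standing in for the HTTP request in app.py, so the pagination logic is shown without any network or auth details.

```python
def fetch_all_entries(fetch_page):
    """Collect items across all pages of a Feedly stream.

    `fetch_page(continuation)` is a hypothetical callable that performs the
    streams/contents request and returns the decoded JSON dict, e.g.
    {"items": [...], "continuation": "token"}. Per the API's pagination
    pattern, the last page carries no "continuation" key.
    """
    entries = []
    continuation = None  # no token on the first request
    while True:
        doc = fetch_page(continuation)
        entries.extend(doc.get("items", []))
        continuation = doc.get("continuation")
        if not continuation:  # last page reached
            return entries
```

With the `requests` library, `fetch_page` could be something like `lambda c: requests.get(url, headers=headers, params={"count": 1000, "continuation": c}).json()`; the merged result then replaces the manual json-file merge in step 6.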

michaelx · Mar 01 '19 08:03