# `fetchDevel` is not a function

## Bug Report

### Describe the bug

I am following the steps in CONTRIBUTING.md and setting `{ devel: true }` when testing my changes, but I get `fetchDevel is not a function` when I do so.
### Minimal Reproduction

In the node-yahoo-finance2 repo root, create `test.ts`:

```ts
import yahooFinance from "./src/index-node.js";

const queryOptions = { count: 5, region: "US", lang: "en-US" };
const result = await yahooFinance.dailyGainers(queryOptions, {
  devel: "fixture.json",
});
```
Run `tsx test.ts`:

```
$ tsx test.ts
TypeError: fetchDevel is not a function
    at Object.yahooFinanceFetch [as _fetch] (file:///home/zvictor/development/node-yahoo-finance2/dist/esm/src/lib/yahooFinanceFetch.js:41:48)
    at Object.moduleExec [as _moduleExec] (file:///home/zvictor/development/node-yahoo-finance2/dist/esm/src/lib/moduleExec.js:52:29)
    at Object.dailyGainers (file:///home/zvictor/development/node-yahoo-finance2/dist/esm/src/modules/dailyGainers.js:138:17)
```
### Environment

- Browser or Node: node
- Node version (if applicable): v22.13.1
- Npm version: yarn 1.22.22
- Browser version (if applicable):
- Library version (e.g. 1.10.1): 2.13.3-dev (commit 17ea3f6)
### Note

CONTRIBUTING.md talks about linking the package, but that makes imports of the lib point to the built files, which is too slow for development. I hope `import yahooFinance from "./src/index-node.js"` is a better and still acceptable way to import the lib.
Okay... I got rid of the error by replacing `from "./src/index-node.js"` with `from "./src/index-test.js"`.

There is no documentation anywhere about this requirement, and I don't see any other way one could make it work, so please tell me I am missing another obvious way to get `fetchDevel` working that is not importing `index-test.js` or `env-test.js` explicitly.
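For reference, this is the variant of my `test.ts` that worked (only the import changes from the repro above; it obviously only runs inside a node-yahoo-finance2 checkout):

```typescript
// Importing from index-test.js (rather than index-node.js) wires up the
// test environment, which is what provides fetchDevel for { devel: ... }.
import yahooFinance from "./src/index-test.js";

const queryOptions = { count: 5, region: "US", lang: "en-US" };
const result = await yahooFinance.dailyGainers(queryOptions, {
  devel: "fixture.json",
});
console.log(result);
```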
Hey! Thanks for this and your recent PRs!
I'll have a chance to take a proper look tomorrow and will be in touch then 🙏
Thank you for the lib and all the hard work!
As I have just arrived and am going through the pain of making my first contributions, I am noticing a lot of small outdated instructions. I will post them in this issue as I find them.
To start:
https://github.com/gadicc/node-yahoo-finance2/blob/17ea3f60b7b4eaefe3011b523633c21908200475/docs/validation.md?plain=1#L230
```
$ yarn schema
yarn run v1.22.22
error Command "schema" not found.
```
On another note, I realized that the tests run against cached endpoint samples, which makes sense but also means that we are constantly validating our code against outdated data.
So I thought, "what if I could update all those samples and then run the tests again, letting all the broken tests rise from hell?"
At first I ran `rm ./tests/http/*.json`, but that surprisingly raised a lot of errors about missing functions.
I then looked deeper and realized that `FETCH_DEVEL=nocache yarn test` was probably the way to go.
I was expecting mostly validation errors to appear, but instead I got a big list of `No set-cookie header present in Yahoo's response. Something must have changed, please report.` errors.
Would you have a recommendation on how I should be running those tests instead?
Hey @zvictor, thanks again for your contributions and patience.
Definitely appreciate your comments to help make contributing a better experience. I'll just note that some of these things will be changing soon, so my response might be short-lived. But allow me to elaborate:
1. **`index-test`.** Thanks, will look into making this clearer. However, just FYI, in the next major version of the library we'll be getting rid of `fetchDevel` completely, in favour of `fetch-mock-cache` (also by me; it grew out of my work on `fetchDevel` but is more generic and works better as a mock vs code in the project). That will solve a lot of `fetchDevel` issues in different environments.
2. **`yarn schema`.** We'll be reverting back to this shortly; the code is done and just needs some finishing touches (and for me to adapt your PR appropriately). We switched to typebox for a better developer experience (no need to run `yarn schema` on every change), but it had some drawbacks. Unfortunately the library is still too new, upgrades would break the code, and some issues could "only be solved in the next major version", which isn't in planning yet. Also, there's a move away from typescript "slow types" (e.g. inferred types). So we'll be moving back to the original approach of typescript as the single source of truth, but replacing ajv with custom validation to still allow us to run in e.g. Cloudflare Workers.
3. **Stale tests.** This is going to require some further thought. I hope in the next major version we can have a cron job which checks if any of the "general" tests start failing with new responses. However, we might not want to do this for every test, as some tests check for very specific things, and if, e.g., Yahoo changes something, I'm not sure whether we then want to update all those test files too. Maybe we do, maybe we don't; it needs a bit more consideration. But yes, this is something we need to think about and solve, and it will be relevant when switching to `fetch-mock-cache` too: we'll probably have to make sure all the tests pass with the latest Yahoo data for that 😅
Anyways, thanks again, and I definitely hope to improve the developer on-boarding experience. For now, I will merge your PRs (thanks again!), rebase to my branch where we go back to a JSON schema, patch up your PRs, and hopefully get that all back to the main `devel` branch in the next few days. The biggest immediate implication for you is that in the future you'll indeed need to run `yarn schema` after any change to the typescript interfaces for validation to pass.
If anything's unclear or you have any further thoughts, please let me know. Otherwise, thanks again for your contributions and hope you'll have an easier time here in the future 🙏
Thank you for the detailed explanation!
It looks to me like you are already working in a good direction on points 1 and 2 (I added just a short comment on the latter here).
On point 3 I think I have more to contribute: what I envision for this project is a CI task or cron job, as you said, that will spawn a process for every test written and check whether or not that test still passes against fresh endpoint responses.
For every test failure, it shall:
- send the results to an LLM agent that has instructions to classify whether or not the schemas need to be updated;
- have the agent try changes to the schemas and then repeat the previous steps until the test passes;
- once the test passes, open a PR against the repository.

The end goal here is to only ever review PRs containing the needed changes, never having to write schema changes ourselves. This is what I was already experimenting with in the previous comment.
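The loop could be sketched roughly like this (hypothetical names throughout; `runTests`, `proposeSchemaFix`, and `openPullRequest` don't exist in the repo and only illustrate the control flow I have in mind):

```typescript
// Sketch of the proposed self-healing cron job: run the suite against
// fresh responses, hand each failure to an LLM agent that proposes a
// schema fix, and open a PR once everything passes again.
type TestResult = { name: string; passed: boolean; output: string };

async function healSchemas(
  runTests: () => Promise<TestResult[]>,
  proposeSchemaFix: (failure: TestResult) => Promise<void>, // LLM agent step
  openPullRequest: (summary: string) => Promise<void>,
  maxAttempts = 3,
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const failures = (await runTests()).filter((r) => !r.passed);
    if (failures.length === 0) {
      // Only open a PR if the agent actually had to change something.
      if (attempt > 0) {
        await openPullRequest(`Schema fixes after ${attempt} attempt(s)`);
      }
      return;
    }
    for (const failure of failures) {
      await proposeSchemaFix(failure);
    }
  }
}
```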
Hey @zvictor
I've merged the code I mentioned to revert back to schema compilation from typescript interfaces. I still need to look at all the CONTRIBUTOR instructions when I have a proper chance. But let me now address your comments.
In short, I LIKE THE WAY YOU THINK :D
This is definitely the direction we need to go.
I just don't want to jump the gun too much since we have some big changes coming with the next major version. But I know you were also doing some experimenting and don't want to dampen any excitement... please just be aware that if you do anything now, a lot of things are still going to change (e.g. no more `fetchDevel`, amongst other things).
However, the validation part will presumably stay constant now, I hope; that is, typescript interfaces as the single source of truth, compiled to a JSON schema and then validated at runtime. Your comments from the other issue are noted; however, the validation code is so small and the validation errors (mostly*) so clear that I think any LLM will have a much easier time with it vs some other massive validation library. (* We can clean them up more if needed.)
Lastly, FYI, the only "magic" is the special `yahooFinanceTypes`: if we have e.g. `{ date: Date }`, we'll definitely always return a valid `Date` object; however, in the validation stage, we accept and coerce e.g. `{ date: "2025-02-14T15:00:10.391Z" }` to `{ date: new Date("2025-02-14T15:00:10.391Z") }`.
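A rough illustration of that coercion (my own sketch, not the library's actual validator code):

```typescript
// Accept either a Date or an ISO date string at the validation boundary,
// and always hand a real Date object back to the caller.
function coerceDate(value: unknown): Date {
  if (value instanceof Date) return value;
  if (typeof value === "string") {
    const parsed = new Date(value);
    if (!Number.isNaN(parsed.getTime())) return parsed;
  }
  throw new TypeError(`Expected Date or date string, got ${String(value)}`);
}
```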
Hope that helps!
I am glad my ideas sounded inspiring! Together we can make them become reality!
> if you do anything now, a lot of things are still going to change (e.g. no more `fetchDevel`, amongst other things).
As long as the tests are stable/stabilized, we should be fine with whatever changes are introduced to the rest of the codebase. LLMs can deal with any crap or legacy code more happily than humans can, so the quality of the validation flow is not a big concern for this specific task.
What we need to have in place to start is a reliable test runner that can invalidate outdated endpoints, as I described earlier.