DEPRECATED.rikki
Document the process for supporting rikki on a new language track
Per @ErikSchierboom's observation in https://github.com/exercism/discussions/issues/110#issuecomment-276891610
I was looking into creating a Rikki analyzer for C# or F#, but unfortunately I found the documentation a bit, eh, lacking. If possible, I'd love to see step-by-step instructions on what an analyzer should do and how to implement/add one.
@kytrinyx, so after writing the plugin for Crystal (which basically copies the Ruby one), I think we could probably have one code path for all languages that just maps a language to its checker-service endpoint. That would mean the only thing needed is a service that responds with code problems or errors. Thoughts?
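To make that concrete, here is a minimal sketch of what such a single code path might look like in Go (the language Rikki is written in). Everything in it, the endpoint URLs, the `Problem` shape, and the request payload, is an assumption for illustration, not code from the Rikki repository:

```go
package analysis

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// endpoints maps a track slug to the analyzer service that checks it.
// These URLs are placeholders, not real services.
var endpoints = map[string]string{
	"ruby":    "https://rubocop.example.com/analyze",
	"crystal": "https://crystal-analyzer.example.com/analyze",
}

// Problem is one piece of feedback returned by a checker service.
type Problem struct {
	Type    string `json:"type"`
	Message string `json:"message"`
}

// Analyze posts the submitted files to the track's checker service
// and decodes whatever problems it reports.
func Analyze(track string, files map[string]string) ([]Problem, error) {
	url, ok := endpoints[track]
	if !ok {
		return nil, fmt.Errorf("no analyzer registered for track %q", track)
	}
	body, err := json.Marshal(map[string]interface{}{"files": files})
	if err != nil {
		return nil, err
	}
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var problems []Problem
	if err := json.NewDecoder(resp.Body).Decode(&problems); err != nil {
		return nil, err
	}
	return problems, nil
}
```

A registry like this would keep per-track logic out of Rikki itself: adding a track becomes one new map entry plus a deployed service.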
@mhelmetag @kytrinyx That sounds like a great plan!
Yeah, that sounds great.
I'm sorry, I've just closed my issue suggesting Rikki integration for the Rust track: I was never going to get to it myself, and with no docs I couldn't inflict a task of this magnitude on an unsuspecting contributor.
The questions I had:
- Where does the code go?
  - I'm going to guess https://github.com/exercism/rikki/tree/master/analysis
  - Ah, I see that the Go analysis lives in the repo, whereas Ruby and Crystal call out to external APIs.
  - So each person who integrates Rikki with a new track can choose between those two approaches.
  - If the API route is chosen:
    - The implementor will choose how it will be hosted.
    - What interface must the API adhere to? How will Rikki submit files for analysis? How should the analysis results be presented to Rikki? (A hypothetical sketch follows this list.)
    - See https://github.com/exercism/xcrystal/issues/30 for how it was done for Crystal, most recently.
- How might one test the code?
  - On one level, this depends on the answer to the question above.
  - If it's an external API, it should be easy to test by submitting files to the API.
  - If it's in the repo directly, it should be easy to `go run` something that checks a file you specify (also sketched below).
- Once the small pieces are tested, how do we verify the whole thing works? Is it just a "deploy it and cross your fingers" thing, or is there a better way to catch potential problems beforehand?
  - Again, this depends on the answer to the question above.
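Since the interface questions are exactly what the docs are missing, here is a hypothetical sketch of what an external checker service might look like, in Go for consistency. The `/analyze` route, the JSON payload, and the response shape are all assumptions, not a documented contract; see the xcrystal issue linked above for what was actually built for Crystal:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type submission struct {
	Files map[string]string `json:"files"` // filename -> source code
}

type problem struct {
	Type    string `json:"type"`
	Message string `json:"message"`
}

func analyzeHandler(w http.ResponseWriter, r *http.Request) {
	var sub submission
	if err := json.NewDecoder(r.Body).Decode(&sub); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	problems := []problem{}
	for name, source := range sub.Files {
		// Run the track's real linter/analyzer here; this stub only
		// flags empty files so the example stays self-contained.
		if len(source) == 0 {
			problems = append(problems, problem{
				Type:    "empty-file",
				Message: name + " is empty",
			})
		}
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(problems)
}

func main() {
	http.HandleFunc("/analyze", analyzeHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

With a service like that running, the "submit files to the API" test from the list above becomes a one-liner:

```
curl -d '{"files":{"bob.cr":""}}' http://localhost:8080/analyze
```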
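And for the in-repo route, a `go run`-able harness could be as small as the following. The `analyze` function here is a stub standing in for the repo's real analysis entry point, whose actual signature may differ:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <file>", os.Args[0])
	}
	source, err := ioutil.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	// Print whatever problems the analyzer reports for the given file.
	for _, msg := range analyze(string(source)) {
		fmt.Println(msg)
	}
}

// analyze is a stub standing in for the repo's actual analyzer.
func analyze(source string) []string {
	if len(source) == 0 {
		return []string{"file is empty"}
	}
	return nil
}
```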
Thank you for documenting your questions, @petertseng. That makes it much easier to start writing the docs.
I agree it would be really good to get some more documentation on this and support automated feedback from Rikki on more of the language tracks. I'd be happy to work on feedback for the R track once I know what the process should be.
One thing to think about: put the high-level documentation about Rikki in the docs repository, and the nitty-gritty "how to do it" in this repository.