FDWaveformView
Proposal for view and data provider separation
First of all, many thanks for this great library.
I'm in the process of adapting this library for my app and came across some of the limitations that were already mentioned in previous issues.
I'd like to put forward a proposal to separate the view logic from reading data:
- Define a protocol for a data source: this should support reading data in batches, returning `Data` chunks or `[Float]` arrays based on requested ranges (see the sketch after this list)
- Clean up `FDWaveformView` and `FDWaveformRenderOperation`, removing the `AVFoundation` dependency: only use the data source protocol

Once you implement the data source protocol, you are free to integrate with any arbitrary data source, not just `AVAsset`s.
As a proof of concept I did all of the above:
- Defined `FDAudioContextProtocol` to define the data access layer
- Updated `FDAudioContext` to encapsulate all code related to loading `AVAsset`s, implementing `FDAudioContextProtocol`
- Updated `FDWaveformRenderOperation` to only use the `FDAudioContextProtocol`
- Added `SineWaveAudioContext` as an example to demonstrate a custom data source based on a generated sine wave
- Added an option to the iOS example to load the sine wave; please see the screenshots below
I could achieve all of this with a couple of API changes:
- made `FDWaveformView.audioContext` public so I can assign my custom sine example
- made the `WaveformType` enum public
- the rest of the changes are internal and do not affect the API
I would be interested to hear whether this is something you would consider for the project.
If so, let's start a discussion about the specifics. I can dedicate some time to this task in the coming weeks.
Thank you for sharing! I can see you put a lot of effort into this and have considered the external API, the implementation choices, and the modularity of the parts. And I appreciate that you put the `SineWaveAudioContext` into the examples project, not the main project. Top notch.
I am reading through the pull request. Right off the bat I can say that overall, yes, this is a welcome change, separating the concerns. For your other questions: making `FDWaveformView.audioContext` public, yes, this is acceptable. Likewise for `WaveformType`, as has already been noted in the code.
So far my immediate questions are:
- Should `FDWaveformType`'s `process(normalizedSamples: inout [Float])` be moved into the context and out of the `FDWaveformView` file?
- Are the class names general enough considering the new responsibility?
- Are there TODO notes in the code reminding us of any issues?
- Is https://github.com/fulldecent/FDWaveformView/issues/2 possible going forward?
Thanks again, and I'm hoping this opens up the discussion.
Thanks for the reply.
I will look more into the codebase over the weekend to see what would be the best way to abstract the reading part. This was more of a proof of concept showing how it could be done without affecting any of the APIs.
I'll get back to you with my answers in the coming days.
Another note: maybe `samplesPerPixel` should be outside of that reader.
Significant progress on this issue is at https://github.com/fulldecent/FDWaveformView/pull/144. It is nearly done; the last part is left in case anyone wants to handle it. And thanks again for the great idea here!
@fulldecent Thanks, I will take a look once I have time.
I ended up going in a different direction with Metal rendering of the waveform, mostly because I needed large zoom levels where even individual samples can be seen. I'm still in the process, but maybe the learnings can be useful here in the future as well.
Looking forward to the updates here. Happy new year!