SCRecorder
There's a "plop" sound between video segments.
Record a segment. Record another segment. Stop. There's a "plop" sound between segments.
Does anyone have an idea of how to solve it?
I am also having the same issue, and it is very pronounced. I've tried to implement something similar to how Vine does it - http://engineering.vine.co/post/116474023457/vines-infinite-loops. I cut 0.05 seconds from each end of each segment, which made the transitions smoother, but I haven't yet figured out how to create the crossfade, the lack of which is the reason for the "plop" sound.
To cut out the 0.05 seconds from each side, I added this line:
`timeRange = CMTimeRangeFromTimeToTime(CMTimeAdd(timeRange.start, CMTimeMake(2205, 44100)), CMTimeSubtract(timeRange.duration, CMTimeMake(2205, 44100)));`
to this method in SCRecordSession, just under where the time value is set:
`- (CMTime)_appendTrack:(AVAssetTrack *)track toCompositionTrack:(AVMutableCompositionTrack *)compositionTrack atTime:(CMTime)time withBounds:(CMTime)bounds`
Let me know if you have any ideas for how to implement the crossfade, because with it we'll be able to create some very seamless videos :)
You will need to use an AVAudioMix. You can set the audioMix inside an AVPlayerItem (that can be read by SCPlayer/AVPlayer) or in SCAssetExportSession.audioConfiguration.audioMix if you want to export it.
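For playback, that looks roughly like this (a sketch in current Swift syntax; `asset` stands in for the composed asset from the record session, and the 0.05 s fade length is an arbitrary placeholder):

```swift
import AVFoundation

// Sketch: attach a fade-in/fade-out AVAudioMix to an AVPlayerItem for preview.
// `asset` is assumed to be the asset produced by the record session.
func makePlayerItem(for asset: AVAsset) -> AVPlayerItem {
    let item = AVPlayerItem(asset: asset)
    guard let audioTrack = asset.tracks(withMediaType: .audio).first else {
        return item // no audio track, nothing to mix
    }
    let params = AVMutableAudioMixInputParameters(track: audioTrack)
    let fade = CMTime(value: 2205, timescale: 44100) // 0.05 s, placeholder length
    // Ramp up over the first `fade` seconds...
    params.setVolumeRamp(fromStartVolume: 0, toEndVolume: 1,
                         timeRange: CMTimeRange(start: .zero, duration: fade))
    // ...and down over the last `fade` seconds.
    let fadeOutStart = CMTimeSubtract(asset.duration, fade)
    params.setVolumeRamp(fromStartVolume: 1, toEndVolume: 0,
                         timeRange: CMTimeRange(start: fadeOutStart, duration: fade))
    let mix = AVMutableAudioMix()
    mix.inputParameters = [params]
    item.audioMix = mix
    return item
}
```

The same mix object could be assigned to `SCAssetExportSession.audioConfiguration.audioMix` for export.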
To export an AVAudioMix via SCAssetExportSession.audioConfiguration.audioMix, would you need to specify the fade-in/fade-out times for each individual segment and create audio mix input parameters for each one? Ignoring the individual segments for now and just fading the beginning and end of the entire video, I have the following, which doesn't seem to be working even with a longer fade duration:
```swift
let trimTimeRange = assetExportSession.timeRange
let fadeDuration = CMTimeMake(2205, 44100)
let fadeInTimeRange = CMTimeRangeMake(trimTimeRange.start, fadeDuration)
// Keep the arithmetic in CMTime space rather than mixing raw values across timescales.
let startFadeOutTime = CMTimeSubtract(CMTimeAdd(trimTimeRange.start, trimTimeRange.duration), fadeDuration)
let fadeOutTimeRange = CMTimeRangeMake(startFadeOutTime, fadeDuration)
let exportAudioMixInputParameters = AVMutableAudioMixInputParameters()
exportAudioMixInputParameters.setVolumeRampFromStartVolume(0.0, toEndVolume: 1.0, timeRange: fadeInTimeRange)
exportAudioMixInputParameters.setVolumeRampFromStartVolume(1.0, toEndVolume: 0.0, timeRange: fadeOutTimeRange)
let exportAudioMix = AVMutableAudioMix()
exportAudioMix.inputParameters = [exportAudioMixInputParameters]
assetExportSession.audioConfiguration.audioMix = exportAudioMix
```
Am I going in the right direction with this?
You will need to create the fade between each individual segment.
Yes, and be careful about very short segments (the case in which the fade-in ramp overlaps the fade-out).
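One way to handle that short-segment case is to clamp the fade so the two ramps meet in the middle at worst. A sketch of just the arithmetic, in plain seconds (the names are made up for illustration):

```swift
import Foundation

// Non-overlapping fade-in and fade-out ranges (in seconds) for one segment.
struct FadeRanges {
    let fadeIn: (start: Double, duration: Double)
    let fadeOut: (start: Double, duration: Double)
}

// If the segment is shorter than twice the requested fade, shrink both ramps to
// half the segment, so the fade-in ends exactly where the fade-out begins.
func fadeRanges(segmentStart: Double,
                segmentDuration: Double,
                requestedFade: Double) -> FadeRanges {
    let fade = min(requestedFade, segmentDuration / 2)
    return FadeRanges(
        fadeIn: (start: segmentStart, duration: fade),
        fadeOut: (start: segmentStart + segmentDuration - fade, duration: fade)
    )
}
```

The same clamping carries over directly to CMTime with CMTimeMinimum and CMTimeMultiplyByRatio.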
Hey, not sure if you guys would be able to spot what I'm doing wrong here, but it is eluding me... I'm trying to implement a 1-second fade-in/fade-out for each segment using the following:
```swift
func createInputParameters() -> AVAudioMixInputParameters {
    let exportAudioMixInputParameters = AVMutableAudioMixInputParameters()
    let fadeDuration = CMTimeMake(44100, 44100) // 1 second (was CMTimeMake(2205, 44100))
    var startTime = CMTimeMake(0, 44100)
    for seg in recordSession!.segments {
        let segment = seg as! SCRecordSessionSegment
        let fadeInTimeRange = CMTimeRangeMake(startTime, fadeDuration)
        // Advance startTime to the next segment; this is also the end of the current one.
        startTime = CMTimeAdd(startTime, segment.duration)
        let startFadeOutTime = CMTimeSubtract(startTime, fadeDuration)
        let fadeOutTimeRange = CMTimeRangeMake(startFadeOutTime, fadeDuration)
        exportAudioMixInputParameters.setVolumeRampFromStartVolume(0.0, toEndVolume: 1.0, timeRange: fadeInTimeRange)
        exportAudioMixInputParameters.setVolumeRampFromStartVolume(1.0, toEndVolume: 0.0, timeRange: fadeOutTimeRange)
    }
    return exportAudioMixInputParameters
}
```
and then just adding the returned input parameters to an AVMutableAudioMix and setting that as assetExportSession.audioConfiguration.audioMix. All the time values seem to match up, but for some reason the mix isn't applied in my export. Thanks for the help.
That doesn't work for me either.
I ended up doing an AVAssetExportSession per segment, fading each segment in at the beginning and out at the end. That gives me N independent segments, each with its own fade-in and fade-out, which I then merge in another AVAssetExportSession.
It's far from the best solution, but I could not get AudioMix + ExportSession + N video assets/segments to work together.
Try using audioMixInputParametersWithTrack: instead, and give it the audio track of the generated asset.
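For what it's worth, in Swift that means constructing the parameters from the track rather than with the bare initializer, e.g. (a sketch; the function name is made up, and the ramps would be the ones from the per-segment loop above):

```swift
import AVFoundation

// Sketch: AVMutableAudioMixInputParameters(track:) is the Swift spelling of
// audioMixInputParametersWithTrack:. Binding the parameters to the asset's
// actual audio track gives the mix a concrete trackID to apply the ramps to;
// the parameterless initializer leaves the trackID unset.
func makeTrackBoundParameters(for asset: AVAsset) -> AVMutableAudioMixInputParameters? {
    guard let audioTrack = asset.tracks(withMediaType: .audio).first else {
        return nil
    }
    return AVMutableAudioMixInputParameters(track: audioTrack)
    // ...then add the per-segment volume ramps as in the loop above.
}
```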
I will implement this soon when I have some time.
Is there a solution to this one ?
Has anyone figured this out yet?
@CaptainKurt, @rFlex
The 0.05 s trick didn't make any difference for me. What about using these keys instead of hardcoding 0.05: kCMSampleBufferAttachmentKey_TrimDurationAtStart and kCMSampleBufferAttachmentKey_TrimDurationAtEnd?
ref: http://uri-labs.com/macosx_headers/AVAssetWriterInput_h/Classes/AVAssetWriterInput/index.html
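Setting those keys would look something like this (a sketch; where exactly to hook it into SCRecorder's writer pipeline is left open, and `sampleBuffer` is assumed to come from the capture callback):

```swift
import CoreMedia

// Sketch: mark ~0.05 s at the start of a sample buffer as trimmed by setting the
// documented attachment key, instead of hardcoding the cut in the composition.
// The idea is that the writer respects these trim attachments when appending.
func markTrimAtStart(_ sampleBuffer: CMSampleBuffer, trim: CMTime) {
    // CMTime attachments are carried as their CFDictionary representation.
    let trimDict = CMTimeCopyAsDictionary(trim, allocator: kCFAllocatorDefault)
    CMSetAttachment(sampleBuffer,
                    key: kCMSampleBufferAttachmentKey_TrimDurationAtStart,
                    value: trimDict,
                    attachmentMode: kCMAttachmentMode_ShouldPropagate)
}
```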
Thanks
Bump.
Anyone figure this one out?
Was anyone able to solve this? LOL