flutter_sound
[doc] Background playback
Need help with:
- Using the flutter_sound API together with the Jitsi meeting SDK
I am using this library to record audio and stream it to a speech-to-text engine. After implementing the code, the app seems to work, but the meeting SDK (I am using the Jitsi SDK, btw) runs slowly and becomes laggy.
Here's a sketch that joins a Jitsi meeting and starts recording the audio:
@override
void initState() {
  super.initState();
  // Heavy setup does not belong in build(), which can run on every frame:
  // creating the store and joining the meeting here avoids redoing both
  // on each rebuild (a likely source of jank).
  audioRecordStore = AudioRecordStore(
    tokenId: appStores.authToken,
    natsService: natsService,
    apiService: apiService,
    meetingId: appStores.meetingLink,
    key: meetingStore.jitsiKey,
  );
  meetingId = int.parse(appStores.meetingLink);
  joinMeeting(meetingId, appStores.user.username); // build() cannot await
}

@override
Widget build(BuildContext context) {
  ...
  return WillPopScope(
    // onWillPop must return a Future<bool>; return false because
    // SystemNavigator.pop() already leaves the app.
    onWillPop: () async {
      await SystemNavigator.pop();
      return false;
    },
    child: MaterialApp(
      debugShowCheckedModeBanner: false,
      theme: ThemeData(
        fontFamily: 'AirbnbCereal',
        backgroundColor: Colors.white,
      ),
      home: Scaffold(
        key: _scaffoldKey, // the key belongs on the Scaffold, not on an Observer
        body: Observer(
          builder: (_) =>
              dashboardLanding(context, appStores, meetingStore, _scaffoldKey),
        ),
      ),
    ),
  );
}
Future<void> joinMeeting(int meetingId, String userDisplayname) async {
  var bytes = utf8.encode('bk-meeting-$meetingId'); // data being hashed
  var digest = sha256.convert(bytes);
  debugPrint("Digest as bytes: ${digest.bytes}");
  debugPrint("Digest as hex string: $digest");
try {
Map<FeatureFlagEnum, bool> featureFlags = {
FeatureFlagEnum.WELCOME_PAGE_ENABLED: false,
FeatureFlagEnum.INVITE_ENABLED: false,
FeatureFlagEnum.CLOSE_CAPTIONS_ENABLED: false,
FeatureFlagEnum.LIVE_STREAMING_ENABLED: false,
FeatureFlagEnum.CALENDAR_ENABLED: false,
FeatureFlagEnum.ADD_PEOPLE_ENABLED: false,
FeatureFlagEnum.RAISE_HAND_ENABLED: false,
};
// Here is an example, disabling features for each platform
if (Platform.isAndroid) {
// Disable ConnectionService usage on Android to avoid issues (see README)
featureFlags[FeatureFlagEnum.CALL_INTEGRATION_ENABLED] = false;
} else if (Platform.isIOS) {
// Disable PIP on iOS as it looks weird
featureFlags[FeatureFlagEnum.PIP_ENABLED] = false;
}
var options = JitsiMeetingOptions()
..serverURL = "https://meet.ncloud.bahasakita.co.id"
..room = digest.toString() // Required; spaces will be trimmed
..subject = " "
..userDisplayName = userDisplayname
..audioOnly = true
..audioMuted = true
..videoMuted = true
..featureFlags.addAll(featureFlags);
await JitsiMeet.joinMeeting(
options,
listener: JitsiMeetingListener(
  onConferenceWillJoin: _onConferenceWillJoin,
  onConferenceJoined: _onConferenceJoined,
  onConferenceTerminated: _onConferenceTerminated,
),
);
} catch (error) {
debugPrint("error: $error");
}
}
void _onConferenceWillJoin({message}) {
  debugPrint("_onConferenceWillJoin broadcasted with message: $message");
}

void _onConferenceJoined({message}) async {
  await audioRecordStore.record();
  debugPrint("_onConferenceJoined broadcasted with message: $message");
}

void _onConferenceTerminated({message}) async {
  await audioRecordStore.stop();
  debugPrint("_onConferenceTerminated broadcasted with message: $message");
}

void _onError(error) {
  debugPrint("_onError broadcasted: $error");
}
Here's the recording class; I'm using MobX for state management:
...
part 'audiorecord.stores.g.dart';
const int SAMPLE_RATE = 16000;
class AudioRecordStore = _AudioRecordStore with _$AudioRecordStore;
abstract class _AudioRecordStore with Store {
ApiService apiService;
NatsService natsService;
@observable
NatsResponse response;
@observable
String message;
@observable
bool isRecording;
@observable
String meetingId;
@observable
String tokenId;
@observable
String key;
@computed
String get channel {
//...
return channel;
}
String _mPath;
FlutterSoundRecorder _recorder;
StreamController<Food> recordingDataController;
Codec _codec = Codec.pcm16;
StreamSubscription _recordingDataSubscription;
_AudioRecordStore({
this.natsService,
this.apiService,
this.tokenId,
this.meetingId,
this.key,
}) {
_recorder = FlutterSoundRecorder();
isRecording = false;
}
@action
Future<void> record() async {
  await _recorder.openAudioSession();
  response = await natsService.requestStartStream(channel);
  recordingDataController = StreamController<Food>();
  _recordingDataSubscription =
      recordingDataController.stream.listen((Food buffer) async {
    if (buffer is FoodData) {
      // Note: toString() on the PCM bytes yields a "[0, 12, ...]" decimal
      // dump, which is large; see the base64 discussion further down.
      String audio = buffer.data.toString();
      Map<String, dynamic> data = {
        "audio": audio,
        "transcribe": true,
      };
      response = await natsService.requestPostStream(
        channel,
        message: jsonEncode(data), // jsonEncode already returns a String
      );
    }
  });
  await _recorder.startRecorder(
    toStream: recordingDataController.sink,
    codec: _codec,
    numChannels: 1,
    sampleRate: SAMPLE_RATE,
  );
  isRecording = true;
}
@action
Future<void> stop() async {
  try {
    await _recorder.stopRecorder();
    if (_recordingDataSubscription != null) {
      await _recordingDataSubscription.cancel();
      _recordingDataSubscription = null;
    }
    response = await natsService.requestStopStream(channel);
  } catch (e, stackTrace) {
    debugPrint('$e\n$stackTrace');
  }
}
@action
Future<void> sendAudio() {}
}
Another question: does flutter_sound run in the background? Or how can I use this library in the background so that the recording process does not interfere with the meeting itself?
Hi Muhammad,
After implementing the code, the app seems to work, but the meeting SDK (I am using the Jitsi SDK, btw) runs slowly and becomes laggy.
You have plugged several processes together. I suggest you investigate your architecture to find which link is responsible for the lag. It could be Flutter Sound, but it could also be your natsService, or even the JSON encoding/decoding. One way to narrow it down is to time each stage per buffer, as in the sketch below.
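(Editor's sketch, not from the original thread: instrumenting the listener from the question's record() method to time the JSON encoding and the NATS call separately for each buffer. requestPostStream and channel are the names from the code above; Stopwatch comes from dart:core.)
_recordingDataSubscription =
    recordingDataController.stream.listen((Food buffer) async {
  if (buffer is FoodData) {
    final sw = Stopwatch()..start();
    final payload = jsonEncode({
      "audio": buffer.data.toString(),
      "transcribe": true,
    });
    final encodeMs = sw.elapsedMilliseconds;
    sw
      ..reset()
      ..start();
    response = await natsService.requestPostStream(channel, message: payload);
    final postMs = sw.elapsedMilliseconds;
    debugPrint('buffer=${buffer.data.length} bytes '
        'encode=${encodeMs}ms post=${postMs}ms');
  }
});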
Your application seems ambitious and interesting. Please share more information with us when you have it.
Good luck Muhammad !
Another question: does flutter_sound run in the background? Or how can I use this library in the background so that the recording process does not interfere with the meeting itself?
Yes, Flutter Sound can run in the background. Several Flutter Sound users do that.
One possible problem is the amount of data that must be passed from the OS to Dart, then re-encoded as JSON, then posted.
At present, Flutter Sound can only stream raw PCM data, which is not very efficient. I would like to be able to encode the data on the fly, in the OS layer, before passing it to Flutter. Unfortunately I do not have much time to work on that, and there are not many developers working on Flutter Sound.
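(Editor's note on the payload size: the question's code serializes the PCM bytes with toString(), which produces a decimal dump like "[0, 12, ...]" at roughly four characters per byte, and it cannot be trivially parsed back to bytes. If the backend can accept it, which is an assumption not confirmed in this thread, base64 is a compact and reversible alternative at about 1.33x the raw size. A minimal sketch for the body of the stream listener:)
if (buffer is FoodData) {
  // base64Encode comes from dart:convert, already imported for jsonEncode.
  final payload = jsonEncode({
    "audio": base64Encode(buffer.data),
    "transcribe": true,
  });
  response = await natsService.requestPostStream(channel, message: payload);
}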
Oh thanks!
FYI, I send the data to the API through NATS, using the dart-nats library instead of REST.
I also suspect it may be one of those processes. I will try to isolate the JSON step, then I will let you know the progress.
Yes, Flutter Sound can run in the background. Several Flutter Sound users do that.
Is there any documentation in this issue thread or in the repository on how to run Flutter Sound in the background? I'm still clueless about how to implement it so it runs in the background 😅
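(Editor's note: the referenced issues below track background support; this thread contains no official recipe. As a general sketch, with assumptions not confirmed by the flutter_sound authors: background recording is mostly platform configuration. On iOS, add the "audio" entry to UIBackgroundModes in ios/Runner/Info.plist; on Android, long-running capture generally needs a foreground service plus the RECORD_AUDIO and FOREGROUND_SERVICE permissions. On the Dart side, the main point is simply not to stop the recorder when the app is paused, for example:)
import 'package:flutter/widgets.dart';

// Keeps the recorder alive across lifecycle changes; the stop callback
// (e.g. audioRecordStore.stop) is only invoked when the app is detached.
class RecorderLifecycleGuard with WidgetsBindingObserver {
  final Future<void> Function() stopRecording;

  RecorderLifecycleGuard(this.stopRecording) {
    WidgetsBinding.instance.addObserver(this);
  }

  @override
  void didChangeAppLifecycleState(AppLifecycleState state) {
    // Deliberately do nothing on paused/inactive so the recorder keeps
    // streaming in the background.
    if (state == AppLifecycleState.detached) {
      stopRecording();
    }
  }

  void dispose() {
    WidgetsBinding.instance.removeObserver(this);
  }
}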
Ref [#547]
Registered in the Flutter Sound Project
Ref [#600]
This issue is stale because it has been open 90 days with no activity. Leave a comment or this will be closed in 7 days.