
Run ASRT on smartphones.

Evanston0624 opened this issue 9 months ago · 8 comments

I want to run ASRT on Android & iOS, and I am using Python for the preliminary testing. First, I load the audio file with:

import librosa
wav_signal, sample_rate = librosa.load(audio_path, sr=None)
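
One thing worth flagging here, since it affects everything downstream: librosa.load returns float32 samples normalized to [-1, 1], whereas, if I read the repo correctly, ASRT's own WAV reader yields raw 16-bit integer amplitudes. A minimal rescaling sketch, assuming the model really was trained on int16-scale amplitudes (the helper name is mine):

import numpy as np
import librosa

# sketch: bring librosa's [-1, 1] float output back to the int16
# amplitude range that ASRT's own reader appears to produce (assumption)
def load_wav_int16(audio_path, target_sr=16000):
    wav_signal, sample_rate = librosa.load(audio_path, sr=target_sr)
    wav_signal = (wav_signal * 32767.0).astype(np.int16)
    return wav_signal, sample_rate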

Next, I load the ASRT model weights (besides the original data, this model was also trained on the CV zh-TW data):

from model_zoo.speech_model.keras_backend import SpeechModel251BN

def load_tf_model(model_path):
    AUDIO_LENGTH = 1600
    AUDIO_FEATURE_LENGTH = 200
    CHANNELS = 1
    # original pinyin classes = 1427, cv-TW = 3, blank = 1
    OUTPUT_SIZE = 1431
    sm251bn = SpeechModel251BN(
        input_shape=(AUDIO_LENGTH, AUDIO_FEATURE_LENGTH, CHANNELS),
        output_size=OUTPUT_SIZE
    )
    sm251bn.load_weights(model_path)  # e.g. './save_models/SpeechModel251bn/SpeechModel251bn_epoch40.model.h5'
    trained_model, base_model = sm251bn.get_model()
    return trained_model, base_model
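
Before converting anything, a quick sanity check on the restored weights is to push a dummy batch through base_model; by my reading of the SpeechModel251BN architecture (8x time pooling), the output should be per-frame softmax scores of shape (1, 200, 1431):

import numpy as np

trained_model, base_model = load_tf_model('./save_models/SpeechModel251bn/SpeechModel251bn_epoch40.model.h5')
dummy = np.zeros((1, 1600, 200, 1), dtype=np.float32)
pred = base_model.predict(dummy)
print(pred.shape)                # expected: (1, 200, 1431)
print(pred[0].sum(axis=-1)[:3])  # each frame should sum to ~1.0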

With the code above I obtain the model. trained_model includes the CTC loss, so I use base_model for the conversion. I tested two export paths, TF Lite and ONNX.

TF Lite:

import os
import tensorflow as tf

def convert_tf_lite(tf_model, save_path):
    # convert to a TensorFlow Lite model
    converter = tf.lite.TFLiteConverter.from_keras_model(tf_model)
    tflite_model = converter.convert()
    # save the TensorFlow Lite model
    with open(save_path, 'wb') as f:
        f.write(tflite_model)
    return os.path.isfile(save_path)
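
To separate conversion problems from preprocessing problems, the TFLite output can be compared against the Keras base_model on identical input. A sketch, where './base_model.tflite' is a placeholder path and padded_features is the padded feature tensor produced in the steps below:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='./base_model.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], padded_features)  # float32, shape (1, 1600, 200, 1)
interpreter.invoke()
tflite_pred = interpreter.get_tensor(out['index'])
keras_pred = base_model.predict(padded_features)
print(np.abs(tflite_pred - keras_pred).max())  # ~0 if the conversion is faithful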

ONNX:

def convert_tf_onnx(tf_model, save_path, opset):
    import os
    import tf2onnx
    # convert to ONNX format
    onnx_model, _ = tf2onnx.convert.from_keras(tf_model, opset=opset)

    # save the ONNX model
    with open(save_path, 'wb') as f:
        f.write(onnx_model.SerializeToString())
    return os.path.isfile(save_path)
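
The same parity check works for the ONNX export via onnxruntime (again a sketch; './base_model.onnx' and padded_features are placeholders):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('./base_model.onnx')
input_name = sess.get_inputs()[0].name
onnx_pred = sess.run(None, {input_name: padded_features})[0]
keras_pred = base_model.predict(padded_features)
print(np.abs(onnx_pred - keras_pred).max())  # ~0 if the export is faithful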

Next, I extract features with a modified Spectrogram:

from speech_features import Spectrogram
data_pre = Spectrogram()
audio_features = data_pre.onnx_run(wavsignal=wav_signal, fs=sample_rate)
audio_features = adaptive_padding(input_data=audio_features, target_length=1600)

I created onnx_run in the original Spectrogram class based on run; it really only adapts the dimensionality of the input arguments.

import numpy as np
from scipy.fftpack import fft

def onnx_run(self, wavsignal, fs=16000):
    if fs != 16000:
        raise ValueError(
            f"[Error] ASRT currently only supports wav audio files with a sampling rate of 16000 Hz, but this "
            f"audio is {fs} Hz.")

    # slide a 25 ms window over the waveform with a 10 ms hop
    time_window = 25  # in ms
    window_length = int(fs / 1000 * time_window)  # window length in samples; always 400 at 16 kHz

    wav_arr = np.array(wavsignal)

    range0_end = int(len(wavsignal) / fs * 1000 - time_window) // 10 + 1  # number of frames to generate
    data_input = np.zeros((range0_end, window_length // 2), dtype=np.float64)  # holds the final frequency features

    for i in range(0, range0_end):
        p_start = i * 160
        p_end = p_start + 400

        data_line = wav_arr[p_start:p_end]
        data_line = data_line * self.w  # apply the window function
        data_line = np.abs(fft(data_line))

        data_input[i] = data_line[0: window_length // 2]  # keep half of the 400 bins (200), since the spectrum is symmetric

    data_input = np.log(data_input + 1)
    return data_input
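
To confirm onnx_run is really equivalent, it can be compared against the original run on the same signal. A sketch, assuming run expects the 2-D (channels, samples) array that ASRT's own reader produces:

import numpy as np

ref = data_pre.run(np.array([wav_signal]), fs=sample_rate)      # original path (assumed signature)
mine = data_pre.onnx_run(wavsignal=wav_signal, fs=sample_rate)  # my adapted path
print(np.abs(ref - mine).max())  # should be exactly 0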

Next, adaptive_padding pads the features to the same shape as the model's original input:

def adaptive_padding(input_data, target_length=1600):
    input_data = input_data.astype(np.float32)

    input_data = np.expand_dims(input_data, axis=0)  # add the batch dimension
    input_data = np.expand_dims(input_data, axis=-1)  # add the channel dimension
    # compute how much padding is needed
    current_length = input_data.shape[1]
    padding_length = max(0, target_length - current_length)

    # split the padding between the two sides
    left_padding = padding_length // 2
    right_padding = padding_length - left_padding
    pad_width = [(0, 0), (left_padding, right_padding), (0, 0), (0, 0)]

    # apply the padding
    padded_data = np.pad(input_data, pad_width, mode='constant').astype(np.float32)

    return padded_data
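
One thing I am unsure about with this centered padding: if I read ASRT's predict path correctly, it writes the features at the start of a fixed-length zero buffer, i.e. it pads only at the tail. If the model was trained that way, centering shifts every frame in time, which could hurt the CTC output. An end-padding variant to test, as a sketch under that assumption:

import numpy as np

# sketch: pad only at the tail, which appears to match how ASRT's
# own predict code fills its fixed-length input buffer (assumption)
def end_padding(input_data, target_length=1600):
    padded = np.zeros((1, target_length, input_data.shape[1], 1), dtype=np.float32)
    n = min(target_length, input_data.shape[0])
    padded[0, :n, :, 0] = input_data[:n]
    return padded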

With the models converted as above, the blank class gets the highest score on every frame of the output, so subsequently calling tf.nn.ctc_beam_search_decoder or K.ctc_decode is pointless.
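
To quantify the blank dominance, the frame-wise argmax can be inspected directly (a sketch; I am assuming the blank is the last class, index OUTPUT_SIZE - 1 = 1430, as is usual for Keras CTC):

import numpy as np

pred = base_model.predict(padded_features)  # (1, 200, 1431)
argmax_ids = pred[0].argmax(axis=-1)
print(np.unique(argmax_ids, return_counts=True))  # only 1430 => blank wins every frame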

Is there any related research or practical approach you could recommend? Or should I provide more tests or specific files?

Thanks

Evanston0624 · May 06 '24 05:05