Feature Request: tensorflow.js support

Open beriberikix opened this issue 5 years ago • 43 comments

Has the team looked into supporting TensorFlow.js? There is a tool and documentation for converting existing models to the format tfjs needs. I have a use case that would benefit from running the WAV transcription example in the browser.

Note: I filed a similar feature request on tfjs project.

beriberikix avatar Jul 06 '19 15:07 beriberikix

Nothing stops you from experimenting, but we have not had time to try that.

lissyx avatar Jul 06 '19 18:07 lissyx

To be honest, I fear we won't get decent performance with that, given the experiments we were able to run without vectorization.

lissyx avatar Jul 06 '19 18:07 lissyx

Understood, I hope to experiment and report back!

I watched a recent talk by two of the leads and I'm optimistic, at least about future performance. Today they're using WebGL to get better speed than vanilla JS, and in the future they'll be looking into WebAssembly with threads and SIMD.
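For context, switching backends in tfjs is a one-liner. A rough sketch, assuming the @tensorflow/tfjs and @tensorflow/tfjs-backend-wasm packages and a tfjs version where tf.setBackend() returns a promise:

// Prefer WASM where available, then WebGL, then plain JS.
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the 'wasm' backend

async function pickBackend() {
  for (const name of ['wasm', 'webgl', 'cpu']) {
    if (await tf.setBackend(name)) {
      break; // first backend that initializes successfully wins
    }
  }
  console.log('Using backend:', tf.getBackend());
}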

beriberikix avatar Jul 06 '19 18:07 beriberikix

Yeah, threads and SIMD are still a long way off, according to a colleague working on that and WASM. Still curious about what you can get.

lissyx avatar Jul 06 '19 20:07 lissyx

FTR @beriberikix I don't know if the TF.js converter has the same constraints as the one for leveraging EdgeTPU, but we cannot (yet) convert our model to run on EdgeTPU. It's not impossible that they share constraints.

lissyx avatar Jul 08 '19 09:07 lissyx

I'm trying to convert deepspeech-0.5.1-models.tar.gz using tensorflow/tfjs-converter, but I'm running into an issue (probably because I've never used the tool before!):

SavedModel file does not exist at: 
./deepspeech-0.5.1-models/output_graph.pb/{saved_model.pbtxt|saved_model.pb}

Do you know which, if any, signatures and/or tags were used in generating the SavedModel? Per the help output:

--signature_name SIGNATURE_NAME
                        Signature of the SavedModel Graph or TF-Hub module to
                        load. Applicable only if input format is "tf_hub" or
                        "tf_saved_model".
  --saved_model_tags SAVED_MODEL_TAGS
                        Tags of the MetaGraphDef to load, in comma separated
                        string format. Defaults to "serve". Applicable only if
                        input format is "tf_saved_model".
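(If it helps future readers: assuming the TensorFlow pip package is installed, the saved_model_cli tool it ships can list a SavedModel's tags and signatures, e.g.:

saved_model_cli show --dir ./path/to/saved_model --all

though that only applies once there is an actual saved_model.pb directory to point it at.)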

beriberikix avatar Jul 18 '19 13:07 beriberikix

You should have a look at the export function; maybe some parameters need to be adjusted?

lissyx avatar Jul 18 '19 14:07 lissyx

@beriberikix, it seems like the converter looks for saved_model.pb, but the graph is saved as output_graph.pb instead, so you might just need to rename the file. Other than that, here is a working example of how to convert a SavedModel. The options

  --signature_name=serving_default \
  --saved_model_tags=serve

are merely default settings, so you might skip setting them if the model was saved without altering them.
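Spelled out in full, an invocation with those defaults might look like this (directory names are placeholders):

tensorflowjs_converter \
  --input_format=tf_saved_model \
  --signature_name=serving_default \
  --saved_model_tags=serve \
  ./saved_model_dir ./web_model_dir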

aptlin avatar Jul 18 '19 14:07 aptlin

Ah, I tried something else. Looking at the export function, I noticed the model is saved as a frozen model, which the latest converter no longer supports:

Note: Session bundle and Frozen model formats have been deprecated in TensorFlow.js 1.0. Please use the TensorFlow.js 0.15.x backend to convert these formats, available in tfjs-converter 0.8.6.

I downgraded to 0.8.6 and got further before another error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'feature_win_len' not in Op<name=NoOp; signature= -> >; NodeDef: {{node model_metadata}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

This appears to be related to the version of TF used to originally generate the model. I'll try reverting to the latest converter and renaming the file before investigating the TF version path.

beriberikix avatar Jul 18 '19 14:07 beriberikix

tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'feature_win_len' not in Op<name=NoOp; signature= -> >; NodeDef: {{node model_metadata}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

That's just some metadata that we add to the exported model. Try disabling that and re-exporting?

lissyx avatar Jul 18 '19 15:07 lissyx

@beriberikix Have you been able to make any progress?

lissyx avatar Aug 29 '19 09:08 lissyx

Unfortunately not. I ran into a few issues, and work has gotten very busy lately. I hope to come back to it, but not in the near term.

beriberikix avatar Sep 16 '19 02:09 beriberikix

Are those issues workable items we could help with?

lissyx avatar Sep 16 '19 06:09 lissyx

@beriberikix Have you made any progress? I'm interested in this feature and wanted to check before I get started.

alexcannan avatar Apr 01 '20 17:04 alexcannan

@alexcannan unfortunately I moved on. My project went in a different direction and I was a bit in over my head to begin with. Would love to follow along with your progress!

beriberikix avatar Apr 01 '20 18:04 beriberikix

For the sake of documentation: I was able to set up the tfjs-converter package to attempt to process the output_graph.pb model, using the following command:

tensorflowjs_converter --input_format=tf_frozen_model --output_format=tensorflowjs --output_node_names="logits,new_state_c,new_state_h,mfccs,metadata_version,metadata_sample_rate,metadata_feature_win_len,metadata_feature_win_step,metadata_alphabet" output_graph.pb output_graph.tfjs

I found the output_node_names by inspecting the export() function in DeepSpeech.py.

Unfortunately, tfjs does not support certain operations needed to properly convert the existing model:

ValueError: Unsupported Ops in the model before optimization BlockLSTM, AudioSpectrogram, Mfcc

So until tfjs implements these ops, it looks like a simple tfjs conversion won't be possible. There has been movement recently to set up audio-related ops like stft, but it will take some development to get this working. If anyone is interested in contributing, check out this ticket to get an idea of the op development process.

alexcannan avatar Apr 20 '20 19:04 alexcannan

Unfortunately, tfjs does not support certain operations needed to properly convert the existing model.

Thanks, sadly this is aligned with our experience on several other tools, EdgeTPU included :/

lissyx avatar Apr 20 '20 19:04 lissyx

You could try exporting the TFLite model instead. It does not use BlockLSTM. You'll have to comment out the feature computation sub-graph (and figure out an alternative for computing MFCCs in JS), but maybe it's enough to make some progress.

reuben avatar Apr 20 '20 20:04 reuben

Meyda can be used to compute MFCCs in JavaScript.
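A rough sketch of what that could look like, assuming a 16 kHz mono Float32Array and mirroring DeepSpeech's 32 ms window / 20 ms step / 26 coefficients; Meyda's windowing and mel filterbank are not guaranteed to match DeepSpeech's feature pipeline exactly:

import Meyda from 'meyda';

function computeMfccs(signal) {
  Meyda.sampleRate = 16000;
  Meyda.bufferSize = 512;               // 32 ms at 16 kHz; must be a power of two
  Meyda.numberOfMFCCCoefficients = 26;  // DeepSpeech uses 26 features

  const frames = [];
  for (let start = 0; start + 512 <= signal.length; start += 320) { // 20 ms hop
    frames.push(Meyda.extract('mfcc', signal.subarray(start, start + 512)));
  }
  return frames; // [numFrames][26]
}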

timpulver avatar Apr 20 '20 21:04 timpulver

That seems to work, using the static_rnn RNN impl (like we do for TFLite exports), but without converting to TFLite. https://gist.github.com/reuben/4330b69db52112982c63aa8f98912c9f

Then:

tensorflowjs_converter --input_format=tf_frozen_model --output_format=tfjs_graph_model --output_node_names="logits,new_state_c,new_state_h,metadata_version,metadata_sample_rate,metadata_feature_win_len,metadata_feature_win_step,metadata_alphabet" ../tfjs_test/output_graph.pb output_graph.tfjs
2020-04-20 23:17:51.520207: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:814] Optimization results for grappler item: graph_to_optimize
2020-04-20 23:17:51.520231: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   debug_stripper: debug_stripper did nothing. time = 0.09ms.
2020-04-20 23:17:51.520236: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   model_pruner: Graph size after: 980 nodes (-46), 1671 edges (-44), time = 235.555ms.
2020-04-20 23:17:51.520240: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   constant_folding: Graph size after: 980 nodes (0), 1671 edges (0), time = 1595.54199ms.
2020-04-20 23:17:51.520244: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   arithmetic_optimizer: Graph size after: 828 nodes (-152), 1494 edges (-177), time = 686.765ms.
2020-04-20 23:17:51.520248: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   dependency_optimizer: Graph size after: 793 nodes (-35), 1441 edges (-53), time = 54.915ms.
2020-04-20 23:17:51.520337: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   model_pruner: Graph size after: 793 nodes (0), 1441 edges (0), time = 49.553ms.
2020-04-20 23:17:51.520349: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   constant_folding: Graph size after: 793 nodes (0), 1441 edges (0), time = 755.521ms.
2020-04-20 23:17:51.520354: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   arithmetic_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 661.083ms.
2020-04-20 23:17:51.520359: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   dependency_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 54.496ms.
2020-04-20 23:17:51.520362: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   debug_stripper: debug_stripper did nothing. time = 13.755ms.
2020-04-20 23:17:51.520366: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   model_pruner: Graph size after: 793 nodes (0), 1441 edges (0), time = 36.591ms.
2020-04-20 23:17:51.520528: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   constant_folding: Graph size after: 793 nodes (0), 1441 edges (0), time = 762.896ms.
2020-04-20 23:17:51.520540: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   arithmetic_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 662.148ms.
2020-04-20 23:17:51.520544: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   dependency_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 55.646ms.
2020-04-20 23:17:51.520548: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   model_pruner: Graph size after: 793 nodes (0), 1441 edges (0), time = 50.742ms.
2020-04-20 23:17:51.520552: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   constant_folding: Graph size after: 793 nodes (0), 1441 edges (0), time = 754.065ms.
2020-04-20 23:17:51.520556: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   arithmetic_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 671.946ms.
2020-04-20 23:17:51.520559: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   dependency_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 54.968ms.
2020-04-20 23:17:55.483821: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:814] Optimization results for grappler item: graph_to_optimize
2020-04-20 23:17:55.483844: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   remapper: Graph size after: 768 nodes (-25), 1416 edges (-25), time = 82.316ms.
2020-04-20 23:17:55.483849: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   constant_folding: Graph size after: 768 nodes (0), 1416 edges (0), time = 791.112ms.
2020-04-20 23:17:55.483853: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   arithmetic_optimizer: Graph size after: 768 nodes (0), 1416 edges (0), time = 715.423ms.
2020-04-20 23:17:55.483857: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   dependency_optimizer: Graph size after: 768 nodes (0), 1416 edges (0), time = 56.041ms.
2020-04-20 23:17:55.483861: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   remapper: Graph size after: 768 nodes (0), 1416 edges (0), time = 96.295ms.
2020-04-20 23:17:55.483951: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   constant_folding: Graph size after: 768 nodes (0), 1416 edges (0), time = 771.628ms.
2020-04-20 23:17:55.483964: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   arithmetic_optimizer: Graph size after: 768 nodes (0), 1416 edges (0), time = 712.75ms.
2020-04-20 23:17:55.483968: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816]   dependency_optimizer: Graph size after: 768 nodes (0), 1416 edges (0), time = 55.314ms.
Writing weight file output_graph.tfjs/model.json...

reuben avatar Apr 20 '20 21:04 reuben

@reuben What version of tensorflowjs are you using? It doesn't look like the recommended 0.8.6 version, otherwise it wouldn't let you use --output_format=tfjs_graph_model.

I was able to build a basic output_graph.pb using reuben's diff applied to the current master, but upon running the conversion via the following command:

tensorflowjs_converter --input_format=tf_frozen_model --output_format=tensorflowjs --output_node_names="logits,new_state_c,new_state_h,metadata_version,metadata_sample_rate,metadata_feature_win_len,metadata_feature_win_step,metadata_alphabet" ./exports/output_graph.pb ./exports/output_graph.tfjs

I got the following error:

Using TensorFlow backend.
Traceback (most recent call last):
  File "/home/alex/miniconda3/envs/tfjs-conv/bin/tensorflowjs_converter", line 8, in <module>
    sys.exit(main())
  File "/home/alex/miniconda3/envs/tfjs-conv/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 352, in main
    strip_debug_ops=FLAGS.strip_debug_ops)
  File "/home/alex/miniconda3/envs/tfjs-conv/lib/python3.6/site-packages/tensorflowjs/converters/tf_saved_model_conversion_pb.py", line 331, in convert_tf_frozen_model
    skip_op_check=skip_op_check, strip_debug_ops=strip_debug_ops)
  File "/home/alex/miniconda3/envs/tfjs-conv/lib/python3.6/site-packages/tensorflowjs/converters/tf_saved_model_conversion_pb.py", line 117, in optimize_graph
    ', '.join(unsupported))
ValueError: Unsupported Ops in the model before optimization
AddV2

I added a --skip_op_check=SKIP_OP_CHECK flag to proceed past that ValueError and I was able to get to:

tensorflowjs_converter --input_format=tf_frozen_model --output_format=tensorflowjs --output_node_names="logits,new_state_c,new_state_h,metadata_version,metadata_sample_rate,metadata_feature_win_len,metadata_feature_win_step,metadata_alphabet" ./exports/output_graph.pb ./exports/output_graph.tfjs --skip_op_check=SKIP_OP_CHECK
Using TensorFlow backend.
2020-04-22 18:03:42.796417: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:581] Optimization results for grappler item: graph_to_optimize
2020-04-22 18:03:42.796448: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   debug_stripper: Graph size after: 1026 nodes (0), 1715 edges (0), time = 1.045ms.
2020-04-22 18:03:42.796453: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   model_pruner: Graph size after: 980 nodes (-46), 1671 edges (-44), time = 5.506ms.
2020-04-22 18:03:42.796458: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   constant folding: Graph size after: 975 nodes (-5), 1666 edges (-5), time = 1793.35901ms.
2020-04-22 18:03:42.796463: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   arithmetic_optimizer: Graph size after: 825 nodes (-150), 1489 edges (-177), time = 1143.177ms.
2020-04-22 18:03:42.796468: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   dependency_optimizer: Graph size after: 793 nodes (-32), 1441 edges (-48), time = 8.19ms.
2020-04-22 18:03:42.796473: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   model_pruner: Graph size after: 793 nodes (0), 1441 edges (0), time = 2.886ms.
2020-04-22 18:03:42.796494: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   remapper: Graph size after: 793 nodes (0), 1441 edges (0), time = 2.436ms.
2020-04-22 18:03:42.796498: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   constant folding: Graph size after: 793 nodes (0), 1441 edges (0), time = 887.803ms.
2020-04-22 18:03:42.796512: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   arithmetic_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 1011.50897ms.
2020-04-22 18:03:42.796516: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   dependency_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 7.577ms.
2020-04-22 18:03:42.796521: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   debug_stripper: Graph size after: 793 nodes (0), 1441 edges (0), time = 1.091ms.
2020-04-22 18:03:42.796525: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   model_pruner: Graph size after: 793 nodes (0), 1441 edges (0), time = 2.682ms.
2020-04-22 18:03:42.796529: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   constant folding: Graph size after: 793 nodes (0), 1441 edges (0), time = 889.931ms.
2020-04-22 18:03:42.796534: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   arithmetic_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 1015.008ms.
2020-04-22 18:03:42.796538: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   dependency_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 7.938ms.
2020-04-22 18:03:42.796542: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   model_pruner: Graph size after: 793 nodes (0), 1441 edges (0), time = 2.935ms.
2020-04-22 18:03:42.796546: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   remapper: Graph size after: 793 nodes (0), 1441 edges (0), time = 2.492ms.
2020-04-22 18:03:42.796561: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   constant folding: Graph size after: 793 nodes (0), 1441 edges (0), time = 904.186ms.
2020-04-22 18:03:42.796565: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   arithmetic_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 1013.05298ms.
2020-04-22 18:03:42.796570: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583]   dependency_optimizer: Graph size after: 793 nodes (0), 1441 edges (0), time = 7.999ms.
Writing weight file ./exports/output_graph.tfjs/tensorflowjs_model.pb...
2020-04-22 18:03:43.029490: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-04-22 18:03:43.051779: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3497900000 Hz
2020-04-22 18:03:43.052009: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5580c09fc300 executing computations on platform Host. Devices:
2020-04-22 18:03:43.052033: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
  File "/home/alex/miniconda3/envs/tfjs-conv/bin/tensorflowjs_converter", line 8, in <module>
    sys.exit(main())
  File "/home/alex/miniconda3/envs/tfjs-conv/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 352, in main
    strip_debug_ops=FLAGS.strip_debug_ops)
  File "/home/alex/miniconda3/envs/tfjs-conv/lib/python3.6/site-packages/tensorflowjs/converters/tf_saved_model_conversion_pb.py", line 331, in convert_tf_frozen_model
    skip_op_check=skip_op_check, strip_debug_ops=strip_debug_ops)
  File "/home/alex/miniconda3/envs/tfjs-conv/lib/python3.6/site-packages/tensorflowjs/converters/tf_saved_model_conversion_pb.py", line 139, in optimize_graph
    extract_weights(optimized_graph, output_graph, quantization_dtype)
  File "/home/alex/miniconda3/envs/tfjs-conv/lib/python3.6/site-packages/tensorflowjs/converters/tf_saved_model_conversion_pb.py", line 183, in extract_weights
    [const_manifest], path, quantization_dtype=quantization_dtype)
  File "/home/alex/miniconda3/envs/tfjs-conv/lib/python3.6/site-packages/tensorflowjs/write_weights.py", line 119, in write_weights
    group_bytes, total_bytes, _ = _stack_group_bytes(group)
  File "/home/alex/miniconda3/envs/tfjs-conv/lib/python3.6/site-packages/tensorflowjs/write_weights.py", line 196, in _stack_group_bytes
    _assert_valid_weight_entry(entry)
  File "/home/alex/miniconda3/envs/tfjs-conv/lib/python3.6/site-packages/tensorflowjs/write_weights.py", line 305, in _assert_valid_weight_entry
    data.dtype.name + ' not supported.')
ValueError: Error dumping weight metadata_alphabet, dtype object not supported.

Anyone know why my model has AddV2 ops? I tried to use the latest release (v0.7.0a3) but only the current master had the files applicable for the diff. Perhaps I can try to restructure the v0.6.1 release according to reuben's diff and go from there.

alexcannan avatar Apr 22 '20 22:04 alexcannan

@alexcannan I just did pip install tensorflowjs in a separate virtual environment, so the latest version. We're releasing 0.7.0 in the next few days so you should be able to more easily reproduce this.

reuben avatar Apr 23 '20 07:04 reuben

Upgrading to the most recent version seemed to work! I'll put together a client to test performance.

alexcannan avatar Apr 24 '20 19:04 alexcannan

So, I naïvely put together a small app to transcribe audio, thinking that model.predict(inputBuffer) would give me a transcription. But it looks like model.predict() only corresponds to the session_->Run() call in tfmodelstate.cc's infer(), since running console.log(model.executor.inputs) outputs:

[{
    "name": "input_node",
    "shape": [ 1, 16, 19, 26 ],
    "dtype": "float32"
  },
  {
    "name": "input_lengths",
    "shape": [ 1 ],
    "dtype": "int32"
  },
  {
    "name": "previous_state_c",
    "shape": [ 1, 2048 ],
    "dtype": "float32"
  },
  {
    "name": "previous_state_h",
    "shape": [ 1, 2048 ],
    "dtype": "float32"
  }]

Getting this to work looks like it'll require porting a lot of the native_client C-level API to TypeScript, or maybe a WASM module that links up with the model.predict call. I'll keep poking at this, but if anyone has any quick-and-dirty ideas, I'd love to hear them.
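For reference, a single inference step with those inputs might be shaped like this. A sketch only, assuming tf is in scope, a GraphModel loaded as model, and a featureWindow tensor of shape [1, 16, 19, 26] coming from some MFCC front-end:

let stateC = tf.zeros([1, 2048]);
let stateH = tf.zeros([1, 2048]);

function inferStep(model, featureWindow) {
  const [logits, newC, newH] = model.execute(
    {
      input_node: featureWindow,
      input_lengths: tf.tensor1d([16], 'int32'),
      previous_state_c: stateC,
      previous_state_h: stateH,
    },
    ['logits', 'new_state_c', 'new_state_h']
  );
  stateC.dispose(); stateH.dispose();
  stateC = newC;  // carry the LSTM state across windows
  stateH = newH;
  return logits;  // still needs a CTC decoder to become text
}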

alexcannan avatar Apr 27 '20 18:04 alexcannan

You're right, you'd need to reimplement the whole client logic. Some options for getting initial results faster:

  • Export a model with n_steps=None, taking in an entire audio file at once instead of doing streaming. See here for an example: https://github.com/mozilla/DeepSpeech/blob/6e9b251da27cac697783b4076f9128d6c2d5467f/training/deepspeech_training/train.py#L847-L884
  • You can implement a simple greedy CTC decoder as a start. Accuracy won't be as good as with full-blown beam search decoding, but it'll give you a baseline (see the sketch below). See here for some info: https://distill.pub/2017/ctc/ (Basically take the most likely label at each time step, collapse duplicates, remove blanks.)
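A minimal sketch of that greedy decoder, assuming logits has been squeezed to shape [time, numClasses] and that the CTC blank is the last class index (as in DeepSpeech's alphabet-plus-one layout):

function greedyCtcDecode(logits2d, alphabet) {
  const blank = alphabet.length;                // blank label comes last
  const best = logits2d.argMax(-1).arraySync(); // most likely label per step
  let text = '';
  let prev = blank;
  for (const label of best) {
    if (label !== blank && label !== prev) {
      text += alphabet[label];                  // collapse repeats, drop blanks
    }
    prev = label;
  }
  return text;
}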

reuben avatar Apr 27 '20 19:04 reuben

Hi @alexcannan, how did you load model.json in the browser? I am new to JS. I have put the model.json file produced by tensorflowjs_converter in the src folder. I am building a React app.

This is the model.json in case it helps

import React from 'react';
import * as tf from '@tensorflow/tfjs';
import {loadGraphModel} from '@tensorflow/tfjs-converter';

const MODEL_URL = "./model.json";

class App extends React.Component {
  componentDidMount() {
    (async () => {
      const model = await loadGraphModel(MODEL_URL);
      console.log('tns', model);
    })();
  }

  render() {
    return null; // just loading the model for now, no UI
  }
}

export default App;

Error

tf-core.esm.js:17 Uncaught (in promise) Error: Failed to parse model JSON of response from model.json. Please make sure the server is serving valid JSON for this request.
    at t.<anonymous> (tf-core.esm.js:17)
    at tf-core.esm.js:17
    at Object.throw (tf-core.esm.js:17)
    at s (tf-core.esm.js:17)

MittalShruti avatar Apr 28 '20 07:04 MittalShruti

@MittalShruti I would just try the tf.loadGraphModel() method from the main tfjs package instead of the tfjs-converter import. I'm able to import the model using TFJS 1.7.3.

alexcannan avatar Apr 28 '20 21:04 alexcannan

I was not using a server; http-server resolved the issue.
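For anyone else who hits that parse error: the converted model.json and its .bin weight shards just need to be reachable over HTTP. Something like this works for local testing (paths are placeholders):

npx http-server ./output_graph.tfjs -p 8080 --cors

and then load it with tf.loadGraphModel('http://localhost:8080/model.json').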

MittalShruti avatar Apr 29 '20 09:04 MittalShruti

@alexcannan I am a bit confused as to why you are using model.predict to get the transcription. I looked at the evaluate_tflite.py code; it has ds.stt(audio) to transcribe audio, and similarly the client.js file. How did you figure out that you need to call model.predict, and what should its parameters be?

console.log(model.executor.inputs) outputs 4 parameters that can be found in the tfmodelstate.cc file, but I am unable to figure out the connection between tfmodelstate and model.predict.

MittalShruti avatar Apr 29 '20 11:04 MittalShruti

@MittalShruti TensorFlow.js is an alternative backend, similar to TFModelState and TFLiteModelState in the native client code. This means one has to reimplement the entire inference logic to be able to use the TF.js-converted model. You're not going to magically get the DeepSpeech API just from converting the model; it needs to be implemented in JS (or compiled/transpiled, I guess).

reuben avatar Apr 29 '20 13:04 reuben