MagicSource

902 issues by MagicSource

Execution failed for task ':flutter_native_timezone:verifyReleaseResources':

```
A failure occurred while executing com.android.build.gradle.tasks.VerifyLibraryResourcesTask$Action
Android resource linking failed
ERROR: /Users/jintian/dev/codes/apps/neso_flutter/build/flutter_native_timezone/intermediates/merged_res/release/values/values.xml:194: AAPT: error: resource android:attr/lStar not found.
```
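The `android:attr/lStar` attribute only exists from API level 31 (Android 12) onward, so this AAPT error usually means the library is being compiled against an older SDK. A common workaround (assuming a standard Flutter Android setup; your file layout may differ) is to raise the compile SDK in the module's `build.gradle`:

```gradle
android {
    // android:attr/lStar is only available from API 31 (Android 12) onward
    compileSdkVersion 31
}
```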

Is it possible to make it work with Markdown?

Dear friends, we should really consider putting some audio samples in the README! That would be fantastic!

How can I load the model using the openai-whisper package?

https://github.com/LLaVA-VL/LLaVA-NeXT/blob/56cdba265cc786454115f98e5da967a99b532263/llava/model/llava_arch.py#L449 I don't quite understand this part. Isn't it just:

```python
if num_images == 0:
    cur_image_features = image_features[cur_image_idx]
    cur_input_embeds_1 = self.get_model().embed_tokens(cur_input_ids)
    cur_input_embeds = cur_input_embeds_1
    new_input_embeds.append(cur_input_embeds)
    new_labels.append(labels[batch_idx])
```

Hi, for the text-to-image generation, why not try using existing LLMs, plus the decoder in the tokenizer, and train the whole model with LoRA, just like SEED does? So that...
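For reference, the LoRA idea mentioned above (freeze the base weight, learn only a low-rank update) can be sketched in a few lines of NumPy; the dimensions and rank here are arbitrary illustrative choices, not anything from SEED or this repo:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 16, 4

# Frozen pretrained weight: never updated during fine-tuning.
W = rng.standard_normal((d_in, d_out))

# Trainable low-rank factors; B starts at zero so the adapted
# layer initially behaves exactly like the frozen one.
A = rng.standard_normal((d_in, rank)) * 0.01
B = np.zeros((rank, d_out))

def lora_forward(x, alpha=1.0):
    # Only A and B would receive gradients in training.
    return x @ W + alpha * (x @ A @ B)

x = rng.standard_normal((2, d_in))
# With B = 0 the LoRA path contributes nothing yet.
assert np.allclose(lora_forward(x), x @ W)
```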

Normally, training on my own data gives a reasonable result, but with the same training process, just replacing the data with Cambrian 7M, I get some extremely bad results. Why?

Is there any plan to embed SenseVoice directly into an LLM for understanding? That is, treat SenseVoice as an AudioEncoder and fuse its embeddings with the LLM, so that the LLM can perform deeper understanding.
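The fusion being proposed usually amounts to projecting the audio encoder's output into the LLM's embedding space and concatenating it with the text token embeddings. A shape-level sketch (all dimensions and the linear projector are illustrative assumptions, not SenseVoice internals):

```python
import numpy as np

rng = np.random.default_rng(0)
d_audio, d_llm = 512, 1024   # assumed hidden sizes
n_audio, n_text = 50, 20     # encoder frames, text tokens

# Output of the audio encoder (SenseVoice here) for one utterance.
audio_feats = rng.standard_normal((n_audio, d_audio))

# A learned linear projector maps audio features into the
# LLM's token-embedding space (often a small MLP in practice).
W_proj = rng.standard_normal((d_audio, d_llm)) * 0.02
audio_embeds = audio_feats @ W_proj

# Text token embeddings from the LLM's own embedding table.
text_embeds = rng.standard_normal((n_text, d_llm))

# Fuse by concatenating along the sequence axis; the LLM then
# attends over audio and text positions jointly.
inputs_embeds = np.concatenate([audio_embeds, text_embeds], axis=0)
```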

Streaming inference: when I write the stream directly to the PC speaker, there is always a popping, blown-mic kind of sound. How should I fix this? https://github.com/user-attachments/assets/d675e682-b2bc-468f-b850-aa4bea0b396b Most likely popping/clicking artifacts.
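Pops at chunk boundaries in streamed audio are often caused by amplitude discontinuities between consecutive buffers. One common mitigation, independent of any particular TTS library, is to crossfade a short overlap between adjacent chunks (the overlap length here is an arbitrary example):

```python
import numpy as np

def crossfade(prev_chunk, next_chunk, overlap=256):
    """Blend the tail of prev_chunk into the head of next_chunk
    with linear fades so the waveform stays continuous."""
    fade_out = np.linspace(1.0, 0.0, overlap)
    fade_in = 1.0 - fade_out
    blended = prev_chunk[-overlap:] * fade_out + next_chunk[:overlap] * fade_in
    # prev_chunk minus its tail, then the blended region,
    # then next_chunk minus its head.
    return np.concatenate([prev_chunk[:-overlap], blended, next_chunk[overlap:]])

# Two synthetic chunks with a hard discontinuity at the boundary.
a = np.ones(1024)
b = -np.ones(1024)
out = crossfade(a, b)
```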