MuseScore
Staves and voices not played to MIDI output correctly
Issue type
General playback bug
Bug description
Since MU4, it is no longer possible to select which MIDI channel an instrument is played on, and MU4's internal channel assignment is not configurable by the user.
Two issues result from that: 1/ One would at least expect all the voices of the same staff to be played on the same channel. This is not the case: e.g. voice 2 is not played on the same channel as voice 1, so that voice is not heard on the MIDI receiver.
2/ All the instruments are played on the same channel, where one would expect each staff to be sent to a separate channel.
Steps to reproduce
For Issue (1)
- Create a score with 1 staff
- Add a few notes on voice 1 and a few notes on voice 2
- Connect a MIDI receiver and configure MU4's MIDI output to that receiver
- Configure your MIDI receiver's input to only channel 1
- Play the score in MU4
Expected: all the notes are heard, whether on voice 1 or voice 2
Actual: only the notes on voice 1 are heard, not the ones on voice 2
For Issue (2)
- Create a score with 2 staves
- Add a few notes on staff 1 and a few notes on staff 2
- Connect a MIDI receiver and configure MU4's MIDI output to that receiver
- Configure your MIDI receiver's input to only channel 1
- Play the score in MU4
Expected: only staff 1 is heard, not staff 2
Actual: both staves are heard.
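For readers without a hardware receiver at hand, the "receiver listens on channel 1 only" configuration in these steps can be simulated in software. Below is a minimal, hypothetical sketch (not MuseScore code) that inspects raw MIDI bytes: a channel-voice message carries its 0-based channel in the low nibble of the status byte, so a receiver filtered to channel 1 would drop anything whose low nibble is not 0:

```python
# Sketch: simulate a MIDI receiver configured to listen on channel 1 only.
# Channel numbers are 1-based here (as in MuseScore's UI); on the wire they
# are 0-based in the low nibble of the status byte.

LISTEN_CHANNEL = 1  # 1-based, as shown in typical receiver UIs

def accepted(message: bytes) -> bool:
    """Return True if this channel-voice message is on the listening channel."""
    status = message[0]
    if status < 0x80 or status >= 0xF0:
        return False  # data byte or system message, not a channel-voice message
    wire_channel = status & 0x0F  # 0-based channel from the low nibble
    return wire_channel + 1 == LISTEN_CHANNEL

# A note-on for middle C on channel 1 passes; the same note on channel 2 does not.
assert accepted(bytes([0x90, 60, 100]))      # 0x90 = note-on, channel 1
assert not accepted(bytes([0x91, 60, 100]))  # 0x91 = note-on, channel 2
```

This is why the repro works as a channel probe: any note MU4 emits on a channel other than 1 simply goes silent on the receiver.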
Screenshots/Screen recordings
No response
MuseScore Version
4.0.2
Regression
Yes, this used to work in MuseScore 3.x and now is broken
Operating system
Windows 10
Additional context
OS: Windows 10 Version 2009, Arch.: x86_64, MuseScore version (64-bit): 4.0.2-230651553, revision: github-musescore-musescore-dbe7c6d
See https://musescore.com/user/36377501/scores/11363188/s/7jiuxK
Hello!
This isn't a use case we encounter very often. Could you please provide more information about your MIDI receiver and your workflow? (what is the desired outcome and the reasoning behind the workflow)
I connect a Roland ep-09 to a Windows 10 PC through a USB/MIDI adaptor CME U2 MIDI Pro.
The workflow is the following: I work with headphones and enter, check, and test new ideas on my keyboard while playing back what I have already written in my score. So I need all the sounds (both what I have already written and am playing back with MU, and what I'm playing on my keyboard) to come from a single source: my keyboard. So I connect my headphones to my keyboard and configure MU to output all the sounds to my keyboard through MIDI. When I'm ready with a new idea/passage, I switch to input mode and enter that new passage through my keyboard.
In MU3 one could configure which staff goes to which channel. In MU4 this is no longer possible. So it's not possible to configure which staff must be sent to my keyboard and which must not.
Furthermore, the different voices are not sent to the same channel. E.g. in the attached score, the first part of the score (which is on voice 1) is sent to channel 1, while the second part of the score (which is on voice 2) is not. (I couldn't manage to use MIDI-OX to identify which channel those notes were sent to.)
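If MIDI-OX is awkward to read, the same question — which channels are these notes arriving on? — can be answered by decoding the status byte of each captured message. A hedged sketch (the message bytes would come from whatever MIDI capture tool you use; this is not tied to MuseScore's code):

```python
from collections import Counter

def channels_seen(messages):
    """Count note-on events per (1-based) MIDI channel in a captured stream."""
    counts = Counter()
    for msg in messages:
        status = msg[0]
        # 0x90-0x9F are note-on status bytes; velocity 0 is effectively note-off.
        if 0x90 <= status <= 0x9F and msg[2] > 0:
            counts[(status & 0x0F) + 1] += 1
    return dict(counts)

# Example capture: two notes on channel 1, one on channel 2.
stream = [bytes([0x90, 60, 80]), bytes([0x90, 64, 80]), bytes([0x91, 67, 80])]
print(channels_seen(stream))  # {1: 2, 2: 1}
```

Running something like this over a capture of MU4's output would show directly which channel voice 2's notes end up on.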
I also need this feature added back for MuseScore to work properly in my workflow. I am arranging a piece for organ and piano, and to be able to play back the organ part I have MuseScore's MIDI output connected through a MIDI loopback device into GrandOrgue, and I need to be able to output each staff to a different channel so notes will play through the correct divisions. I would also like to be able to output the piano part to a Disklavier system, which is on a different MIDI device.
I think a way to both fix this, fix part of #15606, and add some more flexibility with MIDI output could be to have another option in the Sound drop-down menu in the Mixer for output to MIDI devices. Once MIDI output is selected, a window would open that would allow for a MIDI device to be selected and settings such as channel and transposition to be set for each voice, allowing for any voice to output on any channel depending on what is needed.
It seems like the MIDI Out support in MuseScore 4 is hard-coded to send everything to channel 1 on one single MIDI-out port at all times. Besides causing all instruments from all staves/parts to be irreconcilably conflated, there is apparently also no way to stop the mixer from still generating wave audio for them anyway.
In addition to selecting from the internally generated sampled sound libraries, shouldn't the mixer have a notion of a "MIDI channel/device" output pair for rendering the instrument instead? And as noted, in my opinion the internal digital audio pathway should be disabled for such MIDI-out assignments.
Finally, it would also be nice to be able to select "none" as a target from that same populated list, to configure that the respective instrument should not be rendered at all (neither MIDI nor wave audio).
Currently, MIDI output is enabled if and only if a SoundFont is selected. Ideally, we should be able to select for each instrument individually to which MIDI device it should send signals, and on which channel; and separately from MS Basic.
The current behaviour w.r.t. choosing channels is the way it is because it is optimised for correct SoundFont playback and not for MIDI output: each voice has its own channel, and additionally each playing technique (e.g. arco, pizzicato) has its own channel. For MIDI output, that should of course be different.
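The allocation scheme described above can be illustrated with a small sketch (this is an assumption-laden illustration, not MuseScore's actual implementation): if every (voice, playing technique) pair is handed its own channel from a running counter, voice 2 inevitably lands on a different channel than voice 1, which matches the reported behaviour:

```python
# Illustration only (not MuseScore's real code): allocate one MIDI channel
# per (voice, playing technique) pair, in order of first use.

def make_allocator():
    table = {}  # (voice, technique) -> 1-based channel

    def channel_for(voice: int, technique: str) -> int:
        key = (voice, technique)
        if key not in table:
            table[key] = len(table) + 1  # next free channel
        return table[key]

    return channel_for

alloc = make_allocator()
print(alloc(1, "arco"))  # 1
print(alloc(2, "arco"))  # 2 -- voice 2 gets its own channel
print(alloc(1, "pizz"))  # 3 -- and so does each playing technique
print(alloc(1, "arco"))  # 1 -- lookups are stable
```

Under such a scheme, a receiver filtered to channel 1 hears only the first (voice, technique) combination allocated, which is consistent with issue (1) in the report.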
Some UI design is needed for how the user would choose MIDI outputs.
I've about reached the conclusion that none of the MuseScore developers use external MIDI. It just doesn't seem to be a priority; the happy path is definitely MuseScore's internal sounds.
Personally I consider MIDI output as basically "not implemented". It was implemented in a rush, in an attempt to match MS3 in the most minimal way possible, but even that isn't quite achieved. At some point, we should make a proper design and a proper implementation. It's just another of the hundreds of things-we-should-actually-do-properly-at-some-point-but-we-don't-know-when-because-we're-always-busy (well, that is how I see it at least).
MIDI channel assignment / control is a vital feature for MIDIs played back on any modern digital keyboard. The channel selection buttons on a digital keyboard become useless if the MIDI file lumps all notes into one channel!
If I may ask, please put the MIDI channel selection feature back into MuseScore 4 at the earliest opportunity.
I follow this tutorial to synchronize REAPER with MuseScore, which uses MIDI channels on outputs. Is a workaround available in MS4?
A robust MIDI design should include the ability to enable more than one MIDI input port (in MS Preferences). Currently it's a single-select dropdown in v4.
Then port/channel selection could be implemented in the Mixer channel strip and/or Staff/Part Properties (select a staff, right-click, and select Staff/Part Properties).
Edit: specifying independent MIDI port/channel input and output for each staff is really the most robust implementation.
To add a use case to this discussion. I write organ parts in MuseScore that can then be sent to the actual organ via USB MIDI device. Channel 1 is the Pedal division, Channel 2 is the Great division, and Channel 3 is the Swell division. I would add an organ instrument 3 times (3 staves) in MuseScore 3 and rename them Sw/Gr/Pd and assign their MIDI outputs to the corresponding MIDI channels. I could then have MuseScore "play" the organ for me so that I could listen to it from different places in the church to choose better registrations on the organ. Because the MIDI controller is built into the organ, I have no control over channels and so MuseScore would need to output on the correct channels. In the meantime, I've been running MuseScore 3 on my laptop (which I connect to the organ) and export the files as Music XML from MuseScore 4 on my desktop. Inconvenient for sure, but at least it still works!
I would like to add another use case: Sketching.
During the sketching stage, I do not want to think about orchestration details! Even sketching, say, a woodwind quintet -- I want to have the flexibility to rethink which instrument plays what at a later time. I don't want to try to compose directly into the specific instruments. I just want a 2- or 3-staff abstract instrumentation.
But MS4 playback only works through its own internal instruments, and those are limited to conventional Western instruments.
There seems to be no option to use my own synth instead.
Piano notes decay (so the playback may be misleading as to the effect of the notes). Organ, at higher pitches, adds a sub-octave, so open spacings sound closed. Pretty much only got Accordion :wink:
I may actually revert to MS3 for this.
I need to assign the MIDI output channel, like the other users. This is my workflow: I want to arrange/compose in a score while at the same time hearing the track automation applied to effects and instruments. I use Reaper as my main DAW. I need to write in MuseScore, play back each channel into Reaper, and draw the timeline of operations I will run once the score is finished. At the final stage I will export the MIDI file from MuseScore and import it into Reaper to do the last part of the mixing. This workflow is much more efficient than working in two separate programs during composition.
I compose electronic music using multipart harmony, as you might a string quartet. Indeed, for early competition, I use the string voices in MuseScore. I would love to connect MuseScore to my modular synth directly so I can finish the composition and arrangement using the real timbres and co-evolve the timbres with the harmonic choices I make. Staff notation is by far the easiest way to see and edit multipart harmony. Currently, I have to export the notes to a MIDI sequencer and finish the composition there. However, sequencers make it very difficult to see more than one instrument's part. I really wish I could have the ease of use of MuseScore with the hand-crafted timbres of my euro rack.
Why did you close this thread? It's a regression from v3 behaviour. For me it is crucial to have this feature, as I had it before.
According to the history listed here, it was closed because it was fixed in https://github.com/musescore/MuseScore/pull/24944. This fix has probably been included in the 4.5 update, released last Friday. If anything is still broken after this update, please let us know; then we can either reopen this issue, or create a new one.
However, keep in mind that we are aware of the big limitations of MIDI output currently, and that we plan to overhaul it anyway, so it may not be useful to create issues about certain details.
@cbjeukendrup thank you for the reply. Today I installed v4.5 because I really need the feature; basically I need the feature described here: https://github.com/musescore/MuseScore/issues/19262, because it allows me an efficient workflow for composing and producing a piece of music.
I opened my project with v4.5 (ten instruments, each with a double staff to trigger articulations in the VST instruments), then exported it in MIDI format and opened it in Reaper. There I noticed that some unrelated instruments share the same channel in an incoherent way.
I only want to know if you plan to resolve the regression #19262 (as a new feature, if you wish), and if you have some expectation about the version. Do you also plan to let MuseScore output MIDI timecode in order to link the playback to a DAW like Reaper (which is able to receive and sync to it)?
We're certainly planning to address https://github.com/musescore/MuseScore/issues/19262, but I can't say yet when.
Outputting time code too might deserve a separate feature request. FWIW, there are also ongoing efforts to support JACK synchronisation, but of course that doesn't stop us from outputting time code information in addition.
Yes. Also, JACK is not working on Mac systems. Do I need to open a new feature request?
The JACK work is already in progress anyway, so no feature request needed for that; but for MIDI time code output, you could create a request, if that still matters after we implement JACK.