Contents

Internals - how it works

This chapter explains how MusE is built internally, and is meant to be an aid for developers who want to get started with MusE quickly. For details on why things are done the way they are, please refer to the following chapter.

User interface programming

We use the Qt toolkit for GUI and other programming. Qt Assistant is an important tool for getting help; almost everything can be looked up there.

GUIs can either be hardcoded (see arranger.cpp for an example) or created using Qt Designer (see the dialogs under widgets/function_dialogs/ for mostly cleanly-written examples). Don't forget to add your cpp, h and ui files to the corresponding sections in the CMakeLists.txt!

Additionally, MusE offers some custom widgets, like menu title items etc. The following is a small, unordered list of these custom widgets:


Configuration

Configuration is a bit pesky in MusE in its current state. If you get confused by reading this chapter, that's a sign of a sane mind.

There are three kinds of configuration items:

Reading configuration

void MusECore::readConfiguration(Xml&, bool, bool) in conf.cpp is the central point of reading configuration. It is called when MusE is first started (by bool MusECore::readConfiguration()), and also when a song is loaded.
It can be instructed whether to read MIDI ports (3), or global configuration and MIDI ports (1+3). Per-song configuration is always read (2).

When adding new configuration items and thus altering readConfiguration(), you must take care to place your item into the correct section. The code is divided into the following sections:

The sections are divided by comments (they contain --, so just search for them). Please do not just remove code for reading obsolete entries, but always add an appropriate entry to the 'skipping' section in order to prevent error messages when reading old configs.
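The skipping convention can be illustrated with a small, self-contained sketch (the Config struct, readTag and the tag names are all hypothetical; MusE's real code dispatches on Xml tokens in conf.cpp):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the tag-dispatch idea behind readConfiguration():
// known tags update the configuration, obsolete tags are skipped silently so
// that old config files do not produce error messages, and genuinely unknown
// tags are reported. Config, readTag and the tag names are illustrative only.
struct Config {
    std::map<std::string, std::string> values;
    std::vector<std::string> unknown;  // tags that would trigger a warning
};

void readTag(Config& cfg, const std::string& tag, const std::string& value) {
    static const std::map<std::string, bool> known = {
        {"theme",     true},   // a live global option (illustrative)
        {"midiPorts", true},   // a live global option (illustrative)
        {"oldOption", false},  // obsolete: listed only so it can be skipped
    };
    auto it = known.find(tag);
    if (it == known.end())
        cfg.unknown.push_back(tag);  // unhandled tag: would print an error
    else if (it->second)
        cfg.values[tag] = value;     // live option: store it
    // obsolete option: fall through, silently ignored
}
```

The point of the 'skipping' entries is exactly the middle branch: an obsolete tag is recognized, consumed, and discarded without a warning.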

Writing configuration

Global configuration is written using the MusEGui::MusE::writeGlobalConfiguration() functions, while per-song config is written by MusEGui::MusE::writeConfiguration() (notice the missing Global; both are implemented in conf.cpp).

writeConfiguration is actually just a subset of the code in writeGlobalConfiguration. Duplicate code!

Song state

In addition to per-song configuration, there is the song's state. This contains "the song", that is, all tracks, parts and note events, together with information about the currently opened windows, their position, size, settings and so on. Adding new items here is actually pretty painless: the state is read and written using MusECore::Song::read and ::write, both implemented in songfile.cpp. There are no caveats.

How to add new items

When adding global configuration items, add them to the second block ("global configuration") in readConfiguration and to writeGlobalConfiguration.

When adding just-per-song items, it is better not to touch the "configuration" code at all; just add them to the song's state (there might be rare exceptions).

When adding global configuration items, make sure you add them into the correct section of readConfiguration, and into writeGlobalConfiguration.

User controls and automation

Handling user input

Plugins and synthesizers

Overview

When the user launches a plugin's GUI, either a MusE window with the relevant controls is shown, or the native GUI is launched. MusE communicates with this native GUI through OSC (Open Sound Control). The relevant classes are PluginGui, PluginIBase (in plugin.h) and OscIF (in osc.h).

If the user changes a GUI element, first the corresponding control is disabled, so that MusE does not constantly update it from automation data while the user operates it. Then MusE updates the plugin's parameter value, and also records the new value. When appropriate, the controller is enabled again.

Processing the input, recording

Upon operating a slider, PluginIBase::setParam is called, which usually writes the control change into the ringbuffer PluginI::_controlFifo (PluginI::apply() and DssiSynthIF::getData() read this ringbuffer and do the processing accordingly). Furthermore, AudioTrack::recordAutomation is called, which either directly modifies the controller lists or writes the change into a "to be recorded" list (AudioTrack::_recEvents), depending on whether the song is stopped or playing.
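The ringbuffer hand-off between the GUI thread and the audio thread can be sketched as a minimal lock-free single-producer/single-consumer FIFO (ControlEvent and ControlFifo are illustrative stand-ins, not MusE's actual _controlFifo class):

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>

// Illustrative SPSC ring buffer: the GUI thread pushes control changes with
// put(), the audio thread drains them with get(). One slot is kept free so
// that "full" and "empty" can be told apart.
struct ControlEvent {
    int   param;   // controller index
    float value;   // new value
};

template <std::size_t N>
class ControlFifo {
    std::array<ControlEvent, N> buf_;
    std::atomic<std::size_t> head_{0};  // written only by the producer
    std::atomic<std::size_t> tail_{0};  // written only by the consumer
public:
    bool put(const ControlEvent& e) {   // GUI thread
        std::size_t h    = head_.load(std::memory_order_relaxed);
        std::size_t next = (h + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false;               // full: the change is dropped
        buf_[h] = e;
        head_.store(next, std::memory_order_release);
        return true;
    }
    bool get(ControlEvent& e) {         // audio thread
        std::size_t t = tail_.load(std::memory_order_relaxed);
        if (t == head_.load(std::memory_order_acquire))
            return false;               // empty: nothing to process
        e = buf_[t];
        tail_.store((t + 1) % N, std::memory_order_release);
        return true;
    }
};
```

The acquire/release pairing is what lets both threads touch the buffer without locks, which matters because the consumer runs in the realtime audio callback.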

The AudioTrack::_recEvents list consists of CtrlRecVal items (see ctrl.h), which hold the following data:

It is processed when the song is stopped. The call path for this is: Song::stopRolling calls Song::processAutomationEvents, which calls AudioTrack::processAutomationEvents. This function removes the old events from the track's controller list and replaces them with the new events from _recEvents. In AUTO_WRITE mode, simply all controller events within the recorded range are erased; in AUTO_TOUCH mode, the ARVT_START and ARVT_STOP types of the CtrlRecVal events are used to determine the range(s) which should be wiped.
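The range-wiping step can be sketched as follows (CtrlPoints and wipeRange are hypothetical simplifications; MusE's real controller containers are the CtrlList classes from ctrl.h):

```cpp
#include <cassert>
#include <map>

// Sketch of the range-wipe idea in processAutomationEvents(): old control
// points between a recording's start and stop frames are erased before the
// newly recorded points are merged in.
using CtrlPoints = std::map<int, float>;  // frame -> controller value

void wipeRange(CtrlPoints& pts, int start, int stop) {
    // Erase every stored point whose frame lies in [start, stop].
    pts.erase(pts.lower_bound(start), pts.upper_bound(stop));
}
```

In touch mode this would be called once per ARVT_START/ARVT_STOP pair; in write mode, once for the whole recorded range.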

How it's stored

Automation data is kept in AudioTrack::_controller, which is a CtrlListList, that is, a list of CtrlLists, that is, a list of lists of controller objects which hold the control points of the automation graph. The CtrlList also stores whether the list is meant to be discrete (a new control point results in a value jump) or continuous (a new control point results in the value slowly sloping to the new value). Furthermore, it stores a _curVal (accessed by curVal()), which holds the currently active value; this can differ from the actually stored value because of user interaction. This value is also used when there is no stored automation data.
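The discrete/continuous distinction and the _curVal fallback could be evaluated roughly like this (the Ctrl struct is an illustrative simplification, not MusE's actual CtrlList implementation):

```cpp
#include <cassert>
#include <cmath>
#include <iterator>
#include <map>

// One automation graph: sorted control points plus an interpolation mode and
// a fallback value for when no automation data exists. Illustrative only.
struct Ctrl {
    std::map<int, double> points;  // frame -> control point value
    bool   discrete = true;        // false: slope between points
    double curVal   = 0.0;         // used when there is no stored data

    double valueAt(int frame) const {
        if (points.empty())
            return curVal;                 // no automation: current value
        auto next = points.upper_bound(frame);
        if (next == points.begin())
            return next->second;           // before the first point
        auto prev = std::prev(next);
        if (discrete || next == points.end())
            return prev->second;           // hold the last value (jump mode)
        // continuous: linear slope from prev towards next
        double t = double(frame - prev->first) / (next->first - prev->first);
        return prev->second + t * (next->second - prev->second);
    }
};
```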

AudioTrack::addController and removeController are used to add/remove whole controller types; the most important functions which access _controller are:

Whenever a CtrlList has been manipulated, MusEGlobal::song->controllerChange(Track*) shall be called, which emits the MusEGlobal::song->controllerChanged(Track*) signal in order to inform other parts of MusE about the change (currently, only the arranger's part canvas utilizes this).

Enabling and disabling controllers

Disabling a controller depends both on the current automation mode and on whether the GUI is native or not. In AUTO_WRITE mode, once a slider is touched (for MusE GUIs) or an OSC control change is received (for native GUIs), the control is disabled until the song is stopped or a seek occurs.

In AUTO_TOUCH mode (and currently (r1492) also AUTO_READ, but that's to be fixed), once a MusE GUI's slider is pressed down, the corresponding control is disabled. Once the slider is released, the control is re-enabled. Checkboxes remain in "disabled" mode; however, they only affect the recorded automation up to the last toggle of the checkbox. (Example: start the song, toggle the checkbox, toggle it again, wait 10 seconds, stop the song. This will NOT overwrite the last 10 seconds of automation data, but only everything between the first and the last toggle.) For native GUIs, this is a bit tricky, because we don't have direct access to the GUI widgets. That is, we have no way to find out whether the user isn't touching a control at all, or whether he is holding it down but just not operating it. The current behaviour for native GUIs is to behave as in AUTO_WRITE mode.

The responsible functions are: PluginI::oscControl and DssiSynthIF::oscControl for handling native GUIs, PluginI::ctrlPressed and ctrlReleased for MusE default GUIs, and PluginI::guiParamPressed, guiParamReleased, guiSliderPressed and guiSliderReleased for MusE GUIs read from a UI file; guiSlider* obviously handle sliders, while guiParam* handle everything else which is not a slider. They call PluginI::enableController to enable or disable it.
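The press/release bookkeeping behind these functions can be sketched as a simple set of disabled parameters (ControllerState is hypothetical; MusE's real logic lives in PluginI::enableController and the functions named above):

```cpp
#include <cassert>
#include <set>

// Illustrative touch-mode bookkeeping: pressing a MusE-GUI slider disables
// automation updates for that parameter; releasing it re-enables them; a
// stop or seek re-enables everything (like PluginI::enableAllControllers).
struct ControllerState {
    std::set<int> disabled;  // parameters currently under user control

    void press(int param)         { disabled.insert(param); }
    void release(int param)       { disabled.erase(param); }
    void stopOrSeek()             { disabled.clear(); }
    bool enabled(int param) const { return disabled.count(param) == 0; }
};
```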

Furthermore, on every song stop or seek, PluginI::enableAllControllers is called, which re-enables all controllers. The call paths for this are:

Design decisions

Automation

As of revision 1490, automation is handled in two ways: user-generated (live) automation data (generated by the user moving sliders while playing) is fed into PluginI::_controlFifo. Automation data is kept in AudioTrack::_controller, which is a CtrlListList, that is, a list of CtrlLists, that is, a list of lists of controller objects which hold the control points of the automation graph. The CtrlList also stores whether the list is meant to be discrete (a new control point results in a value jump) or continuous (a new control point results in the value slowly sloping to the new value).

While PluginI::_controlFifo can be queried very quickly and thus is processed with a very high resolution (only limited by the minimum control period setting), the automation values are expensive to query and are only processed once per audio driver period. This can lead to noticeable jumps in value.

This could possibly be solved in two ways:

Maintaining a slave control list

This approach would maintain a fully redundant slave control list, similar to PluginI::_controlFifo. This list must be updated every time anything automation-related is changed, and shall contain every controller change as a tuple of controller number and value. It could be processed in the same loop as PluginI::_controlFifo, making it comfortable to implement; furthermore, it would allow cleanly offering automation settings in other places in the future (such as storing automation data in parts or similar).

Holding iterators

We could also hold a list of iterators into the single CtrlLists. This would also keep CPU usage low, because usually the iterators only need to be incremented once. However, it is pretty complex to implement, because the iterators may become completely wrong (because of a seek in the song), and we must iterate through a whole list of iterators.

Just use the current data access functions

By just using the current functions for accessing automation data, we would get a quick-and-dirty solution which, however, wastes far too many CPU resources: on every single frame, we would need to do a binary search on multiple controller lists.

Feature requests

Per-Part automation and more on automation

Automation shall be undo-able. Automation shall reside in parts which are exchangeable, clonable etc. (like the MIDI and wave parts). Global per-synth/per-audio-track automation shall also be available, but this can also be implemented as a special case of part automation (one long part).

Pre-Rendering tracks

The feature

All tracks shall be able to be "pre-renderable". Pre-rendering shall be "layered". Pre-rendering shall act like a transparent audio cache: Audio data is (redundantly) stored, wasting memory in order to save CPU.

That is: Each track owns one or more wave-recordings of the length of the song. If the user calls "pre-render" on a track, then this track is played quasi-solo (see below), and the raw audio data is recorded and stored in the "layer 0" wave recording. If the user has any effects set up to be applied, then each effect is applied on a different layer (creating layer 1, layer 2 etc).

This means that MIDI and drum tracks can also have effects (which usually only operate on audio, but we HAVE audio data because of this pre-rendering).

Furthermore, MusE by default does not send MIDI events to the synthesizers but instead just plays back the last layer of the pre-rendering (for MIDI tracks), or does not pipe the audio data through the whole plugin chain (which costs CPU) but instead just plays back the last layer (for wave tracks). The audible result shall be the same.

Once the user changes any parameter (automation data or plugins for wave tracks, MIDI events or effect plugin stuff for MIDI tracks), MusE shall generate the sound for this particular track in the "old" way (send MIDI data to synths, or pipe audio data through plugins), so that the user will not even notice that MusE actually pre-rendered anything. Either MusE automatically records this during playback (if possible), or it prompts the user to set up his cabling accordingly and then records it, or it (temporarily) disables pre-rendering for this track, falling back to the plain old way of generating sound.

Quasi-solo means: For wave tracks, just solo the track. For MIDI tracks, mute all tracks which are not on the same synth (channel?), and mute all note events which are not on the quasi-soloed track. This causes MusE to still play any controller events from different tracks, because they might have effects on the quasi-soloed track. (You can have notes on channel 1 on one track and controller stuff on channel 1 on another track; then you would need quasi-solo to get proper results.)

Use cases

Saving CPU

On slow systems, this is necessary for songs with lots of, or demanding (or both), soft synths / plugins. Even if a synth or plugin is so demanding that your system is not able to produce sound in real-time, with this feature you'll still be able to use it (this will make editing pretty laggy, because after a change you need to re-render at least a part before you can listen to it, but that is better than being unable to use the synth at all!).

Exporting as audio project

Using pre-rendering on all tracks, you can easily export your project as a multi-track audio file (for use with Ardour or similar DAWs). Just take the last layer of each track, write the raw audio data into the file, and you're done. (Maybe we could even write down the raw layer-0 audio data plus information about the used plugins and settings etc.?)

Mobile audio workstations

You might want to work a bit on your audio projects on your notebook while you're not at home, not having access to your hardware synthesizers. Using this feature, you could have pre-recorded the stuff in your studio before, and now can at least fiddle around with the non-hw-synth-dependent parts of your song, while still having your full song with you.

Applying effects on MIDI tracks

If you have many physical audio inputs, you might already be able to apply effect chains on MIDI tracks, by wiring the synths' audio outputs to your soundcard's inputs, and applying the effects on dedicated input tracks you have to create. This requires you to have expensive hardware, and is pretty complicated, because you need one additional track per MIDI synth.

This feature allows you to apply effects on single MIDI tracks, and not only on full MIDI synths, and doesn't require you to have that many physical audio inputs (you need to manually replug your synths, however).

Possible scenarios

Setting it up

Create a wave track, and MusE will allow you to set or unset pre-rendering for every plugin in the plugin rack (recording the actual track is useless, because it would be a plain copy). Create a MIDI track, and MusE will ask you which physical audio input your synth is connected to. Setting up multiple synths on one physical audio input is allowed; see below.

Pre-rendering stuff

When the user presses the "pre-render" button, all tracks which have been changed since their last pre-rendering will be re-rendered. If you have multiple hardware synths set up as connected to one physical audio input port, MusE will prompt you to plug in the proper cable first.

Making changes

Change a note in a MIDI part, move or delete a part or change automation parameters. MusE will temporarily disable the pre-rendered information and instead generate the sound via sending out MIDI events, piping stuff through effect chains or similar. If you play back the whole song, or if you manually trigger a re-rendering of a track via the context menu, MusE will play back the stuff, record it again and re-enable the pre-rendered information.

Extensions

Automatic discovery of physical audio connections

The user plugs all (or only some) synths' audio outs into the available audio inputs, then runs automatic discovery. This will send MIDI events to each synthesizer, and look on which audio input there is activity. It will then assume that the synthesizer is connected to that particular audio input. Audio inputs which show activity before any MIDI events were sent are not considered, as they're probably connected to microphones or other noise-generating non-synths.

Audio export

As described in the Use cases, MusE can allow you to export your song in some multitrack audio format.

Cheap/Faked changes

For expensive or unavailable synths, changing the volume MIDI controller, the pan controller or similar "easy" controllers will not trigger a complete re-rendering, but instead "fake" the change by modifying the volume data directly on the recorded wave. This might require some learning and might even get pretty complicated.

Intelligent re-recording

For tiny changes, MusE shall only re-render the relevant part. If you change some MIDI notes, then begin re-recording shortly before the changes, and end re-recording as soon as the recorded material doesn't differ too much from the material coming from the synth. Then properly blend the old recording with the updated part.

Slotted editors

Currently, MusE has the pianoroll editor, the drum editor, the score editor, and the controller editor, which is inside the pianoroll/drum editor. All these editors follow a very similar concept: the time axis is horizontal and (almost) linear, they handle parts, and events are manipulated similarly.

A unified editor shall be created which allows you to combine different kinds of editors in one window, properly aligned against each other. These "different kinds of editors" shall be handled as "slots"; one unified editor window consists of:

Each slot contains the following:

The main window does not show its scroll bar if there is only one slot, because the slot's scrollbar is sufficient then.

Slots can be added, destroyed, moved around, maybe even merged (if the slot types allow it); basically, you can compare them with the staves in the score editor.

The slots shall align against each other, that is, if a score editor slot displays a key change with lots of accidentals, then all other slots shall either also display the key change (if they're score slots) or display a gap. Events which happen at the same time shall be at the same x-coordinate, regardless of which slot they are in.

Controller master values

All controllers (MIDI controllers and also automation controllers) shall have one set of "master values" which allow you to set a gain and a bias. Instead of the actual set value, $\textrm{value} \cdot \textrm{gain} + \textrm{bias}$ shall be sent to the MIDI device / the plugin. For controllers like "pan", the unbiased values shall be transformed, that is, a pan of 64, with $\textrm{bias}=2$ and $\textrm{gain}=0.5$, shall be transformed to 66 (because 64 is actually 0, while 0 is actually -64). These values shall be set in the arranger and wherever the actual controller/automation values can be edited.
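As a worked version of the pan example above (applyMaster and its signature are hypothetical; the gain is applied to the value relative to the controller's centre, then the bias is added):

```cpp
#include <cassert>

// Illustrative master-value transform for offset controllers like pan:
// work on the unbiased value (relative to the controller's centre),
// apply gain and bias, then re-centre the result.
int applyMaster(int value, int centre, double gain, double bias) {
    double unbiased = value - centre;             // e.g. pan 64 -> 0
    return int(centre + unbiased * gain + bias);  // e.g. 64 + 0*0.5 + 2 = 66
}
```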

Enabled-indicator while recording

The MusE-plugin-GUIs shall display a small LED displaying whether a controller is currently enabled or disabled. By clicking this LED, the enabled state shall be switched.

Furthermore, there shall be a dedicated window which only lets you switch enabled/disabled states. This will be useful when using external GUIs or the MIDI-controller-to-automation feature, to re-enable a controller when in AUTO_TOUCH mode.

Linear automation editing

While holding some modifier key (like shift), operating the MusE-native-GUI sliders shall only generate control points on clicking and on releasing the slider. This will result in linear graphs for continuous controllers, and in large steps for discrete controllers (which is in particular useful for settings like "which low/high-pass filter type to use").

Maybe make this behaviour default for discrete controllers?


Symbolic names for MIDI ports

MIDI ports shall have a user-defined symbolic name (like "Korg" or "Yamaha DX 7"). The mapping between these symbolic names and the hardware port (like "ALSA midi out port") is stored in the global configuration.

Song files only specify the symbolic names as the ports associated with their tracks. No information about physical devices or port names is stored in the song file, only symbolic names.

This resolves the issues mentioned in 1.2, and also allows the user to share his pieces with other people: they only have to set up that symbolic-to-hardware mapping once (collisions are unlikely, because an equal symbolic name should usually mean the same device), instead of having to re-map every port for every song.