thanks @deadsy and @SmashedTransistors, some really interesting points.
this post was kind of related...
yes this is a case in point...
so, as mentioned in the link above, Olivier from MI (pinchenettes in the post above) stated that he uses floats because he saw little advantage in fixed-point maths, given the FPU present on the chips (and his experiments converting the Elements resonator reinforced this for him). i think we can all agree MI modules are very efficient in their use of the CPU - so this shows that, with proper use, floats can be efficient. (*)
so when we moved the MI code to axoloti, we weren't going to 'convert' the code to fixed point (that would have been a complete rewrite), so we wrapped it with conversion calls in/out.
generally i'd say i've been happy with the performance of the MI objects... esp. bearing in mind clouds/elements are run on the same chip as axoloti.
so it does seem a valid conclusion that using floats can yield good performance - i think so.
however, it seems they do need to be used with the same care as int32.
you will also see that MI uses tables rather than floating-point library calls for things like exp, and i'm sure there are many other optimisations.
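the table approach looks roughly like this - a small exp2 lookup table with linear interpolation, in the spirit of what stmlib does (the names and table size here are made up for illustration, not MI's actual code):

```cpp
#include <cmath>

// lookup-table exp2 with linear interpolation: trades a libm call
// for two loads and a multiply-add (sizes/names are illustrative)
static const int kTableBits = 8;
static const int kTableSize = 1 << kTableBits;   // 256 entries + 1 guard
static float exp2_table[kTableSize + 1];

void init_exp2_table() {
    // table covers exp2 over [0, 1]; built once at startup
    for (int i = 0; i <= kTableSize; ++i)
        exp2_table[i] = std::exp2((float)i / kTableSize);
}

// exp2 for x in [0, 1): index into the table and interpolate
float fast_exp2_01(float x) {
    float pos = x * kTableSize;
    int idx = (int)pos;
    float frac = pos - idx;
    float a = exp2_table[idx];
    return a + (exp2_table[idx + 1] - a) * frac;
}
```

the integer part of an exponent can be handled separately (it's just a shift of the result), so a table over [0, 1) is all you need for pitch-to-frequency style conversions.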
so perhaps the take-away is that floats are not intrinsically 'bad', but be careful which operations/functions you use... it's very easy with floats to start using std functions that are costly.
also, for clarity, i think we need to remember to stay with float and not use doubles, as these i'm assuming are 64-bit and so very expensive... aren't there times when floats get automatically coerced into doubles? do we need to take care to avoid this?
also float constants... i think we use the compiler options to assume floats, but we should really be explicit, e.g. use 24.0f rather than 24.0
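to make the pitfall concrete, here's a sketch of the two failure modes in one place: an unsuffixed literal is a double, and mixing it into a float expression promotes the whole thing to double - on an M4, whose FPU is single-precision only, that means software emulation. (GCC's -fsingle-precision-constant is the compiler option that papers over the literal case; the function names below are just illustrative.)

```cpp
#include <cmath>

// SLOW on a single-precision FPU: 2.0 and 12.0 are doubles, so the
// division and pow() run in double via software emulation
float semitones_to_ratio_slow(float s) {
    return pow(2.0, s / 12.0);
}

// stays single precision throughout: 'f' suffixes on literals and the
// float-typed libm variant (powf, not pow)
float semitones_to_ratio_fast(float s) {
    return powf(2.0f, s / 12.0f);
}
```

the same trap applies to sin/sinf, exp/expf etc. - the unsuffixed names take and return double, so reaching for them silently drags a float expression into double territory.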
(*) as a complete aside, i think the MI code also shows that, used 'intelligently', C++ can be used for audio code too - you just have to know what to use, and what not to.