I am comfortable with using [xyz]tickformat, even setting it to a user-defined function. My problem (or at least one way of stating it) is that you can't define such a function effectively, at least not as well as the IDL default does, without knowing the axis range, and that doesn't seem to be possible.
The IDL default clearly knows the axis range: a range from 0 to 20 is labeled [0, 5, 10, 15, 20], while a range from 0 to 0.2 is labeled [0.00, 0.05, 0.10, 0.15, 0.20]. The first tick value is the same for both axes, but the formatting differs because the axis range differs. Yet a user-defined tickformat function must decide the format of the first tick value before it knows what the other tick values are.
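For reference, here is the shape of such a function as I understand it (the name is just a placeholder):

  FUNCTION my_ticks, axis, index, value
    ; This is everything IDL passes in: the axis number, the tick index,
    ; and that tick's data value.  The overall axis range never shows up
    ; here, so the format string can't adapt to it.
    RETURN, STRING(value, FORMAT='(F8.5)')
  END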
My even more basic problem -- the reason I'm worrying about this -- is that the IDL default switches over to scientific notation when changes occur in the fifth decimal place. Thus an axis range from 0.6 to 0.6004 is very reasonably labeled [0.6000, 0.6001, 0.6002, 0.6003, 0.6004], while a range from 0.6 to 0.6003 is labeled [6.0000x10^-1, 6.0005x10^-1, 6.0010x10^-1, 6.0015x10^-1, 6.0020x10^-1, 6.0025x10^-1, 6.0030x10^-1]. I find the latter perverse, since it takes up more space than staying in decimal would have, and I want to force IDL to stay decimal. But I can't duplicate the default behavior without knowing the axis range in advance, which, as stated above, seems impossible. Obviously [xyz]tickformat='(F8.5)' would fix the problem case, but then you get unwanted trailing zeroes when the axis range is larger.
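The closest I've come is to pick a fixed format from the data span before the PLOT call instead of inside the tick function. A rough sketch -- the routine name and the digit-picking rule are mine, not anything built into IDL:

  FUNCTION pick_format, data
    ; Choose enough decimal places to separate the ticks, but never fall
    ; back to exponents.  Clamp the number of places between 0 and 8.
    span = MAX(data, MIN=dmin) - dmin
    IF span LE 0 THEN RETURN, '(F12.2)'
    ndec = (CEIL(-ALOG10(span)) + 1) > 0 < 8
    RETURN, '(F12.' + STRTRIM(ndec, 2) + ')'
  END

  ; e.g.  PLOT, x, y, YTICKFORMAT=pick_format(y)

This stays decimal for the 0.6-to-0.6003 case, but it only approximates the default elsewhere, and it still can't know the range IDL will actually settle on after autoranging -- which is really the hook I'm asking for.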
Any ideas? Any chance that RSI will fix this bug, or allow better user fixes? Or does a fix exist that I'm missing? -Matt