Approved from my side, but perhaps you could educate me; see below:
>>
>>>
>>> +uint Item_func_get_user_var::decimal_precision() const
>>> +{
>>> + uint precision= max_length;
>>> + Item_result restype= result_type();
>>> +
>>> + /* Default to maximum as the precision is unknown a priori. */
>>> + if ((restype == DECIMAL_RESULT) || (restype == INT_RESULT))
>>> + precision= DECIMAL_MAX_PRECISION;
>>> +
>>> + return precision;
>>> +}
>>> +
>>> +
>> What made you add this?
>>
>
> Item::decimal_precision calculates the precision from the maximum
> length, yet in the case of a decimal-type user variable the maximum
> length is the maximum string length of a decimal (see
> Item_func_get_user_var::fix_length_and_dec). Hence it is not correct,
> for the decimal-type user variable case, to calculate the precision
> from the length. Furthermore, the precision and scale for user
> variables are set to the maximum in any case.
Alright, so max_length (the display length) is 83 after
fix_length_and_dec(), and 'decimals' is the default 30.
Item::decimal_precision() returns 81, while
Item_func_get_user_var::decimal_precision() returns 65. But the actual
field created in the test below is DECIMAL(65, 14) even without the above fix:
SET @decimal= 1.1;
CREATE TABLE t1 SELECT @decimal AS c1;
DESC t1;
Incidentally, the difference is the same: 81 - 65 = 16 = 30 - 14. Where
is this adjustment done?
Regards
Martin