>>>>> "Christian" == Christian Kirsch <ck@stripped> writes:
Christian> I'm not sure if this is the right place to send a bug report
Christian> concerning MyODBC to, but since I didn't see any other
Christian> address mentioned in the MyODBC sources, I'll give it a try.
Christian> Please point me to the correct place if I'm wrong here.
Christian> I'm trying to get MySQL/MyODBC to work with StarOffice under
Christian> Linux. So far, I've been mostly successful, either by
Christian> tweaking MyODBC a little bit or by adding stuff to iodbc.
Christian> However, there is one point where MyODBC seems to deviate
Christian> from the ODBC standard, and it is particularly annoying:
Christian> SQLDescribeCol is supposed to return the precision of a
Christian> column in the result set. The precision is defined as "The
Christian> maximum number of digits used by the data type of the column
Christian> or parameter." The following table states clearly that the
Christian> precision for a CHAR(10) column must be 10. However, MyODBC
Christian> "optimizes" this into returning the maximum number of
Christian> characters _used_ for a column in the current statement. If,
Christian> for example, I do a "SELECT name FROM addresses where
Christian> name='Miller'", SQLDescribeCol would return 6 as precision
Christian> for column 1. To me, this seems to be a violation of the
Christian> ODBC requirements. I'm aware that one can set OPTION to 1 in
Christian> the SQL connect string to get the correct behavior. However,
Christian> I suggest that the OPTION parameter should rather allow one
Christian> to _deviate_ from the correct behavior, not _enforce_ it.
Christian> The relevant code is in util.c, function
Christian> unireg_to_sql_datatype. I suggest changing the conditions
Christian> in the if-statement at the beginning of this function.
Christian> Furthermore, the call to "max" in this statement is not
Christian> needed, since the maximum of field->length and
Christian> field->max_length is always field->length; the latter can
Christian> never be greater than the former.
I have checked this several times; according to Microsoft's
ODBC specification, SQLDescribeCol returns:
'The column name, type and length generated by the sql statement'.
I can't find anything that says it is not allowed to create a
temporary result set for the SELECT statement with lower CHAR() bounds!
For columns that include expressions, this is definitely allowed!
ODBC provides the SQLColumns() call if you want to get information about
a column in a table; SQLDescribeCol() is used to get information
about a result column. The only reason to provide both functions is to
allow optimization like the one MySQL does.
The big problem is that the ODBC specification is not at all clear on this
point (this is just one of many unclear points!)
Where did you find the above information? I am using 'Microsoft ODBC
3.0, Programmer's Reference, Volume 2' !
The point is that the current optimization saves a lot of memory in
your client, provided the client is correctly coded.
For example, TEXT types will be very hard to handle (read almost
impossible) without this optimization!
The major problem is that if I change the default, it's very likely
that I would break a lot of ODBC clients that use TEXT or BLOB columns!