>>>>> "M" == M -A Lemburg <mal@stripped> writes:
M> Well, I guess you both are right: for expressions the column
M> is temporary and generated by the database, so there's no
M> reference as to what its precision should be. Yet for raw
M> database table columns the Appendix D above clearly states
M> that precision for a CHAR(10) column is 10. For SQL_LONGVARCHAR
M> and other huge data containers it's the maximum length available
M> (used or not).
Note that we are talking about the precision of a result set, not
the precision of the original column.
If you make a SELECT statement, the result set doesn't have to have the
same precision as the original columns.
(This is actually my main point here; it would be very interesting to know
if you can find any documentation that proves/disproves the above.
If we can settle this issue, then the rest is easy!)
In this case libmysql emulates a temporary set with an optimized
CHAR() column. We could easily do this also in the SQL server, and I
think it would be perfectly legal to do so.
(Whenever we introduce cursors in MySQL, this will be done in the
server.)
>> ODBC provides the SQLColumns() call if you want to get information about
>> a column in a table. SQLDescribeCol() is used to get a result of
>> a result column. The only reason to provide both functions is to
>> allow optimization like the one MySQL does.
>> The big problem is that the ODBC specification is not at all clear on this
>> point (this is just one of many unclear points!)
>> Where did you find the above information? I am using 'Microsoft ODBC
>> 3.0, Programmer's Reference, Volume 2'!
>> The problem is that the current optimization helps you save a lot of
>> memory in your client, if your client is correctly coded.
>> For example, TEXT types will be very hard to handle (read almost
>> impossible) without this optimization!
M> Note that a properly coded client will use SQLGetData() to fetch
M> columns whose precision is beyond a certain limit; it will not simply
M> try a malloc(precision) to obtain buffer space for the column.
With MySQL TEXT/BLOB handling this coding is often unnecessary. If you
always select short TEXT/BLOBs, I see no reason why you should have to
use SQLGetData(); SQLGetData() is for big results!
>> The major problem is that if I change the default, it's very likely
>> that I break a lot of ODBC clients that use TEXT or BLOB columns!
M> Actually, I don't think so. TEXT and BLOB columns are passed to
M> the client as SQL_LONGVARCHAR or SQL_LONGVARBINARY types and these
M> are certainly transferred using SQLGetData() rather than via a
M> malloced buffer and SQLBindCol().
This depends on the client; it's perfectly legal to use CHAR
bindings on short TEXT columns, and I would be very surprised if no ODBC
client did this. Such clients will probably crash if MyODBC reports
that the precision is 24M!
As the use of TEXT/BLOB is very common with MySQL (because they are
very simple to use), you will get problems with simple clients that
don't use (or need) SQLGetData() to handle TEXT.
PS: I agree that I am confused about the ODBC specification; my only
defense is that it's very unlikely that I am the only one :)