On Thu, Sep 13, 2001 at 12:02:57PM +0300, monty@stripped wrote:
> >>>>> "Benjamin" == Benjamin Pflugmann <philemon@stripped> writes:
> (You are at least missing only locking the first/last table)
Yes, as initially said, I will do that last, because at first I can
live without it.
Btw, meanwhile I got the impression that this isn't so easy. At least
I do not see any easy way to determine (in handler::lock_count and
handler::store_lock) whether the lock is for an UPDATE (= all tables?!)
or for an INSERT/REPLACE (= one table). SELECT would be rather simple
(read locks).
> Benjamin> Additionally, I yesterday made a real-world test and found that
> Benjamin> REPLACE doesn't work as expected (if replacing an existing row):
> Benjamin> - If the old row is in a different table than the one the inserts are
> Benjamin> made into, one has two rows with the same id afterwards.
> Benjamin> - If the table of the old row and of the insert is the same, the new
> Benjamin> one doesn't replace the old one, but one gets a "duplicate key"
> Benjamin> error.
> The above happens of course because you can't guarantee that something
> is UNIQUE over a set of tables, even if each table has a UNIQUE key.
Yes, of course (at least not without additional checking).
> For example, you can have an auto_increment key in each table, which
> is not unique for the whole set.
I am sorry, I did not make myself clear. I am aware of the problem
that unique columns in the underlying tables don't guarantee
uniqueness for the merge table.
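To make the problem concrete, here is a small sketch (in Python, just as a model; the tables, ids, and values are made up) of why per-table unique keys do not give a merge-wide unique key:

```python
# Model: two underlying MyISAM tables, each with its own UNIQUE key
# (dict keys are unique per dict, like a per-table unique index).
t1 = {1: "row a", 2: "row b"}   # auto_increment produced 1, 2 here
t2 = {1: "row c", 2: "row d"}   # ...and 1, 2 again here, independently

# A MERGE table is just the concatenation of its underlying tables,
# so the same id can occur once per underlying table.
merged = [(key, val, name)
          for name, table in (("t1", t1), ("t2", t2))
          for key, val in table.items()]

ids = sorted(key for key, _, _ in merged)
print(ids)  # [1, 1, 2, 2] - id 1 and 2 each appear twice in the merge
```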
My problem was that I wrongly assumed REPLACE to be a high-level
feature and did not think of the fact that the error codes of the
underlying table handler are used to find the row which conflicts
with the new row (in principle I could have known that, because I
had already read it once or twice, but simply did not think of it at
the time).
Anyhow, it should never result in a "duplicate key" error. Meanwhile I
found the cause (current_table not set accordingly in myrg_write.cc,
while this is used to retrieve the conflicting key position).
To resolve the first error, one could check whether there is a
unique key declared on the merge table and only then check the other
tables for duplicates. This needs a new function (something that calls
mi_check_unique for all tables). An unwanted side effect would be that
REPLACE, too, would need a read lock on all tables, if the merge table
has a unique index, wouldn't it?
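Roughly, what I have in mind is something like the following sketch (Python as a model only; the function name and dict-based "tables" are illustrative, not actual MySQL internals):

```python
# Model of the proposed fix: before writing a row through the MERGE
# handler, scan *all* underlying tables for a conflicting unique key
# (the role a "mi_check_unique over all tables" helper would play),
# remove any conflict, then insert into the designated table.
def replace_into_merge(tables, insert_table, key, row):
    """REPLACE semantics across a set of tables sharing one unique key."""
    for t in tables:          # this read/delete pass is why REPLACE
        t.pop(key, None)      # would need a lock on every table
    insert_table[key] = row

t1, t2 = {7: "old row"}, {}
replace_into_merge([t1, t2], t2, 7, "new row")
print(t1, t2)  # {} {7: 'new row'} - old row gone, exactly one survivor
```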
If that is not obvious now :), I would expect the merge table to
define only a normal key over a column, even if the underlying key is
unique, if the unique constraint shall not be enforced on the merge
table.
> I don't think that REPLACE is that important for merge tables.
> It would be trivial to disable REPLACE for this table type.
Well, unfortunately, it's important to me. The main reason for me to
extend MERGE tables was to avoid changing the application in
question. If it gets too complicated, I will have to do that anyhow,
but I am trying to avoid it (for this part of the application at least).
> Benjamin> Another thing is that on identical tables (where the unique columns are
> Benjamin> also unique for the MERGE table), EXPLAIN gives different results in
> Benjamin> some cases, using filesort to satisfy an ORDER BY although it uses an
> Benjamin> index to read the column in.
> Benjamin> Well, some more things for me to investigate ;-)
> Actually, for MERGE tables, a filesort will probably be more efficient
> than using an index, especially when you are not using LIMIT.
In my case it decreased overall performance of the application by a
factor of 3-4, so the database part must be even worse. I had a
problem with filesort some weeks before, which I posted, too, but I
did not manage to create a reproducible test case. :-(
Eventually, I changed the ORDER BY in a way which did not need
filesort and added a sort stage in the application which made the
whole application faster by the mentioned factor of 3.
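For anyone following along, the trade-off we are weighing can be sketched like this (Python model, made-up data): reading each underlying table in index order and merging the sorted streams, versus collecting all rows and sorting them, which is essentially what filesort does.

```python
import heapq

# Each underlying table yields rows already sorted by the index.
t1 = [1, 4, 8]
t2 = [2, 3, 9]

# Index strategy: a k-way merge of the per-table sorted streams.
via_index = list(heapq.merge(t1, t2))

# Filesort strategy: read everything, then sort the whole result.
via_filesort = sorted(t1 + t2)

# Same answer either way; the question is only which costs less.
print(via_index)  # [1, 2, 3, 4, 8, 9]
```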
> Benjamin> Now, how do we proceed most reasonably? Would it be better that I wait
> Benjamin> (with changes) until you can say something about the original problem
> Benjamin> with ALTER TABLE. And if not, are patches better against my former
> Benjamin> patch or against the original?
> I plan to look at your patch tonight; Hope you can wait until
> tomorrow regarding this..
Seems you didn't. Or did you wait for some kind of approval from
me? Meanwhile I poked around in the source somewhat, but since I did
not get any brilliant idea, it doesn't hurt. ;-)
> My current plan is to add this to 4.0 and then let you backport it to
> MySQL 3.23 for your own usage.
> Any change you could experiment in the 4.0 tree ?
> (We would have much easier to take your patches for the 4.0 tree...)
Sorry, you mean "chance"?
I don't like saying this, especially because you have spent a lot of
your time dealing with my problem(s), but I would rather refrain from
doing so. My main reasons are:
- I have to use it in 3.23.x, and developing and backporting it in 4.0.x
  will duplicate some of the effort and quality assurance before I can
  use it in production.
- I am quite familiar with the behaviour of 3.23.x and this is a clear
advantage when testing and debugging new stuff.
- I assume that 4.0 is only available via BitKeeper (correct?) and I
  + don't like what I have seen of bk until now (it's prejudice, I know)
  + don't really have the time to become acquainted with it
(I know, this sounds arrogant, because that's my time vs. yours,
but you probably already noted that I have only so much time to
spend and therefore this isn't really my choice)
If there is a way to get a tar for the 4.0 tree, I am more than
willing to try to find the time to port my patches to 4.0 before I
send them in. But that will leave the question of reproducibility of
problems. What do you think? Do you see a feasible compromise? Sorry
about being difficult in this matter. :-(