On 11/19/10 11:45 AM, Vladislav Vaintroub wrote:
>> -----Original Message-----
>> From: Davi Arnaut [mailto:davi.arnaut@stripped]
>> Sent: Friday, November 19, 2010 1:46 PM
>> To: Vladislav Vaintroub
>> Cc: commits@stripped
>> Subject: Re: bzr commit into mysql-5.5-runtime branch (davi:3186)
>> On 11/19/10 10:41 AM, Vladislav Vaintroub wrote:
>>>> -----Original Message-----
>>>> From: Davi Arnaut [mailto:davi.arnaut@stripped]
>>>> Sent: Friday, November 19, 2010 1:08 PM
>>>> To: Vladislav Vaintroub
>>>> Cc: commits@stripped
>>>> Subject: Re: bzr commit into mysql-5.5-runtime branch (davi:3186) Bug#54790
>>>> On 11/19/10 9:48 AM, Vladislav Vaintroub wrote:
>>>>>>>>> end up with WaitForSingleObject in an optimized
>>>>>>> Asynchronous I/O is nice and all, but it's not viable to
>>>>>>> change the server design to adapt to the pattern of
>>>>>>> overlapped I/O. We should be looking for alternatives that
>>>>>>> fit our pattern, not the other way around.
>>>>> The only thing we can have is poll()..
>>>> and WSAEventSelect. This is the sad reality given the
>>> The reality so far has been closesocket(). What was the real
>>> problem with it? I think it is a pretty good and reliable thing to
>> The problem with it is that there will be attempts to read from or
>> write to invalid sockets. This is race-prone and simply wrong. In
>> order to implement it properly we would require quite a bit of locking.
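The locking Davi alludes to can be sketched as follows. This is an illustrative sketch only (the names `vio_guard`, `vio_cancel`, and `vio_close` are hypothetical, not the actual MySQL vio API): the killer thread never calls close()/closesocket() on a descriptor another thread may still be using, because the fd number could be recycled and the reader would then touch someone else's socket. Instead the killer calls shutdown() under a mutex, which breaks any pending read or write while leaving the descriptor valid, and only the owning thread ever closes it:

```c
#include <assert.h>
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical guard around a connection's socket; the mutex is the
   "hot" lock mentioned above, taken on every cancel and close. */
struct vio_guard {
  pthread_mutex_t lock;
  int fd; /* -1 once the owner has closed it */
};

/* Killer thread: break a blocked read/write without invalidating the fd. */
static void vio_cancel(struct vio_guard *g) {
  pthread_mutex_lock(&g->lock);
  if (g->fd != -1)
    shutdown(g->fd, SHUT_RDWR); /* fd stays valid; I/O is just broken */
  pthread_mutex_unlock(&g->lock);
}

/* Owner thread: the only place the descriptor is actually closed,
   so no other thread can race against fd reuse. */
static void vio_close(struct vio_guard *g) {
  pthread_mutex_lock(&g->lock);
  if (g->fd != -1) {
    close(g->fd);
    g->fd = -1;
  }
  pthread_mutex_unlock(&g->lock);
}

/* Demo: after vio_cancel(), a recv() on the cancelled end sees EOF (0)
   instead of reading from a possibly-recycled descriptor. */
static int demo_cancelled_read(void) {
  int sv[2];
  char buf[8];
  struct vio_guard g = { PTHREAD_MUTEX_INITIALIZER, -1 };
  if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
    return -1;
  g.fd = sv[0];
  vio_cancel(&g);                               /* what KILL would do */
  int n = (int)recv(sv[0], buf, sizeof buf, 0); /* EOF, not a stray read */
  vio_close(&g);
  close(sv[1]);
  return n;
}
```

The sketch uses POSIX calls; on Windows the same shape would use shutdown() plus closesocket(), with the same ownership rule.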
> How often do we concurrently access the same vio? If there is no
> contention, we can do locking, no problem; it does not cost
It is currently accessed when one connection attempts to kill another
connection. There is a contention cost because this needs to be done
under "hot" mutexes, since neither of the connections can go away
during this period.
>>> break the read and write. Even if the current command, deep in the
>>> optimizer, would like to say its last words before it dies, why
>>> care? It is
>> This work is for 5.5, my goal was not to re-design the server due
>> to the lack of proper support for canceling I/O across the
>> platforms that we have to support.
> Well, the redesigning is sort of happening right now. You found a
> design you liked and you insist on it. Not redesigning would be a
> simple short fix, and that is not what is happening. While I do like
> the things that have happened so far with timeouts (even if Windows is
> not involved), and I do not care how fast vio_io_wait() is going to
> be on Windows in the single place it is used, I do care if the whole
> read and write path is revamped and 2-3 extra syscalls are
> introduced per each read and write. Kill on Windows was a problem
> for named pipes, but that problem is long gone, and I'm not aware it
> would be one for sockets.
I've already explained the problems. Which one was not clear?
> If the alternative on Unix, pthread_kill(), did not work due to
> signals being unreliable, so be it; let's redesign that stuff somewhat,
> and I trust you to do the right thing, and there are many Linux
> benchmarkers who would look pretty closely at what you do and cry
> loudly if you break something, so on the Linux side I'm not concerned.
> I am concerned about Windows, though. Intuition says anything
> additional poll-like, including WSAEventSelect, combined with
> WaitForXXXObjects, combined with recv/send, is going to suck compared
> to simple send/recv. And while intuition is a bad argument and a
> benchmark is a good one, let's benchmark your redesign, profile it
> with some decent profiler, and we'll see. If it is going to suck
> indeed, we should leave Windows out of this "make kill reliable"
> story, and let it be as it is now.
No, Windows is not being left behind.
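For reference, the "wait, then read" pattern under debate looks roughly like this. The sketch below is a portable stand-in: poll() plays the role that WSAEventSelect/WaitForMultipleObjects would play on Windows, and the function names (`vio_io_wait`, `vio_read_timeout`) are illustrative, not the actual server API. Vlad's overhead concern is visible in the shape itself: every read now costs at least one extra syscall (the wait) on top of recv(), while the gain is a point where a KILL or a timeout can interrupt a blocked connection:

```c
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Wait until fd is readable or timeout_ms elapses.
   Returns >0 readable, 0 timeout, <0 error. */
static int vio_io_wait(int fd, int timeout_ms) {
  struct pollfd pfd;
  pfd.fd = fd;
  pfd.events = POLLIN;
  return poll(&pfd, 1, timeout_ms);
}

/* Read with a timeout: one poll() plus one recv() per call, versus a
   single blocking recv() in the old design. The rc <= 0 branch is where
   a KILL flag or timeout would be acted upon. */
static long vio_read_timeout(int fd, void *buf, size_t len, int timeout_ms) {
  int rc = vio_io_wait(fd, timeout_ms);
  if (rc <= 0)
    return -1; /* timeout or error; caller may check for KILL here */
  return recv(fd, buf, len, 0);
}
```

Whether the extra wait syscall per read is measurable in practice is exactly what the benchmarking proposed above would settle.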
>> Vlad, our conversation is not going to move forward if you keep
>> focusing on adjusting everything else to fit the pattern that you
>> would like to see.
> Same to you :) Anyway, I'm kind of more scared of your Windows
> plans..
But at least what I'm proposing is something to address the problem that
is being fixed. I'm not trying to fit the problem to the solution.
>>> going to die anyway, as somebody has chosen to kill it. So the end
>>> user (client library) gets a socket error, which it must be
>>> prepared for; socket errors are quite common in TCP/IP.
>> Citation needed.
> Me. What would you expect the server to do? Send an "I'm dying" packet
> reliably to the client, if the client happens to be active right now,
> and then in fact die? And if the client is going to be active in 0.002
> seconds, but not right now, then what? This does sound like it is
> worth a major revamp.
I have no idea what you are talking about. You said that socket errors
are quite common in TCP/IP. This makes no sense at all, and I asked for
something for you to base your argument on.