I do something like the following:
On a single machine (actually Master, with slaves),
1. In an atomic operation:
1.1. Find an item to work on.
1.2. Mark the item with a worker id (eg, hostname and process id) and a timestamp.
2. Proceed to work on it. This may take minutes.
3. Go back to the item and clear the worker-id.
* The locking mechanism is independent of the processing that the worker
does. In particular, it does not hold any database-level locks (such as
InnoDB's row locks) while the work is running.
* Because of the timestamp, there is the possibility of coming along
later and discovering that a worker died, and restarting that item.
* The processing can take arbitrarily long. (But there needs to be a
limit, if you wish to reap dead workers.)
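The claim/work/release steps above can be sketched as follows. This is a
minimal illustration using Python's built-in sqlite3 as a stand-in for the
database; the table and column names are made up, but the same
single-UPDATE claim pattern applies unchanged on MySQL:

```python
import os
import socket
import sqlite3
import time

# In-memory stand-in for the work-queue table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE items (
    id         INTEGER PRIMARY KEY,
    payload    TEXT,
    worker_id  TEXT,    -- NULL means "nobody is working on this item"
    claimed_at REAL     -- timestamp of the claim, for reaping dead workers
)""")
conn.execute("INSERT INTO items (payload) VALUES ('job-1'), ('job-2')")

worker = "%s:%d" % (socket.gethostname(), os.getpid())

# Step 1: find an item and mark it with our worker id and a timestamp,
# atomically.  A single UPDATE statement is atomic, so two workers can
# never claim the same row.
cur = conn.execute(
    "UPDATE items SET worker_id = ?, claimed_at = ? "
    "WHERE id = (SELECT id FROM items WHERE worker_id IS NULL LIMIT 1)",
    (worker, time.time()))
claimed = (cur.rowcount == 1)   # 0 rows touched means nothing left to do

# Step 2: do the actual work here; it may take minutes, and no database
# locks are held while it runs.

# Step 3: go back to the item and clear the worker id.
conn.execute("UPDATE items SET worker_id = NULL, claimed_at = NULL "
             "WHERE worker_id = ?", (worker,))

# Reaping: a row whose claimed_at is older than some chosen limit belongs
# to a dead worker; resetting its worker_id to NULL makes the item
# available again.
```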
On 11/9/10 10:25 AM, Justin Edwards wrote:
> It sounds like you need to program in the intelligence for it to behave
> correctly. It shouldn't have anything to do with table locking. Why not
> create a field on the table that says Processing, assign it a true or
> false value, and have your program check it before starting work on an
> item; if it is false, set it to true?
> I personally wouldn't do the processing from both servers unless it was so
> slow that it required two servers, although at that point it's probably good
> to slice up the filesystem.
> Justin Edwards
> TeleLanguage Inc.
> Network Administrator
> On Tue, Nov 9, 2010 at 12:19 PM, Tears !<unix.co@stripped> wrote:
>> Dear Mats!
>> Thanks for your response. Here is what we want:
>> We need locks (rows inserted from our multiple web servers) that will
>> synchronize some scheduled tasks. Here is a scenario:
>> There are two web servers, WEB1 and WEB2. Both have the same scheduled
>> jobs, which process files on a network storage server. The files belong to
>> users, so we can classify the files by user. Suppose WEB1 starts the
>> scheduled job to process files. It will first check whether the files for
>> User 1 are already being processed; if not, it inserts a record in the db
>> to prevent WEB2 from processing the same user's data. So when WEB2 starts
>> the same scheduled job, it will not touch User1's files but will move on to
>> the next available user. Here is the algo:
>> 1 - Check whether a lock has already been acquired on User1's files (a row
>> in the db represents a lock).
>> 2 - Do some processing and reserve some relevant resources.
>> 3 - If no lock exists, acquire a lock on User1's files by inserting a
>> row in the db.
>> 4 - If the lock has already been acquired, move on to the next user and
>> start again from step 1.
>> The problem here is that between step 1 and step 3 we do not want other
>> servers to query for any existing locks, to avoid duplicate locks. Since
>> our two db servers run in a master-master configuration on a shared IP with
>> heartbeat, it might happen that web1 locks a db table on db1 while web2
>> ends up querying db2. That's why we want a table to be locked on both dbs.
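One common way to close the window between the check (step 1) and the insert
(step 3) described above is to drop the separate check entirely and let a
UNIQUE key do it: attempt the INSERT directly, and treat a duplicate-key
error as "lock already held". A minimal sketch using Python's stdlib
sqlite3 (the table and column names are invented; on MySQL a UNIQUE key and
its duplicate-key error behave the same way against a single active server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user_locks (
    user_id INTEGER PRIMARY KEY,   -- unique key: at most one lock per user
    owner   TEXT                   -- which web server holds the lock
)""")

def try_lock(user_id, owner):
    """Return True if we acquired the lock for user_id, False if it is held."""
    try:
        conn.execute("INSERT INTO user_locks (user_id, owner) VALUES (?, ?)",
                     (user_id, owner))
        return True
    except sqlite3.IntegrityError:  # duplicate key: someone holds the lock
        return False

def unlock(user_id, owner):
    conn.execute("DELETE FROM user_locks WHERE user_id = ? AND owner = ?",
                 (user_id, owner))

# WEB1 claims User 1; WEB2's attempt fails, so it moves on to the next user.
assert try_lock(1, "WEB1")
assert not try_lock(1, "WEB2")
```

Because the INSERT itself is atomic, there is no check-then-insert gap for
another server to slip into, so no table lock is needed.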
>> On Mon, Nov 8, 2010 at 12:47 PM, Mats Kindahl<mats.kindahl@stripped
>>> On 11/08/2010 07:13 AM, Tears ! wrote:
>>>> Dear All,
>>>> I am running MySQL with Master/Master replication. Yesterday I locked
>>>> a table on Server B. But on Server A the table was not locked.
>>>> Is it possible that when we lock a table, it is locked on both ends?
>>> In general, no.
>>> Also, I wonder why you want to lock the other table?
>>> Best wishes,
>>> Mats Kindahl
>>> MySQL Replication Mailing List
>>> For list archives: http://lists.mysql.com/replication
>> Umar Draz
>> Network Administrator
Rick James - MySQL Geek