List: General Discussion
From: Johnny Withers  Date: February 22 2012 2:40pm
Subject: Re: Removing Data Duplicacy
You can also handle this with transactions:

CREATE TABLE `seq` (
  `seq_num` int(10) unsigned NOT NULL DEFAULT '1000'
) ENGINE=InnoDB DEFAULT CHARSET=latin1;


#Initialize sequence numbers
INSERT INTO seq(seq_num) VALUES(1000);

#Get next sequence number
START TRANSACTION;

UPDATE seq SET seq_num=LAST_INSERT_ID(seq_num+1);

#Do other inserts into other tables with this sequence number

COMMIT;

#ROLLBACK if something fails

Any other session that tries the same UPDATE will block on the row lock until
this transaction is either COMMIT'd or ROLLBACK'd, so the same sequence number
can never be handed out twice. As soon as the COMMIT or ROLLBACK occurs, the
waiting transactions will continue.
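For completeness, the generated value is read back in the same session with
SELECT LAST_INSERT_ID() -- LAST_INSERT_ID(expr) stores expr as the
connection's last-insert id, so there is no need to SELECT from seq again:

#Retrieve the sequence number generated by the UPDATE above
SELECT LAST_INSERT_ID();

The value is per-connection, so concurrent sessions never see each other's
numbers.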

-JW

On Wed, Feb 22, 2012 at 7:57 AM, Arthur Fuller <fuller.artful@stripped> wrote:

> I agree with the testicular remedy, but in the case of the iron codpiece, I
> can think of another approach which may work for you. It still uses Select,
> but reads a one-row table, so it shouldn't hurt performance much. The table
> serves no other purpose than storing the next available PK; call the table
> NextPK, say. The algorithm might go like this:
>
> 1. Lock the table NextPK.
> 2. Select its value.
> 3. Update the column with current value + 1.
> 4. Unlock the table.
> 5. Do your inserts.
>
> The lock will be very brief, perhaps brief enough to satisfy your
> requirement.
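>
> A minimal sketch of those steps in MySQL (the NextPK layout and column
> name are illustrative):
>
> LOCK TABLES NextPK WRITE;
> SELECT next_id FROM NextPK;              -- remember this value client-side
> UPDATE NextPK SET next_id = next_id + 1;
> UNLOCK TABLES;
> -- now do the inserts using the value read above
>
> Note that while LOCK TABLES is held, the session may only touch the locked
> table, which is why the inserts happen after UNLOCK TABLES.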
>
> --
> Arthur
> Cell: 647.710.1314
>
> Only two businesses refer to their clientele as users: drug dealing and
> software development.
>
> -- Arthur Fuller
>



-- 
-----------------------------
Johnny Withers
601.209.4985
johnny@stripped
