>I'm reticent to consume any more of this list's bandwidth and trust this will
>end the thread, but here is my point. There are many 'gotchas' to consider
>when developing an application that uses any dynamic file structure, from
>simple flat ASCII files to engorged DBMSs. Add multi-user and multi-tasking
>to the mix and you have a tiger by the tail. The bottom line is, it is the
>responsibility of the programmer to ensure that every tool they use is safely
>and correctly implemented. Transactions, or commit and rollback levels, help,
>but they are only a small piece of the package. It is dangerous to assume
>that, because referential integrity has been maintained, the data is as
>intended. Any application that allows its data to be manipulated in anything
>other than a read-only state is responsible for its integrity.
Forgive me if I'm missing the point here. I have only just finished a
database programming course at university and do not have extensive
real-world practice. However, I was taught that databases are responsible
(among other things) for data integrity and consistency. The client's job
is to add data and ask for data; the rest should be the database's job.
The database ensures there are no violations of business rules
(constraints, foreign keys) and that all of the data, not just part of it,
gets added (transactions). Of course, a client must be programmed in a way
that makes use of these features. But all the things you were talking
about really are part of the definition of an RDBMS.
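To make that concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in RDBMS (table and column names are illustrative, not from the thread). The client only issues inserts; the database itself rejects the row that breaks the foreign-key rule:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY,"
    " customer_id INTEGER NOT NULL REFERENCES customer(id))"
)
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Alice')")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")  # valid parent

try:
    # customer 99 does not exist: the database, not the client, rejects it
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 99)")
    violated = False
except sqlite3.IntegrityError:
    violated = True

print(violated)  # True: the RDBMS enforced the business rule
```

The client's only obligation is to handle the error the database raises; the rule itself lives in the schema.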
>appropriate result to the client. You certainly would not want to lock a
>region or even a row from a stateless client. If two people access the same
>record/row at the same time and change non-key information, but commit the
>changes sequentially two seconds apart, the referential integrity could be
>correct, but the data is not as the first person intended. In this scenario,
>the programmer needs to provide a mechanism to advise the first client that
>his changes were overwritten, thereby turning a stateless event into a
>stateful one and maintaining control of the application.
This situation is one of the main reasons you want to use a database in
the first place. It is the database's job to handle multiple simultaneous
accesses to the same data/row. This is not an easy task: there are several
different locking mechanisms for implementing it, and different transaction
isolation levels for end users and client applications. Database vendors
have spent a great deal of time implementing these things, so it is
probably wise not to reinvent the wheel here.
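The lost-update scenario from the quote can even be detected with a single database primitive rather than hand-rolled state tracking. A common sketch (assuming a version column added to the schema, which is not in the original discussion) relies on the atomicity of one UPDATE statement, which the RDBMS guarantees:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE account (id INTEGER PRIMARY KEY,"
    " note TEXT, version INTEGER NOT NULL)"
)
conn.execute("INSERT INTO account VALUES (1, 'original', 1)")
conn.commit()

def update_note(conn, account_id, new_note, version_read):
    """Write new_note only if nobody else committed since we read the row."""
    cur = conn.execute(
        "UPDATE account SET note = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_note, account_id, version_read),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows touched means our read was stale

# Both "users" read version 1, then commit two seconds apart:
first = update_note(conn, 1, "first user's change", 1)    # succeeds
second = update_note(conn, 1, "second user's change", 1)  # stale, rejected
print(first, second)  # True False
```

The second client can now be told its changes would have overwritten someone else's, exactly the notification the quoted author asks for, without the application implementing any locking of its own.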
>A robust file handler or DBMS is a wonderful tool, but it is only a tool and
>does not relieve a programmer of their responsibility. That's my point.
It is not the APPLICATION programmer's responsibility to handle
simultaneous accesses to the same data or to ensure data integrity. The
application, as a database user, only sends data to the database and asks
for data; the rest is the job of the RDBMS. That's the whole point of
using an RDBMS, imho.
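The "all of the data or none of it" guarantee mentioned earlier can be sketched the same way (again using sqlite3 as a stand-in, with illustrative table names): the client just brackets its statements in one transaction, and a failure anywhere rolls the whole batch back.

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can issue
# BEGIN/COMMIT/ROLLBACK explicitly ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute(
    "CREATE TABLE invoice_line ("
    " invoice_id INTEGER, line_no INTEGER,"
    " amount REAL NOT NULL,"
    " PRIMARY KEY (invoice_id, line_no))"
)

try:
    conn.execute("BEGIN")
    conn.execute("INSERT INTO invoice_line VALUES (1, 1, 10.0)")
    conn.execute("INSERT INTO invoice_line VALUES (1, 2, NULL)")  # NOT NULL violation
    conn.execute("COMMIT")
except sqlite3.IntegrityError:
    conn.execute("ROLLBACK")  # line 1 disappears too: nothing partial persists

count = conn.execute("SELECT COUNT(*) FROM invoice_line").fetchone()[0]
print(count)  # 0
```

The application's whole job here is to roll back on error; the database guarantees that no half-written invoice survives.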