From: Ingo Strüwing
Date: October 15 2009 8:20pm
Subject: Re: WL#4056: Pluggable storage modules for backup
Hi Rafal,

thank you for the detailed answers to my comments. I now understand and
agree with some of your points. I removed them from the discussion
below.

Some misunderstandings or disagreements remain. I am sorry that I split
my comments over two emails. I guess that happened because the new HLS
drew my attention to some points that either were not in the first
version or that I did not notice previously.

I tried to keep the discussions already begun in this email and put the
new ones in the other one. As usual, I may not have achieved a clean
division.

I marked some paragraphs with headings so they can be referenced more
easily from the other email.

Rafal Somla, 13.10.2009 14:51:

...
> I think some of your questions come from different understanding of the
> HLS specifications which I wrote, so I want to explain this first.
> 
> My intention with HLS specifications is to give a general design on an
> abstract level. This level is far above details such as concrete
> function declarations, data types used to pass information etc. On this
> level I only want to specify what services a backup module should
> implement. Also, for each service I want to specify what information
> must be provided and what information is returned from the service.
> 
> I intentionally do not specify how the information is passed and how it
> is represented. I consider these to be implementation details and I
> expect them to be specified in LLD, not in HLS.

I admit that I have difficulties deciding which information belongs to
which level of abstraction. But I still feel that the different services
are specified at different levels of detail. I would like to see that
fixed. I'll point out my observations in a later email, commenting on
the new HLS.

...

> Ingo Strüwing wrote:
...
>> Rafal Somla, 07.07.2009 17:05:
...
>> 1. "Responsibilities": The verification if magic number and version
>> should not be duplicated in every storage module. It should be done in a
>> central place, I'll call it the "core". Same with compression and
>> encryption.
> 
> I'm not so sure that it is obvious that "it should be done in a central
> place".

...

> I think both options are viable and the choice is a matter of weighing
> advantages and disadvantages of each. I'll describe the alternative in HLS.


"What do we consider part of the backup image?"
-----------------------------------------------

Sorry to add more to this. I believe I understand your point of view.
You see the backup image as a structure that shall be handled as a
"black box" blob by external modules. These should not assume *any*
piece of structure in it. Since the format of that image can change, one
needs to know the version to be able to interpret the image correctly.
The version number is "meta" information, which must be present before
trying to extract information from the image.

To me this is an unusual, unexpected, and complicated way to handle
versioned data formats. I think that some predefined information at the
beginning of a data structure, which determines the rest of the format,
is more common, more expected, and simpler to handle. External modules
don't need to care about the image's version. They don't even need to
know that there are different formats. When they feed a backup image of
arbitrary format into a backup kernel, they can expect that the kernel
is able to figure out how to treat that image.

Also, with your approach, we put the burden of keeping the version
number consistent with the image blob on the storage modules. If a
module became broken and delivered a wrong version number with the
image (or a wrong "stream offset"), we would have no chance to detect
the inconsistency. It could take a while until some function fails
because it was fed wrong data. We would give away the most basic,
effective, and simple consistency check.

If we define "magic bytes" + "version number" as part of the image, we
would immediately detect a failing storage module.
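
Just to illustrate what I mean (all names are invented for this sketch,
not taken from any existing code), a self-describing image could start
like this:

  /* Hypothetical sketch of a self-describing image header. */
  #include <stdint.h>
  #include <string.h>

  #define BACKUP_IMAGE_MAGIC      "MYSQLBKP"
  #define BACKUP_IMAGE_MAGIC_LEN  8

  struct backup_image_header {
    char     magic[BACKUP_IMAGE_MAGIC_LEN];  /* constant marker bytes  */
    uint16_t version;                        /* format of what follows */
  };

  /* The kernel can verify both fields itself, no matter which storage
     module delivered the bytes. A broken module is detected right here. */
  static int check_image_header(const struct backup_image_header *h)
  {
    if (memcmp(h->magic, BACKUP_IMAGE_MAGIC, BACKUP_IMAGE_MAGIC_LEN) != 0)
      return 1;                              /* not a backup image     */
    if (h->version == 0)                     /* or > highest supported */
      return 2;                              /* unknown format version */
    return 0;
  }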

...


> The intention of my design was to make it a responsibility of the
> storage module to mark locations which contain backup images so that
> when a location is opened, the module can tell if it is a backup image
> or something else. You propose a different design, where recognizing
> that a location contains backup image data will be done by the backup
> kernel by means of a magic number (as is done now).


"Marking of backup images"
--------------------------

My thinking is that the user supplies the "location string". By using
distinguishable names, he "marks" a backup image as such, and he can
tell what is a backup image and what is not. In case of human error I
want the backup kernel to detect the mistake as a last resort. If
storage modules can detect such a failure before it hits the backup
kernel, fine with me.

I always try to keep the plain file storage in mind. I would like to
keep it simple. I don't expect users to be confused about the contents
of their files frequently. So I do not want the plain file storage to
mark and distinguish backup image files from other files.

...
>> 6. "acknowledgement": The services look like they all return a status.
>> Why do we need a second status return here? Can't the return status have
>> different values that encode different (non-)problems? Another method
>> could be to set algorithm "none" if the preferred one was rejected.
> 
> On this level of specification, I only want to say *what* information is
> passed in and out of each service. I do not want to say *how* it is
> done. I consider it part of LLD.
> 
> In this case I want to say that "Set compression algorithm service"
> informs its user whether it successfully set-up compression or not. I do
> not want to specify yet, how it is done - all the options which you
> propose are valid. I expect this to be specified/clarified in LLD.


OK. You say that you want to state that the service reports whether it
could process the request successfully. Fine. But why doesn't any other
service need to do that?

In other cases you say the service reports an error. So why is this not
possible for the "Set compression algorithm service"? What is so
special about it?

It is this inconsistency I'm stumbling over. I want either all services
to "report an error" on failure, or all services to "transport"
"acknowledgment" information.

...
>> 9. "amount of data that has been written": What can this information be
>> used for (except of stopping backup)?
> 
> This is similar to the standard POSIX write() function. The client of
> the service requests writing of 100 bytes, but the implementation can
> write only 10 bytes at the moment and returns to the caller to avoid
> waiting. Then the client must take into account that only 10 bytes
> have been written and send the remaining 90 bytes again.
> 
> Such a specification allows for greater flexibility in implementing
> I/O operations.


"Amount of data that has been written"
--------------------------------------

I don't see us implementing asynchronous I/O over the storage module
interface. I just see the added complexity for the backup kernel of
handling incomplete writes.

The POSIX write() function has to handle a vast number of weird
devices. We have the implementation of storage modules under our
control. We can insist on complete writes or failure. This reduces the
complexity of the backup kernel, which is good. I still don't foresee a
real limitation.

Having flexible and future-proof interfaces is a good thing. But if
there is no foreseeable use for a feature, we might have to live with
the burden forever without ever profiting from it.

OTOH, to assume we could specify an interface that won't need to change
forever is naive.

So please come up with a real, foreseeable use case, or get rid of the
parameter and accept an interface change one day in the future if a
real, but unforeseen, need arises.
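
To show the complexity I mean, here is roughly what the backup kernel
has to do if partial writes are allowed, compared with a "complete
write or error" contract. The names (sm_write() etc.) are invented for
this sketch, not the real interface:

  #include <stddef.h>
  #include <sys/types.h>

  struct storage_module;                       /* opaque module handle */
  extern ssize_t sm_write(struct storage_module *sm,
                          const char *buf, size_t len);

  /* With partial writes allowed, the kernel needs a retry loop: */
  int write_all(struct storage_module *sm, const char *buf, size_t len)
  {
    while (len > 0)
    {
      ssize_t written = sm_write(sm, buf, len); /* may write less      */
      if (written < 0)
        return -1;                              /* error               */
      buf += written;
      len -= (size_t) written;
    }
    return 0;
  }

  /* With a "complete write or error" contract a single call suffices:
       if (sm_write(sm, buf, len) < 0) return -1;                      */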

...
>> 11. "information about end of stream": Why do you dislike the common
>> style to report this with a zero length read? I think this could be
>> simpler to implement in the module, and as simple to use in the core. If
>> the module reads in blocks, and the last block is full, it would need to
>> read ahead to decide if this was the last block. A simple module won't
>> do double buffering and thus return an empty buffer at the next call
>> anyway. I like simplicity.
> 
> I hope it is clear already that I don't dislike any particular
> implementation, especially a simple one. On this level, I avoid
> specifying how information is represented and focus on realizing what
> information needs to be passed around.


"Information about end of stream"
---------------------------------

If you see it this way, why don't you think it could be good that,
during a stream write, the storage module gets some idea of when the
stream is at its end?

You will probably answer that the SM can assume end of stream when a
"close" is requested. And then I will ask why the backup kernel cannot
assume end of stream when the SM does not deliver any more data
(zero-length read).

In my eyes the "information about end of stream" is an inconsistency in
level of detail compared with the other service specifications. I
suggest getting rid of it. It is obvious that end of stream needs to be
signaled somehow.
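
For comparison, the kernel side of the zero-length-read convention
would look roughly like this (a sketch with invented names; sm_read()
is assumed to return the number of bytes read, 0 at end of stream, and
-1 on error):

  #include <stddef.h>
  #include <sys/types.h>

  struct storage_module;
  extern ssize_t sm_read(struct storage_module *sm, char *buf, size_t len);

  int read_stream(struct storage_module *sm)
  {
    char buf[4096];
    for (;;)
    {
      ssize_t got = sm_read(sm, buf, sizeof(buf));
      if (got < 0)
        return -1;      /* error reported by the module            */
      if (got == 0)
        break;          /* zero-length read: end of stream reached */
      /* ... process 'got' bytes of image data ... */
    }
    return 0;
  }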

...
>> 15. "Error reporting": The methods should return non-zero on error.
>> There should be a get_errno() and a get_errmsg() method. Internally the
>> modules must be allowed to use include/mysqld_error.h and my_error() or
>> my_printf_error() from include/my_sys.h.
...
> From a more abstract perspective, the most important design decision to
> make w.r.t. error reporting is: who reports error to the final user? Is
> it the backup storage module or the backup kernel. If I understand
> correctly, your suggestion means that it is the storage module which
> reports error to the final user.


I'm not sure what I had in mind. The get_errno() and get_errmsg()
methods mean that the backup kernel retrieves the error number and
error text after a storage module service has failed. The other request
looks like direct error reporting by the module. I agree that this
should not be the normal way, but there might be exceptional situations
where it is necessary.
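
What I had in mind is roughly the following pull model (all names are
invented for illustration; they are not the real interface):

  struct storage_module;                        /* opaque module handle */
  extern int         sm_open(struct storage_module *sm,
                             const char *location);
  extern int         sm_get_errno(struct storage_module *sm);
  extern const char *sm_get_errmsg(struct storage_module *sm);

  /* Kernel side, after a service call such as sm_open() has failed: */
  void report_module_error(struct storage_module *sm)
  {
    int errnum         = sm_get_errno(sm);
    const char *errmsg = sm_get_errmsg(sm);
    /* The kernel, not the module, forwards errnum/errmsg to the user
       through the server's usual error reporting channels. */
    (void) errnum;
    (void) errmsg;
  }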

>   I am afraid of such design because it
> requires a lot of knowledge from the module and makes it harder to
> separate it from the rest of the system.


What do you mean by "separate it from the rest of the system"? I guess
we agree that backup storage modules shall be dynamically loadable
plugins, so they are limited to what such plugins are allowed to
access. Do you have additional limitations in mind?

>   Among other things, a module
> which wants to report errors to the user must know these things:
> 
> - How to send error messages to the user: is it server error log, or
> connection error stack or backup log or other locations as specified by
> the user. For some of these options the module would have to be aware of
> such things as active connection between mysql client and server.


Accepted.

> 
> - What are user preferences regarding error reporting: for example the
> language in which they should be reported, the level of errors which
> should be reported etc.


This is handled through my_error() already. Unfortunately there is only
one language per server, no session-specific language yet. But this is
how MySQL works. We should not try to solve it in a backup-specific
way.

> 
> For all these reasons I'm more inclined to a solution where it is the
> backup kernel which reports errors and backup storage module only
> signals errors to its client.


OK, agreed.

...
> Note: using global error numbers is a bit problematic in an open
> architecture. The global set of error numbers must fit any possible
> implementation of a backup storage module. It is difficult to predict
> all possible errors, thus either storage modules will not be able to
> signal some errors or they will hack-around, breaking the modularity of
> the system. In the ideal world, I'd like to have a system to which an
> independently developed backup storage module can be plugged and it will
> integrate smoothly, including satisfactory error reporting.


While I see the advantages of dynamic error messages for the developer,
I insist that global error numbers are a major requirement. When a
customer gets an error message he cannot understand, he'll contact the
support team. If they don't get a unique error number, but only the
error text in, say, Chinese, they may not be able to help.

Ideally, we could reserve number ranges for plugins and register their
error messages when loading them. I guess such ideas exist already, but
I don't know if or when anything like that will be available.
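
Purely to illustrate what I mean by reserved ranges (nothing like this
exists today; all names and numbers are made up):

  /* Hypothetical sketch: each plugin gets a slice of the global error
     number space and registers its message texts when it is loaded. */
  #define PLUGIN_ERRNO_RANGE_SIZE 1000

  struct plugin_errors {
    int          first_errno;    /* start of the reserved range       */
    unsigned     count;          /* number of messages registered     */
    const char **messages;       /* texts, indexed by (errno - first) */
  };

  static int next_free_errno = 10000;  /* illustrative starting point */

  static int register_plugin_errors(struct plugin_errors *pe,
                                    const char **messages, unsigned count)
  {
    pe->first_errno  = next_free_errno;
    pe->count        = count;
    pe->messages     = messages;
    next_free_errno += PLUGIN_ERRNO_RANGE_SIZE;
    return pe->first_errno;            /* plugin's base error number  */
  }

  /* Support then always sees a unique number, no matter in which
     language the message text is printed. */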

I also strongly insist that backup storage module plugins must not be
more restricted than storage engine plugins. That is, they must be
allowed to use libmysys (including my_error()), libdbug,
system_charset_info, libmystrings (which can handle
system_charset_info), etc.

> 
>> 16. "non-ascii strings": The interface should define strings in system
>> character set.
> 
> Using the system character set creates an, in my opinion unnecessary,
> dependence between a storage module and the server code. The module
> would have to access this variable. Instead, we could specify "utf8" to
> be the standard encoding for strings. This seems to be an obvious option
> but I have not put it down yet, because I'm not sure about all the
> consequences. And perhaps, we can be fine with just us-ascii strings?


No. Location names and error messages must be internationalizable.

Regards
Ingo
-- 
Ingo Strüwing, Database Group
Sun Microsystems GmbH, Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Managing Directors: Thomas Schröder,   Wolfgang Engels,   Wolf Frenkel
Chairman of the Supervisory Board: Martin Häring   HRB München 161028