List: General Discussion
From: Eric Bergen
Date: April 18 2005 10:40pm
Subject: Re: mysql import or write my own Perl parser
awk is probably the best tool I can think of for cutting columns out
of a text file. Something like

awk -F '\t' '{ print $2 "," $3 }' my_file

could be used to pick the second and third columns out of a file prior
to importing it.
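For the import step itself, a LOAD DATA statement along these lines
could pull in the comma-separated output of the awk command above
(the table and column names are only placeholders, and the exact
options depend on your schema):

-- my_table, col2, col3 are placeholder names for illustration
LOAD DATA LOCAL INFILE 'my_file_filtered'
INTO TABLE my_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(col2, col3);

mysqlimport is essentially a command-line wrapper around the same
statement, so the same field/line terminator options apply either way.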

-Eric

On 4/18/05, newbie c <newbie_st@stripped> wrote:
> Hi,
> 
> I am about to create a database, and there are a number of files that I need
> to load into it.  They are tab-delimited files.  One of the files contains
> about 4 or 5 columns.  I am only interested in the second and third columns
> right now, but I will load the whole table.  The values in the second column
> can occur more than once in the file, and likewise the values in the third
> column can occur more than once.
> 
> Another file that I want to load as a table into the database only contains two
> columns; one column will be unique, while the second column will have
> duplicate values in the file.
> 
> My question is: when should I use mysqlimport or LOAD DATA, and when should I
> write my own Perl parser to help load the table?
> What criteria would be needed to decide whether to read a file into a hash?
> 
> Also, if I decide to use mysqlimport, is there anything I should watch out for?
> 
> thanks!
> 
> 


-- 
Eric Bergen
eric.bergen@stripped
http://www.ebergen.net