Basic Usage

Consider the following shell script:

#Oracle connection string, containing the Oracle host, port, and SID
CONNECTURL=jdbc:oracle:thin:@20.135.60.21:1521:DWRAC2
#Username to connect with
ORACLENAME=kkaa
#Password to connect with
ORACLEPASSWORD=kkaa123
#Name of the Oracle table to import
oralceTableName=tt
#Columns of that Oracle table to import
columns=AREA_ID,TEAM_NAME
#HDFS path where the imported Oracle data will be stored
hdfsPath=apps/as/hive/$oralceTableName

#Run the import: load the Oracle data into HDFS
sqoop import --append --connect $CONNECTURL --username $ORACLENAME --password $ORACLEPASSWORD --target-dir $hdfsPath  --num-mappers 1 --table $oralceTableName --columns $columns --fields-terminated-by '\001'

After running this script, the import is complete.

Next, the user can create a Hive external table whose location points to the HDFS path where the Oracle data was stored.

Note: the data this job writes to HDFS is plain text, so when creating the Hive external table you do not need to specify RCFile as the storage format; the default TextFile is sufficient. The field delimiter is '\001'. If the same table is imported multiple times, each run appends its data to the HDFS directory.
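A minimal sketch of such an external table, using the table and column names from the script above (the STRING column types and the /user/xxx HDFS prefix are assumptions):

#Hive external table over the directory written by the Sqoop import above (sketch)
hive -e "
CREATE EXTERNAL TABLE tt (
  area_id   STRING,
  team_name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
STORED AS TEXTFILE
LOCATION '/user/xxx/apps/as/hive/tt';
"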

Parallel Import

Suppose we have the following sqoop command, which imports Oracle data into HDFS:

sqoop import --append --connect $CONNECTURL --username $ORACLENAME --password $ORACLEPASSWORD --target-dir $hdfsPath  --m 1 --table $oralceTableName --columns $columns --fields-terminated-by '\001'  --where "data_desc='2011-02-26'"

Note the "-m" parameter in this command: it specifies the degree of parallelism. Here its value is 1, meaning parallel import is not enabled.

Now we can increase the value of "-m" to enable parallel import, as in the following command:

sqoop import --append --connect $CONNECTURL --username $ORACLENAME --password $ORACLEPASSWORD --target-dir $hdfsPath  --m 4 --table $oralceTableName --columns $columns --fields-terminated-by '\001'  --where "data_desc='2011-02-26'"

In general, Sqoop will then launch 4 processes (map tasks) that import the data concurrently.

However, if the Oracle table being imported has no primary key, you will get an error like this:

ERROR tool.ImportTool: Error during import: No primary key could be found for table creater_user.popt_cas_redirect_his. Please specify one with --split-by or perform a sequential import with '-m 1'.

In this situation, to make better use of Sqoop's parallel import, we need to understand how Sqoop implements it.

If the primary key of the Oracle table to be imported in parallel is id, and the degree of parallelism is 4, Sqoop first runs a query like the following:

select min(id) as min, max(id) as max from table [where clause, if one was specified];

This query obtains the minimum and maximum values of the split column (id); suppose they are 1 and 1000.

Sqoop then splits the query according to the requested degree of parallelism. In this example, the import is split into the following 4 SQL statements, executed at the same time:

select * from table where id >= 1 and id < 250;

select * from table where id >= 250 and id < 500;

select * from table where id >= 500 and id < 750;

select * from table where id >= 750 and id <= 1000;

Note that the split column needs to be an integral type.

The example above shows the question we face when the table to be imported has no primary key: how to manually choose a suitable split column and an appropriate degree of parallelism.

Here is a real example to illustrate:

We want to import creater_user.popt_cas_redirect_his from Oracle.

This table has no primary key, so we need to choose a suitable split column by hand.

First, look at which columns the table has:

I then assumed that ds_name might be a usable split column and ran the following SQL to check:

select min(ds_name), max(ds_name) from creater_user.popt_cas_redirect_his where data_desc='2011-02-26'

The result was not promising: min and max were equal, so this column is not suitable as the split column.

Next, test another column, CLIENTIP:

select min(CLIENTIP), max(CLIENTIP) from creater_user.popt_cas_redirect_his where data_desc='2011-02-26'

This result looked good, so we use CLIENTIP as the split column.

Accordingly, we run the parallel import with the following command:

sqoop import --append --connect $CONNECTURL --username $ORACLENAME --password $ORACLEPASSWORD --target-dir $hdfsPath  --m 12 --split-by CLIENTIP --table $oralceTableName --columns $columns --fields-terminated-by '\001'  --where "data_desc='2011-02-26'"

Running this command, the job took 20 min 35 sec and imported 33,222,896 rows.

Alternatively, if this kind of splitting does not meet our needs well, we can run several Sqoop commands at the same time and put the splitting rule into each command's --where parameter. For example:

sqoop import --append --connect $CONNECTURL --username $ORACLENAME --password $ORACLEPASSWORD --target-dir $hdfsPath  --m 1 --table $oralceTableName --columns $columns --fields-terminated-by '\001'  --where "data_desc='2011-02-26' logtime<10:00:00"

sqoop import --append --connect $CONNECTURL --username $ORACLENAME --password $ORACLEPASSWORD --target-dir $hdfsPath  --m 1 --table $oralceTableName --columns $columns --fields-terminated-by '\001'  --where "data_desc='2011-02-26' logtime>=10:00:00"

thereby achieving the goal of importing in parallel.
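For instance, a minimal shell sketch (reusing the variables from the first script) that launches the two commands concurrently and waits for both to finish:

#Run both hour-range imports at the same time and block until they complete
sqoop import --append --connect $CONNECTURL --username $ORACLENAME --password $ORACLEPASSWORD --target-dir $hdfsPath --m 1 --table $oralceTableName --columns $columns --fields-terminated-by '\001' --where "data_desc='2011-02-26' and logtime<'10:00:00'" &
sqoop import --append --connect $CONNECTURL --username $ORACLENAME --password $ORACLEPASSWORD --target-dir $hdfsPath --m 1 --table $oralceTableName --columns $columns --fields-terminated-by '\001' --where "data_desc='2011-02-26' and logtime>='10:00:00'" &
wait   #wait for both background imports to finish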

 

For reference, here is a MySQL import and the matching export command:

/home/wanghai01/cloudera/sqoop-1.2.0-CDH3B4/bin/sqoop import --connect jdbc:mysql://XXXX/crm --username XX --password XX --table tb_keyword_data_201104 --split-by winfo_id --target-dir /user/wanghai01/data/ --fields-terminated-by '\t' --lines-terminated-by '\n' --input-null-string '' --input-null-non-string ''

/home/wanghai01/cloudera/sqoop-1.2.0-CDH3B4/bin/sqoop export --connect jdbc:mysql://XXXX/crm --username XX --password XX --table tb_keyword_data_201104 --export-dir /user/wanghai01/data/ --fields-terminated-by '\t' --lines-terminated-by '\n' --input-null-string '' --input-null-non-string ''



===========================================

Sqoop has quite a few commands and parameters. Below I organize them one by one from the point of view of both practice and the source code; the Sqoop version used here is 1.3.

Sqoop has roughly 13 commands, plus several groups of common arguments that all 13 commands accept. The 13 commands are listed first (an example invocation follows the list).

 

 

1. import (ImportTool): import data from a relational database (a table or a query) into HDFS
2. export (ExportTool): export data from HDFS into a relational database
3. codegen (CodeGenTool): generate Java code for a database table and package it into a jar
4. create-hive-table (CreateHiveTableTool): create a Hive table
5. eval (EvalSqlTool): run a SQL statement and view the result
6. import-all-tables (ImportAllTablesTool): import all tables of a database into HDFS
7. job (JobTool)
8. list-databases (ListDatabasesTool): list all database names
9. list-tables (ListTablesTool): list all tables in a database
10. merge (MergeTool)
11. metastore (MetastoreTool)
12. help (HelpTool): show help
13. version (VersionTool): show the version
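As an example, the eval command can be used to run a quick SQL statement against the source database before importing; a minimal sketch (the connection details and table name are placeholders):

sqoop eval --connect jdbc:mysql://localhost/sqoop_datas \
  --username XX --password XX \
  --query "SELECT COUNT(*) FROM some_table"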

 

        Next, the common Sqoop arguments are listed, followed by the arguments specific to each of the 13 commands above. The common arguments fall into these groups: Common arguments, Incremental import arguments, Output line formatting arguments, Input parsing arguments, Hive arguments, HBase arguments, and Generic Hadoop command-line arguments. They are explained one by one below:

       1. Common arguments

           These are general parameters, mainly for connecting to the relational database (an example follows the list).

 

 

1. --connect: JDBC URL used to connect to the relational database. Example: jdbc:mysql://localhost/sqoop_datas
2. --connection-manager: connection manager class to use; generally not needed
3. --driver: JDBC driver class to use
4. --hadoop-home: Hadoop home directory. Example: /home/guoyun/hadoop
5. --help: print usage information
6. --password: password for connecting to the relational database
7. --username: username for connecting to the relational database
8. --verbose: print more information while working (in effect it lowers the log level); this flag takes no value
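A minimal sketch that combines several of these arguments (credentials are placeholders), listing the tables of the example database with verbose output:

sqoop list-tables \
  --connect jdbc:mysql://localhost/sqoop_datas \
  --username XX --password XX \
  --verbose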

===========================================

Sqoop import reports a "java heap space" error



java heap space error. To summarize, there are two solutions:

1. Increase the JVM heap size of each child task process



Edit the mapred-site.xml file and add properties along the following lines (a sketch; the property names here are assumed to be the standard Hadoop child-task JVM settings):

<property>
  <!-- assumed name: standard Hadoop setting for child task JVM options -->
  <name>mapred.child.java.opts</name>
  <value>-Xmx512M</value>
</property>
<property>
  <!-- assumed name: map-task-specific JVM options -->
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx512M</value>
</property>
<property>
  <!-- assumed name: reduce-task-specific JVM options -->
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx512M</value>
</property>




2. Adjust the number of map tasks the job uses, via the -m option:

sqoop ... -m <number of map tasks>
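For example, a sketch reusing the variables from the first script (the split column and mapper count here are arbitrary placeholders to experiment with):

sqoop import --append --connect $CONNECTURL --username $ORACLENAME --password $ORACLEPASSWORD --target-dir $hdfsPath --table $oralceTableName --columns $columns --fields-terminated-by '\001' --split-by AREA_ID -m 2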



==================================================



The following run shows another failure mode: an Oracle-to-HBase import whose map tasks fail with "java.lang.IllegalArgumentException: No columns to insert".

12/02/08 14:36:52 ERROR tool.ImportTool: Error during import: Import job failed!
user@ubuntu:~$ sqoop-1.3/bin/sqoop import --append --connect jdbc:oracle:thin:@192.168.5.100:1522:orcl2 --username olapgz --password jt888 --m 5 --split-by id --table cb_vio2 --columns id,pzbh  --hbase-table cb_vio1 --hbase-create-table --column-family cb_viobase --hbase-row-key id --input-null-string '' --input-null-non-string ''
12/02/08 14:41:58 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
12/02/08 14:41:58 INFO manager.SqlManager: Using default fetchSize of 1000
12/02/08 14:41:58 INFO tool.CodeGenTool: Beginning code generation
12/02/08 14:41:59 INFO manager.OracleManager: Time zone has been set to GMT
12/02/08 14:41:59 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM cb_vio2 t WHERE 1=0
12/02/08 14:42:00 INFO orm.CompilationManager: HADOOP_HOME is /home/user/hadoop
12/02/08 14:42:02 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-user/compile/5fc8b26d1ba6035bb6c35eb6ed2f53c2/cb_vio2.jar
12/02/08 14:42:02 INFO mapreduce.ImportJobBase: Beginning import of cb_vio2
12/02/08 14:42:02 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.4-cdh3u3--1, built on 01/26/2012 18:07 GMT
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:host.name=ubuntu
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_26
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/user/hadoop/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/user/hadoop:/home/user/hadoop/hadoop-core-0.20.2-cdh3u3.jar:/home/user/hadoop/lib/ant-contrib-1.0b3.jar:/home/user/hadoop/lib/aspectjrt-1.6.5.jar:/home/user/hadoop/lib/aspectjtools-1.6.5.jar:/home/user/hadoop/lib/commons-cli-1.2.jar:/home/user/hadoop/lib/commons-codec-1.4.jar:/home/user/hadoop/lib/commons-daemon-1.0.1.jar:/home/user/hadoop/lib/commons-el-1.0.jar:/home/user/hadoop/lib/commons-httpclient-3.1.jar:/home/user/hadoop/lib/commons-lang-2.4.jar:/home/user/hadoop/lib/commons-logging-1.0.4.jar:/home/user/hadoop/lib/commons-logging-api-1.0.4.jar:/home/user/hadoop/lib/commons-net-1.4.1.jar:/home/user/hadoop/lib/core-3.1.1.jar:/home/user/hadoop/lib/guava-r09-jarjar.jar:/home/user/hadoop/lib/hadoop-fairscheduler-0.20.2-cdh3u3.jar:/home/user/hadoop/lib/hsqldb-1.8.0.10.jar:/home/user/hadoop/lib/jackson-core-asl-1.5.2.jar:/home/user/hadoop/lib/jackson-mapper-asl-1.5.2.jar:/home/user/hadoop/lib/jasper-compiler-5.5.12.jar:/home/user/hadoop/lib/jasper-runtime-5.5.12.jar:/home/user/hadoop/lib/jets3t-0.6.1.jar:/home/user/hadoop/lib/jetty-6.1.26.cloudera.1.jar:/home/user/hadoop/lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/home/user/hadoop/lib/jetty-util-6.1.26.cloudera.1.jar:/home/user/hadoop/lib/jsch-0.1.42.jar:/home/user/hadoop/lib/junit-4.5.jar:/home/user/hadoop/lib/kfs-0.2.2.jar:/home/user/hadoop/lib/log4j-1.2.15.jar:/home/user/hadoop/lib/mockito-all-1.8.2.jar:/home/user/hadoop/lib/oro-2.0.8.jar:/home/user/hadoop/lib/servlet-api-2.5-20081211.jar:/home/user/hadoop/lib/servlet-api-2.5-6.1.14.jar:/home/user/hadoop/lib/slf4j-api-1.4.3.jar:/home/user/hadoop/lib/slf4j-log4j12-1.4.3.jar:/home/user/hadoop/lib/xmlenc-0.52.jar:/home/user/hadoop/lib/jsp-2.1/jsp-2.1.jar:/home/user/hadoop/lib/jsp-2.1/jsp-api-2.1.jar:/home/user/sqoop-1.3/bin/../conf::/home/user/sqoop-1.3/bin/../lib/ant-contrib-1.0b3.jar:/home/user/sqoop-1.3/bin/../lib/ant-eclipse-1.0-jvm1.2.jar:/home/user/sqoop-1.3/bin/../lib/avro-1.5.4.jar:/home/user/sqoop-1.3/bin/../lib/avro-ipc-1.5.4.jar:/home/user/sqoop-1.3/bin/../lib/avro-mapred-1.5.4.jar:/home/user/sqoop-1.3/bin/../lib/commons-io-1.4.jar:/home/user/sqoop-1.3/bin/../lib/hadoop-core-0.20.2-cdh3u3.jar:/home/user/sqoop-1.3/bin/../lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar:/home/user/sqoop-1.3/bin/../lib/jackson-core-asl-1.7.3.jar:/home/user/sqoop-1.3/bin/../lib/jackson-mapper-asl-1.7.3.jar:/home/user/sqoop-1.3/bin/../lib/jopt-simple-3.2.jar:/home/user/sqoop-1.3/bin/../lib/ojdbc14.jar:/home/user/sqoop-1.3/bin/../lib/paranamer-2.3.jar:/home/user/sqoop-1.3/bin/../lib/snappy-java-1.0.3.2.jar:/home/user/hadoop/hbase/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/user/hadoop/hbase:/home/user/hadoop/hbase/hbase-0.90.4-cdh3u3.jar:/home/user/hadoop/hbase/hbase-0.90.4-cdh3u3-tests.jar:/home/user/hadoop/hbase/lib/activation-1.1.jar:/home/user/hadoop/hbase/lib/asm-3.1.jar:/home/user/hadoop/hbase/lib/avro-1.5.4.jar:/home/user/hadoop/hbase/lib/avro-ipc-1.5.4.jar:/home/user/hadoop/hbase/lib/commons-cli-1.2.jar:/home/user/hadoop/hbase/lib/commons-codec-1.4.jar:/home/user/hadoop/hbase/lib/commons-el-1.0.jar:/home/user/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/user/hadoop/hbase/lib/commons-lang-2.5.jar:/home/user/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/user/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/user/hadoop/hbase/lib/core-3.1.1.jar:/home/user/hadoop/hbase/lib/guava-r06.jar:/home/user/hadoop/hbase/lib/guava-r09-jarjar.jar:/home/user/h
adoop/hbase/lib/hadoop-core-0.20.2-cdh3u3.jar:/home/user/hadoop/hbase/lib/jackson-core-asl-1.5.2.jar:/home/user/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/user/hadoop/hbase/lib/jackson-mapper-asl-1.5.2.jar:/home/user/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/user/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/user/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/user/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/user/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/user/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/user/hadoop/hbase/lib/jersey-core-1.4.jar:/home/user/hadoop/hbase/lib/jersey-json-1.4.jar:/home/user/hadoop/hbase/lib/jersey-server-1.4.jar:/home/user/hadoop/hbase/lib/jettison-1.1.jar:/home/user/hadoop/hbase/lib/jetty-6.1.26.jar:/home/user/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/user/hadoop/hbase/lib/jruby-complete-1.6.0.jar:/home/user/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/user/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/user/hadoop/hbase/lib/jsp-api-2.1.jar:/home/user/hadoop/hbase/lib/jsr311-api-1.1.1.jar:/home/user/hadoop/hbase/lib/log4j-1.2.16.jar:/home/user/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/user/hadoop/hbase/lib/protobuf-java-2.3.0.jar:/home/user/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/user/hadoop/hbase/lib/servlet-api-2.5.jar:/home/user/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/user/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/user/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/user/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/user/hadoop/hbase/lib/thrift-0.2.0.jar:/home/user/hadoop/hbase/lib/velocity-1.5.jar:/home/user/hadoop/hbase/lib/xmlenc-0.52.jar:/home/user/hadoop/hbase/lib/zookeeper-3.3.4-cdh3u3.jar:/home/user/hadoop/conf:/home/user/hadoop/conf:/home/user/hadoop/hadoop-core-0.20.2-cdh3u3.jar:/home/user/hadoop/lib/ant-contrib-1.0b3.jar:/home/user/hadoop/lib/aspectjrt-1.6.5.jar:/home/user/hadoop/lib/aspectjtools-1.6.5.jar:/home/user/hadoop/lib/commons-cli-1.2.jar:/home/user/hadoop/lib/commons-codec-1.4.jar:/home/user/hadoop/lib/commons-daemon-1.0.1.jar:/home/user/hadoop/lib/commons-el-1.0.jar:/home/user/hadoop/lib/commons-httpclient-3.1.jar:/home/user/hadoop/lib/commons-lang-2.4.jar:/home/user/hadoop/lib/commons-logging-1.0.4.jar:/home/user/hadoop/lib/commons-logging-api-1.0.4.jar:/home/user/hadoop/lib/commons-net-1.4.1.jar:/home/user/hadoop/lib/core-3.1.1.jar:/home/user/hadoop/lib/guava-r09-jarjar.jar:/home/user/hadoop/lib/hadoop-fairscheduler-0.20.2-cdh3u3.jar:/home/user/hadoop/lib/hsqldb-1.8.0.10.jar:/home/user/hadoop/lib/jackson-core-asl-1.5.2.jar:/home/user/hadoop/lib/jackson-mapper-asl-1.5.2.jar:/home/user/hadoop/lib/jasper-compiler-5.5.12.jar:/home/user/hadoop/lib/jasper-runtime-5.5.12.jar:/home/user/hadoop/lib/jets3t-0.6.1.jar:/home/user/hadoop/lib/jetty-6.1.26.cloudera.1.jar:/home/user/hadoop/lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/home/user/hadoop/lib/jetty-util-6.1.26.cloudera.1.jar:/home/user/hadoop/lib/jsch-0.1.42.jar:/home/user/hadoop/lib/junit-4.5.jar:/home/user/hadoop/lib/kfs-0.2.2.jar:/home/user/hadoop/lib/log4j-1.2.15.jar:/home/user/hadoop/lib/mockito-all-1.8.2.jar:/home/user/hadoop/lib/oro-2.0.8.jar:/home/user/hadoop/lib/servlet-api-2.5-20081211.jar:/home/user/hadoop/lib/servlet-api-2.5-6.1.14.jar:/home/user/hadoop/lib/slf4j-api-1.4.3.jar:/home/user/hadoop/lib/slf4j-log4j12-1.4.3.jar:/home/user/hadoop/lib/xmlenc-0.52.jar:/home/user/sqoop-1.3/bin/../sqoop-1.3.0-cdh3u3.jar:/home/user/sqoop-1.3/bin/../sqoop-test-1.3.0-cdh3u3.jar:
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/user/hadoop/lib/native/Linux-i386-32
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-37-generic-pae
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:user.name=user
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/user
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/user
12/02/08 14:42:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=server2:2181,ubuntu:2181,server5:2181 sessionTimeout=180000 watcher=hconnection
12/02/08 14:42:03 INFO zookeeper.ClientCnxn: Opening socket connection to server server5/192.168.5.7:2181
12/02/08 14:42:03 INFO zookeeper.ClientCnxn: Socket connection established to server5/192.168.5.7:2181, initiating session
12/02/08 14:42:03 INFO zookeeper.ClientCnxn: Session establishment complete on server server5/192.168.5.7:2181, sessionid = 0x2355a5cd9320032, negotiated timeout = 180000
12/02/08 14:42:04 INFO mapreduce.HBaseImportJob: Creating missing column family cb_viobase
12/02/08 14:42:04 INFO client.HBaseAdmin: Started disable of cb_vio1
12/02/08 14:42:06 INFO client.HBaseAdmin: Disabled cb_vio1
12/02/08 14:42:06 INFO client.HBaseAdmin: Started enable of cb_vio1
12/02/08 14:42:08 INFO client.HBaseAdmin: Enabled table cb_vio1
12/02/08 14:42:10 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(id), MAX(id) FROM cb_vio2
12/02/08 14:42:17 WARN db.TextSplitter: Generating splits for a textual index column.
12/02/08 14:42:17 WARN db.TextSplitter: If your database sorts in a case-insensitive order, this may result in a partial import or duplicate records.
12/02/08 14:42:17 WARN db.TextSplitter: You are strongly encouraged to choose an integral split column.
12/02/08 14:42:18 INFO mapred.JobClient: Running job: job_201202071501_0012
12/02/08 14:42:19 INFO mapred.JobClient:  map 0% reduce 0%
12/02/08 14:42:30 INFO mapred.JobClient:  map 14% reduce 0%
12/02/08 14:42:34 INFO mapred.JobClient: Task Id : attempt_201202071501_0012_m_000001_0, Status : FAILED
java.lang.IllegalArgumentException: No columns to insert
        at org.apache.hadoop.hbase.client.HTable.validatePut(HTable.java:871)
        at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:691)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:681)
        at com.cloudera.sqoop.hbase.HBasePutProcessor.accept(HBasePutProcessor.java:122)
        at com.cloudera.sqoop.mapreduce.DelegatingOutputFormat$DelegatingRecordWriter.write(DelegatingOutputFormat.java:132)
        at com.cloudera.sqoop.mapreduce.DelegatingOutputFormat$DelegatingRecordWriter.write(DelegatingOutputFormat.java:96)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:531)
        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
        at com.cloudera.sqoop.mapreduce.HBaseImportMapper.map(HBaseImportMapper.java:40)
        at com.cloudera.sqoop.mapreduce.HBaseImportMapper.map(HBaseImportMapper.java:33)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at com.cloudera.sqoop.mapreduce.AutoProgressMap
12/02/08 14:42:36 INFO mapred.JobClient:  map 71% reduce 0%
12/02/08 14:42:42 INFO mapred.JobClient:  map 85% reduce 0%
12/02/08 14:42:43 INFO mapred.JobClient: Task Id : attempt_201202071501_0012_m_000001_1, Status : FAILED