Some query examples:

1.query
hive> SELECT name, subordinates[0] FROM employees;
John Doe Mary Smith
Mary Smith Bill King
Todd Jones NULL
2.expression
hive> SELECT upper(name), salary, deductions["Federal Taxes"],
round(salary * (1 - deductions["Federal Taxes"])) FROM employees;
3.aggregate functions
SELECT count(*), avg(salary) FROM employees;
4.distinct
SELECT count(DISTINCT symbol) FROM stocks;
5.limit
hive> SELECT upper(name), salary, deductions["Federal Taxes"],
> round(salary * (1 - deductions["Federal Taxes"])) FROM employees
> LIMIT 2;
JOHN DOE 100000.0 0.2 80000
MARY SMITH 80000.0 0.2 64000
6.column aliases
SELECT upper(name), salary, deductions["Federal Taxes"] as fed_taxes,
> round(salary * (1 - deductions["Federal Taxes"])) as salary_minus_fed_taxes
> FROM employees LIMIT 2;
7.nested SELECT (a HAVING clause is not allowed in the inner SELECT)
hive> FROM (
> SELECT upper(name) as name, salary, deductions["Federal Taxes"] as fed_taxes,
> round(salary * (1 - deductions["Federal Taxes"])) as salary_minus_fed_taxes
> FROM employees
> ) e
> SELECT e.name, e.salary_minus_fed_taxes
> WHERE e.salary_minus_fed_taxes > 70000;
JOHN DOE 100000.0 0.2 80000
8. case when then
hive> SELECT name, salary,
> CASE
> WHEN salary < 50000.0 THEN 'low'
> WHEN salary >= 50000.0 AND salary < 70000.0 THEN 'middle'
> WHEN salary >= 70000.0 AND salary < 100000.0 THEN 'high'
> ELSE 'very high'
> END AS bracket FROM employees;
John Doe 100000.0 very high
Mary Smith 80000.0 high
Todd Jones 70000.0 high
Bill King 60000.0 middle
Boss Man 200000.0 very high
9.queries Hive can answer without MapReduce
A plain SELECT * with no WHERE clause, or one whose WHERE filters only on partition columns (here country and state) combined with LIMIT, can be satisfied by reading the data files directly, so Hive does not launch a MapReduce job:
SELECT * FROM employees;
SELECT * FROM employees
WHERE country = 'US' AND state = 'CA'
LIMIT 100;
10.using LIKE and RLIKE
LIKE behaves as in standard SQL:
hive> SELECT name, address.street FROM employees WHERE address.street LIKE '%Chi%';
RLIKE accepts Java-style regular expressions:
hive> SELECT name, address.street
> FROM employees WHERE address.street RLIKE '.*(Chicago|Ontario).*';
Mary Smith 100 Ontario St.
Todd Jones 200 Chicago Ave.
11.GROUP BY
hive> SELECT year(ymd), avg(price_close) FROM stocks
> WHERE exchange = 'NASDAQ' AND symbol = 'AAPL'
> GROUP BY year(ymd);
1984 25.578625440597534
12.HAVING
hive> SELECT year(ymd), avg(price_close) FROM stocks
> WHERE exchange = 'NASDAQ' AND symbol = 'AAPL'
> GROUP BY year(ymd)
> HAVING avg(price_close) > 50.0;
1987 53.88968399108163
1991 52.49553383386182
13.inner JOIN (put the largest table last: Hive buffers the rows of the earlier tables and streams the last one, so this keeps memory use down)
hive> SELECT a.ymd, a.price_close, b.price_close
> FROM stocks a JOIN stocks b ON a.ymd = b.ymd
> WHERE a.symbol = 'AAPL' AND b.symbol = 'IBM';
(Note: inequality join conditions like the following are not allowed, nor can OR be used in the join condition:)
SELECT a.ymd, a.price_close, b.price_close
FROM stocks a JOIN stocks b
ON a.ymd <= b.ymd
WHERE a.symbol = 'AAPL' AND b.symbol = 'IBM';
14.left outer join
hive> SELECT s.ymd, s.symbol, s.price_close, d.dividend
> FROM stocks s LEFT OUTER JOIN dividends d ON s.ymd = d.ymd AND s.symbol = d.symbol
> WHERE s.symbol = 'AAPL';
...
1987-05-01 AAPL 80.0 NULL
1987-05-04 AAPL 79.75 NULL
1987-05-05 AAPL 80.25 NULL
Comparing different numeric types
When comparing FLOAT and DOUBLE values, note that 0.2 stored as a FLOAT compares as greater than 0.2 as a DOUBLE, because widening the FLOAT to DOUBLE introduces trailing noise.
The fix is to write cast(0.2 as float).
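A short sketch of the pitfall, assuming deductions is a MAP<STRING, FLOAT> column on the employees table used above:
-- The literal 0.2 is a DOUBLE; the FLOAT value is widened to a DOUBLE with trailing
-- noise (roughly 0.20000000298), so rows whose deduction is exactly 0.2 still pass the > test.
SELECT name, salary, deductions['Federal Taxes']
FROM employees
WHERE deductions['Federal Taxes'] > 0.2;
-- Casting the literal to FLOAT keeps both sides the same type and excludes those rows.
SELECT name, salary, deductions['Federal Taxes']
FROM employees
WHERE deductions['Federal Taxes'] > cast(0.2 as float);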
order by and sort by
Hive's ORDER BY sorts the entire result set in a single reducer, ascending by default. This is relatively slow, so it is usually combined with LIMIT.
Setting hive.mapred.mode=strict enforces this: an ORDER BY must then be followed by a LIMIT.
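For example, with strict mode on, a full sort over the stocks table must carry a LIMIT (a minimal sketch using the stocks table from the examples above):
set hive.mapred.mode=strict;
-- The whole result set is sorted in one reducer, then cut off by LIMIT.
SELECT s.ymd, s.symbol, s.price_close
FROM stocks s
ORDER BY s.price_close DESC
LIMIT 10;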
SORT BY sorts within each reducer only.
(Whether rows land in the same reducer is decided by the grouping comparator, falling back to the key's comparator if none is set; in Hive the aggregation (UDAF) plan controls which keys the mappers emit to the reducers. By default map output keys are hash-partitioned across reducers, and a custom partitioner, for example one using the ketama consistent-hashing algorithm, can spread keys more evenly.)
DISTRIBUTE BY routes rows with the same key to the same reducer, so a following SORT BY sorts within each reducer's partition.
(Note: DISTRIBUTE BY must appear before SORT BY.)
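A sketch of DISTRIBUTE BY followed by SORT BY on the stocks table: rows for the same symbol are routed to one reducer, and each reducer sorts its rows by symbol and date.
SELECT s.ymd, s.symbol, s.price_close
FROM stocks s
DISTRIBUTE BY s.symbol
SORT BY s.symbol ASC, s.ymd ASC;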
CLUSTER BY
CLUSTER BY is shorthand for DISTRIBUTE BY and SORT BY on the same column(s): rows are grouped by the named columns and sorted by those same columns within each reducer.
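For example, the DISTRIBUTE BY / SORT BY query above can be shortened as follows (at the cost of the secondary sort on ymd):
SELECT s.ymd, s.symbol, s.price_close
FROM stocks s
CLUSTER BY s.symbol;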
The cast() function
cast() converts between types; when a STRING value does not parse as the target type, the result is NULL.
Casts can be nested, e.g. cast(cast(b as string) as double) for a BINARY column b.
To convert FLOAT to INT, use round() or floor() rather than cast().
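A small sketch of these behaviors; b and some_table are hypothetical names used only for illustration:
-- A string that does not parse as a number becomes NULL.
SELECT cast('not a number' as double) FROM employees LIMIT 1;
-- Nested cast: reinterpret a hypothetical BINARY column b as a string, then parse it as a number.
SELECT cast(cast(b as string) as double) FROM some_table;
-- When going from FLOAT to INT, prefer round() or floor() over cast().
SELECT round(salary), floor(salary) FROM employees;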
Sampling queries (TABLESAMPLE)
Random sampling with rand():
select * from numbers tablesample(bucket 3 out of 10 on rand()) s;
Sampling on a column: bucketing on an actual column instead of rand() makes the sample repeatable across multiple runs:
select * from numbers tablesample(bucket 3 out of 10 on number) s;
Block sampling: an alternative sampling method (if the table is smaller than one block, 128 MB, all rows are returned).
The property hive.sample.seednumber controls the seed information for block-based sampling.
select * from numbersflat tablesample(0.1 percent) s;
To sample on the number column efficiently, create the table bucketed on that column so TABLESAMPLE can read just the required bucket of the hashed files instead of scanning the whole table (a sketch of populating and sampling it follows below):
create table numbers_bucketed(number int) clustered by (number) into 3 buckets;
set hive.enforce.bucketing = true;
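A minimal sketch of populating and then sampling the bucketed table, assuming the flat numbers table from the examples above (the INSERT and TABLESAMPLE statements here are illustrative, not from the original notes):
-- With hive.enforce.bucketing = true, the insert writes one file per bucket.
insert overwrite table numbers_bucketed select number from numbers;
-- Sampling one bucket now reads only the file backing that bucket.
select * from numbers_bucketed tablesample(bucket 2 out of 3 on number) s;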
union
select * from table1
union all
select * from table2
from (
from src select src.key, src.value where src.key < 100
union all
from src select src.* where src.key > 100
) unioninput
insert overwrite directory '/tmp/union.out' select unioninput.*
