This can happen easily. You adapt a table by adding a new column:
ALTER TABLE payments ADD code NUMBER(3);
You go on, implementing your business logic, and there's absolutely no problem. But then, perhaps in production, some batch job fails because it makes some strong assumptions about data types. Namely, it assumes that the two tables payments and payments_archive have the same row type:
CREATE TABLE payments
(
id NUMBER(18) NOT NULL,
account_id NUMBER(18) NOT NULL,
value_date DATE,
amount NUMBER(25, 2) NOT NULL
);
CREATE TABLE payments_archive
(
id NUMBER(18) NOT NULL,
account_id NUMBER(18) NOT NULL,
value_date DATE,
amount NUMBER(25, 2) NOT NULL
);
Because both tables have the same row type, you can simply move a row from one table to the other, e.g. with a query like this one:
INSERT INTO payments_archive
SELECT * FROM payments
WHERE value_date < SYSDATE - 30;
(Not that using the above syntax is generally a good idea; in fact it usually isn't. But you get the point.)
What you're getting now is:
ORA-00913: too many values
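For readers who want to reproduce this failure mode without an Oracle instance, here is a minimal sketch in Python using SQLite, whose error message is the analogue of ORA-00913. The table layout follows the article; everything else is illustrative:

```python
import sqlite3

# Illustrative sketch: SQLite instead of Oracle; same table layout
# as in the article, including the freshly added CODE column.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE payments (
        id         INTEGER NOT NULL,
        account_id INTEGER NOT NULL,
        value_date DATE,
        amount     NUMERIC NOT NULL,
        code       INTEGER            -- the newly added column
    );
    CREATE TABLE payments_archive (
        id         INTEGER NOT NULL,
        account_id INTEGER NOT NULL,
        value_date DATE,
        amount     NUMERIC NOT NULL
    );
    INSERT INTO payments VALUES (1, 42, '2015-01-01', 99.90, 123);
""")

try:
    # SELECT * now produces 5 columns, but the target table has only 4
    con.execute("INSERT INTO payments_archive SELECT * FROM payments")
except sqlite3.OperationalError as e:
    msg = str(e)

print(msg)
```

SQLite reports the same mismatch as Oracle does, just with a friendlier message naming the column counts.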
The fix is obvious, but the poor soul who has to fix this might not be you; it might be someone else who has to figure out which of possibly hundreds of columns don't match. Here's how (in Oracle):
Compare the two tables using PIVOT!
You could, of course, not use PIVOT at all and simply select all columns from either table from the dictionary views:
SELECT
table_name,
column_name
FROM all_tab_cols
WHERE table_name LIKE 'PAYMENTS%'
This yields:
TABLE_NAME COLUMN_NAME
------------------ ---------------
PAYMENTS ID
PAYMENTS ACCOUNT_ID
PAYMENTS VALUE_DATE
PAYMENTS AMOUNT
PAYMENTS CODE
PAYMENTS_ARCHIVE ID
PAYMENTS_ARCHIVE ACCOUNT_ID
PAYMENTS_ARCHIVE VALUE_DATE
PAYMENTS_ARCHIVE AMOUNT
Not very readable. You could, of course, use set operations and apply INTERSECT and MINUS (EXCEPT) to filter out the matching values. But there's a much better way:
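As an aside, the MINUS route can be sketched on SQLite, which spells it EXCEPT and has no ALL_TAB_COLS; the pragma_table_info() table-valued function stands in for the dictionary view here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE payments (
        id INTEGER, account_id INTEGER, value_date DATE,
        amount NUMERIC, code INTEGER
    );
    CREATE TABLE payments_archive (
        id INTEGER, account_id INTEGER, value_date DATE,
        amount NUMERIC
    );
""")

# pragma_table_info() returns one row per column of the given table.
# EXCEPT (Oracle: MINUS) keeps the names that exist only in payments.
missing = con.execute("""
    SELECT name FROM pragma_table_info('payments')
    EXCEPT
    SELECT name FROM pragma_table_info('payments_archive')
""").fetchall()

print(missing)  # [('code',)]
```

This only answers "which columns are missing on one side", though; the PIVOT approach below gives a full side-by-side picture.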
SELECT *
FROM (
SELECT
table_name,
column_name
FROM all_tab_cols
WHERE table_name LIKE 'PAYMENTS%'
)
PIVOT (
COUNT(*) AS cnt
FOR (table_name)
IN (
'PAYMENTS' AS payments,
'PAYMENTS_ARCHIVE' AS payments_archive
)
) t;
The above now yields:
COLUMN_NAME PAYMENTS_CNT PAYMENTS_ARCHIVE_CNT
------------ ------------ --------------------
CODE 1 0
ACCOUNT_ID 1 1
ID 1 1
VALUE_DATE 1 1
AMOUNT 1 1
It is now very easy to identify the column that is missing from the PAYMENTS_ARCHIVE table. As you can see, the result of the original query produced one row per column and per table. We took that result and pivoted it "FOR" the table name, such that we now get only one row per column.
How do you read PIVOT?
It's easy. Inline comments:
SELECT *
-- This is the table that we're pivoting. Note that
-- we select only the minimum to prevent side-effects
FROM (
SELECT
table_name,
column_name
FROM all_tab_cols
WHERE table_name LIKE 'PAYMENTS%'
)
-- PIVOT is a keyword that is applied to the above
-- table. It generates a new table, similar to JOIN
PIVOT (
-- This is the aggregated value that we want to
-- produce for each pivoted value
COUNT(*) AS cnt
-- This is the source of the values that we want to
-- pivot
FOR (table_name)
-- These are the values that we accept as pivot
-- columns. The column names are produced from
-- these values concatenated with the corresponding
-- aggregate function name
IN (
'PAYMENTS' AS payments,
'PAYMENTS_ARCHIVE' AS payments_archive
)
) t;
That's it. Not so hard, was it?
The nice thing about this syntax is that it's easy to generate any number of additional columns:
SELECT *
FROM (
SELECT
table_name,
column_name,
cast(data_type as varchar(6)) data_type
FROM all_tab_cols
WHERE table_name LIKE 'PAYMENTS%'
)
PIVOT (
COUNT(*) AS cnt,
MAX(data_type) AS type -- new function here
FOR (table_name)
IN (
'PAYMENTS' AS p,
'PAYMENTS_ARCHIVE' AS a
)
) t;
… producing (after some additional erroneous DDL) …
COLUMN_NAME P_CNT P_TYPE A_CNT A_TYPE
----------- ---------- ------ ---------- ------
CODE 1 NUMBER 0
ACCOUNT_ID 1 NUMBER 1 NUMBER
ID 1 NUMBER 1 NUMBER
VALUE_DATE 1 DATE 1 TIMESTAMP
AMOUNT 1 NUMBER 1 NUMBER
This way, we can discover even more flaws between the row types of the two tables. In the above example, we've used MAX(), because we have to provide an aggregation function, even though each pivoted column corresponds to exactly one row in our case. That doesn't have to be true in general, though.
What if I'm not using Oracle?
SQL Server also supports PIVOT, but other databases don't. You can always emulate PIVOT using GROUP BY and CASE, though. The following statement is equivalent to the previous one:
SELECT
t.column_name,
count(CASE table_name
WHEN 'PAYMENTS' THEN 1 END) p_cnt,
max (CASE table_name
WHEN 'PAYMENTS' THEN data_type END) p_type,
count(CASE table_name
WHEN 'PAYMENTS_ARCHIVE' THEN 1 END) a_cnt,
max (CASE table_name
WHEN 'PAYMENTS_ARCHIVE' THEN data_type END) a_type
FROM (
SELECT
table_name,
column_name,
data_type
FROM all_tab_cols
WHERE table_name LIKE 'PAYMENTS%'
) t
GROUP BY
t.column_name;
This query will now produce the same result on all the other databases as well.
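If you want to try this emulation without installing a commercial database, here is a sketch on SQLite via Python. SQLite has no PIVOT at all, and pragma_table_info() stands in for ALL_TAB_COLS:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE payments (
        id INTEGER, account_id INTEGER, value_date DATE,
        amount NUMERIC, code INTEGER
    );
    CREATE TABLE payments_archive (
        id INTEGER, account_id INTEGER, value_date DATE,
        amount NUMERIC
    );
""")

# GROUP BY + CASE emulation of PIVOT, same shape as the query above.
rows = con.execute("""
    SELECT
      t.column_name,
      count(CASE t.table_name
            WHEN 'payments' THEN 1 END)         p_cnt,
      count(CASE t.table_name
            WHEN 'payments_archive' THEN 1 END) a_cnt
    FROM (
      SELECT 'payments' table_name, name column_name
      FROM pragma_table_info('payments')
      UNION ALL
      SELECT 'payments_archive', name
      FROM pragma_table_info('payments_archive')
    ) t
    GROUP BY t.column_name
    ORDER BY t.column_name
""").fetchall()

for row in rows:
    print(row)
# the ('code', 1, 0) row flags the column missing from the archive
```

Note how count() ignores the NULL that the CASE expression produces for rows belonging to the other table; that is the whole trick.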
Is that it…?
Yes! Using the SQL standard FILTER clause, the combination of aggregate function and CASE can be shortened even further, something we've blogged about recently.
So, in PostgreSQL, you could write the following query:
SELECT
t.column_name,
count(table_name)
FILTER (WHERE table_name = 'payments') p_cnt,
max(data_type)
FILTER (WHERE table_name = 'payments') p_type,
count(table_name)
FILTER (WHERE table_name = 'payments_archive') a_cnt,
max(data_type)
FILTER (WHERE table_name = 'payments_archive') a_type
FROM (
SELECT
table_name,
column_name,
data_type
FROM information_schema.columns
WHERE table_name LIKE 'payments%'
) t
GROUP BY
t.column_name;
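SQLite (since version 3.30) also implements the standard FILTER clause, so the same shape can be sketched locally; again pragma_table_info() stands in for information_schema.columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE payments (
        id INTEGER, account_id INTEGER, value_date DATE,
        amount NUMERIC, code INTEGER
    );
    CREATE TABLE payments_archive (
        id INTEGER, account_id INTEGER, value_date DATE,
        amount NUMERIC
    );
""")

# FILTER replaces the CASE expressions; max() over an empty filtered
# group yields NULL, exposed as None on the Python side.
rows = con.execute("""
    SELECT
      t.column_name,
      count(*)         FILTER (WHERE t.table_name = 'payments')         p_cnt,
      max(t.data_type) FILTER (WHERE t.table_name = 'payments')         p_type,
      count(*)         FILTER (WHERE t.table_name = 'payments_archive') a_cnt,
      max(t.data_type) FILTER (WHERE t.table_name = 'payments_archive') a_type
    FROM (
      SELECT 'payments' table_name, name column_name, type data_type
      FROM pragma_table_info('payments')
      UNION ALL
      SELECT 'payments_archive', name, type
      FROM pragma_table_info('payments_archive')
    ) t
    GROUP BY t.column_name
    ORDER BY t.column_name
""").fetchall()

for row in rows:
    print(row)
# the ('code', ..., 0, None) row again flags the missing column
```
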