Preface

I recently needed to set up a Greenplum cluster.

Along the way I ran into a few problems that cost a fair amount of time.

Since the problems were also somewhat involved, I'm writing them down here.

 

 

1. Do not have enough valid segments to start the array.

The error log looks like this:

20220408:14:15:29:021638 gpstart:gp1:gpadmin-[INFO]:-Master Started...
20220408:14:15:29:021638 gpstart:gp1:gpadmin-[INFO]:-Shutting down master
20220408:14:15:30:021638 gpstart:gp1:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-Process results...
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[ERROR]:-No segment started for content: 2.
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-dumping success segments: ['localhost.localdomain:/data/gpdb/pdata1/gpseg0:content=0:dbid=2:role=p:preferred_role=p:mode=n:status=u', 'localhost.localdomain:/data/gpdb/pdata2/gpseg1:content=1:dbid=3:role=p:preferred_role=p:mode=n:status=u']
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-----------------------------------------------------
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-DBID:4  FAILED  host:'localhost.localdomain' datadir:'/data/gpdb/pdata1/gpseg2' with reason:'Segment data directory does not exist for: '/data/gpdb/pdata1/gpseg2''
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-DBID:5  FAILED  host:'localhost.localdomain' datadir:'/data/gpdb/pdata2/gpseg3' with reason:'Segment data directory does not exist for: '/data/gpdb/pdata2/gpseg3''
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-----------------------------------------------------
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-----------------------------------------------------
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-   Successful segment starts                                            = 2
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[WARNING]:-Failed segment starts                                                = 2   <<<<<<<<
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-----------------------------------------------------
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-Successfully started 2 of 4 segment instances <<<<<<<<
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-----------------------------------------------------
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[WARNING]:-Segment instance startup failures reported
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[WARNING]:-Failed start 2 of 4 segment instances <<<<<<<<
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[WARNING]:-Review /home/gpadmin/gpAdminLogs/gpstart_20220408.log
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-----------------------------------------------------
20220408:14:15:31:021638 gpstart:gp1:gpadmin-[INFO]:-Commencing parallel segment instance shutdown, please wait...
20220408:14:15:32:021638 gpstart:gp1:gpadmin-[ERROR]:-gpstart error: Do not have enough valid segments to start the array.

 

Problem analysis

Note a detail here: the two segments that started successfully are localhost.localdomain:/data/gpdb/pdata1/gpseg0 and localhost.localdomain:/data/gpdb/pdata2/gpseg1,
while the two that failed are localhost.localdomain:/data/gpdb/pdata1/gpseg2 and localhost.localdomain:/data/gpdb/pdata2/gpseg3.
What we actually expect is gp2:/data/gpdb/pdata1/gpseg0 and gp2:/data/gpdb/pdata2/gpseg1, plus gp3:/data/gpdb/pdata1/gpseg2 and gp3:/data/gpdb/pdata2/gpseg3.
Starting a segment involves ssh-ing to the host recorded for it and running the start command there,
so with every segment recorded under the bogus hostname localhost.localdomain, the ssh step goes to the wrong place.
One point is not entirely clear: why did the two segments on the gp2 machine still start successfully?
From the log we can see that gpstart did ssh into the machine where gp2 lives; presumably all four start commands landed on that one host, which only has the gpseg0 and gpseg1 data directories, matching the "Segment data directory does not exist" failures for gpseg2 and gpseg3.
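
A quick way to confirm the mismatch before touching anything else (a sketch; gp1/gp2/gp3 are assumed to resolve via /etc/hosts):

    # What does each machine actually report as its hostname?
    for h in gp1 gp2 gp3; do echo -n "$h -> "; ssh "$h" hostname; done
    # If a host still reports localhost.localdomain, fix it (on systemd systems):
    ssh gp2 hostnamectl set-hostname gp2
    ssh gp3 hostnamectl set-hostname gp3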

 

How gpstart starts the segments

The relevant code:
bin/gpstart
    self.gparray = GpArray.initFromCatalog(self.dburl, utility=True)
    # ... some code omitted
    segmentStartResult = segmentStartOp.startSegments(self.gparray, segmentsToStart, startMode, self.era)
gparray.initFromCatalog
    SELECT dbid, content, role, preferred_role, mode, status, hostname, address, port, datadir FROM pg_catalog.gp_segment_configuration ORDER BY content, preferred_role DESC
    The query result is then wrapped into the gparray
# The hostname and datadir from that configuration are used to run pg_ctl on the remote hosts:
    20220409:15:12:18:007540 gpsegstart.py_gp3:gpadmin:gp3:gpadmin-[DEBUG]:-[worker1] finished cmd: Starting seg at dir /data/gpdb/pdata2/gpseg3 cmdStr='env GPSESSID=0000000000 GPERA=ea7fa135b0ce8dae_220409151208 $GPHOME/bin/pg_ctl -D /data/gpdb/pdata2/gpseg3 -l /data/gpdb/pdata2/gpseg3/pg_log/startup.log -w -t 600 -o " -p 6001 " start 2>&1'  had result: cmd had rc=0 completed=True halted=False
    20220409:15:12:18:007540 gpsegstart.py_gp3:gpadmin:gp3:gpadmin-[DEBUG]:-[worker0] finished cmd: Starting seg at dir /data/gpdb/pdata1/gpseg2 cmdStr='env GPSESSID=0000000000 GPERA=ea7fa135b0ce8dae_220409151208 $GPHOME/bin/pg_ctl -D /data/gpdb/pdata1/gpseg2 -l /data/gpdb/pdata1/gpseg2/pg_log/startup.log -w -t 600 -o " -p 6000 " start 2>&1'  had result: cmd had rc=0 completed=True halted=False
    20220409:15:12:17:008990 gpsegstart.py_gp2:gpadmin:gp2:gpadmin-[DEBUG]:-[worker0] finished cmd: Starting seg at dir /data/gpdb/pdata1/gpseg0 cmdStr='env GPSESSID=0000000000 GPERA=ea7fa135b0ce8dae_220409151208 $GPHOME/bin/pg_ctl -D /data/gpdb/pdata1/gpseg0 -l /data/gpdb/pdata1/gpseg0/pg_log/startup.log -w -t 600 -o " -p 6000 " start 2>&1'  had result: cmd had rc=0 completed=True halted=False
    20220409:15:12:17:008990 gpsegstart.py_gp2:gpadmin:gp2:gpadmin-[DEBUG]:-[worker1] finished cmd: Starting seg at dir /data/gpdb/pdata2/gpseg1 cmdStr='env GPSESSID=0000000000 GPERA=ea7fa135b0ce8dae_220409151208 $GPHOME/bin/pg_ctl -D /data/gpdb/pdata2/gpseg1 -l /data/gpdb/pdata2/gpseg1/pg_log/startup.log -w -t 600 -o " -p 6001 " start 2>&1'  had result: cmd had rc=0 completed=True halted=False
Scripts/modules involved: gpstart, startSegments, gpsegstart, gp
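
Conceptually, each start boils down to ssh-ing to the hostname recorded in the catalog and running a pg_ctl command like the cmdStr in the logs above; done by hand it would look roughly like this (a sketch using this cluster's paths and ports, assuming greenplum_path.sh is sourced in the remote shell so $GPHOME is set there):

    ssh gp3 'env GPSESSID=0000000000 $GPHOME/bin/pg_ctl -D /data/gpdb/pdata1/gpseg2 -l /data/gpdb/pdata1/gpseg2/pg_log/startup.log -w -t 600 -o " -p 6000 " start'

So when every segment row carries hostname=localhost.localdomain, all four of these commands get dispatched to the same (wrong) place.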

 

Where gp_segment_configuration comes from

pg_catalog.gp_segment_configuration is populated by gpinitsystem via LOAD_QE_SYSTEM_DATA
    After obtaining seg_hosts, HOST_LOOKUP resolves each ip to a hostname
    Concretely, it first consults the ip -> hostname mapping cached in ~/.gphostcache; on a miss it connects to the machine at that ip, runs hostname to get the host name, and writes the result back into ~/.gphostcache
    Note that this cache stays valid for a while: even though I had already updated the hostname on all three machines, the hostname column in gp_segment_configuration still said localhost.localdomain, because it came from the stale cache
    Scripts/modules involved: gpinitsystem, gphostcachelookup, gphostcache, unix
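
Given that stale cache, one way to force a fresh lookup before re-running gpinitsystem (a sketch, assuming the default cache location):

    # Drop the stale ip -> hostname cache on the master so HOST_LOOKUP re-resolves:
    rm -f ~/.gphostcache
    # Sanity check: each machine should now report the intended name
    for h in gp1 gp2 gp3; do ssh "$h" hostname; done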

 

Inspecting gp_segment_configuration

Connect to postgres in utility mode: PGOPTIONS="-c gp_session_role=utility" psql -d postgres
Query the segment configuration: select * from gp_segment_configuration order by dbid;
postgres=# select * from gp_segment_configuration order by dbid;
 dbid | content | role | preferred_role | mode | status | port |       hostname        | address |          datadir
------+---------+------+----------------+------+--------+------+-----------------------+---------+---------------------------
    1 |      -1 | p    | p              | n    | u      | 5432 | host-172-19-165-8     | gp1     | /data/gpdb/master/gpseg-1
    2 |       0 | p    | p              | n    | u      | 6000 | localhost.localdomain | gp2     | /data/gpdb/pdata1/gpseg0
    3 |       1 | p    | p              | n    | u      | 6001 | localhost.localdomain | gp2     | /data/gpdb/pdata2/gpseg1
    4 |       2 | p    | p              | n    | u      | 6000 | localhost.localdomain | gp3     | /data/gpdb/pdata1/gpseg2
    5 |       3 | p    | p              | n    | u      | 6001 | localhost.localdomain | gp3     | /data/gpdb/pdata2/gpseg3

 

Fix

Update the hostname column to the correct values, gp2 and gp3
    If you hit ERROR:  permission denied: "gp_segment_configuration" is a system catalog, first run set allow_system_table_mods='TRUE';
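
Put together, the fix looks like this (a sketch; the address column already holds the right names, so it can drive the update):

    PGOPTIONS="-c gp_session_role=utility" psql -d postgres
    postgres=# set allow_system_table_mods='TRUE';
    postgres=# update gp_segment_configuration set hostname = address where hostname = 'localhost.localdomain';
    postgres=# select dbid, hostname, address from gp_segment_configuration order by dbid;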

 

 

2. Queries on a business table hang

Symptom

I created a test table and then queried it; the client hung with no response
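
A minimal repro along these lines (the exact DDL is not shown here, so this is an assumption; any distributed table whose query forces a motion across the interconnect will do):

    create table test (id int, name text) distributed by (id);
    insert into test select i, 'row_' || i from generate_series(1, 1000) i;
    -- gathers rows from the segments over the UDP interconnect:
    select * from "public"."test" limit 1000 offset 0;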

 

Finding the backend process for the query

[root@gp1 gpdb]# ps -ef | grep SELEC
gpadmin   7784  7761  0 16:02 ?        00:00:00 postgres:  5432, gpadmin test 192.168.33.2(50275) con10 cmd28 SELECT
root      7900 25953  0 16:09 pts/0    00:00:00 grep --color=auto SELEC

 

The stack of the query backend is shown below; note the UDP-related functions (the interconnect code in ic_udpifc.c):

Thread 2 (Thread 0x7fd751f9c700 (LWP 29767)):
    #0  0x00007fd74da9420d in poll () from /lib64/libc.so.6
    #1  0x0000000000a7e031 in rxThreadFunc (arg=<optimized out>) at ic_udpifc.c:6234
    #2  0x00007fd74e60ddd5 in start_thread () from /lib64/libpthread.so.0
    #3  0x00007fd74da9eead in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7fd752042880 (LWP 29766)):
    #0  0x00007fd74da9420d in poll () from /lib64/libc.so.6
    #1  0x00000000008012f3 in WaitLatchOrSocket (latch=latch@entry=0x1072530 <ic_control_info+144>, wakeEvents=wakeEvents@entry=27, sock=sock@entry=17, timeout=timeout@entry=250) at pg_latch.c:334
    #2  0x0000000000a7c38e in receiveChunksUDPIFC (pTransportStates=pTransportStates@entry=0x1566a78, pEntry=pEntry@entry=0x14637b8, motNodeID=motNodeID@entry=1, srcRoute=srcRoute@entry=0x7ffc897b6d6e, conn=conn@entry=0x0) at ic_udpifc.c:3799
    #3  0x0000000000a7c892 in RecvTupleChunkFromAnyUDPIFC_Internal (srcRoute=0x7ffc897b6d6e, motNodeID=1, transportStates=0x1566a78) at ic_udpifc.c:3913
    #4  RecvTupleChunkFromAnyUDPIFC (transportStates=0x1566a78, motNodeID=1, srcRoute=0x7ffc897b6d6e) at ic_udpifc.c:3933
    #5  0x0000000000a70eee in processIncomingChunks (srcRoute=<optimized out>, motNodeID=1, pMNEntry=0x13e1b88, transportStates=0x1566a78, mlStates=0x13e1ae8) at cdbmotion.c:659
    #6  RecvTupleFrom (mlStates=<optimized out>, transportStates=0x1566a78, motNodeID=1, srcRoute=srcRoute@entry=-100) at cdbmotion.c:615
    #7  0x0000000000706488 in execMotionUnsortedReceiver (node=0x14547a0) at nodeMotion.c:392
    #8  ExecMotion (node=node@entry=0x14547a0) at nodeMotion.c:213
    #9  0x00000000006c7d48 in ExecProcNode (node=node@entry=0x14547a0) at execProcnode.c:1121
    #10 0x00000000006eb528 in ExecLimit_guts (node=node@entry=0x14540a0) at nodeLimit.c:94
    #11 ExecLimit (node=node@entry=0x14540a0) at nodeLimit.c:234
    #12 0x00000000006c7d58 in ExecProcNode (node=node@entry=0x14540a0) at execProcnode.c:1117
    #13 0x00000000006bf269 in ExecutePlan (estate=estate@entry=0x1453ac8, planstate=0x14540a0, operation=operation@entry=CMD_SELECT, sendTuples=sendTuples@entry=1 '\001', numberTuples=numberTuples@entry=0, direction=direction@entry=ForwardScanDirection, dest=0x12cc9c0) at execMain.c:2980
    #14 0x00000000006bffdc in standard_ExecutorRun (queryDesc=0x156ea48, direction=ForwardScanDirection, count=0) at execMain.c:969
    #15 0x000000000088dbe7 in PortalRunSelect (portal=0x12d6a08, forward=<optimized out>, count=0, dest=<optimized out>) at pquery.c:1149
    #16 0x000000000088fb91 in PortalRun (portal=portal@entry=0x12d6a08, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0x12cc9c0, altdest=altdest@entry=0x12cc9c0, completionTag=completionTag@entry=0x7ffc897b7380 "") at pquery.c:990
    #17 0x000000000088a260 in exec_simple_query (query_string=0x12caaa8 "SELECT * FROM \"public\".\"test\" LIMIT 1000 OFFSET 0") at postgres.c:1798
    #18 0x000000000088ca05 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x12aa4c0, dbname=<optimized out>, username=<optimized out>) at postgres.c:5214
    #19 0x00000000004cb8cb in BackendRun (port=0x12daf10) at postmaster.c:4803
    #20 BackendStartup (port=0x12daf10) at postmaster.c:4460
    #21 ServerLoop () at postmaster.c:1945
    #22 0x0000000000814a86 in PostmasterMain (argc=argc@entry=6, argv=argv@entry=0x12a8540) at postmaster.c:1515
    #23 0x00000000004ce0e1 in main (argc=6, argv=0x12a8540) at main.c:245
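
For reference, a stack like the one above can be captured from the live backend with gdb (assuming debug symbols are available):

    gdb -p 7784 -batch -ex 'thread apply all bt'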

 

Check which UDP ports process 7784 is using; it is holding two high (ephemeral) ports

This is most likely a firewall problem; opening ports 30000-65535, restricted to the UDP protocol, should resolve it

[root@gp1 gpdb]# netstat -nlp | grep 7784
udp6       0      0 :::54096                :::*                                7784/postgres:  543
udp6       0      0 :::47865                :::*                                7784/postgres:  543
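
To confirm the firewall theory before changing anything (a sketch, assuming firewalld as on CentOS 7):

    firewall-cmd --list-ports   # ports currently opened
    firewall-cmd --list-all     # full configuration of the active zone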

 

Fix

Open UDP ports 30000-65535 in the firewall on every host; the client then re-ran the query and it succeeded
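
With firewalld that amounts to the following, run on each host (a sketch, assuming the default public zone):

    firewall-cmd --zone=public --add-port=30000-65535/udp --permanent
    firewall-cmd --reload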

 

 

The end