This article introduces the use of the divide plugin in the Soul gateway framework and briefly analyzes how it is implemented.
Background
The Soul gateway framework ships with a rich set of built-in plugins. The divide plugin handles HTTP forward proxying: all HTTP requests are load-balanced and dispatched by this plugin.
Using the divide Plugin
Start soul-admin (the gateway admin console), soul-bootstrap (the gateway server), and soul-examples-http (the backend service), in that order;
Log in to the admin console; you can see that the APIs in soul-examples-http have been registered under the divide module;
The SelectorList panel shows the selector configured for the backend service soul-examples-http;
```yaml
http:
  adminUrl: http://localhost:9095
  port: 8188
  contextPath: /http
  appName: http
  full: false
```
- The contextPath configuration item is the path-prefix context under which requests are forwarded to the backend service; it corresponds to the Name field in the Selector configuration;
- The weight field in the Selector controls the weight with which requests are routed to each backend instance;
The RulesList panel lists the API paths and rule options of the backend service soul-examples-http, such as the request-matching conditions and the load-balancing strategy;
Send an HTTP request from a test client through the gateway;
Start two backend instances, 127.0.0.1:8188 and 127.0.0.1:8189, and set their weights to 1 and 100 respectively. The gateway logs show that consecutive requests are all forwarded to 127.0.0.1:8189, because it carries the much larger weight;
divide Plugin Source Code Analysis
divide plugin processing flow
All plugins are invoked in sequence through the chain-of-responsibility pattern, each via AbstractSoulPlugin#execute;
Once AbstractSoulPlugin#execute has looked up the selectors and rules configured for this plugin, it calls DividePlugin#doExecute;
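The dispatch described above can be pictured as a minimal, synchronous chain of responsibility. This is only an illustrative sketch, not Soul's actual code: the real chain is reactive (plugins return Mono&lt;Void&gt;), and the class and method names below are simplified stand-ins.

```java
import java.util.List;

public class PluginChainSketch {

    interface Plugin {
        void execute(String request, PluginChain chain);
    }

    static class PluginChain {
        static final StringBuilder TRACE = new StringBuilder();
        private final List<Plugin> plugins;
        private int index = 0;

        PluginChain(final List<Plugin> plugins) {
            this.plugins = plugins;
        }

        void execute(String request) {
            // hand the request to the next plugin in the chain;
            // each plugin decides whether to continue the chain
            if (index < plugins.size()) {
                plugins.get(index++).execute(request, this);
            }
        }
    }

    public static void main(String[] args) {
        Plugin waf = (req, chain) -> { PluginChain.TRACE.append("waf;"); chain.execute(req); };
        Plugin divide = (req, chain) -> { PluginChain.TRACE.append("divide;"); chain.execute(req); };
        new PluginChain(List.of(waf, divide)).execute("GET /http/order");
        System.out.println(PluginChain.TRACE); // plugins ran in registration order
    }
}
```

The key property is that each plugin both does its own work and decides whether the rest of the chain runs, which is how a plugin can short-circuit a request.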
Rule matching
- One plugin maps to multiple selectors, and one selector maps to multiple rules. The selector performs a first, coarse filtering of the traffic, and the rules perform the final filtering.
- After a request matches a selector, it proceeds to rule matching;
```java
private RuleData matchRule(final ServerWebExchange exchange, final Collection<RuleData> rules) {
    return rules.stream()
            .filter(rule -> filterRule(rule, exchange))
            .findFirst().orElse(null);
}

private Boolean filterRule(final RuleData ruleData, final ServerWebExchange exchange) {
    return ruleData.getEnabled() && MatchStrategyUtils.match(ruleData.getMatchMode(), ruleData.getConditionDataList(), exchange);
}

public class MatchStrategyUtils {

    /**
     * Match boolean.
     *
     * @param strategy the strategy
     * @param conditionDataList the condition data list
     * @param exchange the exchange
     * @return the boolean
     */
    public static boolean match(final Integer strategy, final List<ConditionData> conditionDataList, final ServerWebExchange exchange) {
        // determine the match mode: and / or
        String matchMode = MatchModeEnum.getMatchModeByCode(strategy);
        // load the match-strategy implementation via SPI
        MatchStrategy matchStrategy = ExtensionLoader.getExtensionLoader(MatchStrategy.class).getJoin(matchMode);
        // run the matching conditions
        return matchStrategy.match(conditionDataList, exchange);
    }
}
```
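For intuition, here is a minimal sketch of what an "and" match strategy does once loaded: the rule matches only if every configured condition accepts the request. Condition and the string-based request URI below are simplified stand-ins for Soul's ConditionData and ServerWebExchange.

```java
import java.util.List;
import java.util.function.Predicate;

public class AndMatchSketch {

    // simplified stand-in for Soul's ConditionData
    record Condition(String name, Predicate<String> test) {
    }

    // "and" semantics: every condition must accept the request URI
    static boolean andMatch(final List<Condition> conditions, final String requestUri) {
        return conditions.stream().allMatch(c -> c.test().test(requestUri));
    }

    public static void main(String[] args) {
        List<Condition> conditions = List.of(
                new Condition("uri prefix", uri -> uri.startsWith("/http")),
                new Condition("uri suffix", uri -> uri.endsWith("/order")));
        System.out.println(andMatch(conditions, "/http/order"));  // true
        System.out.println(andMatch(conditions, "/dubbo/order")); // false
    }
}
```

An "or" strategy would be the same shape with anyMatch instead of allMatch.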
Load-balancing strategies
The load-balancing strategies are implemented as configurable SPI extension points; there are three built-in strategies;
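The SPI lookup can be pictured as a name-to-implementation registry: the strategy name stored on the rule selects one LoadBalance implementation. The toy registry below is only an analogy; Soul's real lookup goes through its own ExtensionLoader and extension configuration files.

```java
import java.util.List;
import java.util.Map;

public class LoadBalanceRegistrySketch {

    interface LoadBalance {
        String select(List<String> upstreams);
    }

    // name -> implementation, the way a rule's configured strategy name picks a class
    static final Map<String, LoadBalance> REGISTRY = Map.<String, LoadBalance>of(
            "random", ups -> ups.get((int) (Math.random() * ups.size())),
            "roundRobin", new LoadBalance() {
                private int index = 0;

                public String select(final List<String> ups) {
                    // stateful: successive calls cycle through the upstream list
                    return ups.get(index++ % ups.size());
                }
            });

    public static void main(String[] args) {
        LoadBalance lb = REGISTRY.get("roundRobin");
        List<String> ups = List.of("127.0.0.1:8188", "127.0.0.1:8189");
        System.out.println(lb.select(ups));
        System.out.println(lb.select(ups)); // alternates between the two instances
    }
}
```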
- random: random selection strategy
```java
public DivideUpstream doSelect(final List<DivideUpstream> upstreamList, final String ip) {
    // sum of the weights of all backend instances
    int totalWeight = calculateTotalWeight(upstreamList);
    // check whether every instance has the same weight
    boolean sameWeight = isAllUpStreamSameWeight(upstreamList);
    if (totalWeight > 0 && !sameWeight) {
        // weights differ: do a weighted random pick
        return random(totalWeight, upstreamList);
    }
    // If the weights are the same or the weights are 0 then random
    return random(upstreamList);
}

// Weighted random works like a segmented line: an instance with a larger
// weight occupies a longer segment, so it is hit with higher probability.
private DivideUpstream random(final int totalWeight, final List<DivideUpstream> upstreamList) {
    // If the weights are not the same and the weights are greater than 0, then random by the total number of weights
    int offset = RANDOM.nextInt(totalWeight);
    // Determine which segment the random value falls on
    for (DivideUpstream divideUpstream : upstreamList) {
        offset -= getWeight(divideUpstream);
        if (offset < 0) {
            return divideUpstream;
        }
    }
    return upstreamList.get(0);
}

// When all weights are equal, pick an index uniformly at random.
private DivideUpstream random(final List<DivideUpstream> upstreamList) {
    return upstreamList.get(RANDOM.nextInt(upstreamList.size()));
}
```
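The segmented-line idea can be checked with a standalone sketch that reproduces the 1-vs-100 experiment from earlier: with those weights, close to 99% of picks should land on the heavier instance. Upstream below is a simplified stand-in for DivideUpstream.

```java
import java.util.List;
import java.util.Random;

public class WeightedRandomSketch {

    // simplified stand-in for DivideUpstream
    record Upstream(String url, int weight) {
    }

    static Upstream doSelect(final List<Upstream> list, final Random random) {
        int totalWeight = list.stream().mapToInt(Upstream::weight).sum();
        // pick a point on the [0, totalWeight) line segment
        int offset = random.nextInt(totalWeight);
        for (Upstream u : list) {
            // walk the segments; a larger weight occupies a longer segment
            offset -= u.weight();
            if (offset < 0) {
                return u;
            }
        }
        return list.get(0);
    }

    public static void main(String[] args) {
        List<Upstream> list = List.of(
                new Upstream("127.0.0.1:8188", 1),
                new Upstream("127.0.0.1:8189", 100));
        Random random = new Random();
        int heavy = 0;
        for (int i = 0; i < 10_000; i++) {
            if (doSelect(list, random).url().endsWith("8189")) {
                heavy++;
            }
        }
        // with weights 1 vs 100, the heavier instance should win the large majority
        System.out.println(heavy > 9000);
    }
}
```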
- roundRobin: round-robin strategy
```java
public DivideUpstream doSelect(final List<DivideUpstream> upstreamList, final String ip) {
    String key = upstreamList.get(0).getUpstreamUrl();
    ConcurrentMap<String, WeightedRoundRobin> map = methodWeightMap.get(key);
    if (map == null) {
        methodWeightMap.putIfAbsent(key, new ConcurrentHashMap<>(16));
        map = methodWeightMap.get(key);
    }
    int totalWeight = 0;
    long maxCurrent = Long.MIN_VALUE;
    long now = System.currentTimeMillis();
    DivideUpstream selectedInvoker = null;
    WeightedRoundRobin selectedWRR = null;
    for (DivideUpstream upstream : upstreamList) {
        String rKey = upstream.getUpstreamUrl();
        WeightedRoundRobin weightedRoundRobin = map.get(rKey);
        int weight = getWeight(upstream);
        if (weightedRoundRobin == null) {
            weightedRoundRobin = new WeightedRoundRobin();
            weightedRoundRobin.setWeight(weight);
            map.putIfAbsent(rKey, weightedRoundRobin);
        }
        if (weight != weightedRoundRobin.getWeight()) {
            //weight changed
            weightedRoundRobin.setWeight(weight);
        }
        long cur = weightedRoundRobin.increaseCurrent();
        weightedRoundRobin.setLastUpdate(now);
        if (cur > maxCurrent) {
            maxCurrent = cur;
            selectedInvoker = upstream;
            selectedWRR = weightedRoundRobin;
        }
        totalWeight += weight;
    }
    if (!updateLock.get() && upstreamList.size() != map.size() && updateLock.compareAndSet(false, true)) {
        try {
            // copy -> modify -> update reference
            ConcurrentMap<String, WeightedRoundRobin> newMap = new ConcurrentHashMap<>(map);
            newMap.entrySet().removeIf(item -> now - item.getValue().getLastUpdate() > recyclePeriod);
            methodWeightMap.put(key, newMap);
        } finally {
            updateLock.set(false);
        }
    }
    if (selectedInvoker != null) {
        selectedWRR.sel(totalWeight);
        return selectedInvoker;
    }
    // should not happen here
    return upstreamList.get(0);
}
```
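Stripped of the concurrency bookkeeping, the core of this smooth weighted round-robin can be traced deterministically: on each pick every node's current value grows by its weight, the largest current wins, and the winner pays back the total weight. The sketch below uses hypothetical two-node weights of 5 and 1; over one cycle of six picks the heavier node is chosen five times.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SmoothWrrSketch {

    static String select(final Map<String, Integer> weights, final Map<String, Long> current) {
        long total = weights.values().stream().mapToLong(Integer::longValue).sum();
        String selected = null;
        long max = Long.MIN_VALUE;
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            // every node's current value grows by its own weight
            long cur = current.merge(e.getKey(), (long) e.getValue(), Long::sum);
            if (cur > max) {
                max = cur;
                selected = e.getKey();
            }
        }
        // the winner pays back the total weight, so it cannot win every round
        current.merge(selected, -total, Long::sum);
        return selected;
    }

    public static void main(String[] args) {
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("A", 5);
        weights.put("B", 1);
        Map<String, Long> current = new LinkedHashMap<>();
        StringBuilder seq = new StringBuilder();
        for (int i = 0; i < 6; i++) {
            seq.append(select(weights, current));
        }
        System.out.println(seq); // one full cycle: A five times, B once
    }
}
```

Compared with naive weighted round-robin (which would emit AAAAAB), the smooth variant interleaves the lighter node into the middle of the cycle.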
- hash: consistent-hash strategy
```java
public DivideUpstream doSelect(final List<DivideUpstream> upstreamList, final String ip) {
    final ConcurrentSkipListMap<Long, DivideUpstream> treeMap = new ConcurrentSkipListMap<>();
    for (DivideUpstream address : upstreamList) {
        for (int i = 0; i < VIRTUAL_NODE_NUM; i++) {
            long addressHash = hash("SOUL-" + address.getUpstreamUrl() + "-HASH-" + i);
            treeMap.put(addressHash, address);
        }
    }
    long hash = hash(String.valueOf(ip));
    SortedMap<Long, DivideUpstream> lastRing = treeMap.tailMap(hash);
    if (!lastRing.isEmpty()) {
        return lastRing.get(lastRing.firstKey());
    }
    return treeMap.firstEntry().getValue();
}

private static long hash(final String key) {
    // md5 byte
    MessageDigest md5;
    try {
        md5 = MessageDigest.getInstance("MD5");
    } catch (NoSuchAlgorithmException e) {
        throw new SoulException("MD5 not supported", e);
    }
    md5.reset();
    byte[] keyBytes;
    keyBytes = key.getBytes(StandardCharsets.UTF_8);
    md5.update(keyBytes);
    byte[] digest = md5.digest();
    // hash code, Truncate to 32-bits
    long hashCode = (long) (digest[3] & 0xFF) << 24
            | ((long) (digest[2] & 0xFF) << 16)
            | ((long) (digest[1] & 0xFF) << 8)
            | (digest[0] & 0xFF);
    return hashCode & 0xffffffffL;
}
```
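A compact sketch shows the property the ring buys us: the same client IP always lands on the same upstream, namely the first virtual node clockwise from its hash. String.hashCode stands in here for the MD5-based hash of the real implementation.

```java
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class HashRingSketch {

    static final int VIRTUAL_NODE_NUM = 5;

    static String select(final List<String> upstreams, final String ip) {
        TreeMap<Integer, String> ring = new TreeMap<>();
        for (String url : upstreams) {
            // place several virtual nodes per upstream to even out the ring
            for (int i = 0; i < VIRTUAL_NODE_NUM; i++) {
                ring.put(("SOUL-" + url + "-HASH-" + i).hashCode(), url);
            }
        }
        SortedMap<Integer, String> tail = ring.tailMap(ip.hashCode());
        // wrap around to the first node when the hash passes the largest key
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        List<String> upstreams = List.of("127.0.0.1:8188", "127.0.0.1:8189");
        String first = select(upstreams, "192.168.1.10");
        String second = select(upstreams, "192.168.1.10");
        System.out.println(first.equals(second)); // the same ip always routes identically
    }
}
```

Virtual nodes matter because with only one node per upstream, removing an instance would dump all of its traffic onto a single neighbor instead of spreading it around the ring.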
Summary
- Having studied the usage and source code of the divide plugin, a comparison with practice at my current company: our system keeps domain weights and backend services separate. One dedicated platform handles domain registration and machine weights, while a separate API gateway platform handles the registration and forwarding of backend APIs. Soul's load balancing is more flexible by comparison, since it can be configured down to the individual request path;
- When reading source code, it pays to focus on one module at a time and treat everything else as a black box to analyze later; this keeps the learning pace manageable.