Reference: https://www.youtube.com/watch?v=NWcShtqr8kc&list=PLvOO0btloRnuTUGN4XqO85eKPeFSZsEqK&index=2
Feature generation:
Numeric features: e.g. age; no encoding needed.
Categorical features: e.g. country; use one-hot encoding, [1, 0, 0, 0, ..., 0], whose length equals the number of possible values. Why not use integers directly? Because integer codes impose an artificial ordering and distance between categories, which most models would treat as meaningful.
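The one-hot idea above can be sketched in a few lines of Python (the country list and the helper name are made up for illustration):

```python
def one_hot(value, categories):
    """Encode a categorical value as a vector whose length equals the
    number of possible values, with a single 1 at the value's position."""
    vec = [0] * len(categories)
    vec[categories.index(value)] = 1
    return vec

countries = ["US", "CN", "DE", "JP"]  # hypothetical category set
print(one_hot("DE", countries))  # [0, 0, 1, 0]
```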
POS order model:
Attribute / Description:
- POSOrder.status: Totaled (order completed, sale or return); TransactionVoided (transaction aborted, not paid); PostVoided (transaction cancelled after payment); Suspended (order suspended)
- POSOrderLine.voidFlag: true/false, whether the order line has been removed
- POSOrderSummary.typeCode: Sale …
Key features of Reactor:
· Avoiding callback hell: multiple tasks can be orchestrated.
· Analogous to an assembly line.
Ways to create a coroutine:
launch {}: non-blocking; code after the coroutine block can run immediately while the coroutine is running. launch can only be invoked within a coroutine scope.
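The non-blocking behavior of launch can be mimicked with Python asyncio for illustration (create_task plays the role of launch here; this is an analogy, not Kotlin code):

```python
import asyncio

order = []

async def background():
    # Simulates the body of launch {}: runs concurrently with the caller.
    await asyncio.sleep(0.01)
    order.append("coroutine body")

async def main():
    # Like launch {}, create_task returns immediately without blocking,
    # and it must be called from within a running event loop ("scope").
    task = asyncio.create_task(background())
    order.append("code after launch")  # runs before the coroutine finishes
    await task

asyncio.run(main())
print(order)  # ['code after launch', 'coroutine body']
```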
Kafka error handling
Return flow:
Precondition: the return flow here is the after-sales process for orders whose goods the customer has already received; there are two different implementations. 1. Offer the return function only for orders whose status is "Delivered". 2. Offer the return-request function for orders in status "Shipped", and complete the return once the customer actually receives the package. A return generally consists of three steps: 1. Create the return order: the customer selects one or more items from an order, …
Purpose of testing: understand the performance of building up the Drools rule engine and of evaluating rules as the number of rules grows. In this test, we generate rules that contain 3 variables, each vari…
Core features of credit card payment
Various Netty tricks, mostly about error handling
Kafka producer blocking points:
max.block.ms: the maximum time to wait for the whole KafkaProducer.send call; although send is asynchronous, there are several synchronous/blocking operations within the method. max.block.ms includes: metadata …
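The semantics above (a single time budget covering the synchronous prelude of an otherwise asynchronous send) can be modelled roughly as follows; this is an illustrative Python toy, not Kafka client code, and the step names are invented:

```python
import time

class BlockingBudgetExceeded(Exception):
    pass

def send(blocking_steps, max_block_ms):
    """Toy model of an async send whose synchronous prelude must finish
    within a single max_block_ms budget shared by all blocking steps."""
    deadline = time.monotonic() + max_block_ms / 1000
    for step_name, duration_ms in blocking_steps:
        if time.monotonic() + duration_ms / 1000 > deadline:
            raise BlockingBudgetExceeded(
                f"would exceed max.block.ms during {step_name}")
        time.sleep(duration_ms / 1000)
    return "queued"  # past this point the real send is asynchronous

print(send([("metadata fetch", 10), ("buffer allocation", 5)], max_block_ms=100))
```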
Multi-currency payment in acquiring scenarios:
Overview: in cross-border acquiring, a single payment transaction involves the following currencies:
- Payment transaction currency: the currency the customer actually pays in when placing the payment order.
- Channel transaction currency: a payment currency the channel can accept.
- Channel settlement currency: the currency in which the channel settles transactions on the statement it gives the payment platform. In general, the settlement currencies a channel supports are a subset of its transaction currencies, so when choosing a channel we only need to consider whether the channel's settlement currencies support the payment transaction currency.
- Merchant settlement currency: the currency in which the payment platform settles with the merchant.
For each payment …
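The routing rule above (only the channel's settlement currencies need to cover the payment currency) can be sketched as follows; the channel catalog is entirely invented for illustration:

```python
# Hypothetical channel catalog: channel name -> supported settlement currencies.
CHANNELS = {
    "channel_a": {"USD", "EUR"},
    "channel_b": {"USD", "JPY", "HKD"},
}

def eligible_channels(payment_currency):
    """Pick channels whose settlement currencies cover the payment currency."""
    return sorted(name for name, settle in CHANNELS.items()
                  if payment_currency in settle)

print(eligible_channels("JPY"))  # ['channel_b']
```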
Transaction properties:
Atomicity: definition: when a client issues multiple write requests and one of them fails, all the writes are rolled back. Implementation: a crash-recovery log.
Consistency: definition: application-level invariants are preserved across the transaction.
Isolation: definition: when multiple transactions operate on the same data, their effects do not interfere and no race condition arises. For example, when several transactions deduct from the same account concurrently, the final balance should equal the initial balance minus the total amount deducted.
Isolation levels: solving transaction concurrency …
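The atomicity definition above (all-or-nothing via a recovery log) can be sketched with a toy undo log in Python; the store, the failure trigger, and the function name are all illustrative:

```python
# Toy atomicity sketch: apply writes while recording an undo log,
# and roll back every applied write if any write fails.
def apply_transaction(store, writes):
    undo = []  # (key, previous_value) pairs, for rollback
    try:
        for key, value in writes:
            if value is None:  # illustrative failure trigger
                raise ValueError(f"write to {key} failed")
            undo.append((key, store.get(key)))
            store[key] = value
    except ValueError:
        for key, old in reversed(undo):  # undo in reverse order
            if old is None:
                store.pop(key, None)
            else:
                store[key] = old
        return False
    return True

db = {"a": 1}
apply_transaction(db, [("a", 2), ("b", None)])  # second write fails
print(db)  # {'a': 1} -- the partial write to 'a' was rolled back
```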
Producer behavior: if the Kafka broker is not started, surprisingly, even if Spring Cloud Kafka Stream fails to create a Kafka producer, it will not fail fast: the application starts up as normal and only an exception appears on the console. See …
Deploy a new version to the pods of a service:
1. Modify the image name in the replication controller's pod template, then manually delete the old pods. Drawback: slight downtime before new pods are created by the replication controller.
2. Blue/green deployment …
Time to live is a good indicator of the number of hops a message has experienced. Identification is a good indicator of the source of a message (a sender with identification=0 will always send identification=0). TCP delayed acknowledgement: delay between messa…
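The TTL-as-hop-count idea can be sketched as follows; it rests on the assumption (a heuristic, not a standard) that senders start from one of the common initial TTLs 64, 128, or 255:

```python
# Estimate hops from a received IP TTL, assuming the sender started at
# one of the common initial TTL values. Illustrative heuristic only.
COMMON_INITIAL_TTLS = (64, 128, 255)

def estimated_hops(observed_ttl):
    # Each router decrements TTL by 1, so the gap to the nearest initial
    # TTL at or above the observed value approximates the hop count.
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

print(estimated_hops(57))   # 7  (likely started at 64)
print(estimated_hops(115))  # 13 (likely started at 128)
```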
Service:
Ways to create a service:
- kubectl expose creates a Service resource with the same pod selector as the one used by the ReplicationController.
- kubectl create with a service spec:
  apiVersion: v1
  kind: Service
  metadata:
    name: kubia
  spe…
Replication controller:
The ReplicationController schedules a pod to one worker node; the kubelet application on the node pulls the image and creates the containers. A pod started by "kubectl run" is not created directly: a ReplicationController is created by …
Pod definition:
Create a pod from a definition: kubectl create -f [filename]
Display a pod definition: kubectl get po [podname] -o yaml/json
Getting logs: kubectl logs [podname] -c [containername]
Port forwarding: kubectl port-forward [pod…
ByteBuddy is a very powerful Java bytecode generation tool. I used Javassist before; back then my main use case was adding logging to Java classes without source code (e.g. some third-party Java libraries) to make troubleshooting easier. But Javassist has some drawbacks: first, while adding logging at the start and end of a method is easy, modifying the bytecode of the method body itself is quite painful; second, it does not support dynamic code. ByteBuddy, by contrast, provides many convenient helper methods that help us generate or enhance …
Key characteristics:
Store time-series data:
- Data volume is large
- High number of concurrent reads/writes
- Optimized for (C)reate and (R)ead: TSM tree
- Limited support for (U)pdate and (D)elete
Serial data is more important than a single data point: No…
PROBLEM: A strange issue occurs when I use Spring Boot with Spring Data. When the Spring Boot application starts, context creation fails with the following exception stack trace. The stack trace is pretty long, so only the crucial excerpt …
What is sharding: splitting a large dataset into multiple chunks stored on different servers.
Goals:
- Scalability: different shards can live on different servers, spreading read requests.
- Complex queries can be executed in parallel on different shards.
- Write requests are spread across the servers.
Problem 1: how to split? Keep the data on each server even and avoid data skew.
- Random assignment: pros: data is even; cons: no way to know which node a datum is on.
- Each shard holds a contiguous range of primary-key values (partition by key range): pros: easy to compute which node a primary key is on. Primary key …
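Locating a key under partition-by-key-range reduces to a binary search over the shard boundaries, as in this sketch (the boundaries are invented):

```python
import bisect

# Partition by key range: each shard owns a contiguous range of keys.
# shard 0: keys < "g", shard 1: < "n", shard 2: < "t", shard 3: the rest.
BOUNDARIES = ["g", "n", "t"]

def shard_for(key):
    """Binary search over the sorted boundaries finds the owning shard."""
    return bisect.bisect_right(BOUNDARIES, key)

print(shard_for("apple"))  # 0
print(shard_for("pear"))   # 2
```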
Purposes of replication:
- Put data geographically closer to users, reducing access latency (multiple data centers).
- Data remains available when a node goes down (high availability).
- Multiple replicas can serve reads, increasing read throughput.
Leader/follower replication:
- Synchronous replication -> strong consistency, weak availability: once a follower goes down, writes fail. Mitigation: replicate synchronously to one follower and asynchronously to the others; once the synchronous follower goes down, promote an asynchronous one to synchronous. A newer strongly consistent replication algorithm: chain replication.
- Asynchronous replication -> strong availability, weak consistency.
Adding a follower: challenge: the leader's data is still being written while …
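The one-sync-plus-async-followers mitigation can be sketched as a toy simulation (all class and function names are illustrative, not a real replication protocol):

```python
# Toy leader/follower replication: one synchronous follower must acknowledge
# each write, while downed asynchronous followers do not block writes.
class Node:
    def __init__(self, up=True):
        self.up = up
        self.log = []

def write(leader, sync_follower, async_followers, record):
    if not sync_follower.up:
        return False  # synchronous replication: write fails without its ack
    leader.log.append(record)
    sync_follower.log.append(record)  # acknowledged before returning
    for f in async_followers:
        if f.up:
            f.log.append(record)      # best effort, no ack required
    return True

leader, sync_f = Node(), Node()
lagging = Node(up=False)  # a downed async follower does not block writes
print(write(leader, sync_f, [lagging], "x=1"))  # True
```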
The process file name's prefix must match the process id in the process definition itself. E.g. if the process id is "compliance-deposit-process", the process file name must be "compliance-deposit-process-xxx.bpmn.xml". To v…
Setup: 1-node cluster on my local laptop: 8 cores, Xms=8G, Xmx=8G.
Indexing performance (single index): 10 million payments, each about 5 KB, with batch size = 10000. Each batch takes roughly 2.5 s → 4 s; total time to index 10 million payment…
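A back-of-envelope estimate implied by those figures (an estimate from the stated per-batch times, not the measured total):

```python
# 10M documents in batches of 10,000, at 2.5-4 s per batch.
batches = 10_000_000 // 10_000          # 1000 batches
low, high = batches * 2.5, batches * 4  # total seconds, best/worst case
print(batches, round(low / 60, 1), round(high / 60, 1))  # 1000 41.7 66.7
```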
Producer: the broker can send the producer a confirmation after receiving a message and successfully persisting it. The producer needs to register a confirmation listener to process the confirmation message. In confirmation mode, each produ…
Aggregation overview:
Aggregations can coexist with ordinary query results, and one query result may contain multiple unrelated aggregations. If you only care about the aggregation results and not the query hits, set the SearchSource size to 0, which effectively improves performance.
Aggregation types:
Metrics: simple aggregation types that compute an aggregate metric over all documents in the target set, generally without nested sub-aggregations. E.g. average (avg), sum …
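The size=0 tip corresponds to a search body like the following (built here as a plain Python dict; the field name and aggregation name are made up):

```python
import json

# Aggregation-only search body: "size": 0 suppresses the query hits,
# so only the aggregation result is computed and returned.
body = {
    "size": 0,
    "aggs": {
        "avg_amount": {"avg": {"field": "amount"}},  # hypothetical field
    },
}
print(json.dumps(body))
```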
Elasticsearch officially provides some test facilities. ESIntegTestCase allows you to start a local Elasticsearch cluster from the test container, so that you can test Elasticsearch index/search/aggregation without mocking. However, t…
Problem description: one production server became extremely slow, so the Kafka broker on that server was shut down. After the shutdown, some Kafka consumers could no longer consume data, with this log error: o.a.kafka.clients.consumer.internals.AbstractCoordinator Marking the coordinator (39.0.2.100) as dead. Cause: after some investigation, the consumer group info showed: (k…
Data replication mechanism: the client computes the primary node via hash, and data is sent from the client only to the primary node:
GridNearAtomicSingleUpdateFuture.mapOnTopology
GridNearAtomicSingleUpdateFuture.map
GridNearAtomicAbstractUpdateFuture.sendSingleRequest
GridCacheIoManager.send
Once the data has been replicated to the primary node …