mirror of
https://github.com/duanhf2012/origin.git
synced 2026-02-05 15:34:49 +08:00
Compare commits
95 Commits
README.md

origin Game Server Engine Overview
=========================

origin is a distributed open-source game server engine written in Go (golang). It is suitable for developing all kinds of game servers, including H5 (HTML5) game servers.

Problems origin solves:

* Like Go itself, origin's overall design always favors simple, easy-to-use patterns and fast development.
* Server architectures can be laid out quickly and flexibly according to business needs.
* It takes advantage of multiple cores by assigning different services to different nodes, which cooperate efficiently.
* It comes with a rich and robust utility library.
Hello world!
------------

Let's build an origin server step by step. First download the [origin engine](https://github.com/duanhf2012/origin "origin engine"), or run:

```
go get -v -u github.com/duanhf2012/origin
```

The engine is downloaded into the GOPATH directory. Add a main.go under src with the following content:
```go
package main

import (
	"github.com/duanhf2012/origin/node"
)

func main() {
	node.Start()
}
```

This is only the minimal skeleton; see Chapter 1 for the actual run parameters and configuration.
An origin process creates one node object and calls Start to run. You can also download the origin engine sample project directly:

```
go get -v -u github.com/duanhf2012/originserver
```

All explanations in this document are based on that sample project.
The Three Core Objects of the origin Engine
----------------------

* Node: each Node can be thought of as one origin process.
* Service: an independent service, essentially one large functional module. It is a child of Node and is installed into the Node object after creation. A service can expose functionality such as RPC to the outside.
* Module: the smallest unit in origin. It is strongly recommended to split all business logic into small Modules. The engine monitors the running state of every service and Module, for example detecting slow handlers and dead-loop functions. Modules can form a tree, and a Service is itself a kind of Module.
origin's core cluster configuration lives in the config/cluster directory. For example, github.com/duanhf2012/originserver has cluster.json and service.json under config/cluster.

cluster.json:
------------------

```
{
    "NodeList":[
        {
            "NodeId": 1,
            "Private": false,
            "ListenAddr":"127.0.0.1:8001",
            "MaxRpcParamLen": 409600,
            "CompressBytesLen": 20480,
            "NodeName": "Node_Test1",
            "remark":"//a service name starting with _ is local to this machine's process and not public to the whole subnet",
            "ServiceList": ["TestService1","TestService2","TestServiceCall","GateService","_TcpService","HttpService","WSService"]
        },
        {
            "NodeId": 2,
            "Private": false,
            "ListenAddr":"127.0.0.1:8002",
            "MaxRpcParamLen": 409600,
            "CompressBytesLen": 20480,
            "NodeName": "Node_Test1",
            "remark":"//a service name starting with _ is local to this machine's process and not public to the whole subnet",
            "ServiceList": ["TestService1","TestService2","TestServiceCall","GateService","TcpService","HttpService","WSService"]
        }
    ]
}
```
---

The configuration above defines two node server processes:

* NodeId: the node's unique Id in origin; duplicates are not allowed.
* Private: whether the node is private. If true, other nodes will not discover it, but it can still run on its own.
* ListenAddr: listen address of the RPC communication service.
* MaxRpcParamLen: maximum length of an RPC parameter packet. Optional; by default one RPC call supports up to 4294967295 bytes of data.
* CompressBytesLen: RPC payload compression threshold; with the value above, payloads of 20480 bytes or more are compressed. Optional; omit it or set it to 0 to disable compression.
* NodeName: node name.
* remark: remark; optional.
* ServiceList: the ordered list of services this Node owns. Note: origin installs and initializes services in the configured order and stops them in the reverse order.

---

In the start command originserver -start nodeid=1, nodeid selects which services to load from this configuration.
For more options, run originserver -help.
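The install/stop ordering rule for ServiceList can be sketched in a few lines of plain Go (illustrative only; origin's real service lifecycle lives in the engine):

```go
package main

import "fmt"

// stopOrder returns the order in which services are stopped, which is the
// reverse of the configured install order described above.
func stopOrder(serviceList []string) []string {
	stopped := make([]string, 0, len(serviceList))
	for i := len(serviceList) - 1; i >= 0; i-- {
		stopped = append(stopped, serviceList[i])
	}
	return stopped
}

func main() {
	install := []string{"TestService1", "GateService", "HttpService"}
	fmt.Println(stopOrder(install)) // [HttpService GateService TestService1]
}
```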
service.json:
------------------

```
{
    "Global": {
        ...
    },
    "HttpService": {
        "ReadTimeout":10000,
        "WriteTimeout":10000,
        "ProcessTimeout":10000,
        "ManualStart": false,
        "CAFile":[
            {
                "Certfile":"",
                "Keyfile":""
            }
        ]
    },
    "TcpService":{
        "ListenAddr":"0.0.0.0:9030",
        ...
    }
}
```
---

The configuration is split into Global, Service, and NodeService sections. Global is global configuration, readable from any service via cluster.GetCluster().GetGlobalCfg(). NodeService holds per-node service configuration: at startup the service's configuration is looked up by nodeid in that section first, and falls back to the shared Service section when not found.

**HttpService configuration**

* ListenAddr: HTTP listen address
* ReadTimeout: network read timeout, in milliseconds
* WriteTimeout: network write timeout, in milliseconds
* ProcessTimeout: processing timeout, in milliseconds
* ManualStart: whether listening is started manually; if true, you must call StartListen() yourself
* CAFile: certificate files; ignore this if HTTPS is terminated by a web server proxy in front of your server
**TcpService configuration**

* ListenAddr: listen address
* MaxConnNum: maximum number of connections allowed
* PendingWriteNum: maximum length of the outgoing network queue
* MaxMsgLen: maximum packet length

**WSService configuration**

* ListenAddr: listen address
* MaxConnNum: maximum number of connections allowed
* PendingWriteNum: maximum length of the outgoing network queue
* MaxMsgLen: maximum packet length

---
Chapter 1: origin Basics
-------------------

In simple_service of github.com/duanhf2012/originserver, two services are created: TestService1.go and TestService2.go.

simple_service/TestService1.go:

```
package simple_service

...

func (slf *TestService1) OnInit() error {
    ...
}
```
simple_service/TestService2.go:

```
import (
    "github.com/duanhf2012/origin/node"
    ...
)

...

func main(){
    ...
}
```

* config/cluster/cluster.json:

```
{
    "NodeList":[
        ...
    ]
}
```
After compiling, the program runs with the following output:

```
#originserver -start nodeid=1
TestService1 OnInit.
TestService2 OnInit.
```
Chapter 2: Common Service Features
--------------------------

Timers:
-------

Scheduled tasks are among the most commonly used features in development. origin provides two kinds of timers:

The first is the AfterFunc function, which fires a callback after a given interval. See simple_service/TestService2.go:
```
func (slf *TestService2) OnInit() error {
    fmt.Printf("TestService2 OnInit.\n")
    ...
}

func (slf *TestService2) OnSecondTick(){
    ...
    slf.AfterFunc(time.Second*1,slf.OnSecondTick)
}
```

The log now prints "tick." once per second. An AfterFunc timer fires only once; to be triggered again, it must be re-armed as above.
The other kind works like the Linux crontab command:

```
func (slf *TestService2) OnInit() error {
    ...
}

func (slf *TestService2) OnCron(cron *timer.Cron){
    fmt.Printf(":A minute passed!\n")
}
```

Running this prints ":A minute passed!" whenever the minute changes.
Enabling Multi-Goroutine Mode:
---------------

By design, every origin service runs on a single goroutine, so business code does not have to worry about thread safety, which greatly reduces development difficulty. Some scenarios, however, do not need that guarantee and do need concurrency. For example, a service dedicated to database operations may block waiting on the database; with a single goroutine, its requests can only be processed one at a time in a queue, which is too slow. In such cases you can enable this mode and specify the number of worker goroutines:

```
func (slf *TestService1) OnInit() error {
    fmt.Printf("TestService1 OnInit.\n")

    //enable multi-goroutine processing with 10 concurrent goroutines
    slf.SetGoRoutineNum(10)
    return nil
}
```
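The idea behind SetGoRoutineNum can be sketched as a fixed pool of worker goroutines draining one request channel; once there is more than one worker, handler bodies must be goroutine-safe again (illustrative sketch, not origin's implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// process handles requests with a pool of `workers` goroutines instead of
// one, summing them as a stand-in for real handler work.
func process(requests []int, workers int) int {
	jobs := make(chan int)
	var wg sync.WaitGroup
	var mu sync.Mutex
	total := 0
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := range jobs {
				mu.Lock()
				total += r // handler body; now needs its own locking
				mu.Unlock()
			}
		}()
	}
	for _, r := range requests {
		jobs <- r
	}
	close(jobs)
	wg.Wait()
	return total
}

func main() {
	fmt.Println(process([]int{1, 2, 3, 4}, 10)) // 10
}
```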
Performance Monitoring:
-------------

In a large system, code-quality problems often cause overly slow processing or dead loops; this feature detects both. Usage:

```
func main(){
    ...
}
```

SetOverTime and slf.GetProfiler().SetMaxOverTimer above configure the monitoring thresholds, and main.go enables the performance reporter with a 10-second reporting interval. Because the timer in the example contains a dead loop, the following report is produced:

```
process count 0,take time 0 Milliseconds,average 0 Milliseconds/per.
too slow process:Timer_orginserver/simple_service.(*TestService1).Loop-fm is take 38003 Milliseconds
```

which points directly at the Loop function of TestService1.
Node Connect and Disconnect Event Listening:
-----------------------

Some business logic needs to know whether a node has disconnected. Register a callback like this:

```
func (ts *TestService) OnInit() error{
    ts.RegRpcListener(ts)
    ...
}

func (ts *TestService) OnNodeDisconnect(nodeId int){
    ...
}
```
Chapter 3: Using Modules
-------------------

Module Creation and Destruction:
-----------------

A Service can be regarded as a kind of Module with all of a Module's functionality. See originserver/simple_module/TestService3.go in the sample code:

```
package simple_module

...

func (slf *TestService3) OnInit() error {
    ...
}
```

OnInit builds a linear module chain TestService3->module1->module2. AddModule returns the Module's Id: auto-generated Ids start at 1e17, and you may also assign internal Ids yourself. When ReleaseModule releases module1, module2 is released along with it, and OnRelease is called automatically. The log order is:
```
Module1 OnInit.
Module2 OnInit.
module1 id is 100000000000000001, module2 id is 100000000000000002
Module2 Release.
Module1 Release.
```

Timers can be used inside Modules as well; see the timer section of Chapter 2.
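The release order shown in the log (children released before their parent) can be sketched with a tiny module tree in plain Go (illustrative names; not origin's Module API):

```go
package main

import "fmt"

// Module is a minimal stand-in: NewModule "inits" a module into a shared
// event log, AddModule wires a child in, and Release releases children
// first and the module itself last, matching the log order above.
type Module struct {
	name     string
	children []*Module
	events   *[]string
}

func NewModule(name string, events *[]string) *Module {
	*events = append(*events, name+" OnInit.")
	return &Module{name: name, events: events}
}

func (m *Module) AddModule(c *Module) {
	m.children = append(m.children, c)
}

func (m *Module) Release() {
	for _, c := range m.children {
		c.Release() // releasing a module releases its subtree
	}
	*m.events = append(*m.events, m.name+" Release.")
}

func main() {
	var events []string
	m1 := NewModule("Module1", &events)
	m2 := NewModule("Module2", &events)
	m1.AddModule(m2)
	m1.Release()
	for _, e := range events {
		fmt.Println(e)
	}
}
```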
Chapter 4: Using Events
----------------

Events are an important part of origin. They deliver notifications between services, or between services and modules, within the same node. Built-in services such as TcpService and HttpService are implemented on top of the event mechanism. It is a typical observer pattern. The event package has two interfaces: event.IEventProcessor, which provides listener registration and removal, and event.IEventHandler, which provides broadcasting.

In simple_event/TestService4.go:
```
package simple_event

...

func (slf *TestService4) TriggerEvent(){
    ...
}
```

In simple_event/TestService5.go:

```
package simple_event

...

func (slf *TestService5) OnServiceEvent(ev event.IEvent){
    ...
}
```

Ten seconds after the program starts, slf.TriggerEvent broadcasts an event, and TestService5 receives:

```
OnServiceEvent type :1001 data:event data.
OnModuleEvent type :1001 data:event data.
```

The listeners registered in the TestModule above are unregistered automatically when that Module is released.
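The observer pattern behind IEventProcessor/IEventHandler can be sketched as a minimal event bus (illustrative types; not origin's event package):

```go
package main

import "fmt"

// Event carries a type id and payload, mirroring the "type :1001 data:..."
// output shown above.
type Event struct {
	Type int
	Data string
}

// Bus maps event types to registered handlers and broadcasts to them.
type Bus struct {
	listeners map[int][]func(Event)
}

func NewBus() *Bus { return &Bus{listeners: map[int][]func(Event){}} }

func (b *Bus) Register(eventType int, h func(Event)) {
	b.listeners[eventType] = append(b.listeners[eventType], h)
}

func (b *Bus) Notify(ev Event) {
	for _, h := range b.listeners[ev.Type] {
		h(ev)
	}
}

func main() {
	bus := NewBus()
	bus.Register(1001, func(ev Event) {
		fmt.Printf("OnServiceEvent type :%d data:%s\n", ev.Type, ev.Data)
	})
	bus.Notify(Event{Type: 1001, Data: "event data."})
}
```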
Chapter 5: Using RPC
---------------

RPC is the main way services communicate. It lets nodes call each other across processes, and a specific nodeid can also be targeted. Example:

simple_rpc/TestService6.go:

```go
package simple_rpc

import (
    ...
)

type InputData struct {
    A int
    B int
}

// Note: RPC function names must have the form RPC_FunctionName or RPCFunctionName; RPC_Sum below could also be written RPCSum
func (slf *TestService6) RPC_Sum(input *InputData,output *int) error{
    *output = input.A+input.B
    return nil
}
```
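The (input, output) error signature above matches the convention of Go's standard net/rpc package; purely for illustration, the same RPC_Sum can be served standalone with the standard library (origin's own RPC transport and registration differ):

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

type InputData struct {
	A int
	B int
}

type TestService6 struct{}

// RPC_Sum keeps the same shape as the origin example above.
func (s *TestService6) RPC_Sum(input *InputData, output *int) error {
	*output = input.A + input.B
	return nil
}

// sumViaRPC starts a TCP RPC server, dials it, and calls TestService6.RPC_Sum.
func sumViaRPC(a, b int) (int, error) {
	srv := rpc.NewServer()
	if err := srv.Register(new(TestService6)); err != nil {
		return 0, err
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer ln.Close()
	go srv.Accept(ln)

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		return 0, err
	}
	defer client.Close()

	var output int
	err = client.Call("TestService6.RPC_Sum", &InputData{A: a, B: b}, &output)
	return output, err
}

func main() {
	out, err := sumViaRPC(1, 2)
	fmt.Println(out, err)
}
```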
simple_rpc/TestService7.go:

```
package simple_rpc

...

func (slf *TestService7) CallTest(){
    ...
    }else{
        fmt.Printf("Call output %d\n",output)
    }

    //custom timeout; the default rpc timeout is 15s
    err = slf.CallWithTimeout(time.Second*1, "TestService6.RPC_Sum", &input, &output)
    if err != nil {
        fmt.Printf("Call error :%+v\n", err)
    } else {
        fmt.Printf("Call output %d\n", output)
    }
}

func (slf *TestService7) AsyncCallTest(){
    ...
    //asynchronous call; the callback runs when the result comes back
    //note: the callback's first parameter must match RPC_Sum's second parameter, and err error carries RPC_Sum's return value
    err := slf.AsyncCall("TestService6.RPC_Sum", &input, func(output *int, err error) {
        if err != nil {
            fmt.Printf("AsyncCall error :%+v\n", err)
        } else {
            fmt.Printf("AsyncCall output %d\n", *output)
        }
    })
    fmt.Println(err)

    //custom timeout; returns a cancel function so the rpc call can be cancelled when the business requires it
    rpcCancel, err := slf.AsyncCallWithTimeout(time.Second*1, "TestService6.RPC_Sum", &input, func(output *int, err error) {
        //if the commented-out rpcCancel() below is called, this callback may never run
        if err != nil {
            fmt.Printf("AsyncCall error :%+v\n", err)
        } else {
            fmt.Printf("AsyncCall output %d\n", *output)
        }
    })
    //rpcCancel()
    fmt.Println(err, rpcCancel)
}

func (slf *TestService7) GoTest(){
    ...
}
```
You can place TestService6 on a different Node, for example NodeId 2. As long as the nodes are in the same subnet, origin makes the call exactly the same way; developers only need to think about Service relationships, which is also the core consideration of your server architecture design.

Chapter 6: Concurrent Function Calls
---------------

It is common to run certain tasks concurrently on other goroutines and have the service's own worker goroutine run the completion callback. Usage is simple; first enable the feature:
```
//size the concurrent goroutine pool from the CPU count; suggested factor: 1.0 for CPU-bound work, 2.0 or higher for I/O-bound work
slf.OpenConcurrentByNumCPU(1.0)

//alternatively set the pool explicitly: at least 5 goroutines, at most 10, with a task channel cap of 1000000
//origin scales the goroutine count between the minimum and maximum according to task load
//slf.OpenConcurrent(5, 10, 1000000)
```
Usage example:

```
func (slf *TestService13) testAsyncDo() {
    var context struct {
        data int64
    }

    //1. basic usage
    //the first function runs in the goroutine pool; on completion an event is queued to the service's worker goroutine,
    //where the second function runs, so it is goroutine-safe.
    slf.AsyncDo(func() bool {
        //this callback runs in the goroutine pool
        context.data = 100
        return true
    }, func(err error) {
        //this function runs on the service goroutine
        fmt.Print(context.data) //prints 100
    })

    //2. ordered by queue
    //the first argument is a queue Id; calls with the same queue Id are executed one after another in the pool
    //both calls below pass queueId 1, so they are queued together and run in order
    queueId := int64(1)
    for i := 0; i < 2; i++ {
        slf.AsyncDoByQueue(queueId, func() bool {
            //this function is called twice, but the calls are executed in queue order
            return true
        }, func(err error) {
            //this function runs on the service goroutine
        })
    }

    //3. either function may be nil
    //the second function is simply deferred to the service goroutine
    slf.AsyncDo(nil, func(err error) {
    })

    //the first function runs in the pool, with no callback on the service goroutine
    slf.AsyncDo(func() bool {
        return true
    }, nil)

    //4. the return value controls whether the callback runs
    slf.AsyncDo(func() bool {
        //returning false skips the second function; returning true runs it
        return false
    }, func(err error) {
        //this function will not be executed
    })
}
```
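The AsyncDo contract above (work on a pool goroutine, callback marshalled back to the single service goroutine, false return skipping the callback) can be sketched with a channel (illustrative only; not origin's implementation):

```go
package main

import "fmt"

// Service queues completion callbacks through one channel; a single
// "service goroutine" drains it, so callbacks never race with each other.
type Service struct {
	done chan func()
}

// AsyncDo runs work concurrently; if work returns false the callback is
// skipped, mirroring rule 4 above. A nil work just defers the callback.
func (s *Service) AsyncDo(work func() bool, cb func()) {
	go func() {
		if work != nil && !work() {
			s.done <- nil // returning false skips the callback
			return
		}
		s.done <- cb
	}()
}

func main() {
	s := &Service{done: make(chan func(), 16)}
	data := 0
	s.AsyncDo(func() bool { data = 100; return true }, func() { fmt.Println(data) })
	// the service goroutine's loop: run queued callbacks one at a time
	if cb := <-s.done; cb != nil {
		cb()
	}
}
```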
Chapter 7: Configuring Service Discovery
--------------------------------

By default, origin determines which Services each node has by reading every node's configuration. The engine also supports dynamic service discovery: the built-in DiscoveryMaster service acts as the center, and DiscoveryClient fetches from DiscoveryMaster the information about all nodes and services in the origin network. See those two service implementations for details. To use it, add the following to the cluster configuration:
```
{
    "MasterDiscoveryNode": [{
        ...
        "ListenAddr": "127.0.0.1:8801",
        "MaxRpcParamLen": 409600
    }],

    "NodeList": [{
        "NodeId": 1,
        "ListenAddr": "127.0.0.1:8801",
        ...
    }]
}
```
Two new fields appear above: MasterDiscoveryNode and DiscoveryService.

MasterDiscoveryNode configures the node with Id 1 as a service discovery Master listening on 127.0.0.1:8801. Node 2 is also a discovery Master; the difference is its extra "NeighborService":["HttpGateService"] setting. When "NeighborService" lists concrete services, the node is a neighbor Master node: the running node filters the HttpGateService services from that Master but does not sync its own public services upward, so the relationship with a neighbor node is one-way.

NeighborService is useful for discovering services across networks when there are multiple Master-centered networks.
DiscoveryService filters for the TestService8 service in the origin network; note that if DiscoveryService is not configured, filtering does not take effect.
Chapter 8: Using HttpService
-----------------------

HttpService is the engine's built-in HTTP service, covering the common GET and POST methods and URL routing.

simple_http/TestHttpService.go:
```
package simple_http

...

func (slf *TestHttpService) OnInit() error {
    //fetch the built-in httpservice
    httpservice := node.GetService("HttpService").(*sysservice.HttpService)

    //create and set the router object
    httpRouter := sysservice.NewHttpHttpRouter()
    httpservice.SetHttpRouter(httpRouter,slf.GetEventHandler())

    //GET method, request url: http://127.0.0.1:9402/get/query?nickname=boyce
    //add a header entry with key uid and value 1000; testing with postman then returns:
    ...

    //GET a resource from a directory: http://127.0.0.1:port/img/head/a.jpg
    httpRouter.SetServeFile(sysservice.METHOD_GET,"/img/head/","d:/img")

    //if "ManualStart": true is configured, start listening with:
    //httpservice.StartListen()
    return nil
}

func (slf *TestHttpService) HttpPost(session *sysservice.HttpSession){
    ...
}
```

Note: add import _ "orginserver/simple_service" to main.go and add the service to ServiceList in config/cluster/cluster.json.
Chapter 9: Using TcpService
--------------------------

TcpService is the engine's built-in TCP service. Custom message-format processors are supported by implementing the network.Processor interface; the most commonly used one, a protobuf processor, is already built in.

simple_tcp/TestTcpService.go:
```
package simple_tcp

...

func (slf *TestTcpService) OnRequest (clientid uint64,msg proto.Message){
    ...
}
```
Chapter 10: Other Built-in Modules
------------------------

* sysservice/wsservice.go: supports the WebSocket protocol; usage is similar to TcpService
* sysmodule/DBModule.go: MySQL database operations
* sysmodule/RedisModule.go: Redis operations
* util: common utilities such as uuid, hash, md5, and goroutine wrappers
* https://github.com/duanhf2012/originservice: further extension services live in that project; it currently includes a firebase push wrapper.
Notes:
-----

**If you find it useful, please star. Thanks!**

**You are welcome to join the origin server development QQ group: 168306674; I will answer any questions promptly**

[The server is maintained personally; if this project helps you, you can click here to donate. Thanks!](http://www.cppblog.com/images/cppblog_com/API/21416/r_pay.jpg "Thanks!")

Special thanks to the following sponsors:

```
咕咕兽
```
The remaining hunks of this compare view touch the engine's cluster package; they are shown below in their updated form, with unchanged regions elided as `// ...`.

```go
type NodeInfo struct {
	// ...
	Private          bool
	ListenAddr       string
	MaxRpcParamLen   uint32   // maximum RPC parameter length
	CompressBytesLen int      // payload size above which compression applies
	ServiceList       []string // ordered list of all services
	PublicServiceList []string // publicly exposed service list
	DiscoveryService  []string // services to filter in discovery; no filtering if unset
	NeighborService   []string
	// ...
}

type Cluster struct {
	// ...
	globalCfg        interface{}            // global configuration
	localServiceCfg  map[string]interface{} // map[serviceName] config data
	serviceDiscovery IServiceDiscovery      // service discovery interface

	locker         sync.RWMutex                // protects node/service relationships
	mapRpc         map[int]NodeRpcInfo         // nodeId
	mapIdNode      map[int]NodeInfo            // map[NodeId]NodeInfo
	mapServiceNode map[string]map[int]struct{} // map[serviceName]map[NodeId]

	rpcServer                      rpc.Server
	rpcEventLocker                 sync.RWMutex        // protects RPC event listeners
	mapServiceListenRpcEvent       map[string]struct{} // ServiceName
	mapServiceListenDiscoveryEvent map[string]struct{} // ServiceName
}
```
```go
func GetCluster() *Cluster {
	// ...
}

func (cls *Cluster) Start() {
	cls.rpcServer.Start(cls.localNodeInfo.ListenAddr, cls.localNodeInfo.MaxRpcParamLen, cls.localNodeInfo.CompressBytesLen)
}

func (cls *Cluster) DelNode(nodeId int, immediately bool) {
	// ...
	cls.locker.Lock()
	defer cls.locker.Unlock()

	nodeInfo, ok := cls.mapIdNode[nodeId]
	if ok == false {
		return
	}
	// ...
		rpc.client.Lock()
		// a client that is still connected is not closed actively; it is only marked Discard
		if rpc.client.IsConnected() {
			nodeInfo.status = Discard
			rpc.client.Unlock()
			log.SRelease("Discard node ", nodeInfo.NodeId, " ", nodeInfo.ListenAddr)
			return
		}
		rpc.client.Unlock()
		break
	// ...
	delete(cls.mapIdNode, nodeId)
	delete(cls.mapRpc, nodeId)
	if ok == true {
		rpc.client.Close(false)
	}
}

func (cls *Cluster) serviceDiscoverySetNodeInfo(nodeInfo *NodeInfo) {
	// ...
	if _, rpcInfoOK := cls.mapRpc[nodeInfo.NodeId]; rpcInfoOK == true {
		return
	}

	rpcInfo := NodeRpcInfo{}
	rpcInfo.nodeInfo = *nodeInfo
	rpcInfo.client = rpc.NewRClient(nodeInfo.NodeId, nodeInfo.ListenAddr, nodeInfo.MaxRpcParamLen, cls.localNodeInfo.CompressBytesLen, cls.triggerRpcEvent)
	cls.mapRpc[nodeInfo.NodeId] = rpcInfo
}

func (cls *Cluster) buildLocalRpc() {
	rpcInfo := NodeRpcInfo{}
	rpcInfo.nodeInfo = cls.localNodeInfo
	rpcInfo.client = rpc.NewLClient(rpcInfo.nodeInfo.NodeId)

	cls.mapRpc[cls.localNodeInfo.NodeId] = rpcInfo
}
```
```go
func (cls *Cluster) Init(localNodeId int, setupServiceFun SetupServiceFun) error {
	// ...
	//2. install the service discovery node
	cls.SetupServiceDiscovery(localNodeId, setupServiceFun)
	service.RegRpcEventFun = cls.RegRpcEvent
	service.UnRegRpcEventFun = cls.UnRegRpcEvent
	service.RegDiscoveryServiceEventFun = cls.RegDiscoveryEvent
	service.UnRegDiscoveryServiceEventFun = cls.UnReDiscoveryEvent

	err = cls.serviceDiscovery.InitDiscovery(localNodeId, cls.serviceDiscoveryDelNode, cls.serviceDiscoverySetNodeInfo)
	if err != nil {
		// ...
	}
	// ...
}

func (cls *Cluster) AddDynamicDiscoveryService(serviceName string, bPublicService bool) {
	addServiceList := append([]string{}, serviceName)
	cls.localNodeInfo.ServiceList = append(addServiceList, cls.localNodeInfo.ServiceList...)
	if bPublicService {
		cls.localNodeInfo.PublicServiceList = append(cls.localNodeInfo.PublicServiceList, serviceName)
	}
	// ...
}

func (cls *Cluster) SetupServiceDiscovery(localNodeId int, setupServiceFun SetupServiceFun) {
	// ...
	//2. for dynamic service discovery, install the local discovery services
	cls.serviceDiscovery = getDynamicDiscovery()
	cls.AddDynamicDiscoveryService(DynamicDiscoveryClientName, true)
	if localMaster == true {
		cls.AddDynamicDiscoveryService(DynamicDiscoveryMasterName, false)
	}
}
```
```go
func (cls *Cluster) triggerRpcEvent(bConnect bool, clientId uint32, nodeId int) {
	cls.locker.Lock()
	nodeInfo, ok := cls.mapRpc[nodeId]
	if ok == false || nodeInfo.client == nil || nodeInfo.client.GetClientId() != clientId {
		cls.locker.Unlock()
		return
	}
	cls.locker.Unlock()

	cls.rpcEventLocker.Lock()
	defer cls.rpcEventLocker.Unlock()
	for serviceName, _ := range cls.mapServiceListenRpcEvent {
		ser := service.GetService(serviceName)
		if ser == nil {
			// ...
		}
		// ...
		eventData.NodeId = nodeId
		ser.(service.IModule).NotifyEvent(&eventData)
	}
}

func (cls *Cluster) TriggerDiscoveryEvent(bDiscovery bool, nodeId int, serviceName []string) {
	cls.rpcEventLocker.Lock()
	defer cls.rpcEventLocker.Unlock()

	for sName, _ := range cls.mapServiceListenDiscoveryEvent {
		ser := service.GetService(sName)
		if ser == nil {
			log.SError("cannot find service name ", serviceName)
			continue
		}

		var eventData service.DiscoveryServiceEvent
		eventData.IsDiscovery = bDiscovery
		eventData.NodeId = nodeId
		eventData.ServiceName = serviceName
		ser.(service.IModule).NotifyEvent(&eventData)
	}
}
```
```go
func (cls *Cluster) GetLocalNodeInfo() *NodeInfo {
	// ...
}

func (cls *Cluster) RegDiscoveryEvent(serviceName string) {
	cls.rpcEventLocker.Lock()
	if cls.mapServiceListenDiscoveryEvent == nil {
		cls.mapServiceListenDiscoveryEvent = map[string]struct{}{}
	}
	cls.mapServiceListenDiscoveryEvent[serviceName] = struct{}{}
	cls.rpcEventLocker.Unlock()
}

func (cls *Cluster) UnReDiscoveryEvent(serviceName string) {
	cls.rpcEventLocker.Lock()
	delete(cls.mapServiceListenDiscoveryEvent, serviceName)
	cls.rpcEventLocker.Unlock()
}

func HasService(nodeId int, serviceName string) bool {
	cluster.locker.RLock()
	defer cluster.locker.RUnlock()
	// ...
	return false
}

func GetNodeByServiceName(serviceName string) map[int]struct{} {
	cluster.locker.RLock()
	defer cluster.locker.RUnlock()

	mapNode, ok := cluster.mapServiceNode[serviceName]
	if ok == false {
		return nil
	}

	mapNodeId := map[int]struct{}{}
	for nodeId, _ := range mapNode {
		mapNodeId[nodeId] = struct{}{}
	}

	return mapNodeId
}

func (cls *Cluster) GetGlobalCfg() interface{} {
	return cls.globalCfg
}

func (cls *Cluster) GetNodeInfo(nodeId int) (NodeInfo, bool) {
	cls.locker.RLock()
	defer cls.locker.RUnlock()

	nodeInfo, ok := cls.mapIdNode[nodeId]
	return nodeInfo, ok
}
```
```go
import (
	// ...
	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/rpc"
	"github.com/duanhf2012/origin/service"
	"time"

	"github.com/duanhf2012/origin/util/timer"
)

const DynamicDiscoveryMasterName = "DiscoveryMaster"

func (ds *DynamicDiscoveryMaster) removeNodeInfo(nodeId int32) {
	if _, ok := ds.mapNodeInfo[nodeId]; ok == false {
		return
	}

	for i := 0; i < len(ds.nodeInfo); i++ {
		if ds.nodeInfo[i].NodeId == nodeId {
			ds.nodeInfo = append(ds.nodeInfo[:i], ds.nodeInfo[i+1:]...)
			break
		}
	}

	delete(ds.mapNodeInfo, nodeId)
}

func (ds *DynamicDiscoveryMaster) OnInit() error {
	ds.mapNodeInfo = make(map[int32]struct{}, 20)
	ds.RegRpcListener(ds)
	// ...
}

func (ds *DynamicDiscoveryMaster) OnNodeDisconnect(nodeId int) {
	// ...
	ds.removeNodeInfo(int32(nodeId))

	var notifyDiscover rpc.SubscribeDiscoverNotify
	notifyDiscover.MasterNodeId = int32(cluster.GetLocalNodeInfo().NodeId)
	notifyDiscover.DelNodeId = int32(nodeId)
	// ...
}

func (dc *DynamicDiscoveryClient) RPC_SubServiceDiscover(req *rpc.SubscribeDiscoverNotify) error {
	// ...
	// remove nodes that are no longer needed
	for _, nodeId := range willDelNodeId {
		nodeInfo, _ := cluster.GetNodeInfo(int(nodeId))
		cluster.TriggerDiscoveryEvent(false, int(nodeId), nodeInfo.PublicServiceList)
		dc.removeMasterNode(req.MasterNodeId, int32(nodeId))
		if dc.findNodeId(nodeId) == false {
			dc.funDelService(int(nodeId), false)
		}
	}
	// ...
	for _, nodeInfo := range mapNodeInfo {
		dc.addMasterNode(req.MasterNodeId, nodeInfo.NodeId)
		dc.setNodeInfo(nodeInfo)

		if len(nodeInfo.PublicServiceList) == 0 {
			continue
		}

		cluster.TriggerDiscoveryEvent(true, int(nodeInfo.NodeId), nodeInfo.PublicServiceList)
	}

	return nil
}

func (dc *DynamicDiscoveryClient) OnNodeConnected(nodeId int) {
	dc.regServiceDiscover(nodeId)
}

func (dc *DynamicDiscoveryClient) regServiceDiscover(nodeId int) {
	nodeInfo := cluster.GetMasterDiscoveryNodeInfo(nodeId)
	if nodeInfo == nil {
		return
	}
	// ...
	err := dc.AsyncCallNode(nodeId, RegServiceDiscover, &req, func(res *rpc.Empty, err error) {
		if err != nil {
			log.SError("call ", RegServiceDiscover, " is fail :", err.Error())
			// retry registration after 3 seconds
			dc.AfterFunc(time.Second*3, func(timer *timer.Timer) {
				dc.regServiceDiscover(nodeId)
			})
			return
		}
	})
	// ...
}
```
@@ -5,7 +5,8 @@ import (
	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/rpc"
	jsoniter "github.com/json-iterator/go"
	"io/ioutil"
	"os"
	"path/filepath"
	"strings"
)

@@ -18,7 +19,7 @@ type NodeInfoList struct {

func (cls *Cluster) ReadClusterConfig(filepath string) (*NodeInfoList, error) {
	c := &NodeInfoList{}
	d, err := ioutil.ReadFile(filepath)
	d, err := os.ReadFile(filepath)
	if err != nil {
		return nil, err
	}
@@ -33,7 +34,7 @@ func (cls *Cluster) ReadClusterConfig(filepath string) (*NodeInfoList, error) {
func (cls *Cluster) readServiceConfig(filepath string) (interface{}, map[string]interface{}, map[int]map[string]interface{}, error) {
	c := map[string]interface{}{}
	// read the config file
	d, err := ioutil.ReadFile(filepath)
	d, err := os.ReadFile(filepath)
	if err != nil {
		return nil, nil, nil, err
	}
@@ -69,7 +70,7 @@ func (cls *Cluster) readLocalClusterConfig(nodeId int) ([]NodeInfo, []NodeInfo,
	var nodeInfoList []NodeInfo
	var masterDiscoverNodeList []NodeInfo
	clusterCfgPath := strings.TrimRight(configDir, "/") + "/cluster"
	fileInfoList, err := ioutil.ReadDir(clusterCfgPath)
	fileInfoList, err := os.ReadDir(clusterCfgPath)
	if err != nil {
		return nil, nil, fmt.Errorf("Read dir %s is fail :%+v", clusterCfgPath, err)
	}
@@ -111,49 +112,89 @@ func (cls *Cluster) readLocalClusterConfig(nodeId int) ([]NodeInfo, []NodeInfo,

func (cls *Cluster) readLocalService(localNodeId int) error {
	clusterCfgPath := strings.TrimRight(configDir, "/") + "/cluster"
	fileInfoList, err := ioutil.ReadDir(clusterCfgPath)
	fileInfoList, err := os.ReadDir(clusterCfgPath)
	if err != nil {
		return fmt.Errorf("Read dir %s is fail :%+v", clusterCfgPath, err)
	}

	var globalCfg interface{}
	publicService := map[string]interface{}{}
	nodeService := map[string]interface{}{}

	// read every file; only configs that match the expected format are parsed, so the directory may be split into arbitrary files
	for _, f := range fileInfoList {
		if f.IsDir() == false {
			filePath := strings.TrimRight(strings.TrimRight(clusterCfgPath, "/"), "\\") + "/" + f.Name()
		if f.IsDir() == true {
			continue
		}

		if filepath.Ext(f.Name()) != ".json" {
			continue
		}

		filePath := strings.TrimRight(strings.TrimRight(clusterCfgPath, "/"), "\\") + "/" + f.Name()
		currGlobalCfg, serviceConfig, mapNodeService, err := cls.readServiceConfig(filePath)
		if err != nil {
			continue
		if err != nil {
			continue
		}

		if currGlobalCfg != nil {
			// duplicate [Global] configuration is not allowed
			if globalCfg != nil {
				return fmt.Errorf("[Global] does not allow repeated configuration in %s.", f.Name())
			}
			globalCfg = currGlobalCfg
		}

		if currGlobalCfg != nil {
			cls.globalCfg = currGlobalCfg
		}

		for _, s := range cls.localNodeInfo.ServiceList {
			for {
				// fetch the public service config
				pubCfg, ok := serviceConfig[s]
				if ok == true {
					cls.localServiceCfg[s] = pubCfg
		// save the public config
		for _, s := range cls.localNodeInfo.ServiceList {
			for {
				// fetch the public service config
				pubCfg, ok := serviceConfig[s]
				if ok == true {
					if _, publicOk := publicService[s]; publicOk == true {
						return fmt.Errorf("public service [%s] does not allow repeated configuration in %s.", s, f.Name())
					}
					publicService[s] = pubCfg
				}

				// if the node also configures this service, it overrides the public one
				nodeService, ok := mapNodeService[localNodeId]
				if ok == false {
					break
				}
				sCfg, ok := nodeService[s]
				if ok == false {
					break
				}

				cls.localServiceCfg[s] = sCfg
				// fetch the service configured for this specific node
				nodeServiceCfg, ok := mapNodeService[localNodeId]
				if ok == false {
					break
				}
				nodeCfg, ok := nodeServiceCfg[s]
				if ok == false {
					break
				}

				if _, nodeOK := nodeService[s]; nodeOK == true {
					return fmt.Errorf("NodeService NodeId[%d] Service[%s] does not allow repeated configuration in %s.", cls.localNodeInfo.NodeId, s, f.Name())
				}
				nodeService[s] = nodeCfg
				break
			}
		}
	}

	// combine all configs
	for _, s := range cls.localNodeInfo.ServiceList {
		// look in NodeService first
		var serviceCfg interface{}
		var ok bool
		serviceCfg, ok = nodeService[s]
		if ok == true {
			cls.localServiceCfg[s] = serviceCfg
			continue
		}

		// fall back to PublicService if not found
		serviceCfg, ok = publicService[s]
		if ok == true {
			cls.localServiceCfg[s] = serviceCfg
		}
	}
	cls.globalCfg = globalCfg

	return nil
}
93
concurrent/concurrent.go
Normal file
@@ -0,0 +1,93 @@
package concurrent

import (
	"errors"
	"runtime"

	"github.com/duanhf2012/origin/log"
)

const defaultMaxTaskChannelNum = 1000000

type IConcurrent interface {
	OpenConcurrentByNumCPU(cpuMul float32)
	OpenConcurrent(minGoroutineNum int32, maxGoroutineNum int32, maxTaskChannelNum int)
	AsyncDoByQueue(queueId int64, fn func() bool, cb func(err error))
	AsyncDo(f func() bool, cb func(err error))
}

type Concurrent struct {
	dispatch

	tasks     chan task
	cbChannel chan func(error)
}

/*
cpuMul is the multiplier applied to the CPU count.
Recommendation: (1) CPU-bound work: use 1  (2) I/O-bound work: use 2 or higher.
*/
func (c *Concurrent) OpenConcurrentByNumCPU(cpuNumMul float32) {
	goroutineNum := int32(float32(runtime.NumCPU())*cpuNumMul + 1)
	c.OpenConcurrent(goroutineNum, goroutineNum, defaultMaxTaskChannelNum)
}

func (c *Concurrent) OpenConcurrent(minGoroutineNum int32, maxGoroutineNum int32, maxTaskChannelNum int) {
	c.tasks = make(chan task, maxTaskChannelNum)
	c.cbChannel = make(chan func(error), maxTaskChannelNum)

	// start the dispatcher
	c.dispatch.open(minGoroutineNum, maxGoroutineNum, c.tasks, c.cbChannel)
}

func (c *Concurrent) AsyncDo(f func() bool, cb func(err error)) {
	c.AsyncDoByQueue(0, f, cb)
}

func (c *Concurrent) AsyncDoByQueue(queueId int64, fn func() bool, cb func(err error)) {
	if cap(c.tasks) == 0 {
		panic("not open concurrent")
	}

	if fn == nil && cb == nil {
		log.SStack("fn and cb is nil")
		return
	}

	if fn == nil {
		c.pushAsyncDoCallbackEvent(cb)
		return
	}

	if queueId != 0 {
		queueId = queueId%maxTaskQueueSessionId + 1
	}

	select {
	case c.tasks <- task{queueId, fn, cb}:
	default:
		log.SError("tasks channel is full")
		if cb != nil {
			c.pushAsyncDoCallbackEvent(func(err error) {
				cb(errors.New("tasks channel is full"))
			})
		}
		return
	}
}

func (c *Concurrent) Close() {
	if cap(c.tasks) == 0 {
		return
	}

	log.SRelease("wait close concurrent")

	c.dispatch.close()

	log.SRelease("concurrent has successfully exited")
}

func (c *Concurrent) GetCallBackChannel() chan func(error) {
	return c.cbChannel
}
196
concurrent/dispatch.go
Normal file
@@ -0,0 +1,196 @@
package concurrent

import (
	"sync"
	"sync/atomic"
	"time"

	"fmt"
	"runtime"

	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/util/queue"
)

var idleTimeout = int64(2 * time.Second)
const maxTaskQueueSessionId = 10000

type dispatch struct {
	minConcurrentNum int32
	maxConcurrentNum int32

	queueIdChannel chan int64
	workerQueue    chan task
	tasks          chan task
	idle           bool
	workerNum      int32
	cbChannel      chan func(error)

	mapTaskQueueSession map[int64]*queue.Deque[task]

	waitWorker   sync.WaitGroup
	waitDispatch sync.WaitGroup
}

func (d *dispatch) open(minGoroutineNum int32, maxGoroutineNum int32, tasks chan task, cbChannel chan func(error)) {
	d.minConcurrentNum = minGoroutineNum
	d.maxConcurrentNum = maxGoroutineNum
	d.tasks = tasks
	d.mapTaskQueueSession = make(map[int64]*queue.Deque[task], maxTaskQueueSessionId)
	d.workerQueue = make(chan task)
	d.cbChannel = cbChannel
	d.queueIdChannel = make(chan int64, cap(tasks))

	d.waitDispatch.Add(1)
	go d.run()
}

func (d *dispatch) run() {
	defer d.waitDispatch.Done()
	timeout := time.NewTimer(time.Duration(atomic.LoadInt64(&idleTimeout)))

	for {
		select {
		case queueId := <-d.queueIdChannel:
			d.processqueueEvent(queueId)
		default:
			select {
			case t, ok := <-d.tasks:
				if ok == false {
					return
				}
				d.processTask(&t)
			case queueId := <-d.queueIdChannel:
				d.processqueueEvent(queueId)
			case <-timeout.C:
				d.processTimer()
				if atomic.LoadInt32(&d.minConcurrentNum) == -1 && len(d.tasks) == 0 {
					atomic.StoreInt64(&idleTimeout, int64(time.Millisecond*10))
				}
				timeout.Reset(time.Duration(atomic.LoadInt64(&idleTimeout)))
			}
		}

		if atomic.LoadInt32(&d.minConcurrentNum) == -1 && d.workerNum == 0 {
			d.waitWorker.Wait()
			d.cbChannel <- nil
			return
		}
	}
}

func (d *dispatch) processTimer() {
	if d.idle == true && d.workerNum > atomic.LoadInt32(&d.minConcurrentNum) {
		d.processIdle()
	}

	d.idle = true
}

func (d *dispatch) processqueueEvent(queueId int64) {
	d.idle = false

	queueSession := d.mapTaskQueueSession[queueId]
	if queueSession == nil {
		return
	}

	queueSession.PopFront()
	if queueSession.Len() == 0 {
		return
	}

	t := queueSession.Front()
	d.executeTask(&t)
}

func (d *dispatch) executeTask(t *task) {
	select {
	case d.workerQueue <- *t:
		return
	default:
		if d.workerNum < d.maxConcurrentNum {
			var work worker
			work.start(&d.waitWorker, t, d)
			return
		}
	}

	d.workerQueue <- *t
}

func (d *dispatch) processTask(t *task) {
	d.idle = false

	// handle a task that belongs to a queue
	if t.queueId != 0 {
		queueSession := d.mapTaskQueueSession[t.queueId]
		if queueSession == nil {
			queueSession = &queue.Deque[task]{}
			d.mapTaskQueueSession[t.queueId] = queueSession
		}

		// no task from this queue is currently running, execute directly
		if queueSession.Len() == 0 {
			d.executeTask(t)
		}

		queueSession.PushBack(*t)
		return
	}

	// ordinary task
	d.executeTask(t)
}

func (d *dispatch) processIdle() {
	select {
	case d.workerQueue <- task{}:
		d.workerNum--
	default:
	}
}

func (d *dispatch) pushQueueTaskFinishEvent(queueId int64) {
	d.queueIdChannel <- queueId
}

func (c *dispatch) pushAsyncDoCallbackEvent(cb func(err error)) {
	if cb == nil {
		// no callback requested
		return
	}

	c.cbChannel <- cb
}

func (d *dispatch) close() {
	atomic.StoreInt32(&d.minConcurrentNum, -1)

breakFor:
	for {
		select {
		case cb := <-d.cbChannel:
			if cb == nil {
				break breakFor
			}
			cb(nil)
		}
	}

	d.waitDispatch.Wait()
}

func (d *dispatch) DoCallback(cb func(err error)) {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 4096)
			l := runtime.Stack(buf, false)
			errString := fmt.Sprint(r)

			log.SError("core dump info[", errString, "]\n", string(buf[:l]))
		}
	}()

	cb(nil)
}
79
concurrent/worker.go
Normal file
@@ -0,0 +1,79 @@
package concurrent

import (
	"sync"

	"errors"
	"fmt"
	"runtime"

	"github.com/duanhf2012/origin/log"
)

type task struct {
	queueId int64
	fn      func() bool
	cb      func(err error)
}

type worker struct {
	*dispatch
}

func (t *task) isExistTask() bool {
	return t.fn == nil
}

func (w *worker) start(waitGroup *sync.WaitGroup, t *task, d *dispatch) {
	w.dispatch = d
	d.workerNum += 1
	waitGroup.Add(1)
	go w.run(waitGroup, *t)
}

func (w *worker) run(waitGroup *sync.WaitGroup, t task) {
	defer waitGroup.Done()

	w.exec(&t)
	for {
		select {
		case tw := <-w.workerQueue:
			if tw.isExistTask() {
				// exit goroutine
				log.SRelease("worker goroutine exit")
				return
			}
			w.exec(&tw)
		}
	}
}

func (w *worker) exec(t *task) {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 4096)
			l := runtime.Stack(buf, false)
			errString := fmt.Sprint(r)

			cb := t.cb
			t.cb = func(err error) {
				cb(errors.New(errString))
			}

			w.endCallFun(true, t)
			log.SError("core dump info[", errString, "]\n", string(buf[:l]))
		}
	}()

	w.endCallFun(t.fn(), t)
}

func (w *worker) endCallFun(isDocallBack bool, t *task) {
	if isDocallBack {
		w.pushAsyncDoCallbackEvent(t.cb)
	}

	if t.queueId != 0 {
		w.pushQueueTaskFinishEvent(t.queueId)
	}
}
@@ -7,7 +7,6 @@ import (
	"sync"
)

// event receiver
type EventCallBack func(event IEvent)

@@ -229,7 +228,6 @@ func (processor *EventProcessor) EventHandler(ev IEvent) {
	}
}

func (processor *EventProcessor) castEvent(event IEvent) {
	if processor.mapListenerEvent == nil {
		log.SError("mapListenerEvent not init!")
@@ -246,3 +244,4 @@ func (processor *EventProcessor) castEvent(event IEvent){
		proc.PushEvent(event)
	}
}
24
event/eventpool.go
Normal file
@@ -0,0 +1,24 @@
package event

import "github.com/duanhf2012/origin/util/sync"

// eventPool is a memory pool that caches Event objects
const defaultMaxEventChannelNum = 2000000

var eventPool = sync.NewPoolEx(make(chan sync.IPoolData, defaultMaxEventChannelNum), func() sync.IPoolData {
	return &Event{}
})

func NewEvent() *Event {
	return eventPool.Get().(*Event)
}

func DeleteEvent(event IEvent) {
	eventPool.Put(event.(sync.IPoolData))
}

func SetEventPoolSize(eventPoolSize int) {
	eventPool = sync.NewPoolEx(make(chan sync.IPoolData, eventPoolSize), func() sync.IPoolData {
		return &Event{}
	})
}
@@ -9,9 +9,14 @@ const (

	Sys_Event_Tcp EventType = -3
	Sys_Event_Http_Event EventType = -4
	Sys_Event_WebSocket EventType = -5
	Sys_Event_Rpc_Event EventType = -6

	Sys_Event_WebSocket EventType = -5
	Sys_Event_Node_Event EventType = -6
	Sys_Event_DiscoverService EventType = -7
	Sys_Event_DiscardGoroutine EventType = -8
	Sys_Event_QueueTaskFinish EventType = -9

	Sys_Event_User_Define EventType = 1
)
6
go.mod
@@ -1,6 +1,6 @@
module github.com/duanhf2012/origin

go 1.18
go 1.19

require (
	github.com/go-sql-driver/mysql v1.6.0
@@ -23,8 +23,8 @@ require (
	github.com/xdg-go/scram v1.0.2 // indirect
	github.com/xdg-go/stringprep v1.0.2 // indirect
	github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d // indirect
	golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f // indirect
	golang.org/x/crypto v0.1.0 // indirect
	golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 // indirect
	golang.org/x/text v0.3.6 // indirect
	golang.org/x/text v0.4.0 // indirect
	gopkg.in/yaml.v2 v2.4.0 // indirect
)
7
go.sum
@@ -58,8 +58,9 @@ go.mongodb.org/mongo-driver v1.9.1/go.mod h1:0sQWfOeY63QTntERDJJ/0SuKK0T1uVSgKCu
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f h1:aZp0e2vLN4MToVqnjNEYEtrEA8RH8U8FN1CU7JgqsPU=
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@@ -79,8 +80,8 @@ golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXR
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190531172133-b3315ee88b7d/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -2,6 +2,7 @@ package network

import (
	"crypto/tls"
	"errors"
	"github.com/duanhf2012/origin/log"
	"net/http"
	"time"
@@ -37,6 +38,10 @@ func (slf *HttpServer) Start() {
}

func (slf *HttpServer) startListen() error {
	if slf.httpServer != nil {
		return errors.New("Duplicate start not allowed")
	}

	var tlsCaList []tls.Certificate
	var tlsConfig *tls.Config
	for _, caFile := range slf.caFileList {
@@ -68,6 +68,11 @@ func (pbProcessor *PBProcessor) MsgRoute(clientId uint64, msg interface{}) error
// must goroutine safe
func (pbProcessor *PBProcessor) Unmarshal(clientId uint64, data []byte) (interface{}, error) {
	defer pbProcessor.ReleaseByteSlice(data)
	return pbProcessor.UnmarshalWithOutRelease(clientId, data)
}

// unmarshal but not release data
func (pbProcessor *PBProcessor) UnmarshalWithOutRelease(clientId uint64, data []byte) (interface{}, error) {
	var msgType uint16
	if pbProcessor.LittleEndian == true {
		msgType = binary.LittleEndian.Uint16(data[:2])

@@ -78,7 +78,6 @@ func (pbRawProcessor *PBRawProcessor) SetRawMsgHandler(handle RawMessageHandler)
func (pbRawProcessor *PBRawProcessor) MakeRawMsg(msgType uint16, msg []byte, pbRawPackInfo *PBRawPackInfo) {
	pbRawPackInfo.typ = msgType
	pbRawPackInfo.rawMsg = msg
	//return &PBRawPackInfo{typ:msgType,rawMsg:msg}
}

func (pbRawProcessor *PBRawProcessor) UnknownMsgRoute(clientId uint64, msg interface{}) {
@@ -17,17 +17,11 @@ type IProcessor interface {
}

type IRawProcessor interface {
	SetByteOrder(littleEndian bool)
	MsgRoute(clientId uint64, msg interface{}) error
	Unmarshal(clientId uint64, data []byte) (interface{}, error)
	Marshal(clientId uint64, msg interface{}) ([]byte, error)
	IProcessor

	SetByteOrder(littleEndian bool)
	SetRawMsgHandler(handle RawMessageHandler)
	MakeRawMsg(msgType uint16, msg []byte, pbRawPackInfo *PBRawPackInfo)
	UnknownMsgRoute(clientId uint64, msg interface{})
	ConnectedRoute(clientId uint64)
	DisConnectedRoute(clientId uint64)

	SetUnknownMsgHandler(unknownMessageHandler UnknownRawMessageHandler)
	SetConnectedHandler(connectHandler RawConnectHandler)
	SetDisConnectedHandler(disconnectHandler RawConnectHandler)
@@ -16,7 +16,7 @@ type memAreaPool struct {
	pool []sync.Pool
}

var memAreaPoolList = [3]*memAreaPool{&memAreaPool{minAreaValue: 1, maxAreaValue: 4096, growthValue: 512}, &memAreaPool{minAreaValue: 4097, maxAreaValue: 40960, growthValue: 4096}, &memAreaPool{minAreaValue: 40961, maxAreaValue: 417792, growthValue: 16384}}
var memAreaPoolList = [4]*memAreaPool{&memAreaPool{minAreaValue: 1, maxAreaValue: 4096, growthValue: 512}, &memAreaPool{minAreaValue: 4097, maxAreaValue: 40960, growthValue: 4096}, &memAreaPool{minAreaValue: 40961, maxAreaValue: 417792, growthValue: 16384}, &memAreaPool{minAreaValue: 417793, maxAreaValue: 1925120, growthValue: 65536}}

func init() {
	for i := 0; i < len(memAreaPoolList); i++ {
@@ -34,7 +34,6 @@ func (areaPool *memAreaPool) makePool() {
	for i := 0; i < poolLen; i++ {
		memSize := (areaPool.minAreaValue - 1) + (i+1)*areaPool.growthValue
		areaPool.pool[i] = sync.Pool{New: func() interface{} {
			return make([]byte, memSize)
		}}
	}
@@ -13,6 +13,8 @@ type TCPClient struct {
	ConnNum         int
	ConnectInterval time.Duration
	PendingWriteNum int
	ReadDeadline    time.Duration
	WriteDeadline   time.Duration
	AutoReconnect   bool
	NewAgent        func(*TCPConn) Agent
	cons            ConnSet
@@ -20,11 +22,7 @@ type TCPClient struct {
	closeFlag bool

	// msg parser
	LenMsgLen    int
	MinMsgLen    uint32
	MaxMsgLen    uint32
	LittleEndian bool
	msgParser    *MsgParser
	MsgParser
}

func (client *TCPClient) Start() {
@@ -52,6 +50,14 @@ func (client *TCPClient) init() {
		client.PendingWriteNum = 1000
		log.SRelease("invalid PendingWriteNum, reset to ", client.PendingWriteNum)
	}
	if client.ReadDeadline == 0 {
		client.ReadDeadline = 15 * time.Second
		log.SRelease("invalid ReadDeadline, reset to ", int64(client.ReadDeadline.Seconds()), "s")
	}
	if client.WriteDeadline == 0 {
		client.WriteDeadline = 15 * time.Second
		log.SRelease("invalid WriteDeadline, reset to ", int64(client.WriteDeadline.Seconds()), "s")
	}
	if client.NewAgent == nil {
		log.SFatal("NewAgent must not be nil")
	}
@@ -59,14 +65,31 @@ func (client *TCPClient) init() {
		log.SFatal("client is running")
	}

	if client.MinMsgLen == 0 {
		client.MinMsgLen = Default_MinMsgLen
	}
	if client.MaxMsgLen == 0 {
		client.MaxMsgLen = Default_MaxMsgLen
	}
	if client.LenMsgLen == 0 {
		client.LenMsgLen = Default_LenMsgLen
	}
	maxMsgLen := client.MsgParser.getMaxMsgLen(client.LenMsgLen)
	if client.MaxMsgLen > maxMsgLen {
		client.MaxMsgLen = maxMsgLen
		log.SRelease("invalid MaxMsgLen, reset to ", maxMsgLen)
	}

	client.cons = make(ConnSet)
	client.closeFlag = false
	client.MsgParser.init()
}

	// msg parser
	msgParser := NewMsgParser()
	msgParser.SetMsgLen(client.LenMsgLen, client.MinMsgLen, client.MaxMsgLen)
	msgParser.SetByteOrder(client.LittleEndian)
	client.msgParser = msgParser
func (client *TCPClient) GetCloseFlag() bool {
	client.Lock()
	defer client.Unlock()

	return client.closeFlag
}

func (client *TCPClient) dial() net.Conn {
@@ -93,7 +116,7 @@ reconnect:
	if conn == nil {
		return
	}

	client.Lock()
	if client.closeFlag {
		client.Unlock()
@@ -103,7 +126,7 @@ reconnect:
	client.cons[conn] = struct{}{}
	client.Unlock()

	tcpConn := newTCPConn(conn, client.PendingWriteNum, client.msgParser)
	tcpConn := newTCPConn(conn, client.PendingWriteNum, &client.MsgParser, client.WriteDeadline)
	agent := client.NewAgent(tcpConn)
	agent.Run()
@@ -1,11 +1,12 @@
package network

import (
	"errors"
	"github.com/duanhf2012/origin/log"
	"net"
	"sync"
	"sync/atomic"
	"time"
	"errors"
)

type ConnSet map[net.Conn]struct{}
@@ -14,7 +15,7 @@ type TCPConn struct {
	sync.Mutex
	conn      net.Conn
	writeChan chan []byte
	closeFlag bool
	closeFlag int32
	msgParser *MsgParser
}

@@ -27,7 +28,7 @@ func freeChannel(conn *TCPConn){
	}
}

func newTCPConn(conn net.Conn, pendingWriteNum int, msgParser *MsgParser) *TCPConn {
func newTCPConn(conn net.Conn, pendingWriteNum int, msgParser *MsgParser, writeDeadline time.Duration) *TCPConn {
	tcpConn := new(TCPConn)
	tcpConn.conn = conn
	tcpConn.writeChan = make(chan []byte, pendingWriteNum)
@@ -37,6 +38,8 @@ func newTCPConn(conn net.Conn, pendingWriteNum int, msgParser *MsgParser) *TCPCo
			if b == nil {
				break
			}

			conn.SetWriteDeadline(time.Now().Add(writeDeadline))
			_, err := conn.Write(b)
			tcpConn.msgParser.ReleaseByteSlice(b)

@@ -47,7 +50,7 @@ func newTCPConn(conn net.Conn, pendingWriteNum int, msgParser *MsgParser) *TCPCo
		conn.Close()
		tcpConn.Lock()
		freeChannel(tcpConn)
		tcpConn.closeFlag = true
		atomic.StoreInt32(&tcpConn.closeFlag, 1)
		tcpConn.Unlock()
	}()

@@ -58,9 +61,9 @@ func (tcpConn *TCPConn) doDestroy() {
	tcpConn.conn.(*net.TCPConn).SetLinger(0)
	tcpConn.conn.Close()

	if !tcpConn.closeFlag {
	if atomic.LoadInt32(&tcpConn.closeFlag) == 0 {
		close(tcpConn.writeChan)
		tcpConn.closeFlag = true
		atomic.StoreInt32(&tcpConn.closeFlag, 1)
	}
}

@@ -74,12 +77,12 @@ func (tcpConn *TCPConn) Destroy() {
func (tcpConn *TCPConn) Close() {
	tcpConn.Lock()
	defer tcpConn.Unlock()
	if tcpConn.closeFlag {
	if atomic.LoadInt32(&tcpConn.closeFlag) == 1 {
		return
	}

	tcpConn.doWrite(nil)
	tcpConn.closeFlag = true
	atomic.StoreInt32(&tcpConn.closeFlag, 1)
}

func (tcpConn *TCPConn) GetRemoteIp() string {
@@ -102,7 +105,7 @@ func (tcpConn *TCPConn) doWrite(b []byte) error{
func (tcpConn *TCPConn) Write(b []byte) error {
	tcpConn.Lock()
	defer tcpConn.Unlock()
	if tcpConn.closeFlag || b == nil {
	if atomic.LoadInt32(&tcpConn.closeFlag) == 1 || b == nil {
		tcpConn.ReleaseReadMsg(b)
		return errors.New("conn is close")
	}
@@ -131,14 +134,14 @@ func (tcpConn *TCPConn) ReleaseReadMsg(byteBuff []byte){
}

func (tcpConn *TCPConn) WriteMsg(args ...[]byte) error {
	if tcpConn.closeFlag == true {
	if atomic.LoadInt32(&tcpConn.closeFlag) == 1 {
		return errors.New("conn is close")
	}
	return tcpConn.msgParser.Write(tcpConn, args...)
}

func (tcpConn *TCPConn) WriteRawMsg(args []byte) error {
	if tcpConn.closeFlag == true {
	if atomic.LoadInt32(&tcpConn.closeFlag) == 1 {
		return errors.New("conn is close")
	}

@@ -147,7 +150,7 @@ func (tcpConn *TCPConn) WriteRawMsg(args []byte) error {

func (tcpConn *TCPConn) IsConnected() bool {
	return tcpConn.closeFlag == false
	return atomic.LoadInt32(&tcpConn.closeFlag) == 0
}

func (tcpConn *TCPConn) SetReadDeadline(d time.Duration) {
@@ -11,62 +11,36 @@ import (
|
||||
// | len | data |
|
||||
// --------------
|
||||
type MsgParser struct {
|
||||
lenMsgLen int
|
||||
minMsgLen uint32
|
||||
maxMsgLen uint32
|
||||
littleEndian bool
|
||||
LenMsgLen int
|
||||
MinMsgLen uint32
|
||||
MaxMsgLen uint32
|
||||
LittleEndian bool
|
||||
|
||||
INetMempool
|
||||
}
|
||||
|
||||
func NewMsgParser() *MsgParser {
|
||||
p := new(MsgParser)
|
||||
p.lenMsgLen = 2
|
||||
p.minMsgLen = 1
|
||||
p.maxMsgLen = 4096
|
||||
p.littleEndian = false
|
||||
p.INetMempool = NewMemAreaPool()
|
||||
return p
|
||||
}
|
||||
|
||||
// It's dangerous to call the method on reading or writing
|
||||
func (p *MsgParser) SetMsgLen(lenMsgLen int, minMsgLen uint32, maxMsgLen uint32) {
|
||||
if lenMsgLen == 1 || lenMsgLen == 2 || lenMsgLen == 4 {
|
||||
p.lenMsgLen = lenMsgLen
|
||||
}
|
||||
if minMsgLen != 0 {
|
||||
p.minMsgLen = minMsgLen
|
||||
}
|
||||
if maxMsgLen != 0 {
|
||||
p.maxMsgLen = maxMsgLen
|
||||
}
|
||||
|
||||
var max uint32
|
||||
switch p.lenMsgLen {
|
||||
func (p *MsgParser) getMaxMsgLen(lenMsgLen int) uint32 {
|
||||
switch p.LenMsgLen {
|
||||
case 1:
|
||||
max = math.MaxUint8
|
||||
return math.MaxUint8
|
||||
case 2:
|
||||
max = math.MaxUint16
|
||||
return math.MaxUint16
|
||||
case 4:
|
||||
max = math.MaxUint32
|
||||
}
|
||||
if p.minMsgLen > max {
|
||||
p.minMsgLen = max
|
||||
}
|
||||
if p.maxMsgLen > max {
|
||||
p.maxMsgLen = max
|
||||
return math.MaxUint32
|
||||
default:
|
||||
panic("LenMsgLen value must be 1 or 2 or 4")
|
||||
}
|
||||
}
|
||||
|
||||
// It's dangerous to call the method on reading or writing
|
||||
func (p *MsgParser) SetByteOrder(littleEndian bool) {
|
||||
p.littleEndian = littleEndian
|
||||
func (p *MsgParser) init(){
|
||||
p.INetMempool = NewMemAreaPool()
|
||||
}
|
 // goroutine safe
 func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
 	var b [4]byte
-	bufMsgLen := b[:p.lenMsgLen]
+	bufMsgLen := b[:p.LenMsgLen]

 	// read len
 	if _, err := io.ReadFull(conn, bufMsgLen); err != nil {
@@ -75,17 +49,17 @@ func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {

 	// parse len
 	var msgLen uint32
-	switch p.lenMsgLen {
+	switch p.LenMsgLen {
 	case 1:
 		msgLen = uint32(bufMsgLen[0])
 	case 2:
-		if p.littleEndian {
+		if p.LittleEndian {
 			msgLen = uint32(binary.LittleEndian.Uint16(bufMsgLen))
 		} else {
 			msgLen = uint32(binary.BigEndian.Uint16(bufMsgLen))
 		}
 	case 4:
-		if p.littleEndian {
+		if p.LittleEndian {
 			msgLen = binary.LittleEndian.Uint32(bufMsgLen)
 		} else {
 			msgLen = binary.BigEndian.Uint32(bufMsgLen)
@@ -93,9 +67,9 @@ func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
 	}

 	// check len
-	if msgLen > p.maxMsgLen {
+	if msgLen > p.MaxMsgLen {
 		return nil, errors.New("message too long")
-	} else if msgLen < p.minMsgLen {
+	} else if msgLen < p.MinMsgLen {
 		return nil, errors.New("message too short")
 	}

@@ -118,26 +92,26 @@ func (p *MsgParser) Write(conn *TCPConn, args ...[]byte) error {
 	}

 	// check len
-	if msgLen > p.maxMsgLen {
+	if msgLen > p.MaxMsgLen {
 		return errors.New("message too long")
-	} else if msgLen < p.minMsgLen {
+	} else if msgLen < p.MinMsgLen {
 		return errors.New("message too short")
 	}

 	//msg := make([]byte, uint32(p.lenMsgLen)+msgLen)
-	msg := p.MakeByteSlice(p.lenMsgLen + int(msgLen))
+	msg := p.MakeByteSlice(p.LenMsgLen + int(msgLen))

 	// write len
-	switch p.lenMsgLen {
+	switch p.LenMsgLen {
 	case 1:
 		msg[0] = byte(msgLen)
 	case 2:
-		if p.littleEndian {
+		if p.LittleEndian {
 			binary.LittleEndian.PutUint16(msg, uint16(msgLen))
 		} else {
 			binary.BigEndian.PutUint16(msg, uint16(msgLen))
 		}
 	case 4:
-		if p.littleEndian {
+		if p.LittleEndian {
 			binary.LittleEndian.PutUint32(msg, msgLen)
 		} else {
 			binary.BigEndian.PutUint32(msg, msgLen)
@@ -145,7 +119,7 @@ func (p *MsgParser) Write(conn *TCPConn, args ...[]byte) error {
 	}

 	// write data
-	l := p.lenMsgLen
+	l := p.LenMsgLen
 	for i := 0; i < len(args); i++ {
 		copy(msg[l:], args[i])
 		l += len(args[i])
@@ -7,10 +7,24 @@ import (
 	"time"
 )

+const (
+	Default_ReadDeadline    = time.Second * 30 // default read timeout: 30s
+	Default_WriteDeadline   = time.Second * 30 // default write timeout: 30s
+	Default_MaxConnNum      = 1000000          // default maximum connection count
+	Default_PendingWriteNum = 100000           // per-connection write channel capacity
+	Default_LittleEndian    = false            // default byte order (big-endian)
+	Default_MinMsgLen       = 2                // minimum message length: 2 bytes
+	Default_LenMsgLen       = 2                // length-header field occupies 2 bytes
+	Default_MaxMsgLen       = 65535            // maximum message length
+)
+
 type TCPServer struct {
 	Addr            string
 	MaxConnNum      int
 	PendingWriteNum int
 	ReadDeadline    time.Duration
 	WriteDeadline   time.Duration

 	NewAgent func(*TCPConn) Agent
 	ln       net.Listener
 	conns    ConnSet
@@ -18,13 +32,7 @@ type TCPServer struct {
 	wgLn    sync.WaitGroup
 	wgConns sync.WaitGroup

-	// msg parser
-	LenMsgLen    int
-	MinMsgLen    uint32
-	MaxMsgLen    uint32
-	LittleEndian bool
-	msgParser    *MsgParser
-	netMemPool   INetMempool
+	MsgParser
 }

 func (server *TCPServer) Start() {
@@ -39,37 +47,61 @@ func (server *TCPServer) init() {
 	}

 	if server.MaxConnNum <= 0 {
-		server.MaxConnNum = 100
+		server.MaxConnNum = Default_MaxConnNum
+		log.SRelease("invalid MaxConnNum, reset to ", server.MaxConnNum)
 	}

 	if server.PendingWriteNum <= 0 {
-		server.PendingWriteNum = 100
+		server.PendingWriteNum = Default_PendingWriteNum
+		log.SRelease("invalid PendingWriteNum, reset to ", server.PendingWriteNum)
 	}

+	if server.LenMsgLen <= 0 {
+		server.LenMsgLen = Default_LenMsgLen
+		log.SRelease("invalid LenMsgLen, reset to ", server.LenMsgLen)
+	}
+
+	if server.MaxMsgLen <= 0 {
+		server.MaxMsgLen = Default_MaxMsgLen
+		log.SRelease("invalid MaxMsgLen, reset to ", server.MaxMsgLen)
+	}
+
+	maxMsgLen := server.MsgParser.getMaxMsgLen(server.LenMsgLen)
+	if server.MaxMsgLen > maxMsgLen {
+		server.MaxMsgLen = maxMsgLen
+		log.SRelease("invalid MaxMsgLen, reset to ", maxMsgLen)
+	}
+
+	if server.MinMsgLen <= 0 {
+		server.MinMsgLen = Default_MinMsgLen
+		log.SRelease("invalid MinMsgLen, reset to ", server.MinMsgLen)
+	}
+
+	if server.WriteDeadline == 0 {
+		server.WriteDeadline = Default_WriteDeadline
+		log.SRelease("invalid WriteDeadline, reset to ", server.WriteDeadline.Seconds(), "s")
+	}
+
+	if server.ReadDeadline == 0 {
+		server.ReadDeadline = Default_ReadDeadline
+		log.SRelease("invalid ReadDeadline, reset to ", server.ReadDeadline.Seconds(), "s")
+	}
+
 	if server.NewAgent == nil {
 		log.SFatal("NewAgent must not be nil")
 	}

 	server.ln = ln
 	server.conns = make(ConnSet)

-	// msg parser
-	msgParser := NewMsgParser()
-	if msgParser.INetMempool == nil {
-		msgParser.INetMempool = NewMemAreaPool()
-	}
-
-	msgParser.SetMsgLen(server.LenMsgLen, server.MinMsgLen, server.MaxMsgLen)
-	msgParser.SetByteOrder(server.LittleEndian)
-	server.msgParser = msgParser
+	server.MsgParser.init()
 }

 func (server *TCPServer) SetNetMempool(mempool INetMempool) {
-	server.msgParser.INetMempool = mempool
+	server.INetMempool = mempool
 }

 func (server *TCPServer) GetNetMempool() INetMempool {
-	return server.msgParser.INetMempool
+	return server.INetMempool
 }

 func (server *TCPServer) run() {
@@ -95,6 +127,7 @@ func (server *TCPServer) run() {
 			}
 			return
 		}

+		conn.(*net.TCPConn).SetNoDelay(true)
 		tempDelay = 0

@@ -105,16 +138,16 @@ func (server *TCPServer) run() {
 			log.SWarning("too many connections")
 			continue
 		}

 		server.conns[conn] = struct{}{}
 		server.mutexConns.Unlock()

 		server.wgConns.Add(1)

-		tcpConn := newTCPConn(conn, server.PendingWriteNum, server.msgParser)
+		tcpConn := newTCPConn(conn, server.PendingWriteNum, &server.MsgParser, server.WriteDeadline)
 		agent := server.NewAgent(tcpConn)

 		go func() {
 			agent.Run()

 			// cleanup
 			tcpConn.Close()
 			server.mutexConns.Lock()
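The refactor above replaces TCPServer's copied parser fields (LenMsgLen, MinMsgLen, MaxMsgLen, LittleEndian, msgParser) with a single embedded MsgParser, so the exported fields are promoted onto the server and only one copy of the configuration exists. A minimal sketch of the embedding pattern (Parser/Server are illustrative names, not origin's types):

```go
package main

import "fmt"

// Parser holds framing settings.
type Parser struct {
	LenMsgLen int
	MaxMsgLen uint32
}

// Server embeds Parser; Parser's fields are promoted onto Server,
// so s.LenMsgLen and s.Parser.LenMsgLen name the same storage.
type Server struct {
	Addr string
	Parser
}

func main() {
	s := Server{Addr: ":8080"}
	s.LenMsgLen = 2 // promoted field, stored in s.Parser
	s.MaxMsgLen = 65535
	fmt.Println(s.Parser.LenMsgLen, s.MaxMsgLen) // 2 65535
}
```

This is why the diff can pass `&server.MsgParser` straight to newTCPConn: the embedded value is an addressable field of the server, not a separately allocated object.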
@@ -14,6 +14,7 @@ type WSClient struct {
 	ConnectInterval  time.Duration
 	PendingWriteNum  int
 	MaxMsgLen        uint32
+	MessageType      int
 	HandshakeTimeout time.Duration
 	AutoReconnect    bool
 	NewAgent         func(*WSConn) Agent
@@ -21,7 +22,7 @@ type WSClient struct {
 	cons      WebsocketConnSet
 	wg        sync.WaitGroup
 	closeFlag bool
-	messageType int
 }

 func (client *WSClient) Start() {
@@ -63,7 +64,11 @@ func (client *WSClient) init() {
 	if client.cons != nil {
 		log.SFatal("client is running")
 	}
-	client.messageType = websocket.TextMessage
+
+	if client.MessageType == 0 {
+		client.MessageType = websocket.TextMessage
+	}

 	client.cons = make(WebsocketConnSet)
 	client.closeFlag = false
 	client.dialer = websocket.Dialer{
@@ -84,9 +89,6 @@ func (client *WSClient) dial() *websocket.Conn {
 	}
 }

-func (client *WSClient) SetMessageType(messageType int) {
-	client.messageType = messageType
-}
 func (client *WSClient) connect() {
 	defer client.wg.Done()

@@ -106,7 +108,7 @@ reconnect:
 	client.cons[conn] = struct{}{}
 	client.Unlock()

-	wsConn := newWSConn(conn, client.PendingWriteNum, client.MaxMsgLen, client.messageType)
+	wsConn := newWSConn(conn, client.PendingWriteNum, client.MaxMsgLen, client.MessageType)
 	agent := client.NewAgent(wsConn)
 	agent.Run()

@@ -139,6 +139,7 @@ func (server *WSServer) Start() {
 		maxMsgLen:   server.MaxMsgLen,
 		newAgent:    server.NewAgent,
 		conns:       make(WebsocketConnSet),
+		messageType: server.messageType,
 		upgrader: websocket.Upgrader{
 			HandshakeTimeout: server.HTTPTimeout,
 			CheckOrigin:      func(_ *http.Request) bool { return true },
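The WSClient change drops the SetMessageType setter in favor of an exported MessageType field that init defaults to websocket.TextMessage only when the caller left it at its zero value. A standalone sketch of that zero-value-defaulting pattern (Client here is illustrative; TextMessage mirrors gorilla/websocket's constant value of 1):

```go
package main

import "fmt"

const TextMessage = 1 // mirrors gorilla/websocket's TextMessage constant

type Client struct {
	MessageType int // exported: callers may set it before Start
}

// init fills in a default for any field left at its zero value,
// so explicit caller settings are never overwritten.
func (c *Client) init() {
	if c.MessageType == 0 {
		c.MessageType = TextMessage
	}
}

func main() {
	var c Client // MessageType left unset
	c.init()
	fmt.Println(c.MessageType) // 1
}
```

Compared with a setter, this keeps configuration in one struct literal and makes the default visible at the single point where init validates all fields.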
node/node.go
@@ -8,9 +8,9 @@ import (
 	"github.com/duanhf2012/origin/log"
 	"github.com/duanhf2012/origin/profiler"
 	"github.com/duanhf2012/origin/service"
-	"github.com/duanhf2012/origin/util/timer"
 	"github.com/duanhf2012/origin/util/buildtime"
-	"io/ioutil"
+	"github.com/duanhf2012/origin/util/timer"
+	"io"
 	slog "log"
 	"net/http"
 	_ "net/http/pprof"
@@ -22,7 +22,6 @@ import (
 	"time"
 )

-var closeSig chan bool
 var sig chan os.Signal
 var nodeId int
 var preSetupService []service.IService // services registered before start
@@ -31,33 +30,38 @@ var bValid bool
 var configDir = "./config/"
 var logLevel string = "debug"
 var logPath string
+
+type BuildOSType = int8
+
+const (
+	Windows BuildOSType = 0
+	Linux   BuildOSType = 1
+	Mac     BuildOSType = 2
+)

 func init() {
-	closeSig = make(chan bool, 1)
 	sig = make(chan os.Signal, 3)
-	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM,syscall.Signal(10))
+	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM, syscall.Signal(10))

-	console.RegisterCommandBool("help",false,"<-help> This help.",usage)
-	console.RegisterCommandString("name","","<-name nodeName> Node's name.",setName)
-	console.RegisterCommandString("start","","<-start nodeid=nodeid> Run originserver.",startNode)
-	console.RegisterCommandString("stop","","<-stop nodeid=nodeid> Stop originserver process.",stopNode)
-	console.RegisterCommandString("config","","<-config path> Configuration file path.",setConfigPath)
+	console.RegisterCommandBool("help", false, "<-help> This help.", usage)
+	console.RegisterCommandString("name", "", "<-name nodeName> Node's name.", setName)
+	console.RegisterCommandString("start", "", "<-start nodeid=nodeid> Run originserver.", startNode)
+	console.RegisterCommandString("stop", "", "<-stop nodeid=nodeid> Stop originserver process.", stopNode)
+	console.RegisterCommandString("config", "", "<-config path> Configuration file path.", setConfigPath)
 	console.RegisterCommandString("console", "", "<-console true|false> Turn on or off screen log output.", openConsole)
 	console.RegisterCommandString("loglevel", "debug", "<-loglevel debug|release|warning|error|fatal> Set loglevel.", setLevel)
 	console.RegisterCommandString("logpath", "", "<-logpath path> Set log file path.", setLogPath)
-	console.RegisterCommandString("pprof","","<-pprof ip:port> Open performance analysis.",setPprof)
+	console.RegisterCommandString("pprof", "", "<-pprof ip:port> Open performance analysis.", setPprof)
 }

-func usage(val interface{}) error{
+func usage(val interface{}) error {
 	ret := val.(bool)
 	if ret == false {
 		return nil
 	}

-	if len(buildtime.GetBuildDateTime())>0 {
-		fmt.Fprintf(os.Stderr, "Welcome to Origin(build info: %s)\nUsage: originserver [-help] [-start node=1] [-stop] [-config path] [-pprof 0.0.0.0:6060]...\n",buildtime.GetBuildDateTime())
-	}else{
+	if len(buildtime.GetBuildDateTime()) > 0 {
+		fmt.Fprintf(os.Stderr, "Welcome to Origin(build info: %s)\nUsage: originserver [-help] [-start node=1] [-stop] [-config path] [-pprof 0.0.0.0:6060]...\n", buildtime.GetBuildDateTime())
+	} else {
 		fmt.Fprintf(os.Stderr, "Welcome to Origin\nUsage: originserver [-help] [-start node=1] [-stop] [-config path] [-pprof 0.0.0.0:6060]...\n")
 	}
@@ -71,28 +75,28 @@ func setName(val interface{}) error {

 func setPprof(val interface{}) error {
 	listenAddr := val.(string)
-	if listenAddr==""{
+	if listenAddr == "" {
 		return nil
 	}

-	go func(){
+	go func() {
 		err := http.ListenAndServe(listenAddr, nil)
 		if err != nil {
-			panic(fmt.Errorf("%+v",err))
+			panic(fmt.Errorf("%+v", err))
 		}
 	}()

 	return nil
 }

-func setConfigPath(val interface{}) error{
+func setConfigPath(val interface{}) error {
 	configPath := val.(string)
-	if configPath==""{
+	if configPath == "" {
 		return nil
 	}
 	_, err := os.Stat(configPath)
 	if err != nil {
-		return fmt.Errorf("Cannot find file path %s",configPath)
+		return fmt.Errorf("Cannot find file path %s", configPath)
 	}

 	cluster.SetConfigDir(configPath)
@@ -100,16 +104,16 @@ func setConfigPath(val interface{}) error {
 	return nil
 }

-func getRunProcessPid(nodeId int) (int,error) {
-	f, err := os.OpenFile(fmt.Sprintf("%s_%d.pid",os.Args[0],nodeId), os.O_RDONLY, 0600)
+func getRunProcessPid(nodeId int) (int, error) {
+	f, err := os.OpenFile(fmt.Sprintf("%s_%d.pid", os.Args[0], nodeId), os.O_RDONLY, 0600)
 	defer f.Close()
-	if err!= nil {
-		return 0,err
+	if err != nil {
+		return 0, err
 	}

-	pidByte,errs := ioutil.ReadAll(f)
-	if errs!=nil {
-		return 0,errs
+	pidByte, errs := io.ReadAll(f)
+	if errs != nil {
+		return 0, errs
 	}

 	return strconv.Atoi(string(pidByte))
@@ -117,13 +121,13 @@ func getRunProcessPid(nodeId int) (int, error) {

 func writeProcessPid(nodeId int) {
 	//pid
-	f, err := os.OpenFile(fmt.Sprintf("%s_%d.pid",os.Args[0],nodeId), os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0600)
+	f, err := os.OpenFile(fmt.Sprintf("%s_%d.pid", os.Args[0], nodeId), os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0600)
 	defer f.Close()
 	if err != nil {
 		fmt.Println(err.Error())
 		os.Exit(-1)
 	} else {
-		_,err=f.Write([]byte(fmt.Sprintf("%d",os.Getpid())))
+		_, err = f.Write([]byte(fmt.Sprintf("%d", os.Getpid())))
 		if err != nil {
 			fmt.Println(err.Error())
 			os.Exit(-1)
@@ -135,44 +139,51 @@ func GetNodeId() int {
 	return nodeId
 }

-func initNode(id int){
-	//1.初始化集群
+func initNode(id int) {
+	// 1. initialize the cluster
 	nodeId = id
-	err := cluster.GetCluster().Init(GetNodeId(),Setup)
+	err := cluster.GetCluster().Init(GetNodeId(), Setup)
 	if err != nil {
-		log.SFatal("read system config is error ",err.Error())
+		log.SFatal("read system config is error ", err.Error())
 	}

 	err = initLog()
-	if err != nil{
+	if err != nil {
 		return
 	}

-	//2.setup service
-	for _,s := range preSetupService {
-		// skip services not present in the config
-		if cluster.GetCluster().IsConfigService(s.GetName()) == false {
-			continue
-		}
-
-		pServiceCfg := cluster.GetCluster().GetServiceCfg(s.GetName())
-		s.Init(s,cluster.GetRpcClient,cluster.GetRpcServer,pServiceCfg)
-
-		service.Setup(s)
+	// 2. set up services in the order listed in the node config
+	serviceOrder := cluster.GetCluster().GetLocalNodeInfo().ServiceList
+	for _, serviceName := range serviceOrder {
+		bSetup := false
+		for _, s := range preSetupService {
+			if s.GetName() != serviceName {
+				continue
+			}
+			bSetup = true
+			pServiceCfg := cluster.GetCluster().GetServiceCfg(s.GetName())
+			s.Init(s, cluster.GetRpcClient, cluster.GetRpcServer, pServiceCfg)

+			service.Setup(s)
+		}
+
+		if bSetup == false {
+			log.SFatal("Service name " + serviceName + " configuration error")
+		}
 	}

-	//3.service初始化
-	service.Init(closeSig)
+	// 3. initialize services
+	service.Init()
 }

-func initLog() error{
-	if logPath == ""{
+func initLog() error {
+	if logPath == "" {
 		setLogPath("./log")
 	}

 	localnodeinfo := cluster.GetCluster().GetLocalNodeInfo()
 	filepre := fmt.Sprintf("%s_%d_", localnodeinfo.NodeName, localnodeinfo.NodeId)
-	logger,err := log.New(logLevel,logPath,filepre,slog.LstdFlags|slog.Lshortfile,10)
+	logger, err := log.New(logLevel, logPath, filepre, slog.LstdFlags|slog.Lshortfile, 10)
 	if err != nil {
 		fmt.Printf("cannot create log file!\n")
 		return err
@@ -183,8 +194,8 @@ func initLog() error {

 func Start() {
 	err := console.Run(os.Args)
-	if err!=nil {
-		fmt.Printf("%+v\n",err)
+	if err != nil {
+		fmt.Printf("%+v\n", err)
 		return
 	}
 }
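The initNode refactor changes service startup from registration order to the order declared in the node's ServiceList, and now treats a configured-but-unregistered service as fatal. The matching logic can be sketched in isolation (orderedSetup and the service names are illustrative):

```go
package main

import "fmt"

// orderedSetup starts services in configOrder, looking each name up in the
// registered list; a configured name with no registration is an error.
func orderedSetup(configOrder []string, registered []string) ([]string, error) {
	var started []string
	for _, name := range configOrder {
		found := false
		for _, r := range registered {
			if r != name {
				continue
			}
			found = true
			started = append(started, r)
		}
		if !found {
			return nil, fmt.Errorf("Service name %s configuration error", name)
		}
	}
	return started, nil
}

func main() {
	started, err := orderedSetup(
		[]string{"GateService", "GameService"}, // config order wins
		[]string{"GameService", "GateService"}, // registration order
	)
	fmt.Println(started, err) // [GateService GameService] <nil>
}
```

Making the config the source of truth for ordering matters when one service's Init depends on another already being set up; the old loop silently skipped unconfigured services, while the new code fails fast on a typo in the service list.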
@@ -196,19 +207,19 @@ func stopNode(args interface{}) error {
 		return nil
 	}

-	sParam := strings.Split(param,"=")
+	sParam := strings.Split(param, "=")
 	if len(sParam) != 2 {
-		return fmt.Errorf("invalid option %s",param)
+		return fmt.Errorf("invalid option %s", param)
 	}
-	if sParam[0]!="nodeid" {
-		return fmt.Errorf("invalid option %s",param)
+	if sParam[0] != "nodeid" {
+		return fmt.Errorf("invalid option %s", param)
 	}
-	nodeId,err:= strconv.Atoi(sParam[1])
+	nodeId, err := strconv.Atoi(sParam[1])
 	if err != nil {
-		return fmt.Errorf("invalid option %s",param)
+		return fmt.Errorf("invalid option %s", param)
 	}

-	processId,err := getRunProcessPid(nodeId)
+	processId, err := getRunProcessPid(nodeId)
 	if err != nil {
 		return err
 	}
@@ -217,26 +228,26 @@ func stopNode(args interface{}) error {
 	return nil
 }

-func startNode(args interface{}) error{
-	//1.解析参数
+func startNode(args interface{}) error {
+	// 1. parse arguments
 	param := args.(string)
 	if param == "" {
 		return nil
 	}

-	sParam := strings.Split(param,"=")
+	sParam := strings.Split(param, "=")
 	if len(sParam) != 2 {
-		return fmt.Errorf("invalid option %s",param)
+		return fmt.Errorf("invalid option %s", param)
 	}
-	if sParam[0]!="nodeid" {
-		return fmt.Errorf("invalid option %s",param)
+	if sParam[0] != "nodeid" {
+		return fmt.Errorf("invalid option %s", param)
 	}
-	nodeId,err:= strconv.Atoi(sParam[1])
+	nodeId, err := strconv.Atoi(sParam[1])
 	if err != nil {
-		return fmt.Errorf("invalid option %s",param)
+		return fmt.Errorf("invalid option %s", param)
 	}

-	timer.StartTimer(10*time.Millisecond,1000000)
+	timer.StartTimer(10*time.Millisecond, 1000000)
 	log.SRelease("Start running server.")
-	//2.初始化node
+	// 2. initialize the node
 	initNode(nodeId)
@@ -253,7 +264,7 @@ func startNode(args interface{}) error {
-	//6.监听程序退出信号&性能报告
+	// 6. wait for the exit signal & emit profiler reports
 	bRun := true
 	var pProfilerTicker *time.Ticker = &time.Ticker{}
-	if profilerInterval>0 {
+	if profilerInterval > 0 {
 		pProfilerTicker = time.NewTicker(profilerInterval)
 	}
 	for bRun {
@@ -261,24 +272,22 @@ func startNode(args interface{}) error {
 		case <-sig:
 			log.SRelease("receipt stop signal.")
 			bRun = false
-		case <- pProfilerTicker.C:
+		case <-pProfilerTicker.C:
 			profiler.Report()
 		}
 	}
 	cluster.GetCluster().Stop()
-	//7.退出
-	close(closeSig)
-	service.WaitStop()
+	// 7. shut down
+	service.StopAllService()

 	log.SRelease("Server is stop.")
 	return nil
 }
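Both stopNode and startNode validate the same `nodeid=<n>` option: split on `=`, require exactly two parts, require the key `nodeid`, then parse the integer. That parsing can be factored into one helper (parseNodeId is an illustrative name; the node package keeps the logic inline):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseNodeId mirrors the "-start nodeid=<n>" / "-stop nodeid=<n>" option
// parsing: key=value with the fixed key "nodeid" and an integer value.
func parseNodeId(param string) (int, error) {
	sParam := strings.Split(param, "=")
	if len(sParam) != 2 || sParam[0] != "nodeid" {
		return 0, fmt.Errorf("invalid option %s", param)
	}
	return strconv.Atoi(sParam[1])
}

func main() {
	id, err := parseNodeId("nodeid=1")
	fmt.Println(id, err) // 1 <nil>

	_, err = parseNodeId("node=1")
	fmt.Println(err != nil) // true
}
```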
 func Setup(s ...service.IService) {
-	for _,sv := range s {
+	for _, sv := range s {
 		sv.OnSetup(sv)
-		preSetupService = append(preSetupService,sv)
+		preSetupService = append(preSetupService, sv)
 	}
 }

@@ -286,67 +295,67 @@ func GetService(serviceName string) service.IService {
 	return service.GetService(serviceName)
 }

-func SetConfigDir(configDir string){
-	configDir = configDir
-	cluster.SetConfigDir(configDir)
+func SetConfigDir(cfgDir string) {
+	configDir = cfgDir
+	cluster.SetConfigDir(cfgDir)
 }

 func GetConfigDir() string {
 	return configDir
 }

-func SetSysLog(strLevel string, pathname string, flag int){
-	logs,_:= log.New(strLevel,pathname, "", flag,10)
+func SetSysLog(strLevel string, pathname string, flag int) {
+	logs, _ := log.New(strLevel, pathname, "", flag, 10)
 	log.Export(logs)
 }

-func OpenProfilerReport(interval time.Duration){
+func OpenProfilerReport(interval time.Duration) {
 	profilerInterval = interval
 }

-func openConsole(args interface{}) error{
+func openConsole(args interface{}) error {
 	if args == "" {
 		return nil
 	}
 	strOpen := strings.ToLower(strings.TrimSpace(args.(string)))
 	if strOpen == "false" {
 		log.OpenConsole = false
-	}else if strOpen == "true" {
+	} else if strOpen == "true" {
 		log.OpenConsole = true
-	}else{
+	} else {
 		return errors.New("Parameter console error!")
 	}
 	return nil
 }

-func setLevel(args interface{}) error{
-	if args==""{
+func setLevel(args interface{}) error {
+	if args == "" {
 		return nil
 	}

 	logLevel = strings.TrimSpace(args.(string))
-	if logLevel!= "debug" && logLevel!="release"&& logLevel!="warning"&&logLevel!="error"&&logLevel!="fatal" {
+	if logLevel != "debug" && logLevel != "release" && logLevel != "warning" && logLevel != "error" && logLevel != "fatal" {
 		return errors.New("unknown level: " + logLevel)
 	}
 	return nil
 }

-func setLogPath(args interface{}) error{
-	if args == ""{
+func setLogPath(args interface{}) error {
+	if args == "" {
 		return nil
 	}
 	logPath = strings.TrimSpace(args.(string))
-	dir, err := os.Stat(logPath) // the directory may not exist yet
-	if err == nil && dir.IsDir()==false {
-		return errors.New("Not found dir "+logPath)
+	dir, err := os.Stat(logPath) // the directory may not exist yet
+	if err == nil && dir.IsDir() == false {
+		return errors.New("Not found dir " + logPath)
 	}

 	if err != nil {
 		err = os.Mkdir(logPath, os.ModePerm)
 		if err != nil {
-			return errors.New("Cannot create dir "+logPath)
+			return errors.New("Cannot create dir " + logPath)
 		}
 	}

 	return nil
 }
@@ -15,3 +15,7 @@ func KillProcess(processId int){
 		fmt.Printf("kill processid %d is successful.\n", processId)
 	}
 }
+
+func GetBuildOSType() BuildOSType {
+	return Linux
+}

@@ -15,3 +15,7 @@ func KillProcess(processId int){
 		fmt.Printf("kill processid %d is successful.\n", processId)
 	}
 }
+
+func GetBuildOSType() BuildOSType {
+	return Mac
+}

@@ -4,4 +4,8 @@ package node

 func KillProcess(processId int) {

 }
+
+func GetBuildOSType() BuildOSType {
+	return Windows
+}
@@ -193,9 +193,11 @@ func Report() {

 		record = prof.record
 		prof.record = list.New()
+		callNum := prof.callNum
+		totalCostTime := prof.totalCostTime
 		prof.stackLocker.RUnlock()

-		DefaultReportFunction(name, prof.callNum, prof.totalCostTime, record)
+		DefaultReportFunction(name, callNum, totalCostTime, record)
 	}
 }
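The profiler fix above copies callNum and totalCostTime while the read lock is still held, so DefaultReportFunction no longer reads the shared fields after RUnlock, where a concurrent writer could race with it. The snapshot-under-lock pattern in isolation (the profiler struct here is a stand-in, not origin's type):

```go
package main

import (
	"fmt"
	"sync"
)

type profiler struct {
	mu            sync.RWMutex
	callNum       int
	totalCostTime int64
}

// snapshot copies the shared counters while holding the read lock, so the
// caller can use the values after the lock is released without racing
// concurrent writers.
func (p *profiler) snapshot() (int, int64) {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.callNum, p.totalCostTime
}

func main() {
	p := &profiler{callNum: 7, totalCostTime: 125}
	n, t := p.snapshot()
	fmt.Println(n, t) // 7 125
}
```

The rule of thumb: everything a report or callback needs must be copied out before the lock is dropped; passing `prof.callNum` directly into a function called after RUnlock re-reads the field outside the critical section.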
413
rpc/client.go
413
rpc/client.go
@@ -1,93 +1,64 @@
|
||||
package rpc
|
||||
|
||||
import (
|
||||
"container/list"
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/network"
|
||||
"github.com/duanhf2012/origin/util/timer"
|
||||
"math"
|
||||
"reflect"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
"github.com/duanhf2012/origin/log"
|
||||
)
|
||||
|
||||
type Client struct {
|
||||
clientSeq uint32
|
||||
id int
|
||||
bSelfNode bool
|
||||
network.TCPClient
|
||||
conn *network.TCPConn
|
||||
const(
|
||||
DefaultRpcConnNum = 1
|
||||
DefaultRpcLenMsgLen = 4
|
||||
DefaultRpcMinMsgLen = 2
|
||||
DefaultMaxCheckCallRpcCount = 1000
|
||||
DefaultMaxPendingWriteNum = 200000
|
||||
|
||||
pendingLock sync.RWMutex
|
||||
startSeq uint64
|
||||
pending map[uint64]*list.Element
|
||||
pendingTimer *list.List
|
||||
callRpcTimeout time.Duration
|
||||
maxCheckCallRpcCount int
|
||||
TriggerRpcEvent
|
||||
}
|
||||
|
||||
DefaultConnectInterval = 2*time.Second
|
||||
DefaultCheckRpcCallTimeoutInterval = 1*time.Second
|
||||
DefaultRpcTimeout = 15*time.Second
|
||||
)
|
||||
|
||||
var clientSeq uint32
|
||||
|
||||
type IRealClient interface {
|
||||
SetConn(conn *network.TCPConn)
|
||||
Close(waitDone bool)
|
||||
|
||||
AsyncCall(timeout time.Duration,rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{},cancelable bool) (CancelRpc,error)
|
||||
Go(timeout time.Duration,rpcHandler IRpcHandler, noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call
|
||||
RawGo(timeout time.Duration,rpcHandler IRpcHandler,processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, rawArgs []byte, reply interface{}) *Call
|
||||
IsConnected() bool
|
||||
|
||||
Run()
|
||||
OnClose()
|
||||
}
|
||||
|
||||
type Client struct {
|
||||
clientId uint32
|
||||
nodeId int
|
||||
pendingLock sync.RWMutex
|
||||
startSeq uint64
|
||||
pending map[uint64]*Call
|
||||
callRpcTimeout time.Duration
|
||||
maxCheckCallRpcCount int
|
||||
|
||||
callTimerHeap CallTimerHeap
|
||||
IRealClient
|
||||
}
|
||||
|
||||
func (client *Client) NewClientAgent(conn *network.TCPConn) network.Agent {
|
||||
client.conn = conn
|
||||
client.ResetPending()
|
||||
client.SetConn(conn)
|
||||
|
||||
return client
|
||||
}
|
||||
|
||||
func (client *Client) Connect(id int, addr string, maxRpcParamLen uint32) error {
|
||||
client.clientSeq = atomic.AddUint32(&clientSeq, 1)
|
||||
client.id = id
|
||||
client.Addr = addr
|
||||
client.maxCheckCallRpcCount = 1000
|
||||
client.callRpcTimeout = 15 * time.Second
|
||||
client.ConnNum = 1
|
||||
client.ConnectInterval = time.Second * 2
|
||||
client.PendingWriteNum = 200000
|
||||
client.AutoReconnect = true
|
||||
client.LenMsgLen = 4
|
||||
client.MinMsgLen = 2
|
||||
if maxRpcParamLen > 0 {
|
||||
client.MaxMsgLen = maxRpcParamLen
|
||||
} else {
|
||||
client.MaxMsgLen = math.MaxUint32
|
||||
}
|
||||
|
||||
client.NewAgent = client.NewClientAgent
|
||||
client.LittleEndian = LittleEndian
|
||||
client.ResetPending()
|
||||
go client.startCheckRpcCallTimer()
|
||||
if addr == "" {
|
||||
client.bSelfNode = true
|
||||
return nil
|
||||
}
|
||||
|
||||
client.Start()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (client *Client) startCheckRpcCallTimer() {
|
||||
t := timer.NewTimer(5 * time.Second)
|
||||
for {
|
||||
select {
|
||||
case cTimer := <-t.C:
|
||||
cTimer.SetupTimer(time.Now())
|
||||
client.checkRpcCallTimeout()
|
||||
}
|
||||
}
|
||||
|
||||
t.Cancel()
|
||||
timer.ReleaseTimer(t)
|
||||
}
|
||||
|
||||
func (client *Client) makeCallFail(call *Call) {
|
||||
client.removePending(call.Seq)
|
||||
func (bc *Client) makeCallFail(call *Call) {
|
||||
if call.callback != nil && call.callback.IsValid() {
|
||||
call.rpcHandler.PushRpcResponse(call)
|
||||
} else {
|
||||
@@ -95,254 +66,120 @@ func (client *Client) makeCallFail(call *Call) {
|
||||
}
|
||||
}
|
||||
|
||||
func (client *Client) checkRpcCallTimeout() {
|
||||
now := time.Now()
|
||||
func (bc *Client) checkRpcCallTimeout() {
|
||||
for{
|
||||
time.Sleep(DefaultCheckRpcCallTimeoutInterval)
|
||||
for i := 0; i < bc.maxCheckCallRpcCount; i++ {
|
||||
bc.pendingLock.Lock()
|
||||
|
||||
callSeq := bc.callTimerHeap.PopTimeout()
|
||||
if callSeq == 0 {
|
||||
bc.pendingLock.Unlock()
|
||||
break
|
||||
}
|
||||
|
||||
for i := 0; i < client.maxCheckCallRpcCount; i++ {
|
||||
client.pendingLock.Lock()
|
||||
pElem := client.pendingTimer.Front()
|
||||
if pElem == nil {
|
||||
client.pendingLock.Unlock()
|
||||
break
|
||||
}
|
||||
pCall := pElem.Value.(*Call)
|
||||
if now.Sub(pCall.callTime) > client.callRpcTimeout {
|
||||
strTimeout := strconv.FormatInt(int64(client.callRpcTimeout/time.Second), 10)
|
||||
pCall.Err = errors.New("RPC call takes more than " + strTimeout + " seconds")
|
||||
client.makeCallFail(pCall)
|
||||
client.pendingLock.Unlock()
|
||||
pCall := bc.pending[callSeq]
|
||||
if pCall == nil {
|
||||
bc.pendingLock.Unlock()
|
||||
log.SError("callSeq ",callSeq," is not find")
|
||||
continue
|
||||
}
|
||||
|
||||
delete(bc.pending,callSeq)
|
||||
strTimeout := strconv.FormatInt(int64(pCall.TimeOut.Seconds()), 10)
|
||||
pCall.Err = errors.New("RPC call takes more than " + strTimeout + " seconds,method is "+pCall.ServiceMethod)
|
||||
log.SError(pCall.Err.Error())
|
||||
bc.makeCallFail(pCall)
|
||||
bc.pendingLock.Unlock()
|
||||
continue
|
||||
}
|
||||
client.pendingLock.Unlock()
|
||||
}
|
||||
}
|
||||
|
||||
func (client *Client) ResetPending() {
|
||||
func (client *Client) InitPending() {
|
||||
client.pendingLock.Lock()
|
||||
if client.pending != nil {
|
||||
for _, v := range client.pending {
|
||||
v.Value.(*Call).Err = errors.New("node is disconnect")
|
||||
v.Value.(*Call).done <- v.Value.(*Call)
|
||||
}
|
||||
client.callTimerHeap.Init()
|
||||
client.pending = make(map[uint64]*Call,4096)
|
func (client *Client) InitPending() {
	client.pendingLock.Lock()
	client.pending = make(map[uint64]*list.Element, 4096)
	client.pendingTimer = list.New()
	client.pendingLock.Unlock()
}

func (bc *Client) AddPending(call *Call) {
	bc.pendingLock.Lock()

	if call.Seq == 0 {
		bc.pendingLock.Unlock()
		log.SStack("call is error.")
		return
	}

	bc.pending[call.Seq] = call
	bc.callTimerHeap.AddTimer(call.Seq, call.TimeOut)

	bc.pendingLock.Unlock()
}

func (client *Client) AddPending(call *Call) {
	client.pendingLock.Lock()
	call.callTime = time.Now()
	elemTimer := client.pendingTimer.PushBack(call)
	client.pending[call.Seq] = elemTimer // if the send below fails, the entry stays here until it is removed
	client.pendingLock.Unlock()
}

func (bc *Client) RemovePending(seq uint64) *Call {
	if seq == 0 {
		return nil
	}
	bc.pendingLock.Lock()
	call := bc.removePending(seq)
	bc.pendingLock.Unlock()
	return call
}

func (client *Client) RemovePending(seq uint64) *Call {
	client.pendingLock.Lock()
	call := client.removePending(seq)
	client.pendingLock.Unlock()
	return call
}

func (bc *Client) removePending(seq uint64) *Call {
	v, ok := bc.pending[seq]
	if !ok {
		return nil
	}

	bc.callTimerHeap.Cancel(seq)
	delete(bc.pending, seq)
	return v
}

func (client *Client) removePending(seq uint64) *Call {
	v, ok := client.pending[seq]
	if !ok {
		return nil
	}
	call := v.Value.(*Call)
	client.pendingTimer.Remove(v)
	delete(client.pending, seq)
	return call
}

func (bc *Client) FindPending(seq uint64) (pCall *Call) {
	if seq == 0 {
		return nil
	}

	bc.pendingLock.Lock()
	pCall = bc.pending[seq]
	bc.pendingLock.Unlock()

	return pCall
}

func (client *Client) FindPending(seq uint64) *Call {
	client.pendingLock.Lock()
	v, ok := client.pending[seq]
	if !ok {
		client.pendingLock.Unlock()
		return nil
	}

	pCall := v.Value.(*Call)
	client.pendingLock.Unlock()

	return pCall
}
func (client *Client) generateSeq() uint64 {
	return atomic.AddUint64(&client.startSeq, 1)
}

func (client *Client) AsyncCall(rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{}) error {
	processorType, processor := GetProcessorType(args)
	InParam, herr := processor.Marshal(args)
	if herr != nil {
		return herr
	}

	seq := client.generateSeq()
	request := MakeRpcRequest(processor, seq, 0, serviceMethod, false, InParam)
	bytes, err := processor.Marshal(request.RpcRequestData)
	ReleaseRpcRequest(request)
	if err != nil {
		return err
	}

	if client.conn == nil {
		return errors.New("rpc server is disconnected, call " + serviceMethod)
	}

	call := MakeCall()
	call.Reply = replyParam
	call.callback = &callback
	call.rpcHandler = rpcHandler
	call.ServiceMethod = serviceMethod
	call.Seq = seq
	client.AddPending(call)

	err = client.conn.WriteMsg([]byte{uint8(processorType)}, bytes)
	if err != nil {
		client.RemovePending(call.Seq)
		ReleaseCall(call)
		return err
	}

	return nil
}
func (client *Client) RawGo(processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, args []byte, reply interface{}) *Call {
	call := MakeCall()
	call.ServiceMethod = serviceMethod
	call.Reply = reply
	call.Seq = client.generateSeq()

	request := MakeRpcRequest(processor, call.Seq, rpcMethodId, serviceMethod, noReply, args)
	bytes, err := processor.Marshal(request.RpcRequestData)
	ReleaseRpcRequest(request)
	if err != nil {
		call.Seq = 0
		call.Err = err
		return call
	}

	if client.conn == nil {
		call.Seq = 0
		call.Err = errors.New(serviceMethod + " call failed, rpc client is disconnected")
		return call
	}

	if !noReply {
		client.AddPending(call)
	}

	err = client.conn.WriteMsg([]byte{uint8(processor.GetProcessorType())}, bytes)
	if err != nil {
		client.RemovePending(call.Seq)
		call.Seq = 0
		call.Err = err
	}

	return call
}

func (client *Client) Go(noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
	_, processor := GetProcessorType(args)
	InParam, err := processor.Marshal(args)
	if err != nil {
		call := MakeCall()
		call.Err = err
		return call
	}

	return client.RawGo(processor, noReply, 0, serviceMethod, InParam, reply)
}
func (client *Client) Run() {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 4096)
			l := runtime.Stack(buf, false)
			errString := fmt.Sprint(r)
			log.SError("core dump info[", errString, "]\n", string(buf[:l]))
		}
	}()

	client.TriggerRpcEvent(true, client.GetClientSeq(), client.GetId())
	for {
		bytes, err := client.conn.ReadMsg()
		if err != nil {
			log.SError("rpcClient ", client.Addr, " ReadMsg error:", err.Error())
			return
		}

		processor := GetProcessor(bytes[0])
		if processor == nil {
			client.conn.ReleaseReadMsg(bytes)
			log.SError("rpcClient ", client.Addr, " ReadMsg head error: unknown processor type")
			return
		}

		// 1. parse the head
		response := RpcResponse{}
		response.RpcResponseData = processor.MakeRpcResponse(0, "", nil)

		err = processor.Unmarshal(bytes[1:], response.RpcResponseData)
		client.conn.ReleaseReadMsg(bytes)
		if err != nil {
			processor.ReleaseRpcResponse(response.RpcResponseData)
			log.SError("rpcClient Unmarshal head error:", err.Error())
			continue
		}

		v := client.RemovePending(response.RpcResponseData.GetSeq())
		if v == nil {
			log.SError("rpcClient cannot find seq ", response.RpcResponseData.GetSeq(), " in pending")
		} else {
			v.Err = nil
			if len(response.RpcResponseData.GetReply()) > 0 {
				err = processor.Unmarshal(response.RpcResponseData.GetReply(), v.Reply)
				if err != nil {
					log.SError("rpcClient Unmarshal body error:", err.Error())
					v.Err = err
				}
			}

			if response.RpcResponseData.GetErr() != nil {
				v.Err = response.RpcResponseData.GetErr()
			}

			if v.callback != nil && v.callback.IsValid() {
				v.rpcHandler.PushRpcResponse(v)
			} else {
				v.done <- v
			}
		}

		processor.ReleaseRpcResponse(response.RpcResponseData)
	}
}

func (bc *Client) cleanPending() {
	bc.pendingLock.Lock()
	for {
		callSeq := bc.callTimerHeap.PopFirst()
		if callSeq == 0 {
			break
		}

		pCall := bc.pending[callSeq]
		if pCall == nil {
			log.SError("callSeq ", callSeq, " is not found")
			continue
		}

		delete(bc.pending, callSeq)
		pCall.Err = errors.New("node is disconnected")
		bc.makeCallFail(pCall)
	}

	bc.pendingLock.Unlock()
}
func (client *Client) OnClose() {
	client.TriggerRpcEvent(false, client.GetClientSeq(), client.GetId())
}

func (bc *Client) generateSeq() uint64 {
	return atomic.AddUint64(&bc.startSeq, 1)
}

func (client *Client) IsConnected() bool {
	return client.bSelfNode || (client.conn != nil && client.conn.IsConnected())
}

func (client *Client) GetNodeId() int {
	return client.nodeId
}

func (client *Client) GetId() int {
	return client.id
}

func (client *Client) Close(waitDone bool) {
	client.TCPClient.Close(waitDone)
}

func (client *Client) GetClientSeq() uint32 {
	return client.clientSeq
}

func (client *Client) GetClientId() uint32 {
	return client.clientId
}
102 rpc/compressor.go Normal file
@@ -0,0 +1,102 @@
package rpc

import (
	"errors"
	"fmt"
	"runtime"

	"github.com/duanhf2012/origin/network"
	"github.com/pierrec/lz4/v4"
)

var memPool network.INetMempool = network.NewMemAreaPool()

type ICompressor interface {
	CompressBlock(src []byte) ([]byte, error)   // if dst is preallocated it is used; when nil is passed, memory is allocated internally
	UncompressBlock(src []byte) ([]byte, error) // if dst is preallocated it is used; when nil is passed, memory is allocated internally

	CompressBufferCollection(buffer []byte)   // reclaims a buffer returned by CompressBlock
	UnCompressBufferCollection(buffer []byte) // reclaims a buffer returned by UncompressBlock
}

var compressor ICompressor

func init() {
	SetCompressor(&Lz4Compressor{})
}

func SetCompressor(cp ICompressor) {
	compressor = cp
}

type Lz4Compressor struct {
}

func (lc *Lz4Compressor) CompressBlock(src []byte) (dest []byte, err error) {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 4096)
			l := runtime.Stack(buf, false)
			errString := fmt.Sprint(r)
			err = errors.New("core dump info[" + errString + "]\n" + string(buf[:l]))
		}
	}()

	var c lz4.Compressor
	var cnt int
	dest = memPool.MakeByteSlice(lz4.CompressBlockBound(len(src)) + 1)
	cnt, err = c.CompressBlock(src, dest[1:])
	if err != nil {
		memPool.ReleaseByteSlice(dest)
		return nil, err
	}

	// store the rounded-up compression ratio in the first byte so that
	// UncompressBlock can size its destination buffer
	ratio := len(src) / cnt
	if len(src)%cnt > 0 {
		ratio += 1
	}

	if ratio > 255 {
		memPool.ReleaseByteSlice(dest)
		return nil, fmt.Errorf("compression ratio %d does not fit in one byte", ratio)
	}

	dest[0] = uint8(ratio)
	dest = dest[:cnt+1]
	return
}

func (lc *Lz4Compressor) UncompressBlock(src []byte) (dest []byte, err error) {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 4096)
			l := runtime.Stack(buf, false)
			errString := fmt.Sprint(r)
			err = errors.New("core dump info[" + errString + "]\n" + string(buf[:l]))
		}
	}()

	ratio := uint8(src[0])
	if ratio == 0 {
		return nil, fmt.Errorf("invalid compressed data: zero ratio byte")
	}

	dest = memPool.MakeByteSlice(len(src) * int(ratio))
	cnt, err := lz4.UncompressBlock(src[1:], dest)
	if err != nil {
		memPool.ReleaseByteSlice(dest)
		return nil, err
	}

	return dest[:cnt], nil
}

func (lc *Lz4Compressor) compressBlockBound(n int) int {
	return lz4.CompressBlockBound(n)
}

func (lc *Lz4Compressor) CompressBufferCollection(buffer []byte) {
	memPool.ReleaseByteSlice(buffer)
}

func (lc *Lz4Compressor) UnCompressBufferCollection(buffer []byte) {
	memPool.ReleaseByteSlice(buffer)
}
@@ -3,6 +3,7 @@ package rpc

import (
	"github.com/duanhf2012/origin/util/sync"
	"github.com/gogo/protobuf/proto"
	"fmt"
)

type GoGoPBProcessor struct {

@@ -40,7 +41,10 @@ func (slf *GoGoPBProcessor) Marshal(v interface{}) ([]byte, error){
}

func (slf *GoGoPBProcessor) Unmarshal(data []byte, msg interface{}) error {
	protoMsg, ok := msg.(proto.Message)
	if !ok {
		return fmt.Errorf("%+v is not of proto.Message type", msg)
	}
	return proto.Unmarshal(data, protoMsg)
}
@@ -73,6 +77,15 @@ func (slf *GoGoPBProcessor) GetProcessorType() RpcProcessorType{
	return RpcProcessorGoGoPB
}

func (slf *GoGoPBProcessor) Clone(src interface{}) (interface{}, error) {
	srcMsg, ok := src.(proto.Message)
	if !ok {
		return nil, fmt.Errorf("param is not of proto.Message type")
	}

	return proto.Clone(srcMsg), nil
}

func (slf *GoGoPBRpcRequestData) IsNoReply() bool {
	return slf.GetNoReply()
}

@@ -91,5 +104,3 @@ func (slf *GoGoPBRpcResponseData) GetErr() *RpcError {
@@ -3,6 +3,7 @@ package rpc

import (
	"github.com/duanhf2012/origin/util/sync"
	jsoniter "github.com/json-iterator/go"
	"reflect"
)

var json = jsoniter.ConfigCompatibleWithStandardLibrary

@@ -119,6 +120,22 @@ func (jsonRpcResponseData *JsonRpcResponseData) GetReply() []byte{
}

func (jsonProcessor *JsonProcessor) Clone(src interface{}) (interface{}, error) {
	dstValue := reflect.New(reflect.ValueOf(src).Type().Elem())
	bytes, err := json.Marshal(src)
	if err != nil {
		return nil, err
	}

	dst := dstValue.Interface()
	err = json.Unmarshal(bytes, dst)
	if err != nil {
		return nil, err
	}

	return dst, nil
}
135 rpc/lclient.go Normal file
@@ -0,0 +1,135 @@
package rpc

import (
	"errors"
	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/network"
	"reflect"
	"strings"
	"sync/atomic"
	"time"
)

// client for the local node
type LClient struct {
	selfClient *Client
}

func (rc *LClient) Lock() {
}

func (rc *LClient) Unlock() {
}

func (lc *LClient) Run() {
}

func (lc *LClient) OnClose() {
}

func (lc *LClient) IsConnected() bool {
	return true
}

func (lc *LClient) SetConn(conn *network.TCPConn) {
}

func (lc *LClient) Close(waitDone bool) {
}

func (lc *LClient) Go(timeout time.Duration, rpcHandler IRpcHandler, noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
	pLocalRpcServer := rpcHandler.GetRpcServer()()
	// check whether the target is the same service
	findIndex := strings.Index(serviceMethod, ".")
	if findIndex == -1 {
		sErr := errors.New("Call serviceMethod " + serviceMethod + " is invalid!")
		log.SError(sErr.Error())
		call := MakeCall()
		call.DoError(sErr)

		return call
	}

	serviceName := serviceMethod[:findIndex]
	if serviceName == rpcHandler.GetName() { // the service is calling itself
		// dispatch through our own rpcHandler
		err := pLocalRpcServer.myselfRpcHandlerGo(lc.selfClient, serviceName, serviceMethod, args, requestHandlerNull, reply)
		call := MakeCall()

		if err != nil {
			call.DoError(err)
			return call
		}

		call.DoOK()
		return call
	}

	// dispatch through another service's rpcHandler
	return pLocalRpcServer.selfNodeRpcHandlerGo(timeout, nil, lc.selfClient, noReply, serviceName, 0, serviceMethod, args, reply, nil)
}

func (rc *LClient) RawGo(timeout time.Duration, rpcHandler IRpcHandler, processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceName string, rawArgs []byte, reply interface{}) *Call {
	pLocalRpcServer := rpcHandler.GetRpcServer()()

	// the service is calling itself
	if serviceName == rpcHandler.GetName() {
		call := MakeCall()
		call.ServiceMethod = serviceName
		call.Reply = reply
		call.TimeOut = timeout

		err := pLocalRpcServer.myselfRpcHandlerGo(rc.selfClient, serviceName, serviceName, rawArgs, requestHandlerNull, nil)
		call.Err = err
		call.done <- call

		return call
	}

	// dispatch through another service's rpcHandler
	return pLocalRpcServer.selfNodeRpcHandlerGo(timeout, processor, rc.selfClient, true, serviceName, rpcMethodId, serviceName, nil, nil, rawArgs)
}

func (lc *LClient) AsyncCall(timeout time.Duration, rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, reply interface{}, cancelable bool) (CancelRpc, error) {
	pLocalRpcServer := rpcHandler.GetRpcServer()()

	// check whether the target is the same service
	findIndex := strings.Index(serviceMethod, ".")
	if findIndex == -1 {
		err := errors.New("Call serviceMethod " + serviceMethod + " is invalid!")
		callback.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
		log.SError(err.Error())
		return emptyCancelRpc, nil
	}

	serviceName := serviceMethod[:findIndex]
	// dispatch through our own rpcHandler
	if serviceName == rpcHandler.GetName() { // the service is calling itself
		return emptyCancelRpc, pLocalRpcServer.myselfRpcHandlerGo(lc.selfClient, serviceName, serviceMethod, args, callback, reply)
	}

	// dispatch through another service's rpcHandler
	cancelRpc, err := pLocalRpcServer.selfNodeRpcHandlerAsyncGo(timeout, lc.selfClient, rpcHandler, false, serviceName, serviceMethod, args, reply, callback, cancelable)
	if err != nil {
		callback.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
	}

	return cancelRpc, nil
}

func NewLClient(nodeId int) *Client {
	client := &Client{}
	client.clientId = atomic.AddUint32(&clientSeq, 1)
	client.nodeId = nodeId
	client.maxCheckCallRpcCount = DefaultMaxCheckCallRpcCount
	client.callRpcTimeout = DefaultRpcTimeout

	lClient := &LClient{}
	lClient.selfClient = client
	client.IRealClient = lClient
	client.InitPending()
	go client.checkRpcCallTimeout()
	return client
}
1777 rpc/messagequeue.pb.go Normal file (file diff suppressed because it is too large)
51 rpc/messagequeue.proto Normal file
@@ -0,0 +1,51 @@
syntax = "proto3";

option go_package = ".;rpc";

message DBQueuePopReq {
	string CustomerId = 1;
	string QueueName = 2;
	int32 PopStartPos = 3;
	int32 PopNum = 4;
	bytes pushData = 5;
}

message DBQueuePopRes {
	string QueueName = 1;
	repeated bytes pushData = 2;
}

enum SubscribeType {
	Subscribe = 0;
	Unsubscribe = 1;
}

enum SubscribeMethod {
	Method_Custom = 0; // custom mode: fetch or subscribe starting from the StartIndex set by the consumer
	Method_Last = 1;   // last mode: subscribe from the position last recorded for this consumer
}

// subscription
message DBQueueSubscribeReq {
	SubscribeType SubType = 1;  // subscription type
	SubscribeMethod Method = 2; // subscription method
	string CustomerId = 3;      // consumer id
	int32 FromNodeId = 4;
	string RpcMethod = 5;
	string TopicName = 6;       // topic name
	uint64 StartIndex = 7;      // start position: the leading part is a timestamp in seconds, the rest is a sequence number; when 0 is passed, the service rewrites it to (current time in seconds) | (zero sequence)
	int32 OneBatchQuantity = 8; // number of items sent per subscription batch; defaults to 1000 when unset
}

message DBQueueSubscribeRes {

}

message DBQueuePublishReq {
	string TopicName = 1; // topic name
	repeated bytes pushData = 2;
}

message DBQueuePublishRes {
}
@@ -1,6 +1,7 @@
package rpc

type IRpcProcessor interface {
	Clone(src interface{}) (interface{}, error)
	Marshal(v interface{}) ([]byte, error) // b is a custom buffer; pass nil to let the system allocate one
	Unmarshal(data []byte, v interface{}) error
	MakeRpcRequest(seq uint64, rpcMethodId uint32, serviceMethod string, noReply bool, inParam []byte) IRpcRequestData

5399 rpc/rank.pb.go Normal file (file diff suppressed because it is too large)
130 rpc/rank.proto Normal file
@@ -0,0 +1,130 @@
syntax = "proto3";
package rpc;
option go_package = ".;rpc";

message SetSortAndExtendData {
	bool IsSortData = 1; // when true, modify the Sort field at Pos; otherwise modify the Extend data
	int32 Pos = 2;       // sort position
	int64 Data = 3;      // sort value
}

// increment values
message IncreaseRankData {
	uint64 RankId = 1;                                      // rank list id
	uint64 Key = 2;                                         // primary key of the data
	repeated ExtendIncData Extend = 3;                      // extended data
	repeated int64 IncreaseSortData = 4;                    // sort values to increment
	repeated SetSortAndExtendData SetSortAndExtendData = 5; // sort values to set
	bool ReturnRankData = 6;                                // whether to look up the latest rank; otherwise the Rank field is not returned

	bool InsertDataOnNonExistent = 7; // when true: existing entries are not updated, and missing entries are inserted with InitData and InitSortData; when false: InitData and InitSortData are ignored
	bytes InitData = 8;               // data not taking part in ranking
	repeated int64 InitSortData = 9;  // data taking part in ranking
}

message IncreaseRankDataRet {
	RankPosData PosData = 1;
}

// used to refresh rank list data on its own
message UpdateRankData {
	uint64 RankId = 1; // rank list id
	uint64 Key = 2;    // primary key of the data
	bytes Data = 3;    // data section
}

message UpdateRankDataRet {
	bool Ret = 1;
}

// RankPosData: ranking data returned by queries
message RankPosData {
	uint64 Key = 1;                // primary key of the data
	uint64 Rank = 2;               // rank position
	repeated int64 SortData = 3;   // data taking part in ranking
	bytes Data = 4;                // data not taking part in ranking
	repeated int64 ExtendData = 5; // extended data
}

// RankList: rank list data
message RankList {
	uint64 RankId = 1;       // rank list type
	string RankName = 2;     // rank list name
	int32 SkipListLevel = 3; // level of the skip list generated for the rank list: 8/16/32/64 etc.
	bool IsDec = 4;          // sort in descending order
	uint64 MaxRank = 5;      // maximum rank
	int64 ExpireMs = 6;      // time to live; 0 means never expire
}

// UpsetRankData: upsert rank list data
message UpsetRankData {
	uint64 RankId = 1;                  // rank list id
	repeated RankData RankDataList = 2; // ranking data
	bool FindNewRank = 3;               // whether to look up the latest rank
}

message ExtendIncData {
	int64 InitValue = 1;
	int64 IncreaseValue = 2;
}

// RankData: ranking data
message RankData {
	uint64 Key = 1;              // primary key of the data
	repeated int64 SortData = 2; // data taking part in ranking

	bytes Data = 3; // data not taking part in ranking

	repeated ExtendIncData ExData = 4; // extended increment data
}

// DeleteByKey: delete rank list data
message DeleteByKey {
	uint64 RankId = 1;           // rank list category id
	repeated uint64 KeyList = 2; // keys of the ranking data
}

// AddRankList: add rank lists
message AddRankList {
	repeated RankList AddList = 1; // rank lists to add
}

// FindRankDataByKey: look up ranking info
message FindRankDataByKey {
	uint64 RankId = 1; // rank list id
	uint64 Key = 2;    // ranking key
}

// FindRankDataByRank: look up ranking info
message FindRankDataByRank {
	uint64 RankId = 1; // rank list id
	uint64 Rank = 2;   // rank position
}

// FindRankDataList: look up ranking info
message FindRankDataList {
	uint64 RankId = 1;    // rank list id
	uint64 StartRank = 2; // start position, 0-based
	uint64 Count = 3;     // number of entries to query
	uint64 Key = 4;       // optionally also query the ranking info of this key
}

// RankDataList
message RankDataList {
	uint64 RankDataCount = 1;                 // rank list length
	repeated RankPosData RankPosDataList = 2; // ranking data
	RankPosData KeyRank = 3;                  // ranking result for the extra Key query
}

message RankInfo {
	uint64 Key = 1;
	uint64 Rank = 2;
}

// RankResult
message RankResult {
	int32 AddCount = 1;            // number added
	int32 ModifyCount = 2;         // number modified
	int32 RemoveCount = 3;         // number removed
	repeated RankInfo NewRank = 4; // new rank positions; only populated when UpsetRankData.FindNewRank is true
}
323 rpc/rclient.go Normal file
@@ -0,0 +1,323 @@
package rpc

import (
	"errors"
	"fmt"
	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/network"
	"math"
	"reflect"
	"runtime"
	"sync/atomic"
	"time"
)

// client for connections across nodes
type RClient struct {
	compressBytesLen int
	selfClient       *Client
	network.TCPClient
	conn *network.TCPConn
	TriggerRpcConnEvent
}

func (rc *RClient) IsConnected() bool {
	rc.Lock()
	defer rc.Unlock()

	return rc.conn != nil && rc.conn.IsConnected()
}

func (rc *RClient) GetConn() *network.TCPConn {
	rc.Lock()
	conn := rc.conn
	rc.Unlock()

	return conn
}

func (rc *RClient) SetConn(conn *network.TCPConn) {
	rc.Lock()
	rc.conn = conn
	rc.Unlock()
}

func (rc *RClient) Go(timeout time.Duration, rpcHandler IRpcHandler, noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
	_, processor := GetProcessorType(args)
	InParam, err := processor.Marshal(args)
	if err != nil {
		log.SError(err.Error())
		call := MakeCall()
		call.DoError(err)
		return call
	}

	return rc.RawGo(timeout, rpcHandler, processor, noReply, 0, serviceMethod, InParam, reply)
}

func (rc *RClient) RawGo(timeout time.Duration, rpcHandler IRpcHandler, processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, rawArgs []byte, reply interface{}) *Call {
	call := MakeCall()
	call.ServiceMethod = serviceMethod
	call.Reply = reply
	call.Seq = rc.selfClient.generateSeq()
	call.TimeOut = timeout

	request := MakeRpcRequest(processor, call.Seq, rpcMethodId, serviceMethod, noReply, rawArgs)
	bytes, err := processor.Marshal(request.RpcRequestData)
	ReleaseRpcRequest(request)

	if err != nil {
		call.Seq = 0
		log.SError(err.Error())
		call.DoError(err)
		return call
	}

	conn := rc.GetConn()
	if conn == nil || !conn.IsConnected() {
		call.Seq = 0
		sErr := errors.New(serviceMethod + " call failed, rpc client is disconnected")
		log.SError(sErr.Error())
		call.DoError(sErr)
		return call
	}

	var compressBuff []byte
	bCompress := uint8(0)
	if rc.compressBytesLen > 0 && len(bytes) >= rc.compressBytesLen {
		var cErr error
		compressBuff, cErr = compressor.CompressBlock(bytes)
		if cErr != nil {
			call.Seq = 0
			log.SError(cErr.Error())
			call.DoError(cErr)
			return call
		}
		if len(compressBuff) < len(bytes) {
			bytes = compressBuff
			bCompress = 1 << 7
		}
	}

	if !noReply {
		rc.selfClient.AddPending(call)
	}

	err = conn.WriteMsg([]byte{uint8(processor.GetProcessorType()) | bCompress}, bytes)
	if cap(compressBuff) > 0 {
		compressor.CompressBufferCollection(compressBuff)
	}
	if err != nil {
		rc.selfClient.RemovePending(call.Seq)

		log.SError(err.Error())

		call.Seq = 0
		call.DoError(err)
	}

	return call
}

func (rc *RClient) AsyncCall(timeout time.Duration, rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{}, cancelable bool) (CancelRpc, error) {
	cancelRpc, err := rc.asyncCall(timeout, rpcHandler, serviceMethod, callback, args, replyParam, cancelable)
	if err != nil {
		callback.Call([]reflect.Value{reflect.ValueOf(replyParam), reflect.ValueOf(err)})
	}

	return cancelRpc, nil
}

func (rc *RClient) asyncCall(timeout time.Duration, rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{}, cancelable bool) (CancelRpc, error) {
	processorType, processor := GetProcessorType(args)
	InParam, herr := processor.Marshal(args)
	if herr != nil {
		return emptyCancelRpc, herr
	}

	seq := rc.selfClient.generateSeq()
	request := MakeRpcRequest(processor, seq, 0, serviceMethod, false, InParam)
	bytes, err := processor.Marshal(request.RpcRequestData)
	ReleaseRpcRequest(request)
	if err != nil {
		return emptyCancelRpc, err
	}

	conn := rc.GetConn()
	if conn == nil || !conn.IsConnected() {
		return emptyCancelRpc, errors.New("rpc server is disconnected, call " + serviceMethod)
	}

	var compressBuff []byte
	bCompress := uint8(0)
	if rc.compressBytesLen > 0 && len(bytes) >= rc.compressBytesLen {
		var cErr error
		compressBuff, cErr = compressor.CompressBlock(bytes)
		if cErr != nil {
			return emptyCancelRpc, cErr
		}

		if len(compressBuff) < len(bytes) {
			bytes = compressBuff
			bCompress = 1 << 7
		}
	}

	call := MakeCall()
	call.Reply = replyParam
	call.callback = &callback
	call.rpcHandler = rpcHandler
	call.ServiceMethod = serviceMethod
	call.Seq = seq
	call.TimeOut = timeout
	rc.selfClient.AddPending(call)

	err = conn.WriteMsg([]byte{uint8(processorType) | bCompress}, bytes)
	if cap(compressBuff) > 0 {
		compressor.CompressBufferCollection(compressBuff)
	}
	if err != nil {
		rc.selfClient.RemovePending(call.Seq)
		ReleaseCall(call)
		return emptyCancelRpc, err
	}

	if cancelable {
		rpcCancel := RpcCancel{CallSeq: seq, Cli: rc.selfClient}
		return rpcCancel.CancelRpc, nil
	}

	return emptyCancelRpc, nil
}

func (rc *RClient) Run() {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 4096)
			l := runtime.Stack(buf, false)
			errString := fmt.Sprint(r)
			log.SError("core dump info[", errString, "]\n", string(buf[:l]))
		}
	}()

	rc.TriggerRpcConnEvent(true, rc.selfClient.GetClientId(), rc.selfClient.GetNodeId())
	for {
		bytes, err := rc.conn.ReadMsg()
		if err != nil {
			log.SError("rpcClient ", rc.Addr, " ReadMsg error:", err.Error())
			return
		}

		// the head byte packs the compress flag (bit 7) and the processor type (bits 0-6)
		bCompress := (bytes[0] >> 7) > 0
		processor := GetProcessor(bytes[0] & 0x7f)
		if processor == nil {
			rc.conn.ReleaseReadMsg(bytes)
			log.SError("rpcClient ", rc.Addr, " ReadMsg head error: unknown processor type")
			return
		}

		// 1. parse the head
		response := RpcResponse{}
		response.RpcResponseData = processor.MakeRpcResponse(0, "", nil)

		// uncompress if needed
		byteData := bytes[1:]
		var compressBuff []byte

		if bCompress {
			var unCompressErr error
			compressBuff, unCompressErr = compressor.UncompressBlock(byteData)
			if unCompressErr != nil {
				rc.conn.ReleaseReadMsg(bytes)
				log.SError("rpcClient ", rc.Addr, " UncompressBlock error:", unCompressErr.Error())
				return
			}
			byteData = compressBuff
		}

		err = processor.Unmarshal(byteData, response.RpcResponseData)
		if cap(compressBuff) > 0 {
			compressor.UnCompressBufferCollection(compressBuff)
		}

		rc.conn.ReleaseReadMsg(bytes)
		if err != nil {
			processor.ReleaseRpcResponse(response.RpcResponseData)
			log.SError("rpcClient Unmarshal head error:", err.Error())
			continue
		}

		v := rc.selfClient.RemovePending(response.RpcResponseData.GetSeq())
		if v == nil {
			log.SError("rpcClient cannot find seq ", response.RpcResponseData.GetSeq(), " in pending")
		} else {
			v.Err = nil
			if len(response.RpcResponseData.GetReply()) > 0 {
				err = processor.Unmarshal(response.RpcResponseData.GetReply(), v.Reply)
				if err != nil {
					log.SError("rpcClient Unmarshal body error:", err.Error())
					v.Err = err
				}
			}

			if response.RpcResponseData.GetErr() != nil {
				v.Err = response.RpcResponseData.GetErr()
			}

			if v.callback != nil && v.callback.IsValid() {
				v.rpcHandler.PushRpcResponse(v)
			} else {
				v.done <- v
			}
		}

		processor.ReleaseRpcResponse(response.RpcResponseData)
	}
}

func (rc *RClient) OnClose() {
	rc.TriggerRpcConnEvent(false, rc.selfClient.GetClientId(), rc.selfClient.GetNodeId())
}

func NewRClient(nodeId int, addr string, maxRpcParamLen uint32, compressBytesLen int, triggerRpcConnEvent TriggerRpcConnEvent) *Client {
	client := &Client{}
	client.clientId = atomic.AddUint32(&clientSeq, 1)
	client.nodeId = nodeId
	client.maxCheckCallRpcCount = DefaultMaxCheckCallRpcCount
	client.callRpcTimeout = DefaultRpcTimeout

	c := &RClient{}
	c.compressBytesLen = compressBytesLen
	c.selfClient = client
	c.Addr = addr
	c.ConnectInterval = DefaultConnectInterval
	c.PendingWriteNum = DefaultMaxPendingWriteNum
	c.AutoReconnect = true
	c.TriggerRpcConnEvent = triggerRpcConnEvent
	c.ConnNum = DefaultRpcConnNum
	c.LenMsgLen = DefaultRpcLenMsgLen
	c.MinMsgLen = DefaultRpcMinMsgLen
	c.ReadDeadline = Default_ReadWriteDeadline
	c.WriteDeadline = Default_ReadWriteDeadline
	c.LittleEndian = LittleEndian
	c.NewAgent = client.NewClientAgent

	if maxRpcParamLen > 0 {
		c.MaxMsgLen = maxRpcParamLen
	} else {
		c.MaxMsgLen = math.MaxUint32
	}
	client.IRealClient = c
	client.InitPending()
	go client.checkRpcCallTimeout()
	c.Start()
	return client
}

func (rc *RClient) Close(waitDone bool) {
	rc.TCPClient.Close(waitDone)
	rc.selfClient.cleanPending()
}
28
rpc/rpc.go
@@ -51,12 +51,6 @@ type IRpcResponseData interface {
	GetReply() []byte
}

type IRawInputArgs interface {
	GetRawData() []byte // get the raw data
	DoFree()            // processing finished, reclaim the memory
	DoEscape()          // escape; the GC reclaims it automatically
}

type RpcHandleFinder interface {
	FindRpcHandler(serviceMethod string) IRpcHandler
}
@@ -74,7 +68,16 @@ type Call struct {
	connId int
	callback *reflect.Value
	rpcHandler IRpcHandler
	callTime time.Time
	TimeOut time.Duration
}

type RpcCancel struct {
	Cli *Client
	CallSeq uint64
}

func (rc *RpcCancel) CancelRpc() {
	rc.Cli.RemovePending(rc.CallSeq)
}

func (slf *RpcRequest) Clear() *RpcRequest{
@@ -108,6 +111,15 @@ func (rpcResponse *RpcResponse) Clear() *RpcResponse{
	return rpcResponse
}

func (call *Call) DoError(err error) {
	call.Err = err
	call.done <- call
}

func (call *Call) DoOK() {
	call.done <- call
}

func (call *Call) Clear() *Call {
	call.Seq = 0
	call.ServiceMethod = ""
@@ -121,6 +133,8 @@ func (call *Call) Clear() *Call{
	call.connId = 0
	call.callback = nil
	call.rpcHandler = nil
	call.TimeOut = 0

	return call
}
@@ -6,10 +6,10 @@ import (
	"github.com/duanhf2012/origin/log"
	"reflect"
	"runtime"
	"strconv"
	"strings"
	"unicode"
	"unicode/utf8"
	"time"
)

const maxClusterNode int = 128
@@ -17,6 +17,7 @@ const maxClusterNode int = 128
type FuncRpcClient func(nodeId int, serviceMethod string, client []*Client) (error, int)
type FuncRpcServer func() *Server

var nilError = reflect.Zero(reflect.TypeOf((*error)(nil)).Elem())

type RpcError string
@@ -45,10 +46,7 @@ type RpcMethodInfo struct {
	rpcProcessorType RpcProcessorType
}

type RawRpcCallBack interface {
	Unmarshal(data []byte) (interface{}, error)
	CB(data interface{})
}
type RawRpcCallBack func(rawData []byte)

type IRpcHandlerChannel interface {
	PushRpcResponse(call *Call) error
@@ -67,12 +65,20 @@ type RpcHandler struct {
	pClientList []*Client
}

type TriggerRpcEvent func(bConnect bool, clientSeq uint32, nodeId int)
type IRpcListener interface {
type TriggerRpcConnEvent func(bConnect bool, clientSeq uint32, nodeId int)
type INodeListener interface {
	OnNodeConnected(nodeId int)
	OnNodeDisconnect(nodeId int)
}

type IDiscoveryServiceListener interface {
	OnDiscoveryService(nodeId int, serviceName []string)
	OnUnDiscoveryService(nodeId int, serviceName []string)
}

type CancelRpc func()

func emptyCancelRpc() {}
type IRpcHandler interface {
	IRpcHandlerChannel
	GetName() string
@@ -80,17 +86,25 @@ type IRpcHandler interface {
	GetRpcHandler() IRpcHandler
	HandlerRpcRequest(request *RpcRequest)
	HandlerRpcResponseCB(call *Call)
	CallMethod(ServiceMethod string, param interface{}, reply interface{}) error
	AsyncCall(serviceMethod string, args interface{}, callback interface{}) error
	CallMethod(client *Client, ServiceMethod string, param interface{}, callBack reflect.Value, reply interface{}) error

	Call(serviceMethod string, args interface{}, reply interface{}) error
	Go(serviceMethod string, args interface{}) error
	AsyncCallNode(nodeId int, serviceMethod string, args interface{}, callback interface{}) error
	CallNode(nodeId int, serviceMethod string, args interface{}, reply interface{}) error
	AsyncCall(serviceMethod string, args interface{}, callback interface{}) error
	AsyncCallNode(nodeId int, serviceMethod string, args interface{}, callback interface{}) error

	CallWithTimeout(timeout time.Duration, serviceMethod string, args interface{}, reply interface{}) error
	CallNodeWithTimeout(timeout time.Duration, nodeId int, serviceMethod string, args interface{}, reply interface{}) error
	AsyncCallWithTimeout(timeout time.Duration, serviceMethod string, args interface{}, callback interface{}) (CancelRpc, error)
	AsyncCallNodeWithTimeout(timeout time.Duration, nodeId int, serviceMethod string, args interface{}, callback interface{}) (CancelRpc, error)

	Go(serviceMethod string, args interface{}) error
	GoNode(nodeId int, serviceMethod string, args interface{}) error
	RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs IRawInputArgs) error
	RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs []byte) error
	CastGo(serviceMethod string, args interface{}) error
	IsSingleCoroutine() bool
	UnmarshalInParam(rpcProcessor IRpcProcessor, serviceMethod string, rawRpcMethodId uint32, inParam []byte) (interface{}, error)
	GetRpcServer() FuncRpcServer
}

func reqHandlerNull(Returns interface{}, Err RpcError) {
@@ -135,7 +149,7 @@ func (handler *RpcHandler) isExportedOrBuiltinType(t reflect.Type) bool {

func (handler *RpcHandler) suitableMethods(method reflect.Method) error {
	// only methods whose names start with RPC_ can be called
	if strings.Index(method.Name, "RPC_") != 0 {
	if strings.Index(method.Name, "RPC_") != 0 && strings.Index(method.Name, "RPC") != 0 {
		return nil
	}
@@ -239,8 +253,13 @@ func (handler *RpcHandler) HandlerRpcRequest(request *RpcRequest) {
		log.SError("RpcHandler cannot find request rpc id", rawRpcId)
		return
	}
	rawData, ok := request.inParam.([]byte)
	if ok == false {
		log.SError("RpcHandler "+handler.rpcHandler.GetName(), " cannot convert in param to []byte", rawRpcId)
		return
	}

	v.CB(request.inParam)
	v(rawData)
	return
}
@@ -283,18 +302,20 @@ func (handler *RpcHandler) HandlerRpcRequest(request *RpcRequest) {
		request.requestHandle(nil, RpcError(rErr))
		return
	}

	requestHanle := request.requestHandle
	returnValues := v.method.Func.Call(paramList)
	errInter := returnValues[0].Interface()
	if errInter != nil {
		err = errInter.(error)
	}

	if request.requestHandle != nil && v.hasResponder == false {
		request.requestHandle(oParam.Interface(), ConvertError(err))
	if v.hasResponder == false && requestHanle != nil {
		requestHanle(oParam.Interface(), ConvertError(err))
	}
}
func (handler *RpcHandler) CallMethod(ServiceMethod string, param interface{}, reply interface{}) error {
func (handler *RpcHandler) CallMethod(client *Client, ServiceMethod string, param interface{}, callBack reflect.Value, reply interface{}) error {
	var err error
	v, ok := handler.mapFunctions[ServiceMethod]
	if ok == false {
@@ -304,14 +325,102 @@ func (handler *RpcHandler) CallMethod(ServiceMethod string, param interface{}, r
	}

	var paramList []reflect.Value
	paramList = append(paramList, reflect.ValueOf(handler.GetRpcHandler())) // receiver
	paramList = append(paramList, reflect.ValueOf(param))
	paramList = append(paramList, reflect.ValueOf(reply)) // output parameter
	var returnValues []reflect.Value
	var pCall *Call
	var callSeq uint64
	if v.hasResponder == true {
		paramList = append(paramList, reflect.ValueOf(handler.GetRpcHandler())) // receiver
		pCall = MakeCall()
		pCall.callback = &callBack
		pCall.Seq = client.generateSeq()
		callSeq = pCall.Seq
		pCall.TimeOut = DefaultRpcTimeout
		pCall.ServiceMethod = ServiceMethod
		client.AddPending(pCall)

	returnValues := v.method.Func.Call(paramList)
	errInter := returnValues[0].Interface()
	if errInter != nil {
		err = errInter.(error)
		// when there is a return value
		if reply != nil {
			// the synchronous Call path
			hander := func(Returns interface{}, Err RpcError) {
				rpcCall := client.RemovePending(callSeq)
				if rpcCall == nil {
					log.SError("cannot find call seq ", callSeq)
					return
				}

				// decode the data
				if len(Err) != 0 {
					rpcCall.Err = Err
				} else if Returns != nil {
					_, processor := GetProcessorType(Returns)
					var bytes []byte
					bytes, rpcCall.Err = processor.Marshal(Returns)
					if rpcCall.Err == nil {
						rpcCall.Err = processor.Unmarshal(bytes, reply)
					}
				}

				// if the call cannot be found, it has already timed out
				rpcCall.Reply = reply
				rpcCall.done <- rpcCall
			}
			paramList = append(paramList, reflect.ValueOf(hander))
		} else { // no return value: use the empty requestHandlerNull callback
			paramList = append(paramList, callBack)
		}
		paramList = append(paramList, reflect.ValueOf(param))

		// invoke the rpc function
		returnValues = v.method.Func.Call(paramList)

		// if the return value carries an error, invoke the callback with it
		errInter := returnValues[0].Interface()
		if errInter != nil && callBack != requestHandlerNull {
			err = errInter.(error)
			callBack.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
		}
	} else {
		paramList = append(paramList, reflect.ValueOf(handler.GetRpcHandler())) // receiver
		paramList = append(paramList, reflect.ValueOf(param))

		// when the called RPC function has a return value
		if v.outParamValue.IsValid() {
			// RPC function without a reply parameter
			if reply == nil {
				paramList = append(paramList, reflect.New(v.outParamValue.Type().Elem()))
			} else {
				// RPC function with a reply parameter
				paramList = append(paramList, reflect.ValueOf(reply)) // output parameter
			}
		}

		returnValues = v.method.Func.Call(paramList)
		errInter := returnValues[0].Interface()

		// when a real callback was supplied
		if callBack != requestHandlerNull {
			valErr := nilError
			if errInter != nil {
				err = errInter.(error)
				valErr = reflect.ValueOf(err)
			}

			callBack.Call([]reflect.Value{reflect.ValueOf(reply), valErr})
		}
	}

	rpcCall := client.FindPending(callSeq)
	if rpcCall != nil {
		err = rpcCall.Done().Err
		if rpcCall.callback != nil {
			valErr := nilError
			if rpcCall.Err != nil {
				valErr = reflect.ValueOf(rpcCall.Err)
			}
			rpcCall.callback.Call([]reflect.Value{reflect.ValueOf(rpcCall.Reply), valErr})
		}
		client.RemovePending(rpcCall.Seq)
		ReleaseCall(rpcCall)
	}

	return err
@@ -335,36 +444,8 @@ func (handler *RpcHandler) goRpc(processor IRpcProcessor, bCast bool, nodeId int
	}

	// 2. invoke via rpcClient
	// if the call targets a service on this node
	for i := 0; i < count; i++ {
		if pClientList[i].bSelfNode == true {
			pLocalRpcServer := handler.funcRpcServer()
			// check whether it is the same service
			findIndex := strings.Index(serviceMethod, ".")
			if findIndex == -1 {
				sErr := errors.New("Call serviceMethod " + serviceMethod + " is error!")
				log.SError(sErr.Error())
				err = sErr

				continue
			}
			serviceName := serviceMethod[:findIndex]
			if serviceName == handler.rpcHandler.GetName() { // calling our own service
				// dispatch through our own rpcHandler
				return pLocalRpcServer.myselfRpcHandlerGo(serviceName, serviceMethod, args, nil)
			}
			// dispatch through another rpcHandler
			pCall := pLocalRpcServer.selfNodeRpcHandlerGo(processor, pClientList[i], true, serviceName, 0, serviceMethod, args, nil, nil)
			if pCall.Err != nil {
				err = pCall.Err
			}
			pClientList[i].RemovePending(pCall.Seq)
			ReleaseCall(pCall)
			continue
		}

		// cross-node call
		pCall := pClientList[i].Go(true, serviceMethod, args, nil)
		pCall := pClientList[i].Go(DefaultRpcTimeout, handler.rpcHandler, true, serviceMethod, args, nil)
		if pCall.Err != nil {
			err = pCall.Err
		}
@@ -375,7 +456,7 @@ func (handler *RpcHandler) goRpc(processor IRpcProcessor, bCast bool, nodeId int
	return err
}

func (handler *RpcHandler) callRpc(nodeId int, serviceMethod string, args interface{}, reply interface{}) error {
func (handler *RpcHandler) callRpc(timeout time.Duration, nodeId int, serviceMethod string, args interface{}, reply interface{}) error {
	var pClientList [maxClusterNode]*Client
	err, count := handler.funcRpcClient(nodeId, serviceMethod, pClientList[:])
	if err != nil {
@@ -390,122 +471,61 @@ func (handler *RpcHandler) callRpc(nodeId int, serviceMethod string, args interf
		return errors.New("cannot call more then 1 node")
	}

	// 2. invoke via rpcClient
	// if the call targets a service on this node
	pClient := pClientList[0]
	if pClient.bSelfNode == true {
		pLocalRpcServer := handler.funcRpcServer()
		// check whether it is the same service
		findIndex := strings.Index(serviceMethod, ".")
		if findIndex == -1 {
			err := errors.New("Call serviceMethod " + serviceMethod + "is error!")
			log.SError(err.Error())
			return err
		}
		serviceName := serviceMethod[:findIndex]
		if serviceName == handler.rpcHandler.GetName() { // calling our own service
			// dispatch through our own rpcHandler
			return pLocalRpcServer.myselfRpcHandlerGo(serviceName, serviceMethod, args, reply)
		}
		// dispatch through another rpcHandler
		pCall := pLocalRpcServer.selfNodeRpcHandlerGo(nil, pClient, false, serviceName, 0, serviceMethod, args, reply, nil)
		err = pCall.Done().Err
		pClient.RemovePending(pCall.Seq)
		ReleaseCall(pCall)
		return err
	}
	pCall := pClient.Go(timeout, handler.rpcHandler, false, serviceMethod, args, reply)

	// cross-node call
	pCall := pClient.Go(false, serviceMethod, args, reply)
	if pCall.Err != nil {
		err = pCall.Err
		ReleaseCall(pCall)
		return err
	}
	err = pCall.Done().Err
	pClient.RemovePending(pCall.Seq)
	ReleaseCall(pCall)
	return err
}
func (handler *RpcHandler) asyncCallRpc(nodeId int, serviceMethod string, args interface{}, callback interface{}) error {
func (handler *RpcHandler) asyncCallRpc(timeout time.Duration, nodeId int, serviceMethod string, args interface{}, callback interface{}) (CancelRpc, error) {
	fVal := reflect.ValueOf(callback)
	if fVal.Kind() != reflect.Func {
		err := errors.New("call " + serviceMethod + " input callback param is error!")
		log.SError(err.Error())
		return err
		return emptyCancelRpc, err
	}

	if fVal.Type().NumIn() != 2 {
		err := errors.New("call " + serviceMethod + " callback param function is error!")
		log.SError(err.Error())
		return err
		return emptyCancelRpc, err
	}

	if fVal.Type().In(0).Kind() != reflect.Ptr || fVal.Type().In(1).String() != "error" {
		err := errors.New("call " + serviceMethod + " callback param function is error!")
		log.SError(err.Error())
		return err
		return emptyCancelRpc, err
	}

	reply := reflect.New(fVal.Type().In(0).Elem()).Interface()
	var pClientList [maxClusterNode]*Client
	var pClientList [2]*Client
	err, count := handler.funcRpcClient(nodeId, serviceMethod, pClientList[:])
	if count == 0 || err != nil {
		strNodeId := strconv.Itoa(nodeId)
		if err == nil {
			err = errors.New("cannot find rpcClient from nodeId " + strNodeId + " " + serviceMethod)
			if nodeId > 0 {
				err = fmt.Errorf("cannot find %s from nodeId %d", serviceMethod, nodeId)
			} else {
				err = fmt.Errorf("No %s service found in the origin network", serviceMethod)
			}
		}
		fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
		log.SError("Call serviceMethod is error:", err.Error())
		return nil
		return emptyCancelRpc, nil
	}

	if count > 1 {
		err := errors.New("cannot call more then 1 node")
		fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
		log.SError(err.Error())
		return nil
		return emptyCancelRpc, nil
	}

	// 2. invoke via rpcClient
	// if the call targets a service on this node
	pClient := pClientList[0]
	if pClient.bSelfNode == true {
		pLocalRpcServer := handler.funcRpcServer()
		// check whether it is the same service
		findIndex := strings.Index(serviceMethod, ".")
		if findIndex == -1 {
			err := errors.New("Call serviceMethod " + serviceMethod + " is error!")
			fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
			log.SError(err.Error())
			return nil
		}
		serviceName := serviceMethod[:findIndex]
		// dispatch through our own rpcHandler
		if serviceName == handler.rpcHandler.GetName() { // calling our own service
			err := pLocalRpcServer.myselfRpcHandlerGo(serviceName, serviceMethod, args, reply)
			if err == nil {
				fVal.Call([]reflect.Value{reflect.ValueOf(reply), nilError})
			} else {
				fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
			}
		}

		// dispatch through another rpcHandler
		err = pLocalRpcServer.selfNodeRpcHandlerAsyncGo(pClient, handler, false, serviceName, serviceMethod, args, reply, fVal)
		if err != nil {
			fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
		}
		return nil
	}

	// cross-node call
	err = pClient.AsyncCall(handler, serviceMethod, fVal, args, reply)
	if err != nil {
		fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
	}
	return nil
	return pClientList[0].AsyncCall(timeout, handler.rpcHandler, serviceMethod, fVal, args, reply, false)
}
func (handler *RpcHandler) GetName() string {
@@ -516,12 +536,29 @@ func (handler *RpcHandler) IsSingleCoroutine() bool {
	return handler.rpcHandler.IsSingleCoroutine()
}

func (handler *RpcHandler) CallWithTimeout(timeout time.Duration, serviceMethod string, args interface{}, reply interface{}) error {
	return handler.callRpc(timeout, 0, serviceMethod, args, reply)
}

func (handler *RpcHandler) CallNodeWithTimeout(timeout time.Duration, nodeId int, serviceMethod string, args interface{}, reply interface{}) error {
	return handler.callRpc(timeout, nodeId, serviceMethod, args, reply)
}

func (handler *RpcHandler) AsyncCallWithTimeout(timeout time.Duration, serviceMethod string, args interface{}, callback interface{}) (CancelRpc, error) {
	return handler.asyncCallRpc(timeout, 0, serviceMethod, args, callback)
}

func (handler *RpcHandler) AsyncCallNodeWithTimeout(timeout time.Duration, nodeId int, serviceMethod string, args interface{}, callback interface{}) (CancelRpc, error) {
	return handler.asyncCallRpc(timeout, nodeId, serviceMethod, args, callback)
}

func (handler *RpcHandler) AsyncCall(serviceMethod string, args interface{}, callback interface{}) error {
	return handler.asyncCallRpc(0, serviceMethod, args, callback)
	_, err := handler.asyncCallRpc(DefaultRpcTimeout, 0, serviceMethod, args, callback)
	return err
}

func (handler *RpcHandler) Call(serviceMethod string, args interface{}, reply interface{}) error {
	return handler.callRpc(0, serviceMethod, args, reply)
	return handler.callRpc(DefaultRpcTimeout, 0, serviceMethod, args, reply)
}

func (handler *RpcHandler) Go(serviceMethod string, args interface{}) error {
@@ -529,11 +566,13 @@ func (handler *RpcHandler) Go(serviceMethod string, args interface{}) error {
}

func (handler *RpcHandler) AsyncCallNode(nodeId int, serviceMethod string, args interface{}, callback interface{}) error {
	return handler.asyncCallRpc(nodeId, serviceMethod, args, callback)
	_, err := handler.asyncCallRpc(DefaultRpcTimeout, nodeId, serviceMethod, args, callback)

	return err
}

func (handler *RpcHandler) CallNode(nodeId int, serviceMethod string, args interface{}, reply interface{}) error {
	return handler.callRpc(nodeId, serviceMethod, args, reply)
	return handler.callRpc(DefaultRpcTimeout, nodeId, serviceMethod, args, reply)
}

func (handler *RpcHandler) GoNode(nodeId int, serviceMethod string, args interface{}) error {
@@ -544,16 +583,14 @@ func (handler *RpcHandler) CastGo(serviceMethod string, args interface{}) error
	return handler.goRpc(nil, true, 0, serviceMethod, args)
}

func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs IRawInputArgs) error {
func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs []byte) error {
	processor := GetProcessor(uint8(rpcProcessorType))
	err, count := handler.funcRpcClient(nodeId, serviceName, handler.pClientList)
	if count == 0 || err != nil {
		//args.DoGc()
		log.SError("Call serviceMethod is error:", err.Error())
		return err
	}
	if count > 1 {
		//args.DoGc()
		err := errors.New("cannot call more then 1 node")
		log.SError(err.Error())
		return err
@@ -562,32 +599,12 @@ func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId i
	// 2. invoke via rpcClient
	// if the call targets a service on this node
	for i := 0; i < count; i++ {
		if handler.pClientList[i].bSelfNode == true {
			pLocalRpcServer := handler.funcRpcServer()
			// dispatch through our own rpcHandler
			if serviceName == handler.rpcHandler.GetName() { // calling our own service
				err := pLocalRpcServer.myselfRpcHandlerGo(serviceName, serviceName, rawArgs.GetRawData(), nil)
				//args.DoGc()
				return err
			}

			// dispatch through another rpcHandler
			pCall := pLocalRpcServer.selfNodeRpcHandlerGo(processor, handler.pClientList[i], true, serviceName, rpcMethodId, serviceName, nil, nil, rawArgs.GetRawData())
			rawArgs.DoEscape()
			if pCall.Err != nil {
				err = pCall.Err
			}
			handler.pClientList[i].RemovePending(pCall.Seq)
			ReleaseCall(pCall)
			continue
		}

		// cross-node call
		pCall := handler.pClientList[i].RawGo(processor, true, rpcMethodId, serviceName, rawArgs.GetRawData(), nil)
		rawArgs.DoFree()
		pCall := handler.pClientList[i].RawGo(DefaultRpcTimeout, handler.rpcHandler, processor, true, rpcMethodId, serviceName, rawArgs, nil)
		if pCall.Err != nil {
			err = pCall.Err
		}

		handler.pClientList[i].RemovePending(pCall.Seq)
		ReleaseCall(pCall)
	}
@@ -601,23 +618,7 @@ func (handler *RpcHandler) RegRawRpc(rpcMethodId uint32, rawRpcCB RawRpcCallBack

func (handler *RpcHandler) UnmarshalInParam(rpcProcessor IRpcProcessor, serviceMethod string, rawRpcMethodId uint32, inParam []byte) (interface{}, error) {
	if rawRpcMethodId > 0 {
		v, ok := handler.mapRawFunctions[rawRpcMethodId]
		if ok == false {
			strRawRpcMethodId := strconv.FormatUint(uint64(rawRpcMethodId), 10)
			err := errors.New("RpcHandler cannot find request rpc id " + strRawRpcMethodId)
			log.SError(err.Error())
			return nil, err
		}

		msg, err := v.Unmarshal(inParam)
		if err != nil {
			strRawRpcMethodId := strconv.FormatUint(uint64(rawRpcMethodId), 10)
			err := errors.New("RpcHandler cannot Unmarshal rpc id " + strRawRpcMethodId)
			log.SError(err.Error())
			return nil, err
		}

		return msg, err
		return inParam, nil
	}

	v, ok := handler.mapFunctions[serviceMethod]
@@ -630,3 +631,8 @@ func (handler *RpcHandler) UnmarshalInParam(rpcProcessor IRpcProcessor, serviceM
	err = rpcProcessor.Unmarshal(inParam, param)
	return param, err
}

func (handler *RpcHandler) GetRpcServer() FuncRpcServer {
	return handler.funcRpcServer
}
89
rpc/rpctimer.go
Normal file
@@ -0,0 +1,89 @@
package rpc

import (
	"container/heap"
	"time"
)

type CallTimer struct {
	SeqId uint64
	FireTime int64
}

type CallTimerHeap struct {
	callTimer []CallTimer
	mapSeqIndex map[uint64]int
}

func (h *CallTimerHeap) Init() {
	h.mapSeqIndex = make(map[uint64]int, 4096)
	h.callTimer = make([]CallTimer, 0, 4096)
}

func (h *CallTimerHeap) Len() int {
	return len(h.callTimer)
}

func (h *CallTimerHeap) Less(i, j int) bool {
	return h.callTimer[i].FireTime < h.callTimer[j].FireTime
}

func (h *CallTimerHeap) Swap(i, j int) {
	h.callTimer[i], h.callTimer[j] = h.callTimer[j], h.callTimer[i]
	h.mapSeqIndex[h.callTimer[i].SeqId] = i
	h.mapSeqIndex[h.callTimer[j].SeqId] = j
}

func (h *CallTimerHeap) Push(t any) {
	callTimer := t.(CallTimer)
	h.mapSeqIndex[callTimer.SeqId] = len(h.callTimer)
	h.callTimer = append(h.callTimer, callTimer)
}

func (h *CallTimerHeap) Pop() any {
	l := len(h.callTimer)
	seqId := h.callTimer[l-1].SeqId

	h.callTimer = h.callTimer[:l-1]
	delete(h.mapSeqIndex, seqId)
	return seqId
}

func (h *CallTimerHeap) Cancel(seq uint64) bool {
	index, ok := h.mapSeqIndex[seq]
	if ok == false {
		return false
	}

	heap.Remove(h, index)
	return true
}

func (h *CallTimerHeap) AddTimer(seqId uint64, d time.Duration) {
	heap.Push(h, CallTimer{
		SeqId: seqId,
		FireTime: time.Now().Add(d).UnixNano(),
	})
}

func (h *CallTimerHeap) PopTimeout() uint64 {
	if h.Len() == 0 {
		return 0
	}

	nextFireTime := h.callTimer[0].FireTime
	if nextFireTime > time.Now().UnixNano() {
		return 0
	}

	return heap.Pop(h).(uint64)
}

func (h *CallTimerHeap) PopFirst() uint64 {
	if h.Len() == 0 {
		return 0
	}

	return heap.Pop(h).(uint64)
}
179
rpc/server.go
@@ -9,6 +9,7 @@ import (
	"net"
	"reflect"
	"strings"
	"time"
)

type RpcProcessorType uint8
@@ -18,7 +19,6 @@ const (
	RpcProcessorGoGoPB RpcProcessorType = 1
)

//var processor IRpcProcessor = &JsonProcessor{}
var arrayProcessor = []IRpcProcessor{&JsonProcessor{}, &GoGoPBProcessor{}}
var arrayProcessorLen uint8 = 2
var LittleEndian bool
@@ -27,6 +27,8 @@ type Server struct {
	functions map[interface{}]interface{}
	rpcHandleFinder RpcHandleFinder
	rpcServer *network.TCPServer

	compressBytesLen int
}

type RpcAgent struct {
@@ -62,25 +64,31 @@ func (server *Server) Init(rpcHandleFinder RpcHandleFinder) {
	server.rpcServer = &network.TCPServer{}
}

func (server *Server) Start(listenAddr string, maxRpcParamLen uint32) {
const Default_ReadWriteDeadline = 15 * time.Second

func (server *Server) Start(listenAddr string, maxRpcParamLen uint32, compressBytesLen int) {
	splitAddr := strings.Split(listenAddr, ":")
	if len(splitAddr) != 2 {
		log.SFatal("listen addr is error :", listenAddr)
	}

	server.rpcServer.Addr = ":" + splitAddr[1]
	server.rpcServer.LenMsgLen = 4 //uint16
	server.rpcServer.MinMsgLen = 2
	server.compressBytesLen = compressBytesLen
	if maxRpcParamLen > 0 {
		server.rpcServer.MaxMsgLen = maxRpcParamLen
	} else {
		server.rpcServer.MaxMsgLen = math.MaxUint32
	}

	server.rpcServer.MaxConnNum = 10000
	server.rpcServer.MaxConnNum = 100000
	server.rpcServer.PendingWriteNum = 2000000
	server.rpcServer.NewAgent = server.NewAgent
	server.rpcServer.LittleEndian = LittleEndian
	server.rpcServer.WriteDeadline = Default_ReadWriteDeadline
	server.rpcServer.ReadDeadline = Default_ReadWriteDeadline
	server.rpcServer.LenMsgLen = DefaultRpcLenMsgLen

	server.rpcServer.Start()
}
@@ -107,7 +115,26 @@ func (agent *RpcAgent) WriteResponse(processor IRpcProcessor, serviceMethod stri
		return
	}

	errM = agent.conn.WriteMsg([]byte{uint8(processor.GetProcessorType())}, bytes)
	var compressBuff []byte
	bCompress := uint8(0)
	if agent.rpcServer.compressBytesLen > 0 && len(bytes) >= agent.rpcServer.compressBytesLen {
		var cErr error

		compressBuff, cErr = compressor.CompressBlock(bytes)
		if cErr != nil {
			log.SError("service method ", serviceMethod, " CompressBlock error:", cErr.Error())
			return
		}
		if len(compressBuff) < len(bytes) {
			bytes = compressBuff
			bCompress = 1 << 7
		}
	}

	errM = agent.conn.WriteMsg([]byte{uint8(processor.GetProcessorType()) | bCompress}, bytes)
	if cap(compressBuff) > 0 {
		compressor.CompressBufferCollection(compressBuff)
	}
	if errM != nil {
		log.SError("Rpc ", serviceMethod, " return is error:", errM.Error())
	}
@@ -122,16 +149,34 @@ func (agent *RpcAgent) Run() {
			break
		}

		processor := GetProcessor(data[0])
		bCompress := (data[0] >> 7) > 0
		processor := GetProcessor(data[0] & 0x7f)
		if processor == nil {
			agent.conn.ReleaseReadMsg(data)
			log.SError("remote rpc ", agent.conn.RemoteAddr(), " cannot find processor:", data[0])
			log.SError("remote rpc ", agent.conn.RemoteAddr().String(), " cannot find processor:", data[0])
			return
		}

		// parse the head
		var compressBuff []byte
		byteData := data[1:]
		if bCompress == true {
			var unCompressErr error

			compressBuff, unCompressErr = compressor.UncompressBlock(byteData)
			if unCompressErr != nil {
				agent.conn.ReleaseReadMsg(data)
				log.SError("rpcClient ", agent.conn.RemoteAddr().String(), " ReadMsg head error:", unCompressErr.Error())
				return
			}
			byteData = compressBuff
		}

		req := MakeRpcRequest(processor, 0, 0, "", false, nil)
		err = processor.Unmarshal(data[1:], req.RpcRequestData)
		err = processor.Unmarshal(byteData, req.RpcRequestData)
		if cap(compressBuff) > 0 {
			compressor.UnCompressBufferCollection(compressBuff)
		}
		agent.conn.ReleaseReadMsg(data)
		if err != nil {
			log.SError("rpc Unmarshal request is error:", err.Error())
@@ -143,7 +188,6 @@ func (agent *RpcAgent) Run() {
			ReleaseRpcRequest(req)
			continue
		} else {
			//will close tcpconn
			ReleaseRpcRequest(req)
			break
		}
@@ -233,140 +277,175 @@ func (server *Server) NewAgent(c *network.TCPConn) network.Agent {
	return agent
}

func (server *Server) myselfRpcHandlerGo(handlerName string, serviceMethod string, args interface{}, reply interface{}) error {
func (server *Server) myselfRpcHandlerGo(client *Client,handlerName string, serviceMethod string, args interface{},callBack reflect.Value, reply interface{}) error {
	rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
	if rpcHandler == nil {
		err := errors.New("service method " + serviceMethod + " not config!")
		log.SError(err.Error())
		return err
	}

	return rpcHandler.CallMethod(serviceMethod, args, reply)

	return rpcHandler.CallMethod(client,serviceMethod, args,callBack, reply)
}

func (server *Server) selfNodeRpcHandlerGo(processor IRpcProcessor, client *Client, noReply bool, handlerName string, rpcMethodId uint32, serviceMethod string, args interface{}, reply interface{}, rawArgs []byte) *Call {
func (server *Server) selfNodeRpcHandlerGo(timeout time.Duration,processor IRpcProcessor, client *Client, noReply bool, handlerName string, rpcMethodId uint32, serviceMethod string, args interface{}, reply interface{}, rawArgs []byte) *Call {
	pCall := MakeCall()
	pCall.Seq = client.generateSeq()
	pCall.TimeOut = timeout
	pCall.ServiceMethod = serviceMethod

	rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
	if rpcHandler == nil {
		err := errors.New("service method " + serviceMethod + " not config!")
		log.SError(err.Error())
		pCall.Seq = 0
		pCall.Err = errors.New("service method " + serviceMethod + " not config!")
		log.SError(pCall.Err.Error())
		pCall.done <- pCall
		pCall.DoError(err)

		return pCall
	}

	var iParam interface{}
	if processor == nil {
		_, processor = GetProcessorType(args)
	}

	if args != nil {
		var err error
		iParam,err = processor.Clone(args)
		if err != nil {
			sErr := errors.New("RpcHandler " + handlerName + "."+serviceMethod+" deep copy inParam is error:" + err.Error())
			log.SError(sErr.Error())
			pCall.Seq = 0
			pCall.DoError(sErr)

			return pCall
		}
	}

	req := MakeRpcRequest(processor, 0, rpcMethodId, serviceMethod, noReply, nil)
	req.inParam = args
	req.inParam = iParam
	req.localReply = reply
	if rawArgs != nil {
		var err error
		req.inParam, err = rpcHandler.UnmarshalInParam(processor, serviceMethod, rpcMethodId, rawArgs)
		if err != nil {
			log.SError(err.Error())
			pCall.Seq = 0
			pCall.DoError(err)
			ReleaseRpcRequest(req)
			pCall.Err = err
			pCall.done <- pCall
			return pCall
		}
	}

	if noReply == false {
		client.AddPending(pCall)
		callSeq := pCall.Seq
		req.requestHandle = func(Returns interface{}, Err RpcError) {
			if reply != nil && Returns != reply && Returns != nil {
				byteReturns, err := req.rpcProcessor.Marshal(Returns)
				if err != nil {
					log.SError("returns data cannot be marshal ", pCall.Seq)
					ReleaseRpcRequest(req)
				}

				err = req.rpcProcessor.Unmarshal(byteReturns, reply)
				if err != nil {
					log.SError("returns data cannot be Unmarshal ", pCall.Seq)
					ReleaseRpcRequest(req)
					Err = ConvertError(err)
					log.SError("returns data cannot be marshal,callSeq is ", callSeq," error is ",err.Error())
				}else{
					err = req.rpcProcessor.Unmarshal(byteReturns, reply)
					if err != nil {
						Err = ConvertError(err)
						log.SError("returns data cannot be Unmarshal,callSeq is ", callSeq," error is ",err.Error())
					}
				}
			}

			v := client.RemovePending(pCall.Seq)
			ReleaseRpcRequest(req)
			v := client.RemovePending(callSeq)
			if v == nil {
				log.SError("rpcClient cannot find seq ", pCall.Seq, " in pending")
				ReleaseRpcRequest(req)
				log.SError("rpcClient cannot find seq ",callSeq, " in pending")

				return
			}

			if len(Err) == 0 {
				pCall.Err = nil
				v.Err = nil
				v.DoOK()
			} else {
				pCall.Err = Err
				log.SError(Err.Error())
				v.DoError(Err)
			}
			pCall.done <- pCall
			ReleaseRpcRequest(req)
		}
	}

	err := rpcHandler.PushRpcRequest(req)
	if err != nil {
		log.SError(err.Error())
		pCall.DoError(err)
		ReleaseRpcRequest(req)
		pCall.Err = err
		pCall.done <- pCall
	}

	return pCall
}

func (server *Server) selfNodeRpcHandlerAsyncGo(client *Client, callerRpcHandler IRpcHandler, noReply bool, handlerName string, serviceMethod string, args interface{}, reply interface{}, callback reflect.Value) error {
func (server *Server) selfNodeRpcHandlerAsyncGo(timeout time.Duration,client *Client, callerRpcHandler IRpcHandler, noReply bool, handlerName string, serviceMethod string, args interface{}, reply interface{}, callback reflect.Value,cancelable bool) (CancelRpc,error) {
	rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
	if rpcHandler == nil {
		err := errors.New("service method " + serviceMethod + " not config!")
		log.SError(err.Error())
		return err
		return emptyCancelRpc,err
	}

	_, processor := GetProcessorType(args)
	iParam,err := processor.Clone(args)
	if err != nil {
		errM := errors.New("RpcHandler " + handlerName + "."+serviceMethod+" deep copy inParam is error:" + err.Error())
		log.SError(errM.Error())
		return emptyCancelRpc,errM
	}

	req := MakeRpcRequest(processor, 0, 0, serviceMethod, noReply, nil)
	req.inParam = args
	req.inParam = iParam
	req.localReply = reply

	cancelRpc := emptyCancelRpc
	var callSeq uint64
	if noReply == false {
		callSeq := client.generateSeq()
		callSeq = client.generateSeq()
		pCall := MakeCall()
		pCall.Seq = callSeq
		pCall.rpcHandler = callerRpcHandler
		pCall.callback = &callback
		pCall.Reply = reply

		pCall.ServiceMethod = serviceMethod
		pCall.TimeOut = timeout
		client.AddPending(pCall)
		rpcCancel := RpcCancel{CallSeq: callSeq,Cli: client}
		cancelRpc = rpcCancel.CancelRpc

		req.requestHandle = func(Returns interface{}, Err RpcError) {
			v := client.RemovePending(callSeq)
			if v == nil {
				log.SError("rpcClient cannot find seq ", pCall.Seq, " in pending")
				//ReleaseCall(pCall)
				ReleaseRpcRequest(req)
				return
			}
			if len(Err) == 0 {
				pCall.Err = nil
				v.Err = nil
			} else {
				pCall.Err = Err
				v.Err = Err
			}

			if Returns != nil {
				pCall.Reply = Returns
				v.Reply = Returns
			}
			pCall.rpcHandler.PushRpcResponse(pCall)
			v.rpcHandler.PushRpcResponse(v)
			ReleaseRpcRequest(req)
		}
	}

	err := rpcHandler.PushRpcRequest(req)
	err = rpcHandler.PushRpcRequest(req)
	if err != nil {
		ReleaseRpcRequest(req)
		return err
		if callSeq > 0 {
			client.RemovePending(callSeq)
		}
		return emptyCancelRpc,err
	}

	return nil
	return cancelRpc,nil
}

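The new cancelable path above registers the call in a pending map keyed by `callSeq` and hands back a cancel closure that removes it before the response handler runs. A simplified, self-contained sketch of that pattern (types and names here are illustrative stand-ins, not the repo's `Call`/`Client` types):

```go
package main

import (
	"fmt"
	"sync"
)

// pendingCalls tracks in-flight RPC calls by sequence number, in the
// spirit of the client.AddPending / client.RemovePending pair above.
type pendingCalls struct {
	mu  sync.Mutex
	seq uint64
	m   map[uint64]string // payload stands in for a *Call
}

func newPendingCalls() *pendingCalls {
	return &pendingCalls{m: map[uint64]string{}}
}

// add registers a call and returns its seq plus a cancel func that
// removes it, mirroring the returned CancelRpc closure.
func (p *pendingCalls) add(payload string) (uint64, func()) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.seq++
	s := p.seq
	p.m[s] = payload
	return s, func() {
		p.mu.Lock()
		defer p.mu.Unlock()
		delete(p.m, s)
	}
}

// take removes and returns the call for seq; ok is false if the call
// was already canceled or completed, like the v == nil branch above.
func (p *pendingCalls) take(seq uint64) (string, bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	v, ok := p.m[seq]
	delete(p.m, seq)
	return v, ok
}

func main() {
	p := newPendingCalls()
	seq, cancel := p.add("call-1")
	cancel() // canceled before the response arrived
	if _, ok := p.take(seq); !ok {
		fmt.Println("call", seq, "was canceled")
	}
}
```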
@@ -10,11 +10,13 @@ import (
	"github.com/duanhf2012/origin/log"
	rpcHandle "github.com/duanhf2012/origin/rpc"
	"github.com/duanhf2012/origin/util/timer"
	"github.com/duanhf2012/origin/concurrent"
)

const InitModuleId = 1e9

type IModule interface {
	concurrent.IConcurrent
	SetModuleId(moduleId uint32) bool
	GetModuleId() uint32
	AddModule(module IModule) (uint32, error)
@@ -56,6 +58,7 @@ type Module struct {

	//event channel
	eventHandler event.IEventHandler
	concurrent.IConcurrent
}

func (m *Module) SetModuleId(moduleId uint32) bool {
@@ -105,6 +108,7 @@ func (m *Module) AddModule(module IModule) (uint32, error) {
	pAddModule.moduleName = reflect.Indirect(reflect.ValueOf(module)).Type().Name()
	pAddModule.eventHandler = event.NewEventHandler()
	pAddModule.eventHandler.Init(m.eventHandler.GetEventProcessor())
	pAddModule.IConcurrent = m.IConcurrent
	err := module.OnInit()
	if err != nil {
		return 0, err
@@ -273,6 +277,11 @@ func (m *Module) SafeNewTicker(tickerId *uint64, d time.Duration, AdditionData i
}

func (m *Module) CancelTimerId(timerId *uint64) bool {
	if timerId==nil || *timerId == 0 {
		log.SWarning("timerId is invalid")
		return false
	}

	if m.mapActiveIdTimer == nil {
		log.SError("mapActiveIdTimer is nil")
		return false
@@ -280,7 +289,7 @@ func (m *Module) CancelTimerId(timerId *uint64) bool {

	t, ok := m.mapActiveIdTimer[*timerId]
	if ok == false {
		log.SError("cannot find timer id ", timerId)
		log.SStack("cannot find timer id ", timerId)
		return false
	}

@@ -7,43 +7,44 @@ import (
	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/profiler"
	"github.com/duanhf2012/origin/rpc"
	originSync "github.com/duanhf2012/origin/util/sync"
	"github.com/duanhf2012/origin/util/timer"
	"reflect"
	"runtime"
	"strconv"
	"sync"
	"sync/atomic"
	"github.com/duanhf2012/origin/concurrent"
)


var closeSig chan bool
var timerDispatcherLen = 100000
var maxServiceEventChannelNum = 2000000

type IService interface {
	concurrent.IConcurrent
	Init(iService IService,getClientFun rpc.FuncRpcClient,getServerFun rpc.FuncRpcServer,serviceCfg interface{})
	SetName(serviceName string)
	GetName() string
	Stop()
	Start()

	OnSetup(iService IService)
	OnInit() error
	OnStart()
	OnRelease()
	Wait()
	Start()

	SetName(serviceName string)
	GetName() string
	GetRpcHandler() rpc.IRpcHandler
	GetServiceCfg()interface{}
	OpenProfiler()
	GetProfiler() *profiler.Profiler
}
	GetServiceEventChannelNum() int
	GetServiceTimerChannelNum() int

// eventPool memory pool, caching Events
var maxServiceEventChannel = 2000000
var eventPool = originSync.NewPoolEx(make(chan originSync.IPoolData, maxServiceEventChannel), func() originSync.IPoolData {
	return &event.Event{}
})
	SetEventChannelNum(num int)
	OpenProfiler()
}

type Service struct {
	Module

	rpcHandler rpc.RpcHandler //rpc
	name string //service name
	wg sync.WaitGroup
@@ -52,8 +53,10 @@ type Service struct {
	startStatus bool
	eventProcessor event.IEventProcessor
	profiler *profiler.Profiler //performance profiler
	rpcEventLister rpc.IRpcListener
	nodeEventLister rpc.INodeListener
	discoveryServiceLister rpc.IDiscoveryServiceListener
	chanEvent chan event.IEvent
	closeSig chan struct{}
}

// RpcConnEvent node connection event
@@ -62,15 +65,23 @@ type RpcConnEvent struct{
	NodeId int
}

// DiscoveryServiceEvent service-node discovery event
type DiscoveryServiceEvent struct{
	IsDiscovery bool
	ServiceName []string
	NodeId int
}

func SetMaxServiceChannel(maxEventChannel int){
	maxServiceEventChannel = maxEventChannel
	eventPool = originSync.NewPoolEx(make(chan originSync.IPoolData, maxServiceEventChannel), func() originSync.IPoolData {
		return &event.Event{}
	})
	maxServiceEventChannelNum = maxEventChannel
}

func (rpcEventData *DiscoveryServiceEvent) GetEventType() event.EventType{
	return event.Sys_Event_DiscoverService
}

func (rpcEventData *RpcConnEvent) GetEventType() event.EventType{
	return event.Sys_Event_Rpc_Event
	return event.Sys_Event_Node_Event
}

func (s *Service) OnSetup(iService IService){
@@ -87,8 +98,12 @@ func (s *Service) OpenProfiler() {
}

func (s *Service) Init(iService IService,getClientFun rpc.FuncRpcClient,getServerFun rpc.FuncRpcServer,serviceCfg interface{}) {
	s.closeSig = make(chan struct{})
	s.dispatcher =timer.NewDispatcher(timerDispatcherLen)
	s.chanEvent = make(chan event.IEvent,maxServiceEventChannel)
	if s.chanEvent == nil {
		s.chanEvent = make(chan event.IEvent,maxServiceEventChannelNum)
	}

	s.rpcHandler.InitRpcHandler(iService.(rpc.IRpcHandler),getClientFun,getServerFun,iService.(rpc.IRpcHandlerChannel))
	s.IRpcHandler = &s.rpcHandler
	s.self = iService.(IModule)
@@ -102,29 +117,42 @@ func (s *Service) Init(iService IService,getClientFun rpc.FuncRpcClient,getServe
	s.eventProcessor.Init(s)
	s.eventHandler = event.NewEventHandler()
	s.eventHandler.Init(s.eventProcessor)
	s.Module.IConcurrent = &concurrent.Concurrent{}
}


func (s *Service) Start() {
	s.startStatus = true
	var waitRun sync.WaitGroup

	for i:=int32(0);i< s.goroutineNum;i++{
		s.wg.Add(1)
		waitRun.Add(1)
		go func(){
			log.SRelease(s.GetName()," service is running",)
			waitRun.Done()
			s.Run()
		}()
	}

	waitRun.Wait()
}

func (s *Service) Run() {
	log.SDebug("Start running Service ", s.GetName())
	defer s.wg.Done()
	var bStop = false

	concurrent := s.IConcurrent.(*concurrent.Concurrent)
	concurrentCBChannel := concurrent.GetCallBackChannel()

	s.self.(IService).OnStart()
	for{
		var analyzer *profiler.Analyzer
		select {
		case <- closeSig:
		case <- s.closeSig:
			bStop = true
			concurrent.Close()
		case cb:=<-concurrentCBChannel:
			concurrent.DoCallback(cb)
		case ev := <- s.chanEvent:
			switch ev.GetEventType() {
			case event.ServiceRpcRequestEvent:
@@ -147,7 +175,7 @@ func (s *Service) Run() {
				analyzer.Pop()
				analyzer = nil
			}
			eventPool.Put(cEvent)
			event.DeleteEvent(cEvent)
		case event.ServiceRpcResponseEvent:
			cEvent,ok := ev.(*event.Event)
			if ok == false {
@@ -167,7 +195,7 @@ func (s *Service) Run() {
				analyzer.Pop()
				analyzer = nil
			}
			eventPool.Put(cEvent)
			event.DeleteEvent(cEvent)
		default:
			if s.profiler!=nil {
				analyzer = s.profiler.Push("[SEvent]"+strconv.Itoa(int(ev.GetEventType())))
@@ -217,8 +245,8 @@ func (s *Service) Release(){
			log.SError("core dump info[",errString,"]\n",string(buf[:l]))
		}
	}()

	s.self.OnRelease()
	log.SDebug("Release Service ", s.GetName())
}

func (s *Service) OnRelease(){
@@ -228,8 +256,11 @@ func (s *Service) OnInit() error {
	return nil
}

func (s *Service) Wait(){
func (s *Service) Stop(){
	log.SRelease("stop ",s.GetName()," service ")
	close(s.closeSig)
	s.wg.Wait()
	log.SRelease(s.GetName()," service has been stopped")
}

func (s *Service) GetServiceCfg()interface{}{
@@ -259,29 +290,48 @@ func (s *Service) RegRawRpc(rpcMethodId uint32,rawRpcCB rpc.RawRpcCallBack){
func (s *Service) OnStart(){
}

func (s *Service) OnRpcEvent(ev event.IEvent){
func (s *Service) OnNodeEvent(ev event.IEvent){
	event := ev.(*RpcConnEvent)
	if event.IsConnect {
		s.rpcEventLister.OnNodeConnected(event.NodeId)
		s.nodeEventLister.OnNodeConnected(event.NodeId)
	}else{
		s.rpcEventLister.OnNodeDisconnect(event.NodeId)
		s.nodeEventLister.OnNodeDisconnect(event.NodeId)
	}
}

func (s *Service) RegRpcListener(rpcEventLister rpc.IRpcListener) {
	s.rpcEventLister = rpcEventLister
	s.RegEventReceiverFunc(event.Sys_Event_Rpc_Event,s.GetEventHandler(),s.OnRpcEvent)
func (s *Service) OnDiscoverServiceEvent(ev event.IEvent){
	event := ev.(*DiscoveryServiceEvent)
	if event.IsDiscovery {
		s.discoveryServiceLister.OnDiscoveryService(event.NodeId,event.ServiceName)
	}else{
		s.discoveryServiceLister.OnUnDiscoveryService(event.NodeId,event.ServiceName)
	}
}

func (s *Service) RegRpcListener(rpcEventLister rpc.INodeListener) {
	s.nodeEventLister = rpcEventLister
	s.RegEventReceiverFunc(event.Sys_Event_Node_Event,s.GetEventHandler(),s.OnNodeEvent)
	RegRpcEventFun(s.GetName())
}

func (s *Service) UnRegRpcListener(rpcLister rpc.IRpcListener) {
	s.UnRegEventReceiverFunc(event.Sys_Event_Rpc_Event,s.GetEventHandler())
	RegRpcEventFun(s.GetName())
func (s *Service) UnRegRpcListener(rpcLister rpc.INodeListener) {
	s.UnRegEventReceiverFunc(event.Sys_Event_Node_Event,s.GetEventHandler())
	UnRegRpcEventFun(s.GetName())
}

func (s *Service) RegDiscoverListener(discoveryServiceListener rpc.IDiscoveryServiceListener) {
	s.discoveryServiceLister = discoveryServiceListener
	s.RegEventReceiverFunc(event.Sys_Event_DiscoverService,s.GetEventHandler(),s.OnDiscoverServiceEvent)
	RegDiscoveryServiceEventFun(s.GetName())
}

func (s *Service) UnRegDiscoverListener(rpcLister rpc.INodeListener) {
	s.UnRegEventReceiverFunc(event.Sys_Event_DiscoverService,s.GetEventHandler())
	UnRegDiscoveryServiceEventFun(s.GetName())
}

func (s *Service) PushRpcRequest(rpcRequest *rpc.RpcRequest) error{
	ev := eventPool.Get().(*event.Event)
	ev := event.NewEvent()
	ev.Type = event.ServiceRpcRequestEvent
	ev.Data = rpcRequest

@@ -289,7 +339,7 @@ func (s *Service) PushRpcRequest(rpcRequest *rpc.RpcRequest) error{
}

func (s *Service) PushRpcResponse(call *rpc.Call) error{
	ev := eventPool.Get().(*event.Event)
	ev := event.NewEvent()
	ev.Type = event.ServiceRpcResponseEvent
	ev.Data = call

@@ -301,7 +351,7 @@ func (s *Service) PushEvent(ev event.IEvent) error{
}

func (s *Service) pushEvent(ev event.IEvent) error{
	if len(s.chanEvent) >= maxServiceEventChannel {
	if len(s.chanEvent) >= maxServiceEventChannelNum {
		err := errors.New("The event channel in the service is full")
		log.SError(err.Error())
		return err
@@ -311,6 +361,21 @@ func (s *Service) pushEvent(ev event.IEvent) error{
	return nil
}

func (s *Service) GetServiceEventChannelNum() int{
	return len(s.chanEvent)
}

func (s *Service) GetServiceTimerChannelNum() int{
	return len(s.dispatcher.ChanTimer)
}

func (s *Service) SetEventChannelNum(num int){
	if s.chanEvent == nil {
		s.chanEvent = make(chan event.IEvent,num)
	}else {
		panic("this stage cannot be set")
	}
}

func (s *Service) SetGoRoutineNum(goroutineNum int32) bool {
	//changing the goroutine count is not allowed once started; opening the profiler disallows multiple goroutines

@@ -1,24 +1,30 @@
package service

import "errors"

//all local services
var mapServiceName map[string]IService
var setupServiceList []IService

type RegRpcEventFunType func(serviceName string)
type RegDiscoveryServiceEventFunType func(serviceName string)
var RegRpcEventFun RegRpcEventFunType
var UnRegRpcEventFun RegRpcEventFunType

var RegDiscoveryServiceEventFun RegDiscoveryServiceEventFunType
var UnRegDiscoveryServiceEventFun RegDiscoveryServiceEventFunType

func init(){
	mapServiceName = map[string]IService{}
	setupServiceList = []IService{}
}

func Init(chanCloseSig chan bool) {
	closeSig=chanCloseSig

func Init() {
	for _,s := range setupServiceList {
		err := s.OnInit()
		if err != nil {
			panic(err)
			errs := errors.New("Failed to initialize "+s.GetName()+" service:"+err.Error())
			panic(errs)
		}
	}
}
@@ -49,8 +55,8 @@ func Start(){
	}
}

func WaitStop(){
func StopAllService(){
	for i := len(setupServiceList) - 1; i >= 0; i-- {
		setupServiceList[i].Wait()
		setupServiceList[i].Stop()
	}
}

@@ -4,7 +4,7 @@ import (
	"bytes"
	"crypto/tls"
	"fmt"
	"io/ioutil"
	"io"
	"net"
	"net/http"
	"net/url"
@@ -64,7 +64,7 @@ func (m *HttpClientModule) Init(maxpool int, proxyUrl string) {
			Proxy: proxyFun,
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
		Timeout: 5 * time.Second,
	}
}

@@ -103,7 +103,7 @@ func (m *HttpClientModule) Request(method string, url string, body []byte, heade
	}
	defer rsp.Body.Close()

	ret.Body, err = ioutil.ReadAll(rsp.Body)
	ret.Body, err = io.ReadAll(rsp.Body)
	if err != nil {
		ret.Err = err
		return ret

@@ -52,10 +52,10 @@ func (mm *MongoModule) TakeSession() Session {
	return Session{Client: mm.client, maxOperatorTimeOut: mm.maxOperatorTimeOut}
}

func (s *Session) CountDocument(db string, collection string) (int64, error) {
func (s *Session) CountDocument(db string, collection string, filter interface{}) (int64, error) {
	ctxTimeout, cancel := s.GetDefaultContext()
	defer cancel()
	return s.Database(db).Collection(collection).CountDocuments(ctxTimeout, bson.D{})
	return s.Database(db).Collection(collection).CountDocuments(ctxTimeout, filter)
}

func (s *Session) NextSeq(db string, collection string, id interface{}) (int, error) {
@@ -68,34 +68,39 @@ func (s *Session) NextSeq(db string, collection string, id interface{}) (int, er

	after := options.After
	updateOpts := options.FindOneAndUpdateOptions{ReturnDocument: &after}
	err := s.Client.Database(db).Collection(collection).FindOneAndUpdate(ctxTimeout, bson.M{"_id": id}, bson.M{"$inc": bson.M{"Seq": 1}},&updateOpts).Decode(&res)
	err := s.Client.Database(db).Collection(collection).FindOneAndUpdate(ctxTimeout, bson.M{"_id": id}, bson.M{"$inc": bson.M{"Seq": 1}}, &updateOpts).Decode(&res)
	return res.Seq, err
}

//indexKeys[index][key fields of each index]
func (s *Session) EnsureIndex(db string, collection string, indexKeys [][]string, bBackground bool,sparse bool) error {
	return s.ensureIndex(db, collection, indexKeys, bBackground, false,sparse)
// indexKeys[index][key fields of each index]
func (s *Session) EnsureIndex(db string, collection string, indexKeys [][]string, bBackground bool, sparse bool, asc bool) error {
	return s.ensureIndex(db, collection, indexKeys, bBackground, false, sparse, asc)
}

//indexKeys[index][key fields of each index]
func (s *Session) EnsureUniqueIndex(db string, collection string, indexKeys [][]string, bBackground bool,sparse bool) error {
	return s.ensureIndex(db, collection, indexKeys, bBackground, true,sparse)
// indexKeys[index][key fields of each index]
func (s *Session) EnsureUniqueIndex(db string, collection string, indexKeys [][]string, bBackground bool, sparse bool, asc bool) error {
	return s.ensureIndex(db, collection, indexKeys, bBackground, true, sparse, asc)
}

//keys[index][key fields of each index]
func (s *Session) ensureIndex(db string, collection string, indexKeys [][]string, bBackground bool, unique bool,sparse bool) error {
// keys[index][key fields of each index]
func (s *Session) ensureIndex(db string, collection string, indexKeys [][]string, bBackground bool, unique bool, sparse bool, asc bool) error {
	var indexes []mongo.IndexModel
	for _, keys := range indexKeys {
		keysDoc := bsonx.Doc{}
		for _, key := range keys {
			keysDoc = keysDoc.Append(key, bsonx.Int32(1))
			if asc {
				keysDoc = keysDoc.Append(key, bsonx.Int32(1))
			} else {
				keysDoc = keysDoc.Append(key, bsonx.Int32(-1))
			}

		}

		options:= options.Index().SetUnique(unique).SetBackground(bBackground)
		options := options.Index().SetUnique(unique).SetBackground(bBackground)
		if sparse == true {
			options.SetSparse(true)
		}
		indexes = append(indexes, mongo.IndexModel{Keys: keysDoc, Options:options })
		indexes = append(indexes, mongo.IndexModel{Keys: keysDoc, Options: options})
	}

	ctxTimeout, cancel := context.WithTimeout(context.Background(), s.maxOperatorTimeOut)

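The `asc` parameter added to `ensureIndex` above only flips each index key's direction between 1 (ascending) and -1 (descending). A driver-free sketch of that mapping (the `keyDoc` pair type is illustrative, standing in for the `bsonx.Doc` entries):

```go
package main

import "fmt"

// keyDoc stands in for one (field, direction) entry of an index key document.
type keyDoc struct {
	Key string
	Dir int32
}

// buildIndexKeys maps every key to 1 (ascending) or -1 (descending),
// as the bsonx.Int32(1) / bsonx.Int32(-1) branch in the diff does.
func buildIndexKeys(keys []string, asc bool) []keyDoc {
	dir := int32(-1)
	if asc {
		dir = 1
	}
	docs := make([]keyDoc, 0, len(keys))
	for _, k := range keys {
		docs = append(docs, keyDoc{Key: k, Dir: dir})
	}
	return docs
}

func main() {
	// One compound index over uid+ts, descending on both fields.
	fmt.Println(buildIndexKeys([]string{"uid", "ts"}, false))
}
```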
package mysqlmondule
|
||||
package mysqlmodule
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
package mysqlmondule
|
||||
package mysqlmodule
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
@@ -42,17 +42,17 @@ type RedisModule struct {
|
||||
|
||||
// ConfigRedis 服务器配置
|
||||
type ConfigRedis struct {
|
||||
IP string
|
||||
Port int
|
||||
Password string
|
||||
DbIndex int
|
||||
MaxIdle int //最大的空闲连接数,表示即使没有redis连接时依然可以保持N个空闲的连接,而不被清除,随时处于待命状态。
|
||||
MaxActive int //最大的激活连接数,表示同时最多有N个连接
|
||||
IdleTimeout int //最大的空闲连接等待时间,超过此时间后,空闲连接将被关闭
|
||||
IP string
|
||||
Port int
|
||||
Password string
|
||||
DbIndex int
|
||||
MaxIdle int //最大的空闲连接数,表示即使没有redis连接时依然可以保持N个空闲的连接,而不被清除,随时处于待命状态。
|
||||
MaxActive int //最大的激活连接数,表示同时最多有N个连接
|
||||
IdleTimeout int //最大的空闲连接等待时间,超过此时间后,空闲连接将被关闭
|
||||
}
|
||||
|
||||
func (m *RedisModule) Init(redisCfg *ConfigRedis) {
|
||||
redisServer := fmt.Sprintf("%s:%d",redisCfg.IP, redisCfg.Port)
|
||||
redisServer := fmt.Sprintf("%s:%d", redisCfg.IP, redisCfg.Port)
|
||||
m.redisPool = &redis.Pool{
|
||||
Wait: true,
|
||||
MaxIdle: redisCfg.MaxIdle,
|
||||
@@ -192,7 +192,6 @@ func (m *RedisModule) HSetStruct(key string, val interface{}) error {
|
||||
}
|
||||
defer conn.Close()
|
||||
|
||||
|
||||
_, err = conn.Do("HSET", redis.Args{}.Add(key).AddFlat(val)...)
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -254,11 +253,11 @@ func (m *RedisModule) setMuchStringByExpire(mapInfo map[interface{}]interface{},
|
||||
}
|
||||
}
|
||||
|
||||
if serr!=nil {
|
||||
if serr != nil {
|
||||
log.Error("setMuchStringByExpire fail,reason:%v", serr)
|
||||
conn.Do("DISCARD")
|
||||
return serr
|
||||
}else{
|
||||
} else {
|
||||
_, err = conn.Do("EXEC")
|
||||
}
|
||||
|
||||
@@ -287,7 +286,7 @@ func (m *RedisModule) GetString(key interface{}) (string, error) {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return redis.String(ret,nil)
|
||||
return redis.String(ret, nil)
|
||||
}
|
||||
|
||||
func (m *RedisModule) GetStringJSON(key string, st interface{}) error {
|
||||
@@ -345,7 +344,7 @@ func (m *RedisModule) GetStringMap(keys []string) (retMap map[string]string, err
|
||||
if err != nil {
|
||||
log.Error("GetMuchString fail,reason:%v", err)
|
||||
conn.Do("DISCARD")
|
||||
return nil,err
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
@@ -442,7 +441,7 @@ func (m *RedisModule) DelStringKeyList(keys []interface{}) (map[interface{}]bool
|
||||
if err != nil {
|
||||
log.Error("DelMuchString fail,reason:%v", err)
|
||||
conn.Do("DISCARD")
|
||||
return nil,err
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
// 执行命令
|
||||
@@ -491,7 +490,7 @@ func (m *RedisModule) SetHash(redisKey, hashKey, value interface{}) error {
|
||||
return retErr
|
||||
}
|
||||
|
||||
//GetRedisAllHashJSON ...
|
||||
// GetRedisAllHashJSON ...
|
||||
func (m *RedisModule) GetAllHashJSON(redisKey string) (map[string]string, error) {
|
||||
if redisKey == "" {
|
||||
return nil, errors.New("Key Is Empty")
|
||||
@@ -531,7 +530,7 @@ func (m *RedisModule) GetHash(redisKey interface{}, fieldKey interface{}) (strin
|
||||
return "", errors.New("Reids Get Hash nil")
|
||||
}
|
||||
|
||||
return redis.String(value,nil)
|
||||
return redis.String(value, nil)
|
||||
}
|
||||
|
||||
func (m *RedisModule) GetMuchHash(args ...interface{}) ([]string, error) {
|
||||
@@ -556,7 +555,7 @@ func (m *RedisModule) GetMuchHash(args ...interface{}) ([]string, error) {
|
||||
|
||||
valueList := value.([]interface{})
|
||||
retList := []string{}
|
||||
for _, valueItem := range valueList{
|
||||
for _, valueItem := range valueList {
|
||||
valueByte, ok := valueItem.([]byte)
|
||||
if !ok {
|
||||
retList = append(retList, "")
|
||||
@@ -618,8 +617,8 @@ func (m *RedisModule) SetHashMapJSON(redisKey string, mapFieldValue map[interfac
|
||||
for symbol, val := range mapFieldValue {
|
||||
temp, err := json.Marshal(val)
|
||||
if err == nil {
|
||||
_,err = conn.Do("HSET", redisKey, symbol, temp)
|
||||
if err!=nil {
|
||||
_, err = conn.Do("HSET", redisKey, symbol, temp)
|
||||
if err != nil {
|
||||
log.Error("SetMuchHashJSON fail,reason:%v", err)
|
||||
conn.Send("DISCARD")
|
||||
return err
|
||||
@@ -650,25 +649,25 @@ func (m *RedisModule) DelHash(args ...interface{}) error {
|
||||
}
|
||||
|
||||
func (m *RedisModule) LPushList(args ...interface{}) error {
|
||||
err := m.setListPush("LPUSH",args...)
|
||||
err := m.setListPush("LPUSH", args...)
|
||||
return err
|
||||
}
|
||||
|
||||
func (m *RedisModule) LPushListJSON(key interface{}, value ...interface{}) error {
|
||||
return m.setListJSONPush("LPUSH",key,value...)
|
||||
return m.setListJSONPush("LPUSH", key, value...)
|
||||
}
|
||||
|
||||
func (m *RedisModule) RPushList(args ...interface{}) error {
|
||||
err := m.setListPush("RPUSH",args...)
|
||||
err := m.setListPush("RPUSH", args...)
|
||||
return err
|
||||
}
|
||||
|
||||
func (m *RedisModule) RPushListJSON(key interface{}, value ...interface{}) error {
|
||||
return m.setListJSONPush("RPUSH",key,value...)
|
||||
return m.setListJSONPush("RPUSH", key, value...)
|
||||
}
|
||||
|
||||
//LPUSH和RPUSH
|
||||
func (m *RedisModule) setListPush(setType string,args...interface{}) error {
|
||||
// LPUSH和RPUSH
|
||||
func (m *RedisModule) setListPush(setType string, args ...interface{}) error {
|
||||
if setType != "LPUSH" && setType != "RPUSH" {
|
||||
return errors.New("Redis List Push Type Error,Must Be LPUSH or RPUSH")
|
||||
}
|
||||
@@ -685,17 +684,17 @@ func (m *RedisModule) setListPush(setType string,args...interface{}) error {
|
||||
return retErr
|
||||
}
|
||||
|
||||
func (m *RedisModule) setListJSONPush(setType string,key interface{}, value ...interface{}) error {
|
||||
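`setListPush` above accepts only `"LPUSH"` or `"RPUSH"` before forwarding the variadic args to redis. A standalone sketch of that guard against an in-memory list (a stand-in for the redigo call, not the module's code):

```go
package main

import (
	"errors"
	"fmt"
)

// listPush validates the push direction the way setListPush does, then
// applies it to an in-memory slice instead of a redis connection.
func listPush(list []string, setType string, vals ...string) ([]string, error) {
	switch setType {
	case "LPUSH":
		// LPUSH prepends values left-to-right, so the last value
		// pushed ends up at the head of the list.
		for _, v := range vals {
			list = append([]string{v}, list...)
		}
		return list, nil
	case "RPUSH":
		return append(list, vals...), nil
	default:
		return nil, errors.New("Redis List Push Type Error,Must Be LPUSH or RPUSH")
	}
}

func main() {
	l, _ := listPush(nil, "RPUSH", "a", "b")
	l, _ = listPush(l, "LPUSH", "c")
	fmt.Println(l) // [c a b]
}
```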
func (m *RedisModule) setListJSONPush(setType string, key interface{}, value ...interface{}) error {
|
||||
args := []interface{}{key}
|
||||
for _,v := range value{
|
||||
for _, v := range value {
|
||||
jData, err := json.Marshal(v)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
args = append(args,string(jData))
|
||||
args = append(args, string(jData))
|
||||
}
|
||||
|
||||
return m.setListPush(setType,args...)
|
||||
return m.setListPush(setType, args...)
|
||||
}
|
||||
|
||||
// Lrange ...
|
||||
@@ -715,7 +714,7 @@ func (m *RedisModule) LRangeList(key string, start, end int) ([]string, error) {
 	return redis.Strings(reply, err)
 }
 
-//Get the length of a List
+// Get the length of a List
 func (m *RedisModule) GetListLen(key string) (int, error) {
 	conn, err := m.getConn()
 	if err != nil {
@@ -731,11 +730,11 @@ func (m *RedisModule) GetListLen(key string) (int, error) {
 	return redis.Int(reply, err)
 }
 
-//Pop the last element of a List
-func (m *RedisModule) RPOPListValue(key string) (string,error) {
+// Pop the last element of a List
+func (m *RedisModule) RPOPListValue(key string) (string, error) {
 	conn, err := m.getConn()
 	if err != nil {
-		return "",err
+		return "", err
 	}
 	defer conn.Close()
 
@@ -783,7 +782,7 @@ func (m *RedisModule) LRange(key string, start, stop int) ([]byte, error) {
 	return makeListJson(reply.([]interface{}), false), nil
 }
 
-//Pop data from a list (message queue) into out. fromLeft: pop from the left side; block: whether to block; timeout: blocking timeout
+// Pop data from a list (message queue) into out. fromLeft: pop from the left side; block: whether to block; timeout: blocking timeout
 func (m *RedisModule) ListPopJson(key string, fromLeft, block bool, timeout int, out interface{}) error {
 	b, err := m.ListPop(key, fromLeft, block, timeout)
 	if err != nil {
@@ -796,7 +795,7 @@ func (m *RedisModule) ListPopJson(key string, fromLeft, block bool, timeout int,
 	return nil
 }
 
-//Pop data from a list (message queue). fromLeft: pop from the left side; block: whether to block; timeout: blocking timeout
+// Pop data from a list (message queue). fromLeft: pop from the left side; block: whether to block; timeout: blocking timeout
 func (m *RedisModule) ListPop(key string, fromLeft, block bool, timeout int) ([]byte, error) {
 	cmd := ""
 	if fromLeft {
@@ -838,7 +837,7 @@ func (m *RedisModule) ListPop(key string, fromLeft, block bool, timeout int) ([]
 	return b, nil
 }
 
-//Insert JSON into a sorted set
+// Insert JSON into a sorted set
 func (m *RedisModule) ZADDInsertJson(key string, score float64, value interface{}) error {
 
 	conn, err := m.getConn()
@@ -858,7 +857,7 @@ func (m *RedisModule) ZADDInsertJson(key string, score float64, value interface{
 	return nil
 }
 
-//Insert into a sorted set
+// Insert into a sorted set
 func (m *RedisModule) ZADDInsert(key string, score float64, Data interface{}) error {
 	conn, err := m.getConn()
 	if err != nil {
@@ -898,7 +897,7 @@ func (m *RedisModule) ZRangeJSON(key string, start, stop int, ascend bool, withS
 	return nil
 }
 
-//Fetch the given rank range of a sorted set. ascend=true iterates in ascending order, otherwise descending
+// Fetch the given rank range of a sorted set. ascend=true iterates in ascending order, otherwise descending
 func (m *RedisModule) ZRange(key string, start, stop int, ascend bool, withScores bool) ([]byte, error) {
 	conn, err := m.getConn()
 	if err != nil {
@@ -922,7 +921,7 @@ func (m *RedisModule) ZRange(key string, start, stop int, ascend bool, withScore
 	return makeListJson(reply.([]interface{}), withScores), nil
 }
 
-//Get the length of a sorted set
+// Get the length of a sorted set
 func (m *RedisModule) Zcard(key string) (int, error) {
 	conn, err := m.getConn()
 	if err != nil {
@@ -937,7 +936,7 @@ func (m *RedisModule) Zcard(key string) (int, error) {
 	return int(reply.(int64)), nil
 }
 
-//["123","234"]
+// ["123","234"]
 func makeListJson(redisReply []interface{}, withScores bool) []byte {
 	var buf bytes.Buffer
 	buf.WriteString("[")
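makeListJson concatenates raw Redis bulk replies into one JSON array without re-encoding each element. A self-contained sketch of that technique (hypothetical `makeListJSON` taking `[][]byte` instead of redigo's `[]interface{}` reply, and without the withScores handling):

```go
package main

import (
	"bytes"
	"fmt"
)

// makeListJSON joins raw redis bulk strings (each already a valid JSON
// value) into one JSON array, avoiding a re-marshal round trip.
func makeListJSON(replies [][]byte) []byte {
	var buf bytes.Buffer
	buf.WriteString("[")
	for i, r := range replies {
		if i > 0 {
			buf.WriteString(",")
		}
		buf.Write(r)
	}
	buf.WriteString("]")
	return buf.Bytes()
}

func main() {
	out := makeListJSON([][]byte{[]byte(`"123"`), []byte(`"234"`)})
	fmt.Println(string(out)) // ["123","234"]
}
```

This only produces valid JSON when every stored element is itself valid JSON, which holds here because the ZADDInsertJson/setListJSONPush write paths marshal values before storing them.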
@@ -1006,7 +1005,7 @@ func (m *RedisModule) ZRangeByScore(key string, start, stop float64, ascend bool
 	return makeListJson(reply.([]interface{}), withScores), nil
 }
 
-//Get the score of the specified member
+// Get the score of the specified member
 func (m *RedisModule) ZScore(key string, member interface{}) (float64, error) {
 	conn, err := m.getConn()
 	if err != nil {
@@ -1022,7 +1021,7 @@ func (m *RedisModule) ZScore(key string, member interface{}) (float64, error) {
 	return redis.Float64(reply, err)
 }
 
-//Get the rank of the specified member
+// Get the rank of the specified member
 func (m *RedisModule) ZRank(key string, member interface{}, ascend bool) (int, error) {
 	conn, err := m.getConn()
 	if err != nil {
@@ -1100,17 +1099,17 @@ func (m *RedisModule) HincrbyHashInt(redisKey, hashKey string, value int) error
 func (m *RedisModule) EXPlREInsert(key string, TTl int) error {
 	conn, err := m.getConn()
 	if err != nil {
-			return err
+		return err
 	}
 	defer conn.Close()
 
 
 	_, err = conn.Do("expire", key, TTl)
 	if err != nil {
-			log.Error("expire fail,reason:%v", err)
-			return err
+		log.Error("expire fail,reason:%v", err)
+		return err
 	}
 	return nil
 }
 }
 
 func (m *RedisModule) Zremrangebyrank(redisKey string, start, end interface{}) (int, error) {
 	conn, err := m.getConn()
@@ -1151,3 +1150,9 @@ func (m *RedisModule) Keys(key string) ([]string, error) {
 	}
 	return strs, nil
 }
+
+func (m *RedisModule) OnRelease() {
+	if m.redisPool != nil {
+		m.redisPool.Close()
+	}
+}
@@ -8,7 +8,6 @@ import (
 	"github.com/duanhf2012/origin/util/uuid"
 	jsoniter "github.com/json-iterator/go"
 	"io"
-	"io/ioutil"
 	"net/http"
 	"os"
 	"strings"
@@ -17,13 +16,13 @@ import (
 
 var json = jsoniter.ConfigCompatibleWithStandardLibrary
 
-var DefaultReadTimeout time.Duration = time.Second*10
-var DefaultWriteTimeout time.Duration = time.Second*10
-var DefaultProcessTimeout time.Duration = time.Second*10
+var DefaultReadTimeout time.Duration = time.Second * 10
+var DefaultWriteTimeout time.Duration = time.Second * 10
+var DefaultProcessTimeout time.Duration = time.Second * 10
 
 //http redirect
 type HttpRedirectData struct {
-	Url        string
+	Url string
 	CookieList []*http.Cookie
 }
 
@@ -44,7 +43,7 @@ type routerMatchData struct {
 }
 
 type routerServeFileData struct {
-	matchUrl string
+	matchUrl  string
 	localPath string
 	method    HTTP_METHOD
 }
@@ -56,45 +55,45 @@ type IHttpRouter interface {
 
 	SetServeFile(method HTTP_METHOD, urlpath string, dirname string) error
 	SetFormFileKey(formFileKey string)
-	GetFormFileKey()string
+	GetFormFileKey() string
 	AddHttpFiltrate(FiltrateFun HttpFiltrate) bool
 }
 
 type HttpRouter struct {
-	pathRouter map[HTTP_METHOD] map[string] routerMatchData //url path, mapped to this service's handler
-	serveFileData map[string] *routerServeFileData
-	httpFiltrateList [] HttpFiltrate
+	pathRouter       map[HTTP_METHOD]map[string]routerMatchData //url path, mapped to this service's handler
+	serveFileData    map[string]*routerServeFileData
+	httpFiltrateList []HttpFiltrate
 
 	formFileKey string
 }
 
 type HttpSession struct {
 	httpRouter IHttpRouter
-	r *http.Request
-	w http.ResponseWriter
+	r          *http.Request
+	w          http.ResponseWriter
 
 	//parse result
 	mapParam map[string]string
-	body []byte
+	body     []byte
 
 	//processor result
-	statusCode int
-	msg []byte
-	fileData *routerServeFileData
+	statusCode   int
+	msg          []byte
+	fileData     *routerServeFileData
 	redirectData *HttpRedirectData
-	sessionDone chan *HttpSession
+	sessionDone  chan *HttpSession
 }
 
 
 type HttpService struct {
 	service.Service
 
-	httpServer network.HttpServer
-	postAliasUrl map[HTTP_METHOD] map[string]routerMatchData //url path, mapped to this service's handler
-	httpRouter IHttpRouter
-	listenAddr string
-	corsHeader *CORSHeader
+	httpServer     network.HttpServer
+	postAliasUrl   map[HTTP_METHOD]map[string]routerMatchData //url path, mapped to this service's handler
+	httpRouter     IHttpRouter
+	listenAddr     string
+	corsHeader     *CORSHeader
 	processTimeout time.Duration
 	manualStart    bool
 }
 
 type HttpFiltrate func(session *HttpSession) bool //true is pass
@@ -109,16 +108,20 @@ func (httpService *HttpService) AddFiltrate(FiltrateFun HttpFiltrate) bool {
 
 func NewHttpHttpRouter() IHttpRouter {
 	httpRouter := &HttpRouter{}
-	httpRouter.pathRouter =map[HTTP_METHOD] map[string] routerMatchData{}
-	httpRouter.serveFileData = map[string] *routerServeFileData{}
+	httpRouter.pathRouter = map[HTTP_METHOD]map[string]routerMatchData{}
+	httpRouter.serveFileData = map[string]*routerServeFileData{}
 	httpRouter.formFileKey = "file"
-	for i:=METHOD_NONE+1;i<METHOD_INVALID;i++{
-		httpRouter.pathRouter[i] = map[string] routerMatchData{}
+	for i := METHOD_NONE + 1; i < METHOD_INVALID; i++ {
+		httpRouter.pathRouter[i] = map[string]routerMatchData{}
 	}
 
 	return httpRouter
 }
 
+func (slf *HttpSession) GetRawQuery() string{
+	return slf.r.URL.RawQuery
+}
+
 func (slf *HttpSession) Query(key string) (string, bool) {
 	if slf.mapParam == nil {
 		slf.mapParam = make(map[string]string)
@@ -137,7 +140,7 @@ func (slf *HttpSession) Query(key string) (string, bool) {
 	return ret, ok
 }
 
-func (slf *HttpSession) GetBody() []byte{
+func (slf *HttpSession) GetBody() []byte {
 	return slf.body
 }
 
@@ -145,19 +148,19 @@ func (slf *HttpSession) GetMethod() HTTP_METHOD {
 	return slf.getMethod(slf.r.Method)
 }
 
-func (slf *HttpSession) GetPath() string{
-	return strings.Trim(slf.r.URL.Path,"/")
+func (slf *HttpSession) GetPath() string {
+	return strings.Trim(slf.r.URL.Path, "/")
 }
 
 func (slf *HttpSession) SetHeader(key, value string) {
-	slf.w.Header().Set(key,value)
+	slf.w.Header().Set(key, value)
 }
 
 func (slf *HttpSession) AddHeader(key, value string) {
-	slf.w.Header().Add(key,value)
+	slf.w.Header().Add(key, value)
 }
 
-func (slf *HttpSession) GetHeader(key string) string{
+func (slf *HttpSession) GetHeader(key string) string {
 	return slf.r.Header.Get(key)
 }
 
@@ -165,7 +168,7 @@ func (slf *HttpSession) DelHeader(key string) {
 	slf.r.Header.Del(key)
 }
 
-func (slf *HttpSession) WriteStatusCode(statusCode int){
+func (slf *HttpSession) WriteStatusCode(statusCode int) {
 	slf.statusCode = statusCode
 }
 
@@ -173,7 +176,7 @@ func (slf *HttpSession) Write(msg []byte) {
 	slf.msg = msg
 }
 
-func (slf *HttpSession) WriteJsonDone(statusCode int,msgJson interface{}) error {
+func (slf *HttpSession) WriteJsonDone(statusCode int, msgJson interface{}) error {
 	msg, err := json.Marshal(msgJson)
 	if err != nil {
 		return err
@@ -187,12 +190,12 @@ func (slf *HttpSession) WriteJsonDone(statusCode int,msgJson interface{}) error
 
 func (slf *HttpSession) flush() {
 	slf.w.WriteHeader(slf.statusCode)
-	if slf.msg!=nil {
+	if slf.msg != nil {
 		slf.w.Write(slf.msg)
 	}
 }
 
-func (slf *HttpSession) Done(){
+func (slf *HttpSession) Done() {
 	slf.sessionDone <- slf
 }
 
@@ -219,15 +222,15 @@ func (slf *HttpRouter) analysisRouterUrl(url string) (string, error) {
 	return strings.Trim(url, "/"), nil
 }
 
-func (slf *HttpSession) Handle(){
-	slf.httpRouter.Router(slf)
+func (slf *HttpSession) Handle() {
+	slf.httpRouter.Router(slf)
 }
 
-func (slf *HttpRouter) SetFormFileKey(formFileKey string){
+func (slf *HttpRouter) SetFormFileKey(formFileKey string) {
 	slf.formFileKey = formFileKey
 }
 
-func (slf *HttpRouter) GetFormFileKey()string{
+func (slf *HttpRouter) GetFormFileKey() string {
 	return slf.formFileKey
 }
 
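NewHttpHttpRouter pre-creates one inner map per HTTP method so registration never has to nil-check the outer map. A minimal sketch of that nested-map routing idea (hypothetical `router`/`reg` names, string method keys instead of HTTP_METHOD):

```go
package main

import "fmt"

type handler func() string

// router keeps routes in a nested map keyed first by method, then by path,
// mirroring HttpRouter.pathRouter.
type router struct {
	pathRouter map[string]map[string]handler
}

func newRouter(methods ...string) *router {
	r := &router{pathRouter: map[string]map[string]handler{}}
	// Pre-create one inner map per method, as NewHttpHttpRouter does
	// for every HTTP_METHOD value.
	for _, m := range methods {
		r.pathRouter[m] = map[string]handler{}
	}
	return r
}

// reg mirrors regRouter: unknown methods are rejected rather than
// lazily creating an inner map.
func (r *router) reg(method, url string, h handler) bool {
	mapRouter, ok := r.pathRouter[method]
	if !ok {
		return false
	}
	mapRouter[url] = h
	return true
}

func main() {
	r := newRouter("GET", "POST")
	r.reg("GET", "ping", func() string { return "pong" })
	fmt.Println(r.pathRouter["GET"]["ping"]()) // pong
}
```

Pre-creating the inner maps is what lets regRouter treat a missing method as a hard error (`return false`) instead of a first-time registration.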
@@ -239,19 +242,19 @@ func (slf *HttpRouter) POST(url string, handle HttpHandle) bool {
 	return slf.regRouter(METHOD_POST, url, handle)
 }
 
-func (slf *HttpRouter) regRouter(method HTTP_METHOD, url string, handle HttpHandle) bool{
-	mapRouter,ok := slf.pathRouter[method]
-	if ok == false{
+func (slf *HttpRouter) regRouter(method HTTP_METHOD, url string, handle HttpHandle) bool {
+	mapRouter, ok := slf.pathRouter[method]
+	if ok == false {
 		return false
 	}
 
-	mapRouter[strings.Trim(url,"/")] = routerMatchData{httpHandle:handle}
+	mapRouter[strings.Trim(url, "/")] = routerMatchData{httpHandle: handle}
 	return true
 }
 
-func (slf *HttpRouter) Router(session *HttpSession){
-	if slf.httpFiltrateList!=nil {
-		for _,fun := range slf.httpFiltrateList{
+func (slf *HttpRouter) Router(session *HttpSession) {
+	if slf.httpFiltrateList != nil {
+		for _, fun := range slf.httpFiltrateList {
 			if fun(session) == false {
 				//session.done()
 				return
@@ -288,13 +291,13 @@ func (slf *HttpRouter) Router(session *HttpSession){
 	session.Done()
 }
 
-func  (httpService *HttpService) HttpEventHandler(ev event.IEvent) {
+func (httpService *HttpService) HttpEventHandler(ev event.IEvent) {
 	ev.(*event.Event).Data.(*HttpSession).Handle()
 }
 
-func (httpService *HttpService) SetHttpRouter(httpRouter IHttpRouter,eventHandler event.IEventHandler) {
+func (httpService *HttpService) SetHttpRouter(httpRouter IHttpRouter, eventHandler event.IEventHandler) {
 	httpService.httpRouter = httpRouter
-	httpService.RegEventReceiverFunc(event.Sys_Event_Http_Event,eventHandler, httpService.HttpEventHandler)
+	httpService.RegEventReceiverFunc(event.Sys_Event_Http_Event, eventHandler, httpService.HttpEventHandler)
 }
 
 func (slf *HttpRouter) SetServeFile(method HTTP_METHOD, urlpath string, dirname string) error {
@@ -349,68 +352,84 @@ func (httpService *HttpService) OnInit() error {
 	if iConfig == nil {
 		return fmt.Errorf("%s service config is error!", httpService.GetName())
 	}
-	tcpCfg := iConfig.(map[string]interface{})
-	addr,ok := tcpCfg["ListenAddr"]
+	httpCfg := iConfig.(map[string]interface{})
+	addr, ok := httpCfg["ListenAddr"]
 	if ok == false {
 		return fmt.Errorf("%s service config is error!", httpService.GetName())
 	}
 	var readTimeout time.Duration = DefaultReadTimeout
 	var writeTimeout time.Duration = DefaultWriteTimeout
 
-	if cfgRead,ok := tcpCfg["ReadTimeout"];ok == true {
-		readTimeout = time.Duration(cfgRead.(float64))*time.Millisecond
+	if cfgRead, ok := httpCfg["ReadTimeout"]; ok == true {
+		readTimeout = time.Duration(cfgRead.(float64)) * time.Millisecond
 	}
 
-	if cfgWrite,ok := tcpCfg["WriteTimeout"];ok == true {
-		writeTimeout = time.Duration(cfgWrite.(float64))*time.Millisecond
+	if cfgWrite, ok := httpCfg["WriteTimeout"]; ok == true {
+		writeTimeout = time.Duration(cfgWrite.(float64)) * time.Millisecond
 	}
 
+	if manualStart, ok := httpCfg["ManualStart"]; ok == true {
+		httpService.manualStart = manualStart.(bool)
+	}else{
+		manualStart =false
+	}
+
 	httpService.processTimeout = DefaultProcessTimeout
-	if cfgProcessTimeout,ok := tcpCfg["ProcessTimeout"];ok == true {
-		httpService.processTimeout = time.Duration(cfgProcessTimeout.(float64))*time.Millisecond
+	if cfgProcessTimeout, ok := httpCfg["ProcessTimeout"]; ok == true {
+		httpService.processTimeout = time.Duration(cfgProcessTimeout.(float64)) * time.Millisecond
 	}
 
 	httpService.httpServer.Init(addr.(string), httpService, readTimeout, writeTimeout)
 	//Set CAFile
-	caFileList,ok := tcpCfg["CAFile"]
+	caFileList, ok := httpCfg["CAFile"]
 	if ok == false {
 		return nil
 	}
 	iCaList := caFileList.([]interface{})
-	var caFile [] network.CAFile
-	for _,i := range iCaList {
+	var caFile []network.CAFile
+	for _, i := range iCaList {
 		mapCAFile := i.(map[string]interface{})
-		c,ok := mapCAFile["Certfile"]
-		if ok == false{
+		c, ok := mapCAFile["Certfile"]
+		if ok == false {
 			continue
 		}
-		k,ok := mapCAFile["Keyfile"]
-		if ok == false{
+		k, ok := mapCAFile["Keyfile"]
+		if ok == false {
 			continue
 		}
 
-		if c.(string)!="" && k.(string)!="" {
-			caFile = append(caFile,network.CAFile{
-				CertFile: c.(string),
+		if c.(string) != "" && k.(string) != "" {
+			caFile = append(caFile, network.CAFile{
+				CertFile: c.(string),
 				Keyfile:  k.(string),
 			})
 		}
 	}
 	httpService.httpServer.SetCAFile(caFile)
-	httpService.httpServer.Start()
+
+	if httpService.manualStart == false {
+		httpService.httpServer.Start()
+	}
 
 	return nil
 }
 
+func (httpService *HttpService) StartListen() {
+	if httpService.manualStart {
+		httpService.httpServer.Start()
+	}
+}
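OnInit reads its settings from the service's config map. The keys below are exactly the ones the code looks up; the surrounding `"HttpService"` name and file layout are assumptions following origin's usual service.json conventions. Note that the timeout values are interpreted as milliseconds (the code multiplies by `time.Millisecond`), and `CAFile` is optional:

```json
"HttpService": {
    "ListenAddr": "0.0.0.0:9402",
    "ReadTimeout": 10000,
    "WriteTimeout": 10000,
    "ProcessTimeout": 10000,
    "ManualStart": false,
    "CAFile": [
        {"Certfile": "./cert/server.pem", "Keyfile": "./cert/server.key"}
    ]
}
```

With `ManualStart` set to true, the server is not started in OnInit; the application must call `StartListen()` later.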
 func (httpService *HttpService) SetAllowCORS(corsHeader *CORSHeader) {
 	httpService.corsHeader = corsHeader
 }
 
-func (httpService *HttpService) ProcessFile(session *HttpSession){
+func (httpService *HttpService) ProcessFile(session *HttpSession) {
 	uPath := session.r.URL.Path
 	idx := strings.Index(uPath, session.fileData.matchUrl)
 	subPath := strings.Trim(uPath[idx+len(session.fileData.matchUrl):], "/")
 
-	destLocalPath := session.fileData.localPath + "/"+subPath
+	destLocalPath := session.fileData.localPath + "/" + subPath
 
 	switch session.GetMethod() {
 	case METHOD_GET:
@@ -454,29 +473,29 @@ func (httpService *HttpService) ProcessFile(session *HttpSession){
 		defer localFd.Close()
 		io.Copy(localFd, resourceFile)
 		session.WriteStatusCode(http.StatusOK)
-		session.Write([]byte(uPath+"/"+fileName))
+		session.Write([]byte(uPath + "/" + fileName))
 		session.flush()
 	}
 }
 
-func NewAllowCORSHeader() *CORSHeader{
+func NewAllowCORSHeader() *CORSHeader {
 	header := &CORSHeader{}
 	header.AllowCORSHeader = map[string][]string{}
 	header.AllowCORSHeader["Access-Control-Allow-Origin"] = []string{"*"}
-	header.AllowCORSHeader["Access-Control-Allow-Methods"] =[]string{ "POST, GET, OPTIONS, PUT, DELETE"}
+	header.AllowCORSHeader["Access-Control-Allow-Methods"] = []string{"POST, GET, OPTIONS, PUT, DELETE"}
 	header.AllowCORSHeader["Access-Control-Allow-Headers"] = []string{"Content-Type"}
 
 	return header
 }
 
-func (slf *CORSHeader) AddAllowHeader(key string,val string){
-	slf.AllowCORSHeader["Access-Control-Allow-Headers"] = append(slf.AllowCORSHeader["Access-Control-Allow-Headers"],fmt.Sprintf("%s,%s",key,val))
+func (slf *CORSHeader) AddAllowHeader(key string, val string) {
+	slf.AllowCORSHeader["Access-Control-Allow-Headers"] = append(slf.AllowCORSHeader["Access-Control-Allow-Headers"], fmt.Sprintf("%s,%s", key, val))
 }
 
-func (slf *CORSHeader) copyTo(header http.Header){
-	for k,v := range slf.AllowCORSHeader{
-		for _,val := range v{
-			header.Add(k,val)
+func (slf *CORSHeader) copyTo(header http.Header) {
+	for k, v := range slf.AllowCORSHeader {
+		for _, val := range v {
+			header.Add(k, val)
 		}
 	}
 }
@@ -491,12 +510,12 @@ func (httpService *HttpService) ServeHTTP(w http.ResponseWriter, r *http.Request
 		return
 	}
 
-	session := &HttpSession{sessionDone:make(chan *HttpSession,1),httpRouter:httpService.httpRouter,statusCode:http.StatusOK}
+	session := &HttpSession{sessionDone: make(chan *HttpSession, 1), httpRouter: httpService.httpRouter, statusCode: http.StatusOK}
 	session.r = r
 	session.w = w
 
 	defer r.Body.Close()
-	body, err := ioutil.ReadAll(r.Body)
+	body, err := io.ReadAll(r.Body)
 	if err != nil {
 		session.WriteStatusCode(http.StatusGatewayTimeout)
 		session.flush()
@@ -504,19 +523,19 @@ func (httpService *HttpService) ServeHTTP(w http.ResponseWriter, r *http.Request
 	}
 	session.body = body
 
-	httpService.GetEventHandler().NotifyEvent(&event.Event{Type:event.Sys_Event_Http_Event,Data:session})
+	httpService.GetEventHandler().NotifyEvent(&event.Event{Type: event.Sys_Event_Http_Event, Data: session})
 	ticker := time.NewTicker(httpService.processTimeout)
 	select {
 	case <-ticker.C:
 		session.WriteStatusCode(http.StatusGatewayTimeout)
 		session.flush()
 		break
-	case <- session.sessionDone:
-		if session.fileData!=nil {
+	case <-session.sessionDone:
+		if session.fileData != nil {
 			httpService.ProcessFile(session)
-		}else if session.redirectData!=nil {
+		} else if session.redirectData != nil {
 			session.redirects()
-		}else{
+		} else {
 			session.flush()
 		}
 	}
231  sysservice/messagequeueservice/CustomerSubscriber.go  Normal file
@@ -0,0 +1,231 @@
package messagequeueservice

import (
	"errors"
	"fmt"
	"github.com/duanhf2012/origin/cluster"
	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/rpc"
	"github.com/duanhf2012/origin/util/coroutine"
	"strings"
	"sync/atomic"
	"time"
)

type CustomerSubscriber struct {
	rpc.IRpcHandler
	topic             string
	subscriber        *Subscriber
	fromNodeId        int
	callBackRpcMethod string
	serviceName       string
	StartIndex        uint64
	oneBatchQuantity  int32
	subscribeMethod   SubscribeMethod
	customerId        string

	isStop     int32       //exit flag
	topicCache []TopicData // cache for messages fetched out of the message queue
}

const DefaultOneBatchQuantity = 1000

type SubscribeMethod = int32

const (
	MethodCustom SubscribeMethod = 0 //custom mode: fetch or subscribe starting from the StartIndex supplied by the consumer
	MethodLast   SubscribeMethod = 1 //Last mode: subscribe from the position last recorded for this consumer
)

func (cs *CustomerSubscriber) trySetSubscriberBaseInfo(rpcHandler rpc.IRpcHandler, ss *Subscriber, topic string, subscribeMethod SubscribeMethod, customerId string, fromNodeId int, callBackRpcMethod string, startIndex uint64, oneBatchQuantity int32) error {
	cs.subscriber = ss
	cs.fromNodeId = fromNodeId
	cs.callBackRpcMethod = callBackRpcMethod
	//cs.StartIndex = startIndex
	cs.subscribeMethod = subscribeMethod
	cs.customerId = customerId
	cs.StartIndex = startIndex
	cs.topic = topic
	cs.IRpcHandler = rpcHandler
	if oneBatchQuantity == 0 {
		cs.oneBatchQuantity = DefaultOneBatchQuantity
	} else {
		cs.oneBatchQuantity = oneBatchQuantity
	}

	strRpcMethod := strings.Split(callBackRpcMethod, ".")
	if len(strRpcMethod) != 2 {
		err := errors.New("RpcMethod " + callBackRpcMethod + " is error")
		log.SError(err.Error())
		return err
	}
	cs.serviceName = strRpcMethod[0]

	if cluster.HasService(fromNodeId, cs.serviceName) == false {
		err := fmt.Errorf("nodeId %d cannot found %s", fromNodeId, cs.serviceName)
		log.SError(err.Error())
		return err
	}

	if cluster.GetCluster().IsNodeConnected(fromNodeId) == false {
		err := fmt.Errorf("nodeId %d is disconnect", fromNodeId)
		log.SError(err.Error())
		return err
	}

	if startIndex == 0 {
		now := time.Now()
		zeroTime := time.Date(now.Year(), now.Month(), now.Day(), 0, 0, 0, 0, now.Location())
		//fmt.Println(zeroTime.Unix())
		cs.StartIndex = uint64(zeroTime.Unix() << 32)
	}

	cs.topicCache = make([]TopicData, oneBatchQuantity)
	return nil
}

// Start the subscription
func (cs *CustomerSubscriber) Subscribe(rpcHandler rpc.IRpcHandler, ss *Subscriber, topic string, subscribeMethod SubscribeMethod, customerId string, fromNodeId int, callBackRpcMethod string, startIndex uint64, oneBatchQuantity int32) error {
	err := cs.trySetSubscriberBaseInfo(rpcHandler, ss, topic, subscribeMethod, customerId, fromNodeId, callBackRpcMethod, startIndex, oneBatchQuantity)
	if err != nil {
		return err
	}

	cs.subscriber.queueWait.Add(1)
	coroutine.GoRecover(cs.SubscribeRun, -1)
	return nil
}

// Cancel the subscription
func (cs *CustomerSubscriber) UnSubscribe() {
	atomic.StoreInt32(&cs.isStop, 1)
}

func (cs *CustomerSubscriber) LoadLastIndex() {
	for {
		if atomic.LoadInt32(&cs.isStop) != 0 {
			log.SRelease("topic ", cs.topic, " out of subscription")
			break
		}

		log.SRelease("customer ", cs.customerId, " start load last index ")
		lastIndex, ret := cs.subscriber.dataPersist.LoadCustomerIndex(cs.topic, cs.customerId)
		if ret == true {
			if lastIndex > 0 {
				cs.StartIndex = lastIndex
			} else {
				//otherwise keep using the index sent back by the client
			}
			log.SRelease("customer ", cs.customerId, " load finish,start index is ", cs.StartIndex)
			break
		}

		log.SRelease("customer ", cs.customerId, " load last index is fail...")
		time.Sleep(5 * time.Second)
	}
}

func (cs *CustomerSubscriber) SubscribeRun() {
	defer cs.subscriber.queueWait.Done()
	log.SRelease("topic ", cs.topic, " start subscription")

	//load the previously recorded position
	if cs.subscribeMethod == MethodLast {
		cs.LoadLastIndex()
	}

	for {
		if atomic.LoadInt32(&cs.isStop) != 0 {
			log.SRelease("topic ", cs.topic, " out of subscription")
			break
		}

		if cs.checkCustomerIsValid() == false {
			break
		}

		//todo: check for exit
		if cs.subscribe() == false {
			log.SRelease("topic ", cs.topic, " out of subscription")
			break
		}
	}

	//remove the subscription relationship
	cs.subscriber.removeCustomer(cs.customerId, cs)
	log.SRelease("topic ", cs.topic, " unsubscription")
}

func (cs *CustomerSubscriber) subscribe() bool {
	//look in memory first
	topicData, ret := cs.subscriber.queue.FindData(cs.StartIndex+1, cs.oneBatchQuantity, cs.topicCache[:0])
	if ret == true {
		cs.publishToCustomer(topicData)
		return true
	}

	//otherwise look in the persisted data
	topicData = cs.subscriber.dataPersist.FindTopicData(cs.topic, cs.StartIndex, int64(cs.oneBatchQuantity), cs.topicCache[:0])
	return cs.publishToCustomer(topicData)
}

func (cs *CustomerSubscriber) checkCustomerIsValid() bool {
	//1. check whether the node is online; if not, cancel the subscription outright
	if cluster.GetCluster().IsNodeConnected(cs.fromNodeId) == false {
		return false
	}

	//2. verify the service still exists; if not, exit
	if cluster.HasService(cs.fromNodeId, cs.serviceName) == false {
		return false
	}

	return true
}

func (cs *CustomerSubscriber) publishToCustomer(topicData []TopicData) bool {
	if cs.checkCustomerIsValid() == false {
		return false
	}

	if len(topicData) == 0 {
		//no data at all, wait a second
		time.Sleep(time.Second * 1)
		return true
	}

	//3. retry sending on failure
	var dbQueuePublishReq rpc.DBQueuePublishReq
	var dbQueuePushRes rpc.DBQueuePublishRes
	dbQueuePublishReq.TopicName = cs.topic
	cs.subscriber.dataPersist.OnPushTopicDataToCustomer(cs.topic, topicData)
	for i := 0; i < len(topicData); i++ {
		dbQueuePublishReq.PushData = append(dbQueuePublishReq.PushData, topicData[i].RawData)
	}

	for {
		if atomic.LoadInt32(&cs.isStop) != 0 {
			break
		}

		if cs.checkCustomerIsValid() == false {
			return false
		}

		//push the data
		err := cs.CallNodeWithTimeout(4*time.Minute, cs.fromNodeId, cs.callBackRpcMethod, &dbQueuePublishReq, &dbQueuePushRes)
		if err != nil {
			time.Sleep(time.Second * 1)
			continue
		}

		//persist the progress
		endIndex := cs.subscriber.dataPersist.GetIndex(&topicData[len(topicData)-1])
		cs.StartIndex = endIndex
		cs.subscriber.dataPersist.PersistIndex(cs.topic, cs.customerId, endIndex)

		return true
	}

	return true
}
108  sysservice/messagequeueservice/MemoryQueue.go  Normal file
@@ -0,0 +1,108 @@
package messagequeueservice

import (
	"github.com/duanhf2012/origin/util/algorithms"
	"sync"
)

type MemoryQueue struct {
	subscriber *Subscriber

	topicQueue []TopicData
	head       int32
	tail       int32

	locker sync.RWMutex
}

func (mq *MemoryQueue) Init(cap int32) {
	mq.topicQueue = make([]TopicData, cap+1)
}

// Push data at the tail of the queue
func (mq *MemoryQueue) Push(topicData *TopicData) bool {
	mq.locker.Lock()
	defer mq.locker.Unlock()

	nextPos := (mq.tail + 1) % int32(len(mq.topicQueue))
	//if the queue is full
	if nextPos == mq.head {
		//drop the element at the head
		mq.head++
		mq.head = mq.head % int32(len(mq.topicQueue))
	}

	mq.tail = nextPos
	mq.topicQueue[mq.tail] = *topicData
	return true
}

func (mq *MemoryQueue) findData(startPos int32, startIndex uint64, limit int32) ([]TopicData, bool) {
	//empty queue, no data
	if mq.head == mq.tail {
		return nil, true
	}

	var findStartPos int32
	var findEndPos int32
	findStartPos = startPos //(mq.head + 1) % cap(mq.topicQueue)
	if findStartPos <= mq.tail {
		findEndPos = mq.tail + 1
	} else {
		findEndPos = int32(len(mq.topicQueue))
	}

	if findStartPos >= findEndPos {
		return nil, false
	}

	//the requested Seq is smaller than the smallest Seq held in memory, so report failure
	if mq.topicQueue[findStartPos].Seq > startIndex {
		return nil, false
	}

	//binary-search the position
	pos := int32(algorithms.BiSearch(mq.topicQueue[findStartPos:findEndPos], startIndex, 1))
	if pos == -1 {
		return nil, false
	}

	pos += findStartPos
	//compute the end position
	endPos := limit + pos
	if endPos > findEndPos {
		endPos = findEndPos
	}

	return mq.topicQueue[pos:endPos], true
}

// FindData returns the matching data in []TopicData (nil means none); the bool reports whether the lookup could be served from memory (false means fall back to storage)
func (mq *MemoryQueue) FindData(startIndex uint64, limit int32, dataQueue []TopicData) ([]TopicData, bool) {
	mq.locker.RLock()
	defer mq.locker.RUnlock()

	//when the queue is empty, the database should be searched instead
	if mq.head == mq.tail {
		return nil, false
	} else if mq.head < mq.tail {
		//the queue has not wrapped around
		datas, ret := mq.findData(mq.head+1, startIndex, limit)
		if ret {
			dataQueue = append(dataQueue, datas...)
		}
		return dataQueue, ret
	} else {
		//wrapped around: search the rear segment first
		datas, ret := mq.findData(mq.head+1, startIndex, limit)
		if ret {
			dataQueue = append(dataQueue, datas...)
			return dataQueue, ret
		}

		//not found in the rear segment, search from the front
		datas, ret = mq.findData(0, startIndex, limit)
		dataQueue = append(dataQueue, datas...)
		return dataQueue, ret
	}
}
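MemoryQueue is a fixed-capacity ring over a slice of `cap+1` slots: `head == tail` means empty, and pushing into a full ring silently drops the oldest element. A self-contained sketch of just that buffer mechanic (hypothetical `ring` type, uint64 payloads standing in for TopicData):

```go
package main

import "fmt"

// ring mimics MemoryQueue's storage: cap+1 slots, head == tail when
// empty, and the head element is discarded on overflow.
type ring struct {
	buf        []uint64
	head, tail int32
}

func newRing(capacity int32) *ring {
	return &ring{buf: make([]uint64, capacity+1)}
}

func (r *ring) push(v uint64) {
	next := (r.tail + 1) % int32(len(r.buf))
	if next == r.head { // full: overwrite the oldest entry
		r.head = (r.head + 1) % int32(len(r.buf))
	}
	r.tail = next
	r.buf[r.tail] = v
}

// items walks from head to tail in insertion order.
func (r *ring) items() []uint64 {
	var out []uint64
	for p := r.head; p != r.tail; {
		p = (p + 1) % int32(len(r.buf))
		out = append(out, r.buf[p])
	}
	return out
}

func main() {
	r := newRing(3)
	for v := uint64(1); v <= 5; v++ {
		r.push(v)
	}
	fmt.Println(r.items()) // [3 4 5]
}
```

The extra slot is what lets the code distinguish "full" (`next == head`) from "empty" (`head == tail`) without a separate length counter; FindData's two-segment search follows directly from the fact that a wrapped ring stores its oldest data after `head` and its newest before `tail`.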
36  sysservice/messagequeueservice/MemoryQueue_test.go  Normal file
@@ -0,0 +1,36 @@
package messagequeueservice

import (
	"fmt"
	"testing"
)

type In int

func (i In) GetValue() int {
	return int(i)
}

func Test_BiSearch(t *testing.T) {
	var memQueue MemoryQueue
	memQueue.Init(5)

	for i := 1; i <= 8; i++ {
		memQueue.Push(&TopicData{Seq: uint64(i)})
	}

	startindex := uint64(0)
	for {
		retData, ret := memQueue.FindData(startindex+1, 10, nil) //FindData takes a reuse buffer as its third argument
		fmt.Println(retData, ret)
		for _, d := range retData {
			if d.Seq > startindex {
				startindex = d.Seq
			}
		}
		if ret == false {
			break
		}
	}
}
126
sysservice/messagequeueservice/MessageQueueService.go
Normal file
126
sysservice/messagequeueservice/MessageQueueService.go
Normal file
@@ -0,0 +1,126 @@
|
||||
package messagequeueservice

import (
	"errors"
	"fmt"
	"sync"

	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/rpc"
	"github.com/duanhf2012/origin/service"
)

type QueueDataPersist interface {
	service.IModule

	OnExit()
	OnReceiveTopicData(topic string, topicData []TopicData)        // called when pushed data is received
	OnPushTopicDataToCustomer(topic string, topicData []TopicData) // called when data is pushed to a customer
	// PersistTopicData persists the data. Returning false means failure and the
	// caller will keep retrying; use retryCount to give up after too many
	// attempts by returning true.
	PersistTopicData(topic string, topicData []TopicData, retryCount int) ([]TopicData, []TopicData, bool)
	// FindTopicData looks up data starting from startIndex.
	FindTopicData(topic string, startIndex uint64, limit int64, topicBuff []TopicData) []TopicData
	// LoadCustomerIndex returns false on failure (usually a read error), which
	// triggers a retry. If the customer does not exist it returns (0, true).
	LoadCustomerIndex(topic string, customerId string) (uint64, bool)
	GetIndex(topicData *TopicData) uint64                       // extracts the progress index from topic data
	PersistIndex(topic string, customerId string, index uint64) // persists the progress index
}

type MessageQueueService struct {
	service.Service

	sync.Mutex
	mapTopicRoom map[string]*TopicRoom

	queueWait   sync.WaitGroup
	dataPersist QueueDataPersist

	memoryQueueLen            int32
	maxProcessTopicBacklogNum int32 // maximum backlog: data is written to a channel and drained by a goroutine for persistence; defaults to 100000
}

func (ms *MessageQueueService) OnInit() error {
	ms.mapTopicRoom = map[string]*TopicRoom{}
	if errC := ms.ReadCfg(); errC != nil {
		return errC
	}

	if ms.dataPersist == nil {
		return errors.New("QueueDataPersist is not set up")
	}

	if _, err := ms.AddModule(ms.dataPersist); err != nil {
		return err
	}

	return nil
}

func (ms *MessageQueueService) ReadCfg() error {
	mapDBServiceCfg, ok := ms.GetService().GetServiceCfg().(map[string]interface{})
	if !ok {
		return fmt.Errorf("MessageQueueService config is error")
	}

	maxProcessTopicBacklogNum, ok := mapDBServiceCfg["MaxProcessTopicBacklogNum"]
	if !ok {
		ms.maxProcessTopicBacklogNum = DefaultMaxTopicBacklogNum
		log.SRelease("MaxProcessTopicBacklogNum config is set to the default value of ", DefaultMaxTopicBacklogNum)
	} else {
		ms.maxProcessTopicBacklogNum = int32(maxProcessTopicBacklogNum.(float64))
	}

	memoryQueueLen, ok := mapDBServiceCfg["MemoryQueueLen"]
	if !ok {
		ms.memoryQueueLen = DefaultMemoryQueueLen
		log.SRelease("MemoryQueueLen config is set to the default value of ", DefaultMemoryQueueLen)
	} else {
		ms.memoryQueueLen = int32(memoryQueueLen.(float64))
	}

	return nil
}

func (ms *MessageQueueService) Setup(dataPersist QueueDataPersist) {
	ms.dataPersist = dataPersist
}

func (ms *MessageQueueService) OnRelease() {
	// Stop all topic rooms.
	ms.Lock()
	for _, room := range ms.mapTopicRoom {
		room.Stop()
	}
	ms.Unlock()

	// Make sure every goroutine has exited before releasing.
	ms.queueWait.Wait()

	// Notify the persistence object.
	ms.dataPersist.OnExit()
}

func (ms *MessageQueueService) GetTopicRoom(topic string) *TopicRoom {
	ms.Lock()
	defer ms.Unlock()
	topicRoom := ms.mapTopicRoom[topic]
	if topicRoom != nil {
		return topicRoom
	}

	topicRoom = &TopicRoom{}
	topicRoom.Init(ms.maxProcessTopicBacklogNum, ms.memoryQueueLen, topic, &ms.queueWait, ms.dataPersist)
	ms.mapTopicRoom[topic] = topicRoom

	return topicRoom
}

func (ms *MessageQueueService) RPC_Publish(inParam *rpc.DBQueuePublishReq, outParam *rpc.DBQueuePublishRes) error {
	topicRoom := ms.GetTopicRoom(inParam.TopicName)
	return topicRoom.Publish(inParam.PushData)
}

func (ms *MessageQueueService) RPC_Subscribe(req *rpc.DBQueueSubscribeReq, res *rpc.DBQueueSubscribeRes) error {
	topicRoom := ms.GetTopicRoom(req.TopicName)
	return topicRoom.TopicSubscribe(ms.GetRpcHandler(), req.SubType, int32(req.Method), int(req.FromNodeId), req.RpcMethod, req.TopicName, req.CustomerId, req.StartIndex, req.OneBatchQuantity)
}
425
sysservice/messagequeueservice/MongoPersist.go
Normal file
@@ -0,0 +1,425 @@
package messagequeueservice

import (
	"errors"
	"fmt"
	"time"

	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/service"
	"github.com/duanhf2012/origin/sysmodule/mongodbmodule"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo/options"
)

const MaxDays = 180

type DataType interface {
	int | uint | int64 | uint64 | float32 | float64 | int32 | uint32 | int16 | uint16
}

func convertToNumber[DType DataType](val interface{}) (error, DType) {
	switch v := val.(type) {
	case int64:
		return nil, DType(v)
	case int:
		return nil, DType(v)
	case uint:
		return nil, DType(v)
	case uint64:
		return nil, DType(v)
	case float32:
		return nil, DType(v)
	case float64:
		return nil, DType(v)
	case int32:
		return nil, DType(v)
	case uint32:
		return nil, DType(v)
	case int16:
		return nil, DType(v)
	case uint16:
		return nil, DType(v)
	}

	return errors.New("unsupported type"), 0
}

type MongoPersist struct {
	service.Module
	mongo mongodbmodule.MongoModule

	url        string // connection url
	dbName     string // database name
	retryCount int    // retry count when persisting to the database
}

const CustomerCollectName = "SysCustomer"

func (mp *MongoPersist) OnInit() error {
	if errC := mp.ReadCfg(); errC != nil {
		return errC
	}

	err := mp.mongo.Init(mp.url, time.Second*15)
	if err != nil {
		return err
	}

	err = mp.mongo.Start()
	if err != nil {
		log.SError("start dbService[", mp.dbName, "], url[", mp.url, "] init error:", err.Error())
		return err
	}

	// Create the unique index on (Customer, Topic).
	var IndexKey [][]string
	var keys []string
	keys = append(keys, "Customer", "Topic")
	IndexKey = append(IndexKey, keys)
	s := mp.mongo.TakeSession()
	if err := s.EnsureUniqueIndex(mp.dbName, CustomerCollectName, IndexKey, true, true, true); err != nil {
		log.SError("EnsureUniqueIndex is fail ", err.Error())
		return err
	}

	return nil
}

func (mp *MongoPersist) ReadCfg() error {
	mapDBServiceCfg, ok := mp.GetService().GetServiceCfg().(map[string]interface{})
	if !ok {
		return fmt.Errorf("MessageQueueService config is error")
	}

	url, ok := mapDBServiceCfg["Url"]
	if !ok {
		return fmt.Errorf("MessageQueueService config is error")
	}
	mp.url = url.(string)

	dbName, ok := mapDBServiceCfg["DBName"]
	if !ok {
		return fmt.Errorf("MessageQueueService config is error")
	}
	mp.dbName = dbName.(string)

	retryCount, ok := mapDBServiceCfg["RetryCount"]
	if !ok {
		return fmt.Errorf("MongoPersist config is error")
	}
	mp.retryCount = int(retryCount.(float64))

	return nil
}

func (mp *MongoPersist) OnExit() {
}

// OnReceiveTopicData is called when pushed data is received.
func (mp *MongoPersist) OnReceiveTopicData(topic string, topicData []TopicData) {
	// Insert an _id field into every received document.
	for i := 0; i < len(topicData); i++ {
		var document bson.D
		err := bson.Unmarshal(topicData[i].RawData, &document)
		if err != nil {
			topicData[i].RawData = nil
			log.SError(topic, " data Unmarshal is fail ", err.Error())
			continue
		}

		document = append(document, bson.E{Key: "_id", Value: topicData[i].Seq})

		byteRet, err := bson.Marshal(document)
		if err != nil {
			topicData[i].RawData = nil
			log.SError(topic, " data Marshal is fail ", err.Error())
			continue
		}
		topicData[i].ExtendParam = document
		topicData[i].RawData = byteRet
	}
}

// OnPushTopicDataToCustomer is called when data is pushed to a customer.
func (mp *MongoPersist) OnPushTopicDataToCustomer(topic string, topicData []TopicData) {
}

// persistTopicData persists one batch into the given collection.
func (mp *MongoPersist) persistTopicData(collectionName string, topicData []TopicData, retryCount int) bool {
	s := mp.mongo.TakeSession()
	ctx, cancel := s.GetDefaultContext()
	defer cancel()

	var documents []interface{}
	for _, tData := range topicData {
		if tData.ExtendParam == nil {
			continue
		}
		documents = append(documents, tData.ExtendParam)
	}

	_, err := s.Collection(mp.dbName, collectionName).InsertMany(ctx, documents)
	if err != nil {
		log.SError("PersistTopicData InsertMany fail,collect name is ", collectionName, " error:", err.Error())

		// Give up once the maximum retry count is reached.
		return retryCount >= mp.retryCount
	}

	return true
}

func (mp *MongoPersist) IsSameDay(timestamp1 int64, timestamp2 int64) bool {
	t1 := time.Unix(timestamp1, 0)
	t2 := time.Unix(timestamp2, 0)
	return t1.Year() == t2.Year() && t1.Month() == t2.Month() && t1.Day() == t2.Day()
}

// PersistTopicData persists the data, splitting the batch at a day boundary.
func (mp *MongoPersist) PersistTopicData(topic string, topicData []TopicData, retryCount int) ([]TopicData, []TopicData, bool) {
	if len(topicData) == 0 {
		return nil, nil, true
	}

	preDate := topicData[0].Seq >> 32
	var findPos int
	for findPos = 1; findPos < len(topicData); findPos++ {
		newDate := topicData[findPos].Seq >> 32
		// The day has changed.
		if !mp.IsSameDay(int64(preDate), int64(newDate)) {
			break
		}
	}

	collectName := fmt.Sprintf("%s_%s", topic, mp.GetDateByIndex(topicData[0].Seq))
	ret := mp.persistTopicData(collectName, topicData[:findPos], retryCount)
	// On failure, retry next time.
	if !ret {
		return nil, nil, false
	}

	// On success, return the remaining batch and the saved batch.
	return topicData[findPos:], topicData[0:findPos], true
}

// findTopicData looks up data in the collection matching startIndex's date.
func (mp *MongoPersist) findTopicData(topic string, startIndex uint64, limit int64, topicBuff []TopicData) ([]TopicData, bool) {
	s := mp.mongo.TakeSession()

	condition := bson.D{{Key: "_id", Value: bson.D{{Key: "$gt", Value: startIndex}}}}

	var findOption options.FindOptions
	findOption.SetLimit(limit)
	var findOptions []*options.FindOptions
	findOptions = append(findOptions, &findOption)

	ctx, cancel := s.GetDefaultContext()
	defer cancel()
	collectName := fmt.Sprintf("%s_%s", topic, mp.GetDateByIndex(startIndex))
	cursor, err := s.Collection(mp.dbName, collectName).Find(ctx, condition, findOptions...)
	if err != nil || cursor.Err() != nil {
		if err == nil {
			err = cursor.Err()
		}
		log.SError("find collect name ", topic, " is error:", err.Error())
		return nil, false
	}

	var res []interface{}
	ctxAll, cancelAll := s.GetDefaultContext()
	defer cancelAll()
	err = cursor.All(ctxAll, &res)
	if err != nil {
		log.SError("find collect name ", topic, " is error:", err.Error())
		return nil, false
	}

	// Marshal the results back into raw BSON for the caller.
	for i := 0; i < len(res); i++ {
		rawData, errM := bson.Marshal(res[i])
		if errM != nil {
			log.SError("collect name ", topic, " Marshal is error:", errM.Error())
			return nil, false
		}
		topicBuff = append(topicBuff, TopicData{RawData: rawData})
	}

	return topicBuff, true
}

func (mp *MongoPersist) IsYesterday(startIndex uint64) (bool, string) {
	timeStamp := int64(startIndex >> 32)

	startTime := time.Unix(timeStamp, 0).AddDate(0, 0, 1)
	nowTm := time.Now()

	return startTime.Year() == nowTm.Year() && startTime.Month() == nowTm.Month() && startTime.Day() == nowTm.Day(), nowTm.Format("20060102")
}

func (mp *MongoPersist) getCollectCount(topic string, today string) (int64, error) {
	s := mp.mongo.TakeSession()
	ctx, cancel := s.GetDefaultContext()
	defer cancel()
	collectName := fmt.Sprintf("%s_%s", topic, today)
	count, err := s.Collection(mp.dbName, collectName).EstimatedDocumentCount(ctx)
	return count, err
}

// FindTopicData looks up data starting from startIndex.
func (mp *MongoPersist) FindTopicData(topic string, startIndex uint64, limit int64, topicBuff []TopicData) []TopicData {
	// If a day's collection has no match, keep moving forward day by day
	// until the current date is reached.
	for days := 1; days <= MaxDays; days++ {
		// Whether it is safe to jump to the next day.
		// At a day boundary, records may still be inserting in another
		// goroutine; jumping to the next day just because nothing was found
		// would skip data. The fix: only jump once the new day's collection
		// already has records, which proves yesterday's inserts have finished.
		IsJumpDays := true

		// If startIndex falls on yesterday, first check whether today has data.
		bYesterday, strToday := mp.IsYesterday(startIndex)
		if bYesterday {
			count, err := mp.getCollectCount(topic, strToday)
			if err != nil {
				// On failure, start over.
				log.SError("getCollectCount ", topic, "_", strToday, " is fail:", err.Error())
				return nil
			}
			// No records today: do not jump, yesterday may still receive data.
			if count == 0 {
				IsJumpDays = false
			}
		}

		// Search forward from startIndex.
		topicData, isSucc := mp.findTopicData(topic, startIndex, limit, topicBuff)
		// Return on data or on a database error; the caller starts the next round.
		if len(topicData) > 0 || !isSucc {
			return topicData
		}

		// No data found: stop once the index's date reaches today.
		if mp.GetDateByIndex(startIndex) >= mp.GetNowTime() {
			break
		}

		// Jumping days is not allowed: bail out.
		if !IsJumpDays {
			break
		}

		startIndex = mp.GetNextIndex(startIndex, days)
	}

	return nil
}

func (mp *MongoPersist) GetNowTime() string {
	now := time.Now()
	zeroTime := time.Date(now.Year(), now.Month(), now.Day(), 0, 0, 0, 0, now.Location())
	return zeroTime.Format("20060102")
}

func (mp *MongoPersist) GetDateByIndex(startIndex uint64) string {
	startTm := int64(startIndex >> 32)
	return time.Unix(startTm, 0).Format("20060102")
}

func (mp *MongoPersist) GetNextIndex(startIndex uint64, addDay int) uint64 {
	startTime := time.Unix(int64(startIndex>>32), 0)
	dateTime := time.Date(startTime.Year(), startTime.Month(), startTime.Day(), 0, 0, 0, 0, startTime.Location())
	newDateTime := dateTime.AddDate(0, 0, addDay)
	nextIndex := uint64(newDateTime.Unix()) << 32
	return nextIndex
}

// LoadCustomerIndex returns false on failure (usually a read error), which
// triggers a retry. If the customer does not exist it returns (0, true).
func (mp *MongoPersist) LoadCustomerIndex(topic string, customerId string) (uint64, bool) {
	s := mp.mongo.TakeSession()
	ctx, cancel := s.GetDefaultContext()
	defer cancel()

	condition := bson.D{{Key: "Customer", Value: customerId}, {Key: "Topic", Value: topic}}
	cursor, err := s.Collection(mp.dbName, CustomerCollectName).Find(ctx, condition)
	if err != nil {
		log.SError("Load topic ", topic, " customer ", customerId, " is fail:", err.Error())
		return 0, false
	}

	type findRes struct {
		Index uint64 `bson:"Index,omitempty"`
	}

	var res []findRes
	ctxAll, cancelAll := s.GetDefaultContext()
	defer cancelAll()
	err = cursor.All(ctxAll, &res)
	if err != nil {
		log.SError("Load topic ", topic, " customer ", customerId, " is fail:", err.Error())
		return 0, false
	}

	if len(res) == 0 {
		return 0, true
	}

	return res[0].Index, true
}

// GetIndex extracts the progress index from topic data.
func (mp *MongoPersist) GetIndex(topicData *TopicData) uint64 {
	if topicData.Seq > 0 {
		return topicData.Seq
	}

	var document bson.D
	err := bson.Unmarshal(topicData.RawData, &document)
	if err != nil {
		log.SError("GetIndex is fail ", err.Error())
		return 0
	}

	for _, e := range document {
		if e.Key == "_id" {
			errC, seq := convertToNumber[uint64](e.Value)
			if errC != nil {
				log.Error("value is error:%s,%+v, ", errC.Error(), e.Value)
			}

			return seq
		}
	}
	return topicData.Seq
}

// PersistIndex persists the progress index.
func (mp *MongoPersist) PersistIndex(topic string, customerId string, index uint64) {
	s := mp.mongo.TakeSession()

	condition := bson.D{{Key: "Customer", Value: customerId}, {Key: "Topic", Value: topic}}
	upsert := bson.M{"Customer": customerId, "Topic": topic, "Index": index}
	updata := bson.M{"$set": upsert}

	var updateOpts []*options.UpdateOptions
	updateOpts = append(updateOpts, options.Update().SetUpsert(true))

	ctx, cancel := s.GetDefaultContext()
	defer cancel()
	_, err := s.Collection(mp.dbName, CustomerCollectName).UpdateOne(ctx, condition, updata, updateOpts...)
	if err != nil {
		log.SError("PersistIndex fail :", err.Error())
	}
}
122
sysservice/messagequeueservice/MongoPersist_test.go
Normal file
@@ -0,0 +1,122 @@
package messagequeueservice

import (
	"fmt"
	"testing"
	"time"

	"go.mongodb.org/mongo-driver/bson"
)

var seq uint64
var lastTime int64

func NextSeq(addDays int) uint64 {
	now := time.Now().AddDate(0, 0, addDays)

	nowSec := now.Unix()
	if nowSec != lastTime {
		seq = 0
		lastTime = nowSec
	}
	// Must start at 1: queries filter on seq > 0.
	seq += 1

	return uint64(nowSec)<<32 | seq
}

func Test_MongoPersist(t *testing.T) {
	// 1. Initialize.
	var mongoPersist MongoPersist
	mongoPersist.url = "mongodb://admin:123456@192.168.2.15:27017/?minPoolSize=5&maxPoolSize=35&maxIdleTimeMS=30000"
	mongoPersist.dbName = "MongoPersistTest"
	mongoPersist.retryCount = 10
	mongoPersist.OnInit()

	// 2. Load the customer index.
	index, ret := mongoPersist.LoadCustomerIndex("TestTopic", "TestCustomer")
	fmt.Println(index, ret)

	now := time.Now()
	zeroTime := time.Date(now.Year(), now.Month(), now.Day()+1, 0, 0, 0, 0, now.Location())
	startIndex := uint64(zeroTime.Unix())<<32 | 1

	// Persist the index.
	mongoPersist.PersistIndex("TestTopic", "TestCustomer", startIndex)

	// Load the index again.
	index, ret = mongoPersist.LoadCustomerIndex("TestTopic", "TestCustomer")

	type RowTest struct {
		Name    string      `bson:"Name,omitempty"`
		MapTest map[int]int `bson:"MapTest,omitempty"`
		Message string      `bson:"Message,omitempty"`
	}

	type RowTest2 struct {
		Id      uint64      `bson:"_id,omitempty"`
		Name    string      `bson:"Name,omitempty"`
		MapTest map[int]int `bson:"MapTest,omitempty"`
		Message string      `bson:"Message,omitempty"`
	}

	// Store data.
	var findStartIndex uint64
	var topicData []TopicData
	for i := 1; i <= 1000; i++ {
		var rowTest RowTest
		rowTest.Name = fmt.Sprintf("Name_%d", i)
		rowTest.MapTest = make(map[int]int, 1)
		rowTest.MapTest[i] = i*1000 + i
		rowTest.Message = fmt.Sprintf("xxxxxxxxxxxxxxxxxx%d", i)
		byteRet, _ := bson.Marshal(rowTest)

		var dataSeq uint64
		if i <= 500 {
			dataSeq = NextSeq(-1)
		} else {
			dataSeq = NextSeq(0)
		}

		topicData = append(topicData, TopicData{RawData: byteRet, Seq: dataSeq})

		if i == 1 {
			findStartIndex = topicData[0].Seq
		}
	}

	mongoPersist.OnReceiveTopicData("TestTopic", topicData)

	for {
		if len(topicData) == 0 {
			break
		}

		topicData, _, ret = mongoPersist.PersistTopicData("TestTopic", topicData, 1)
		fmt.Println(ret)
	}

	// Read the data back in batches.
	for {
		retTopicData := mongoPersist.FindTopicData("TestTopic", findStartIndex, 300, nil)
		for i, data := range retTopicData {
			var rowTest RowTest2
			bson.Unmarshal(data.RawData, &rowTest)
			t.Log(rowTest.Name)

			if i == len(retTopicData)-1 {
				findStartIndex = mongoPersist.GetIndex(&data)
			}
		}

		t.Log("..................")
		if len(retTopicData) == 0 {
			break
		}
	}
}
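The read-back loop in the test above is cursor-style pagination: fetch a batch, advance the start index to the last item's index, repeat until a batch comes back empty. The same shape, isolated from MongoDB (hypothetical `pageAll`/`fetch` names, an in-memory data source standing in for `FindTopicData`/`GetIndex`):

```go
package main

import "fmt"

// pageAll repeatedly fetches batches keyed by the last seen index until an
// empty batch signals the end of the data.
func pageAll(fetch func(start uint64, limit int) []uint64, limit int) []uint64 {
	var all []uint64
	var start uint64
	for {
		batch := fetch(start, limit)
		if len(batch) == 0 {
			return all
		}
		all = append(all, batch...)
		start = batch[len(batch)-1] // resume strictly after the last item
	}
}

func main() {
	data := []uint64{1, 2, 3, 4, 5, 6, 7}
	// fetch returns up to limit values strictly greater than start.
	fetch := func(start uint64, limit int) []uint64 {
		var out []uint64
		for _, v := range data {
			if v > start && len(out) < limit {
				out = append(out, v)
			}
		}
		return out
	}
	fmt.Println(pageAll(fetch, 3)) // [1 2 3 4 5 6 7]
}
```

This only terminates because indexes are strictly increasing within a topic, which is exactly what the `seq` counter starting at 1 guarantees.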
91
sysservice/messagequeueservice/Subscriber.go
Normal file
@@ -0,0 +1,91 @@
package messagequeueservice

import (
	"sync"

	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/rpc"
)

// Subscriber dispatches topic data to subscribed customers.
type Subscriber struct {
	customerLocker sync.RWMutex
	mapCustomer    map[string]*CustomerSubscriber
	queue          MemoryQueue
	dataPersist    QueueDataPersist // queue data processor
	queueWait      *sync.WaitGroup
}

func (ss *Subscriber) Init(memoryQueueCap int32) {
	ss.queue.Init(memoryQueueCap)
	ss.mapCustomer = make(map[string]*CustomerSubscriber, 5)
}

func (ss *Subscriber) PushTopicDataToQueue(topic string, topics []TopicData) {
	for i := 0; i < len(topics); i++ {
		ss.queue.Push(&topics[i])
	}
}

func (ss *Subscriber) PersistTopicData(topic string, topics []TopicData, retryCount int) ([]TopicData, []TopicData, bool) {
	return ss.dataPersist.PersistTopicData(topic, topics, retryCount)
}

func (ss *Subscriber) TopicSubscribe(rpcHandler rpc.IRpcHandler, subScribeType rpc.SubscribeType, subscribeMethod SubscribeMethod, fromNodeId int, callBackRpcMethod string, topic string, customerId string, StartIndex uint64, oneBatchQuantity int32) error {
	// Unsubscribe request.
	if subScribeType == rpc.SubscribeType_Unsubscribe {
		ss.UnSubscribe(customerId)
		return nil
	}

	ss.customerLocker.Lock()
	customerSubscriber, ok := ss.mapCustomer[customerId]
	if ok {
		// Already subscribed: cancel the old subscription first.
		customerSubscriber.UnSubscribe()
		delete(ss.mapCustomer, customerId)
	}

	// Register the new subscription.
	customerSubscriber = &CustomerSubscriber{}
	ss.mapCustomer[customerId] = customerSubscriber
	ss.customerLocker.Unlock()

	err := customerSubscriber.Subscribe(rpcHandler, ss, topic, subscribeMethod, customerId, fromNodeId, callBackRpcMethod, StartIndex, oneBatchQuantity)
	if err != nil {
		return err
	}

	if ok {
		log.SRelease("repeat subscription for customer ", customerId)
	} else {
		log.SRelease("subscription for customer ", customerId)
	}

	return nil
}

func (ss *Subscriber) UnSubscribe(customerId string) {
	ss.customerLocker.RLock()
	defer ss.customerLocker.RUnlock()

	customerSubscriber, ok := ss.mapCustomer[customerId]
	if !ok {
		log.SWarning("failed to unsubscribe customer " + customerId)
		return
	}

	customerSubscriber.UnSubscribe()
}

func (ss *Subscriber) removeCustomer(customerId string, cs *CustomerSubscriber) {
	ss.customerLocker.Lock()
	// Only delete the current mapping: a replacement subscription may already
	// have swapped in a new CustomerSubscriber for this customer.
	if ss.mapCustomer[customerId] == cs {
		delete(ss.mapCustomer, customerId)
	}
	ss.customerLocker.Unlock()
}
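The identity check in `removeCustomer` is the interesting part: a stale cleanup may race with a re-subscription that has already replaced the map entry, so deletion must compare pointers, not just the key. A standalone sketch of that compare-before-delete pattern (hypothetical `registry` type, `*int` standing in for `*CustomerSubscriber`):

```go
package main

import (
	"fmt"
	"sync"
)

// registry demonstrates an identity-checked delete: an entry is only removed
// when the stored pointer is still the caller's own, so a stale cleanup
// cannot evict a newer replacement registered under the same key.
type registry struct {
	mu sync.Mutex
	m  map[string]*int
}

func (r *registry) removeIfSame(key string, v *int) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.m[key] == v {
		delete(r.m, key)
		return true
	}
	return false
}

func main() {
	old, cur := new(int), new(int)
	r := &registry{m: map[string]*int{"c1": cur}}
	fmt.Println(r.removeIfSame("c1", old)) // false: c1 was replaced, entry kept
	fmt.Println(r.removeIfSame("c1", cur)) // true: entry removed
}
```

Without the check, plain `delete(r.m, key)` in the stale cleanup would silently drop the fresh subscription.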
149
sysservice/messagequeueservice/TopicRoom.go
Normal file
@@ -0,0 +1,149 @@
package messagequeueservice

import (
	"errors"
	"sync"
	"sync/atomic"
	"time"

	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/util/coroutine"
)

type TopicData struct {
	Seq     uint64 // sequence number
	RawData []byte // raw payload

	ExtendParam interface{} // extension parameter
}

func (t TopicData) GetValue() uint64 {
	return t.Seq
}

var topicFullError = errors.New("topic room is full")

const DefaultOnceProcessTopicDataNum = 1024 // topic data processed per batch; sized for bulk persistence
const DefaultMaxTopicBacklogNum = 100000    // maximum channel backlog
const DefaultMemoryQueueLen = 50000         // maximum in-memory queue length
const maxTryPersistNum = 3000               // maximum retries, roughly > 5 minutes

type TopicRoom struct {
	topic        string         // topic name
	channelTopic chan TopicData // pushed data waiting to be processed

	Subscriber // subscriber dispatcher

	// Sequence generation.
	seq      uint32
	lastTime int64

	// onceProcessTopicDataNum int // maximum subscription data processed at once, for batch handling by Subscriber and QueueDataProcessor
	StagingBuff []TopicData

	isStop int32
}

// Init starts the room. maxTopicBacklogNum is the maximum topic backlog.
func (tr *TopicRoom) Init(maxTopicBacklogNum int32, memoryQueueLen int32, topic string, queueWait *sync.WaitGroup, dataPersist QueueDataPersist) {
	if maxTopicBacklogNum == 0 {
		maxTopicBacklogNum = DefaultMaxTopicBacklogNum
	}

	tr.channelTopic = make(chan TopicData, maxTopicBacklogNum)
	tr.topic = topic
	tr.dataPersist = dataPersist
	tr.queueWait = queueWait
	tr.StagingBuff = make([]TopicData, DefaultOnceProcessTopicDataNum)
	tr.queueWait.Add(1)
	tr.Subscriber.Init(memoryQueueLen)
	coroutine.GoRecover(tr.topicRoomRun, -1)
}

func (tr *TopicRoom) Publish(data [][]byte) error {
	if len(tr.channelTopic)+len(data) > cap(tr.channelTopic) {
		return topicFullError
	}

	// Generate ordered sequence numbers.
	for _, rawData := range data {
		tr.channelTopic <- TopicData{RawData: rawData, Seq: tr.NextSeq()}
	}

	return nil
}

func (tr *TopicRoom) NextSeq() uint64 {
	now := time.Now()

	nowSec := now.Unix()
	if nowSec != tr.lastTime {
		tr.seq = 0
		tr.lastTime = nowSec
	}
	// Must start at 1: queries filter on seq > 0.
	tr.seq += 1

	return uint64(nowSec)<<32 | uint64(tr.seq)
}

func (tr *TopicRoom) Stop() {
	atomic.StoreInt32(&tr.isStop, 1)
}

func (tr *TopicRoom) topicRoomRun() {
	defer tr.queueWait.Done()

	log.SRelease("topic room ", tr.topic, " is running..")
	for {
		if atomic.LoadInt32(&tr.isStop) != 0 {
			break
		}
		stagingBuff := tr.StagingBuff[:0]

		for i := 0; i < len(tr.channelTopic) && i < DefaultOnceProcessTopicDataNum; i++ {
			topicData := <-tr.channelTopic
			stagingBuff = append(stagingBuff, topicData)
		}
		tr.Subscriber.dataPersist.OnReceiveTopicData(tr.topic, stagingBuff)
		// Persist and place into memory.
		if len(stagingBuff) == 0 {
			time.Sleep(time.Millisecond * 50)
			continue
		}

		// On persistence failure, retry at most maxTryPersistNum times.
		for retryCount := 0; retryCount < maxTryPersistNum; {
			// Persist one batch; the remainder and the saved batch come back.
			// Assign (not redeclare) stagingBuff so the continue below resumes
			// with the unsaved remainder instead of re-persisting saved data.
			var savedBuff []TopicData
			var ret bool
			stagingBuff, savedBuff, ret = tr.PersistTopicData(tr.topic, stagingBuff, retryCount+1)
			if ret {
				// 1. Put the successfully stored data into memory.
				if len(savedBuff) > 0 {
					tr.PushTopicDataToQueue(tr.topic, savedBuff)
				}

				// 2. Persisting succeeded and batches remain: keep going.
				if len(stagingBuff) > 0 {
					continue
				}

				// 3. Done, leave the retry loop.
				break
			}

			// Bump the retry counter, wait 100ms and try again.
			retryCount++
			time.Sleep(time.Millisecond * 100)
		}
	}

	// Cancel all subscriptions.
	tr.customerLocker.Lock()
	for _, customer := range tr.mapCustomer {
		customer.UnSubscribe()
	}
	tr.customerLocker.Unlock()

	log.SRelease("topic room ", tr.topic, " is stop")
}
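The batch-collection loop in `topicRoomRun` never blocks: it only receives items that `len(ch)` reports as already buffered. A standalone sketch of that non-blocking drain (hypothetical `drainBatch` name; note that because `len(ch)` shrinks as items are received, the `i < len(ch)` comparison stops the loop early, so a call can return fewer items than are buffered):

```go
package main

import "fmt"

// drainBatch reads at most max items currently buffered in ch without
// blocking, mirroring how topicRoomRun collects a staging batch.
func drainBatch(ch chan int, max int) []int {
	batch := make([]int, 0, max)
	// len(ch) is re-evaluated each iteration and shrinks with every receive,
	// so the loop never blocks on an empty channel.
	for i := 0; i < len(ch) && i < max; i++ {
		batch = append(batch, <-ch)
	}
	return batch
}

func main() {
	ch := make(chan int, 10)
	for i := 1; i <= 5; i++ {
		ch <- i
	}
	fmt.Println(drainBatch(ch, 3)) // [1 2 3]
	fmt.Println(drainBatch(ch, 3)) // [4]
}
```

Items left behind by the shrinking-length cutoff are simply picked up on the next iteration of the outer loop, so nothing is lost.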
429
sysservice/rankservice/MongodbPersist.go
Normal file
@@ -0,0 +1,429 @@
package rankservice
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/rpc"
|
||||
"github.com/duanhf2012/origin/service"
|
||||
"github.com/duanhf2012/origin/sysmodule/mongodbmodule"
|
||||
"go.mongodb.org/mongo-driver/bson"
|
||||
"go.mongodb.org/mongo-driver/mongo/options"
|
||||
"runtime"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
)
|
||||
|
||||
const batchRemoveNum = 128 //一切删除的最大数量
|
||||
|
||||
// RankDataDB 排行表数据
|
||||
type RankDataDB struct {
|
||||
Id uint64 `bson:"_id"`
|
||||
RefreshTime int64 `bson:"RefreshTime"`
|
||||
SortData []int64 `bson:"SortData"`
|
||||
Data []byte `bson:"Data"`
|
||||
ExData []int64 `bson:"ExData"`
|
||||
}
|
||||
|
||||
// MongoPersist持久化Module
|
||||
type MongoPersist struct {
|
||||
service.Module
|
||||
mongo mongodbmodule.MongoModule
|
||||
|
||||
url string //Mongodb连接url
|
||||
dbName string //数据库名称
|
||||
SaveInterval time.Duration //落地数据库时间间隔
|
||||
|
||||
sync.Mutex
|
||||
mapRemoveRankData map[uint64]map[uint64]struct{} //将要删除的排行数据 map[RankId]map[Key]struct{}
|
||||
mapUpsertRankData map[uint64]map[uint64]RankData //需要upsert的排行数据 map[RankId][key]RankData
|
||||
|
||||
mapRankSkip map[uint64]IRankSkip //所有的排行榜对象map[RankId]IRankSkip
|
||||
maxRetrySaveCount int //存档重试次数
|
||||
retryTimeIntervalMs time.Duration //重试时间间隔
|
||||
|
||||
lastSaveTime time.Time //最后一次存档时间
|
||||
|
||||
stop int32 //是否停服
|
||||
waitGroup sync.WaitGroup //等待停服
|
||||
}
|
||||
|
||||
func (mp *MongoPersist) OnInit() error {
|
||||
mp.mapRemoveRankData = map[uint64]map[uint64]struct{}{}
|
||||
mp.mapUpsertRankData = map[uint64]map[uint64]RankData{}
|
||||
mp.mapRankSkip = map[uint64]IRankSkip{}
|
||||
|
||||
if errC := mp.ReadCfg(); errC != nil {
|
||||
return errC
|
||||
}
|
||||
|
||||
//初始化MongoDB
|
||||
err := mp.mongo.Init(mp.url, time.Second*15)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
//开始运行
|
||||
err = mp.mongo.Start()
|
||||
if err != nil {
|
||||
log.SError("start dbService[", mp.dbName, "], url[", mp.url, "] init error:", err.Error())
|
||||
return err
|
||||
}
|
||||
|
||||
//开启协程
|
||||
mp.waitGroup.Add(1)
|
||||
go mp.persistCoroutine()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (mp *MongoPersist) ReadCfg() error {
	mapDBServiceCfg, ok := mp.GetService().GetServiceCfg().(map[string]interface{})
	if ok == false {
		return fmt.Errorf("RankService config is invalid")
	}

	// read the database configuration
	saveMongoCfg, ok := mapDBServiceCfg["SaveMongo"]
	if ok == false {
		return fmt.Errorf("RankService.SaveMongo config is invalid")
	}

	mongodbCfg, ok := saveMongoCfg.(map[string]interface{})
	if ok == false {
		return fmt.Errorf("RankService.SaveMongo config is invalid")
	}

	url, ok := mongodbCfg["Url"]
	if ok == false {
		return fmt.Errorf("RankService.SaveMongo.Url config is invalid")
	}
	mp.url = url.(string)

	dbName, ok := mongodbCfg["DBName"]
	if ok == false {
		return fmt.Errorf("RankService.SaveMongo.DBName config is invalid")
	}
	mp.dbName = dbName.(string)

	saveInterval, ok := mongodbCfg["SaveIntervalMs"]
	if ok == false {
		return fmt.Errorf("RankService.SaveMongo.SaveIntervalMs config is invalid")
	}
	mp.SaveInterval = time.Duration(saveInterval.(float64)) * time.Millisecond

	maxRetrySaveCount, ok := mongodbCfg["MaxRetrySaveCount"]
	if ok == false {
		return fmt.Errorf("RankService.SaveMongo.MaxRetrySaveCount config is invalid")
	}
	mp.maxRetrySaveCount = int(maxRetrySaveCount.(float64))

	retryTimeIntervalMs, ok := mongodbCfg["RetryTimeIntervalMs"]
	if ok == false {
		return fmt.Errorf("RankService.SaveMongo.RetryTimeIntervalMs config is invalid")
	}
	mp.retryTimeIntervalMs = time.Duration(retryTimeIntervalMs.(float64)) * time.Millisecond

	return nil
}
// OnStart loads from the database when the server starts
func (mp *MongoPersist) OnStart() {
}

func (mp *MongoPersist) OnSetupRank(manual bool, rankSkip *RankSkip) error {
	if mp.mapRankSkip == nil {
		mp.mapRankSkip = map[uint64]IRankSkip{}
	}

	mp.mapRankSkip[rankSkip.GetRankID()] = rankSkip
	if manual == true {
		return nil
	}

	log.SRelease("start load rank ", rankSkip.GetRankName(), " from mongodb.")
	err := mp.loadFromDB(rankSkip.GetRankID(), rankSkip.GetRankName())
	if err != nil {
		log.SError("load from db failed: ", err.Error())
		return err
	}
	log.SRelease("finish load rank ", rankSkip.GetRankName(), " from mongodb.")
	return nil
}
func (mp *MongoPersist) loadFromDB(rankId uint64, rankCollectName string) error {
	s := mp.mongo.TakeSession()
	ctx, cancel := s.GetDefaultContext()
	defer cancel()

	condition := bson.D{}
	cursor, err := s.Collection(mp.dbName, rankCollectName).Find(ctx, condition)
	if err != nil {
		log.SError("find collect name ", rankCollectName, " is error:", err.Error())
		return err
	}

	if cursor.Err() != nil {
		log.SError("find collect name ", rankCollectName, " is error:", cursor.Err().Error())
		return cursor.Err()
	}

	rankSkip := mp.mapRankSkip[rankId]
	if rankSkip == nil {
		err = fmt.Errorf("rank %s is not setup", rankCollectName)
		log.SError(err.Error())
		return err
	}

	defer cursor.Close(ctx)
	for cursor.Next(ctx) {
		var rankDataDB RankDataDB
		err = cursor.Decode(&rankDataDB)
		if err != nil {
			log.SError(" collect name ", rankCollectName, " Decode is error:", err.Error())
			return err
		}

		var rankData rpc.RankData
		rankData.Data = rankDataDB.Data
		rankData.Key = rankDataDB.Id
		rankData.SortData = rankDataDB.SortData
		for _, eData := range rankDataDB.ExData {
			rankData.ExData = append(rankData.ExData, &rpc.ExtendIncData{InitValue: eData})
		}

		// apply to the rank list
		rankSkip.UpsetRank(&rankData, rankDataDB.RefreshTime, true)
	}

	return nil
}
func (mp *MongoPersist) lazyInitRemoveMap(rankId uint64) {
	if mp.mapRemoveRankData[rankId] == nil {
		mp.mapRemoveRankData[rankId] = make(map[uint64]struct{}, 256)
	}
}

func (mp *MongoPersist) lazyInitUpsertMap(rankId uint64) {
	if mp.mapUpsertRankData[rankId] == nil {
		mp.mapUpsertRankData[rankId] = make(map[uint64]RankData, 256)
	}
}
func (mp *MongoPersist) OnEnterRank(rankSkip IRankSkip, enterData *RankData) {
	mp.Lock()
	defer mp.Unlock()

	// drop any pending removal for this key (the original deleted from the
	// outer map by data key, which mixed up RankId and Key)
	delete(mp.mapRemoveRankData[rankSkip.GetRankID()], enterData.Key)

	mp.lazyInitUpsertMap(rankSkip.GetRankID())
	mp.mapUpsertRankData[rankSkip.GetRankID()][enterData.Key] = *enterData
}

func (mp *MongoPersist) OnLeaveRank(rankSkip IRankSkip, leaveData *RankData) {
	mp.Lock()
	defer mp.Unlock()

	// first drop any pending upsert for this key
	delete(mp.mapUpsertRankData[rankSkip.GetRankID()], leaveData.Key)
	mp.lazyInitRemoveMap(rankSkip.GetRankID())
	mp.mapRemoveRankData[rankSkip.GetRankID()][leaveData.Key] = struct{}{}
}

func (mp *MongoPersist) OnChangeRankData(rankSkip IRankSkip, changeData *RankData) {
	mp.Lock()
	defer mp.Unlock()

	// first drop any pending removal for this key
	delete(mp.mapRemoveRankData[rankSkip.GetRankID()], changeData.Key)

	// record the update
	mp.lazyInitUpsertMap(rankSkip.GetRankID())
	mp.mapUpsertRankData[rankSkip.GetRankID()][changeData.Key] = *changeData
}
// OnStop persists to the DB on shutdown
func (mp *MongoPersist) OnStop(mapRankSkip map[uint64]*RankSkip) {
	atomic.StoreInt32(&mp.stop, 1)
	mp.waitGroup.Wait()
}

// JudgeTimeoutSave reports whether the save interval has elapsed,
// resetting the timer when it has
func (mp *MongoPersist) JudgeTimeoutSave() bool {
	now := time.Now()
	isTimeOut := now.Sub(mp.lastSaveTime) >= mp.SaveInterval
	if isTimeOut == true {
		mp.lastSaveTime = now
	}

	return isTimeOut
}

func (mp *MongoPersist) persistCoroutine() {
	defer mp.waitGroup.Done()
	for atomic.LoadInt32(&mp.stop) == 0 || mp.hasPersistData() {
		// sleep one tick
		time.Sleep(time.Second * 1)

		// nothing to persist: keep waiting
		if mp.hasPersistData() == false {
			continue
		}

		if mp.JudgeTimeoutSave() == false {
			continue
		}

		// flush data to the database
		mp.saveToDB()
	}

	// save once more on exit
	mp.saveToDB()
}
func (mp *MongoPersist) hasPersistData() bool {
	mp.Lock()
	defer mp.Unlock()

	return len(mp.mapUpsertRankData) > 0 || len(mp.mapRemoveRankData) > 0
}

func (mp *MongoPersist) saveToDB() {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 4096)
			l := runtime.Stack(buf, false)
			errString := fmt.Sprint(r)
			log.SError(" Core dump info[", errString, "]\n", string(buf[:l]))
		}
	}()

	// 1. swap out the pending data under the lock
	mp.Lock()
	mapRemoveRankData := mp.mapRemoveRankData
	mapUpsertRankData := mp.mapUpsertRankData
	mp.mapRemoveRankData = map[uint64]map[uint64]struct{}{}
	mp.mapUpsertRankData = map[uint64]map[uint64]RankData{}
	mp.Unlock()

	// 2. persist
	for len(mapUpsertRankData) > 0 {
		mp.upsertRankDataToDB(mapUpsertRankData)
	}

	for len(mapRemoveRankData) > 0 {
		mp.removeRankDataToDB(mapRemoveRankData)
	}
}
func (mp *MongoPersist) removeToDB(collectName string, keys []uint64) error {
	s := mp.mongo.TakeSession()
	ctx, cancel := s.GetDefaultContext()
	defer cancel()

	condition := bson.D{{Key: "_id", Value: bson.M{"$in": keys}}}

	_, err := s.Collection(mp.dbName, collectName).DeleteMany(ctx, condition)
	if err != nil {
		log.SError("MongoPersist DeleteMany fail, collect name is ", collectName)
		return err
	}

	return nil
}

func (mp *MongoPersist) removeRankData(rankId uint64, keys []uint64) bool {
	rank := mp.mapRankSkip[rankId]
	if rank == nil {
		log.SError("cannot find rankId ", rankId, " config")
		return false
	}

	// retry up to maxRetrySaveCount times on failure
	for i := 0; i < mp.maxRetrySaveCount; i++ {
		if mp.removeToDB(rank.GetRankName(), keys) != nil {
			time.Sleep(mp.retryTimeIntervalMs)
			continue
		}
		break
	}

	return true
}
func (mp *MongoPersist) upsertToDB(collectName string, rankData *RankData) error {
	condition := bson.D{{Key: "_id", Value: rankData.Key}}
	upsert := bson.M{"_id": rankData.Key, "RefreshTime": rankData.refreshTimestamp, "SortData": rankData.SortData, "Data": rankData.Data, "ExData": rankData.ExData}
	update := bson.M{"$set": upsert}

	s := mp.mongo.TakeSession()
	ctx, cancel := s.GetDefaultContext()
	defer cancel()

	updateOpts := options.Update().SetUpsert(true)
	_, err := s.Collection(mp.dbName, collectName).UpdateOne(ctx, condition, update, updateOpts)
	if err != nil {
		log.SError("MongoPersist upsertDB fail, collect name is ", collectName)
		return err
	}

	return nil
}
func (mp *MongoPersist) upsertRankDataToDB(mapUpsertRankData map[uint64]map[uint64]RankData) error {
	for rankId, mapRankData := range mapUpsertRankData {
		rank, ok := mp.mapRankSkip[rankId]
		if ok == false {
			log.SError("cannot find rankId ", rankId, ", config is invalid")
			delete(mapUpsertRankData, rankId)
			continue
		}

		for key, rankData := range mapRankData {
			// retry at most mp.maxRetrySaveCount times
			for i := 0; i < mp.maxRetrySaveCount; i++ {
				err := mp.upsertToDB(rank.GetRankName(), &rankData)
				if err != nil {
					time.Sleep(mp.retryTimeIntervalMs)
					continue
				}
				break
			}

			// drop the key once it has been saved
			delete(mapRankData, key)
		}

		if len(mapRankData) == 0 {
			delete(mapUpsertRankData, rankId)
		}
	}

	return nil
}
func (mp *MongoPersist) removeRankDataToDB(mapRemoveRankData map[uint64]map[uint64]struct{}) {
	for rankId, mapRemoveKey := range mapRemoveRankData {
		// delete in batches of batchRemoveNum
		keyList := make([]uint64, 0, batchRemoveNum)
		for key := range mapRemoveKey {
			delete(mapRemoveKey, key)
			keyList = append(keyList, key)
			if len(keyList) >= batchRemoveNum {
				break
			}
		}

		mp.removeRankData(rankId, keyList)

		// once all keys are gone, drop the whole rankId entry
		if len(mapRemoveKey) == 0 {
			delete(mapRemoveRankData, rankId)
		}
	}
}
109  sysservice/rankservice/RankData.go  Normal file
@@ -0,0 +1,109 @@
package rankservice

import (
	"github.com/duanhf2012/origin/rpc"
	"github.com/duanhf2012/origin/util/algorithms/skip"
	"github.com/duanhf2012/origin/util/sync"
)

var emptyRankData RankData

var RankDataPool = sync.NewPoolEx(make(chan sync.IPoolData, 10240), func() sync.IPoolData {
	var newRankData RankData
	return &newRankData
})

type RankData struct {
	Key      uint64
	SortData []int64
	Data     []byte
	ExData   []int64

	refreshTimestamp int64 // time of the last refresh
	//bRelease bool
	ref         bool
	compareFunc func(other skip.Comparator) int
}
func NewRankData(isDec bool, data *rpc.RankData, refreshTimestamp int64) *RankData {
	ret := RankDataPool.Get().(*RankData)
	ret.compareFunc = ret.ascCompare
	if isDec {
		ret.compareFunc = ret.desCompare
	}
	ret.Key = data.Key
	ret.SortData = data.SortData
	ret.Data = data.Data

	for _, d := range data.ExData {
		ret.ExData = append(ret.ExData, d.InitValue+d.IncreaseValue)
	}

	ret.refreshTimestamp = refreshTimestamp

	return ret
}

func ReleaseRankData(rankData *RankData) {
	RankDataPool.Put(rankData)
}

func (p *RankData) Reset() {
	*p = emptyRankData
}

func (p *RankData) IsRef() bool {
	return p.ref
}

func (p *RankData) Ref() {
	p.ref = true
}

func (p *RankData) UnRef() {
	p.ref = false
}
func (p *RankData) Compare(other skip.Comparator) int {
	return p.compareFunc(other)
}

func (p *RankData) GetKey() uint64 {
	return p.Key
}

func (p *RankData) ascCompare(other skip.Comparator) int {
	otherRankData := other.(*RankData)

	if otherRankData.Key == p.Key {
		return 0
	}

	retFlg := compareMoreThan(p.SortData, otherRankData.SortData)
	if retFlg == 0 {
		// equal sort data: fall back to the key for a stable total order
		if p.Key > otherRankData.Key {
			retFlg = 1
		} else {
			retFlg = -1
		}
	}
	return retFlg
}

func (p *RankData) desCompare(other skip.Comparator) int {
	otherRankData := other.(*RankData)

	if otherRankData.Key == p.Key {
		return 0
	}

	retFlg := compareMoreThan(otherRankData.SortData, p.SortData)
	if retFlg == 0 {
		// equal sort data: fall back to the key for a stable total order
		if p.Key > otherRankData.Key {
			retFlg = -1
		} else {
			retFlg = 1
		}
	}
	return retFlg
}
125  sysservice/rankservice/RankDataExpire.go  Normal file
@@ -0,0 +1,125 @@
package rankservice

import (
	"container/heap"
	"time"

	"github.com/duanhf2012/origin/util/sync"
)

var expireDataPool = sync.NewPoolEx(make(chan sync.IPoolData, 10240), func() sync.IPoolData {
	return &ExpireData{}
})

type ExpireData struct {
	Index            int
	Key              uint64
	RefreshTimestamp int64
	ref              bool
}

type rankDataHeap struct {
	rankDatas     []*ExpireData
	expireMs      int64
	mapExpireData map[uint64]*ExpireData
}

var expireData ExpireData

func (ed *ExpireData) Reset() {
	*ed = expireData
}

func (ed *ExpireData) IsRef() bool {
	return ed.ref
}

func (ed *ExpireData) Ref() {
	ed.ref = true
}

func (ed *ExpireData) UnRef() {
	ed.ref = false
}
func (rd *rankDataHeap) Init(maxRankDataCount int32, expireMs time.Duration) {
	rd.rankDatas = make([]*ExpireData, 0, maxRankDataCount)
	rd.expireMs = int64(expireMs)
	rd.mapExpireData = make(map[uint64]*ExpireData, 512)
	heap.Init(rd)
}

func (rd *rankDataHeap) Len() int {
	return len(rd.rankDatas)
}

func (rd *rankDataHeap) Less(i, j int) bool {
	return rd.rankDatas[i].RefreshTimestamp < rd.rankDatas[j].RefreshTimestamp
}

func (rd *rankDataHeap) Swap(i, j int) {
	rd.rankDatas[i], rd.rankDatas[j] = rd.rankDatas[j], rd.rankDatas[i]
	rd.rankDatas[i].Index, rd.rankDatas[j].Index = i, j
}

func (rd *rankDataHeap) Push(x interface{}) {
	ed := x.(*ExpireData)
	ed.Index = len(rd.rankDatas)
	rd.rankDatas = append(rd.rankDatas, ed)
}

func (rd *rankDataHeap) Pop() (ret interface{}) {
	l := len(rd.rankDatas)
	var retData *ExpireData
	rd.rankDatas, retData = rd.rankDatas[:l-1], rd.rankDatas[l-1]
	retData.Index = -1
	ret = retData

	return
}
func (rd *rankDataHeap) PopExpireKey() uint64 {
	if rd.Len() <= 0 {
		return 0
	}

	if rd.rankDatas[0].RefreshTimestamp+rd.expireMs > time.Now().UnixNano() {
		return 0
	}

	rankData := heap.Pop(rd).(*ExpireData)
	delete(rd.mapExpireData, rankData.Key)

	return rankData.Key
}

func (rd *rankDataHeap) PushOrRefreshExpireKey(key uint64, refreshTimestamp int64) {
	// 1. if the key is already tracked, just refresh its timestamp
	expData, ok := rd.mapExpireData[key]
	if ok == true {
		expData.RefreshTimestamp = refreshTimestamp
		heap.Fix(rd, expData.Index)
		return
	}

	// 2. otherwise insert a new entry
	expData = expireDataPool.Get().(*ExpireData)
	expData.Key = key
	expData.RefreshTimestamp = refreshTimestamp
	rd.mapExpireData[key] = expData

	heap.Push(rd, expData)
}

func (rd *rankDataHeap) RemoveExpireKey(key uint64) {
	expData, ok := rd.mapExpireData[key]
	if ok == false {
		return
	}

	delete(rd.mapExpireData, key)
	heap.Remove(rd, expData.Index)
	expireDataPool.Put(expData)
}
52  sysservice/rankservice/RankFunc.go  Normal file
@@ -0,0 +1,52 @@
package rankservice

// transformLevel maps the configured level to a zero value of the
// matching unsigned integer width, which skip.New uses to size the
// skip list; unknown levels fall back to 32 bits.
func transformLevel(level int32) interface{} {
	switch level {
	case 8:
		return uint8(0)
	case 16:
		return uint16(0)
	case 32:
		return uint32(0)
	case 64:
		return uint64(0)
	default:
		return uint32(0)
	}
}

// compareIsEqual reports whether two sort-data slices are identical.
func compareIsEqual(firstSortData, secondSortData []int64) bool {
	firstLen := len(firstSortData)
	if firstLen != len(secondSortData) {
		return false
	}

	for i := firstLen - 1; i >= 0; i-- {
		if firstSortData[i] != secondSortData[i] {
			return false
		}
	}

	return true
}

// compareMoreThan compares two sort-data slices lexicographically over
// their shared prefix: 1 if first > second, -1 if first < second, 0 otherwise.
func compareMoreThan(firstSortData, secondSortData []int64) int {
	firstLen := len(firstSortData)
	secondLen := len(secondSortData)
	minLen := firstLen
	if firstLen > secondLen {
		minLen = secondLen
	}

	for i := 0; i < minLen; i++ {
		if firstSortData[i] > secondSortData[i] {
			return 1
		}

		if firstSortData[i] < secondSortData[i] {
			return -1
		}
	}

	return 0
}
47  sysservice/rankservice/RankInterface.go  Normal file
@@ -0,0 +1,47 @@
package rankservice

import (
	"github.com/duanhf2012/origin/rpc"
	"github.com/duanhf2012/origin/service"
)

type RankDataChangeType int8

type IRankSkip interface {
	GetRankID() uint64
	GetRankName() string
	GetRankLen() uint64
	UpsetRank(upsetData *rpc.RankData, refreshTimestamp int64, fromLoad bool) RankDataChangeType
}

type IRankModule interface {
	service.IModule

	OnSetupRank(manual bool, rankSkip *RankSkip) error         // called when a rank list object has been installed
	OnStart()                                                  // called when the service starts
	OnEnterRank(rankSkip IRankSkip, enterData *RankData)       // data entered the rank list
	OnLeaveRank(rankSkip IRankSkip, leaveData *RankData)       // data left the rank list
	OnChangeRankData(rankSkip IRankSkip, changeData *RankData) // rank data changed
	OnStop(mapRankSkip map[uint64]*RankSkip)                   // called when the service stops
}

type DefaultRankModule struct {
	service.Module
}

func (dr *DefaultRankModule) OnSetupRank(manual bool, rankSkip *RankSkip) error {
	return nil
}

func (dr *DefaultRankModule) OnStart() {
}

func (dr *DefaultRankModule) OnEnterRank(rankSkip IRankSkip, enterData *RankData) {
}

func (dr *DefaultRankModule) OnLeaveRank(rankSkip IRankSkip, leaveData *RankData) {
}

func (dr *DefaultRankModule) OnChangeRankData(rankSkip IRankSkip, changeData *RankData) {
}

func (dr *DefaultRankModule) OnStop(mapRankSkip map[uint64]*RankSkip) {
}
271  sysservice/rankservice/RankService.go  Normal file
@@ -0,0 +1,271 @@
package rankservice

import (
	"fmt"
	"time"

	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/rpc"
	"github.com/duanhf2012/origin/service"
)

const PreMapRankSkipLen = 10

type RankService struct {
	service.Service

	mapRankSkip map[uint64]*RankSkip
	rankModule  IRankModule
}

func (rs *RankService) OnInit() error {
	// fall back to the default module so rs.rankModule is never nil
	// when OnStart/OnRelease call into it
	if rs.rankModule == nil {
		rs.rankModule = &DefaultRankModule{}
	}
	_, err := rs.AddModule(rs.rankModule)
	if err != nil {
		return err
	}

	rs.mapRankSkip = make(map[uint64]*RankSkip, PreMapRankSkipLen)
	err = rs.dealCfg()
	if err != nil {
		return err
	}

	return nil
}
func (rs *RankService) OnStart() {
	rs.rankModule.OnStart()
}

func (rs *RankService) OnRelease() {
	rs.rankModule.OnStop(rs.mapRankSkip)
}

// SetupRankModule installs the rank module
func (rs *RankService) SetupRankModule(rankModule IRankModule) {
	rs.rankModule = rankModule
}

// RPC_ManualAddRankSkip adds rank lists manually
func (rs *RankService) RPC_ManualAddRankSkip(addInfo *rpc.AddRankList, addResult *rpc.RankResult) error {
	for _, addRankListData := range addInfo.AddList {
		if addRankListData.RankId == 0 {
			return fmt.Errorf("RPC_ManualAddRankSkip: rank id is required")
		}

		// duplicate rank lists may not be added
		rank := rs.mapRankSkip[addRankListData.RankId]
		if rank != nil {
			continue
		}

		newSkip := NewRankSkip(addRankListData.RankId, addRankListData.RankName, addRankListData.IsDec, transformLevel(addRankListData.SkipListLevel), addRankListData.MaxRank, time.Duration(addRankListData.ExpireMs)*time.Millisecond)
		newSkip.SetupRankModule(rs.rankModule)

		rs.mapRankSkip[addRankListData.RankId] = newSkip
		rs.rankModule.OnSetupRank(true, newSkip)
		addResult.AddCount += 1
	}

	return nil
}
// RPC_UpsetRank updates the rank list
func (rs *RankService) RPC_UpsetRank(upsetInfo *rpc.UpsetRankData, upsetResult *rpc.RankResult) error {
	rankSkip, ok := rs.mapRankSkip[upsetInfo.RankId]
	if ok == false || rankSkip == nil {
		return fmt.Errorf("RPC_UpsetRank[%d]: no such rank id", upsetInfo.RankId)
	}

	addCount, updateCount := rankSkip.UpsetRankList(upsetInfo.RankDataList)
	upsetResult.AddCount = addCount
	upsetResult.ModifyCount = updateCount

	if upsetInfo.FindNewRank == true {
		for _, rdata := range upsetInfo.RankDataList {
			_, rank := rankSkip.GetRankNodeData(rdata.Key)
			upsetResult.NewRank = append(upsetResult.NewRank, &rpc.RankInfo{Key: rdata.Key, Rank: rank})
		}
	}

	return nil
}
// RPC_IncreaseRankData incrementally updates the rank's extend data
func (rs *RankService) RPC_IncreaseRankData(changeRankData *rpc.IncreaseRankData, changeRankDataRet *rpc.IncreaseRankDataRet) error {
	rankSkip, ok := rs.mapRankSkip[changeRankData.RankId]
	if ok == false || rankSkip == nil {
		return fmt.Errorf("RPC_IncreaseRankData[%d]: no such rank id", changeRankData.RankId)
	}

	ret := rankSkip.ChangeExtendData(changeRankData)
	if ret == false {
		return fmt.Errorf("RPC_IncreaseRankData[%d]: no such key %d", changeRankData.RankId, changeRankData.Key)
	}

	if changeRankData.ReturnRankData == true {
		rankData, rank := rankSkip.GetRankNodeData(changeRankData.Key)
		changeRankDataRet.PosData = &rpc.RankPosData{}
		changeRankDataRet.PosData.Rank = rank

		changeRankDataRet.PosData.Key = rankData.Key
		changeRankDataRet.PosData.Data = rankData.Data
		changeRankDataRet.PosData.SortData = rankData.SortData
		changeRankDataRet.PosData.ExtendData = rankData.ExData
	}

	return nil
}
// RPC_UpdateRankData updates a rank entry's attached data
func (rs *RankService) RPC_UpdateRankData(updateRankData *rpc.UpdateRankData, updateRankDataRet *rpc.UpdateRankDataRet) error {
	rankSkip, ok := rs.mapRankSkip[updateRankData.RankId]
	if ok == false || rankSkip == nil {
		updateRankDataRet.Ret = false
		return nil
	}

	updateRankDataRet.Ret = rankSkip.UpdateRankData(updateRankData)
	return nil
}
// RPC_DeleteRankDataByKey deletes entries from the rank list by key
func (rs *RankService) RPC_DeleteRankDataByKey(delInfo *rpc.DeleteByKey, delResult *rpc.RankResult) error {
	rankSkip, ok := rs.mapRankSkip[delInfo.RankId]
	if ok == false || rankSkip == nil {
		return fmt.Errorf("RPC_DeleteRankDataByKey[%d]: no such rank id", delInfo.RankId)
	}

	removeCount := rankSkip.DeleteRankData(delInfo.KeyList)
	if removeCount == 0 {
		log.SError("remove count is zero")
	}

	delResult.RemoveCount = removeCount
	return nil
}
// RPC_FindRankDataByKey finds by key and returns the entry's rank position info
func (rs *RankService) RPC_FindRankDataByKey(findInfo *rpc.FindRankDataByKey, findResult *rpc.RankPosData) error {
	rankObj, ok := rs.mapRankSkip[findInfo.RankId]
	if ok == false || rankObj == nil {
		return fmt.Errorf("RPC_FindRankDataByKey[%d]: no such rank id", findInfo.RankId)
	}

	findRankData, rank := rankObj.GetRankNodeData(findInfo.Key)
	if findRankData != nil {
		findResult.Data = findRankData.Data
		findResult.Key = findRankData.Key
		findResult.SortData = findRankData.SortData
		findResult.Rank = rank
		findResult.ExtendData = findRankData.ExData
	}
	return nil
}
// RPC_FindRankDataByRank finds by rank position
func (rs *RankService) RPC_FindRankDataByRank(findInfo *rpc.FindRankDataByRank, findResult *rpc.RankPosData) error {
	rankObj, ok := rs.mapRankSkip[findInfo.RankId]
	if ok == false || rankObj == nil {
		return fmt.Errorf("RPC_FindRankDataByRank[%d]: no such rank id", findInfo.RankId)
	}

	findRankData, rankPos := rankObj.GetRankNodeDataByRank(findInfo.Rank)
	if findRankData != nil {
		findResult.Data = findRankData.Data
		findResult.Key = findRankData.Key
		findResult.SortData = findRankData.SortData
		findResult.Rank = rankPos
		findResult.ExtendData = findRankData.ExData
	}
	return nil
}
// RPC_FindRankDataList returns Count entries starting from StartRank
func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, findResult *rpc.RankDataList) error {
	rankObj, ok := rs.mapRankSkip[findInfo.RankId]
	if ok == false || rankObj == nil {
		err := fmt.Errorf("not config rank %d", findInfo.RankId)
		log.SError(err.Error())
		return err
	}

	findResult.RankDataCount = rankObj.GetRankLen()
	err := rankObj.GetRankDataFromToLimit(findInfo.StartRank-1, findInfo.Count, findResult)
	if err != nil {
		return err
	}

	// also look up the optional key attached to the query
	if findInfo.Key != 0 {
		findRankData, rank := rankObj.GetRankNodeData(findInfo.Key)
		if findRankData != nil {
			findResult.KeyRank = &rpc.RankPosData{}
			findResult.KeyRank.Data = findRankData.Data
			findResult.KeyRank.Key = findRankData.Key
			findResult.KeyRank.SortData = findRankData.SortData
			findResult.KeyRank.Rank = rank
			findResult.KeyRank.ExtendData = findRankData.ExData
		}
	}

	return nil
}
func (rs *RankService) deleteRankList(delIdList []uint64) {
	if rs.mapRankSkip == nil {
		return
	}

	for _, id := range delIdList {
		delete(rs.mapRankSkip, id)
	}
}
func (rs *RankService) dealCfg() error {
	mapDBServiceCfg, ok := rs.GetServiceCfg().(map[string]interface{})
	if ok == false {
		return nil
	}

	cfgList, okList := mapDBServiceCfg["SortCfg"].([]interface{})
	if okList == false {
		return fmt.Errorf("RankService SortCfg must be a list")
	}

	for _, cfg := range cfgList {
		mapCfg, okCfg := cfg.(map[string]interface{})
		if okCfg == false {
			return fmt.Errorf("RankService SortCfg data must be a map or struct")
		}

		rankId, okId := mapCfg["RankID"].(float64)
		if okId == false || uint64(rankId) == 0 {
			return fmt.Errorf("RankService SortCfg data must have RankID[number]")
		}

		rankName, okName := mapCfg["RankName"].(string)
		if okName == false || len(rankName) == 0 {
			return fmt.Errorf("RankService SortCfg data must have RankName[string]")
		}

		level, _ := mapCfg["SkipListLevel"].(float64)
		isDec, _ := mapCfg["IsDec"].(bool)
		maxRank, _ := mapCfg["MaxRank"].(float64)
		expireMs, _ := mapCfg["ExpireMs"].(float64)

		newSkip := NewRankSkip(uint64(rankId), rankName, isDec, transformLevel(int32(level)), uint64(maxRank), time.Duration(expireMs)*time.Millisecond)
		newSkip.SetupRankModule(rs.rankModule)
		rs.mapRankSkip[uint64(rankId)] = newSkip
		err := rs.rankModule.OnSetupRank(false, newSkip)
		if err != nil {
			return err
		}
	}

	return nil
}
473  sysservice/rankservice/RankSkip.go  Normal file
@@ -0,0 +1,473 @@
package rankservice

import (
	"fmt"
	"time"

	"github.com/duanhf2012/origin/rpc"
	"github.com/duanhf2012/origin/util/algorithms/skip"
)

type RankSkip struct {
	rankId         uint64               // rank list id
	rankName       string               // rank list name
	isDes          bool                 // sort order: true for descending, false for ascending
	skipList       *skip.SkipList       // skip list
	mapRankData    map[uint64]*RankData // rank data map
	maxLen         uint64               // maximum length of the rank list
	expireMs       time.Duration        // time-to-live of an entry
	rankModule     IRankModule
	rankDataExpire rankDataHeap
}
const MaxPickExpireNum = 128

const (
	RankDataNone   RankDataChangeType = 0
	RankDataAdd    RankDataChangeType = 1 // data inserted
	RankDataUpdate RankDataChangeType = 2 // data updated
	RankDataDelete RankDataChangeType = 3 // data deleted
)

// NewRankSkip creates a rank list
func NewRankSkip(rankId uint64, rankName string, isDes bool, level interface{}, maxLen uint64, expireMs time.Duration) *RankSkip {
	rs := &RankSkip{}

	rs.rankId = rankId
	rs.rankName = rankName
	rs.isDes = isDes
	rs.skipList = skip.New(level)
	rs.mapRankData = make(map[uint64]*RankData, 10240)
	rs.maxLen = maxLen
	rs.expireMs = expireMs
	rs.rankDataExpire.Init(int32(maxLen), expireMs)

	return rs
}
func (rs *RankSkip) pickExpireKey() {
	if rs.expireMs == 0 {
		return
	}

	for i := 1; i <= MaxPickExpireNum; i++ {
		key := rs.rankDataExpire.PopExpireKey()
		if key == 0 {
			return
		}

		rs.DeleteRankData([]uint64{key})
	}
}

func (rs *RankSkip) SetupRankModule(rankModule IRankModule) {
	rs.rankModule = rankModule
}

// GetRankID returns the rank list id
func (rs *RankSkip) GetRankID() uint64 {
	return rs.rankId
}

// GetRankName returns the rank list name
func (rs *RankSkip) GetRankName() string {
	return rs.rankName
}

// GetRankLen returns the rank list length
func (rs *RankSkip) GetRankLen() uint64 {
	return rs.skipList.Len()
}
func (rs *RankSkip) UpsetRankList(upsetRankData []*rpc.RankData) (addCount int32, modifyCount int32) {
	for _, upsetData := range upsetRankData {
		changeType := rs.UpsetRank(upsetData, time.Now().UnixNano(), false)
		if changeType == RankDataAdd {
			addCount += 1
		} else if changeType == RankDataUpdate {
			modifyCount += 1
		}
	}

	rs.pickExpireKey()
	return
}
func (rs *RankSkip) InsertDataOnNonExistent(changeRankData *rpc.IncreaseRankData) bool {
	if changeRankData.InsertDataOnNonExistent == false {
		return false
	}

	var upsetData rpc.RankData
	upsetData.Key = changeRankData.Key
	upsetData.Data = changeRankData.InitData
	upsetData.SortData = changeRankData.InitSortData

	for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(upsetData.SortData); i++ {
		upsetData.SortData[i] += changeRankData.IncreaseSortData[i]
	}

	for _, val := range changeRankData.Extend {
		upsetData.ExData = append(upsetData.ExData, &rpc.ExtendIncData{InitValue: val.InitValue, IncreaseValue: val.IncreaseValue})
	}

	// force-set the specified values
	for _, setData := range changeRankData.SetSortAndExtendData {
		if setData.IsSortData == true {
			if int(setData.Pos) >= len(upsetData.SortData) {
				return false
			}
			upsetData.SortData[setData.Pos] = setData.Data
		} else {
			if int(setData.Pos) < len(upsetData.ExData) {
				upsetData.ExData[setData.Pos].IncreaseValue = 0
				upsetData.ExData[setData.Pos].InitValue = setData.Data
			}
		}
	}

	refreshTimestamp := time.Now().UnixNano()
	newRankData := NewRankData(rs.isDes, &upsetData, refreshTimestamp)
	rs.skipList.Insert(newRankData)
	rs.mapRankData[upsetData.Key] = newRankData

	// refresh the expiry and persist the new data
	rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
	rs.rankModule.OnChangeRankData(rs, newRankData)

	return true
}
func (rs *RankSkip) UpdateRankData(updateRankData *rpc.UpdateRankData) bool {
	rankNode, ok := rs.mapRankData[updateRankData.Key]
	if ok == false {
		return false
	}

	rankNode.Data = updateRankData.Data
	rs.rankDataExpire.PushOrRefreshExpireKey(updateRankData.Key, time.Now().UnixNano())
	rs.rankModule.OnChangeRankData(rs, rankNode)
	return true
}
func (rs *RankSkip) ChangeExtendData(changeRankData *rpc.IncreaseRankData) bool {
	rankNode, ok := rs.mapRankData[changeRankData.Key]
	if ok == false {
		return rs.InsertDataOnNonExistent(changeRankData)
	}

	// first check whether anything actually changed
	bChange := false
	for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(rankNode.SortData); i++ {
		if changeRankData.IncreaseSortData[i] != 0 {
			bChange = true
		}
	}

	if bChange == false {
		for _, setSortAndExtendData := range changeRankData.SetSortAndExtendData {
			if setSortAndExtendData.IsSortData == true {
				bChange = true
			}
		}
	}

	// if the sort data changed, delete the old entry and re-insert it into the skip list
	rankData := rankNode
	refreshTimestamp := time.Now().UnixNano()
	if bChange == true {
		// copy the data
		var upsetData rpc.RankData
		upsetData.Key = rankNode.Key
		upsetData.Data = rankNode.Data
		upsetData.SortData = append(upsetData.SortData, rankNode.SortData...)

		for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(upsetData.SortData); i++ {
			if changeRankData.IncreaseSortData[i] != 0 {
				upsetData.SortData[i] += changeRankData.IncreaseSortData[i]
			}
		}

		for _, setData := range changeRankData.SetSortAndExtendData {
			if setData.IsSortData == true {
				if int(setData.Pos) < len(upsetData.SortData) {
					upsetData.SortData[setData.Pos] = setData.Data
				}
			}
		}

		rankData = NewRankData(rs.isDes, &upsetData, refreshTimestamp)
		rankData.ExData = append(rankData.ExData, rankNode.ExData...)

		// remove the old node from the ranking list
		rs.skipList.Delete(rankNode)
		ReleaseRankData(rankNode)

		rs.skipList.Insert(rankData)
		rs.mapRankData[upsetData.Key] = rankData
	}

	// grow the extend values
	for i := 0; i < len(changeRankData.Extend); i++ {
		if i < len(rankData.ExData) {
			// grow in place
			rankData.ExData[i] += changeRankData.Extend[i].IncreaseValue
		} else {
			// the extend slot does not exist yet: append it, then grow it by IncreaseValue
			rankData.ExData = append(rankData.ExData, changeRankData.Extend[i].InitValue+changeRankData.Extend[i].IncreaseValue)
		}
	}

	// set the fixed values
	for _, setData := range changeRankData.SetSortAndExtendData {
		if setData.IsSortData == false {
			if int(setData.Pos) < len(rankData.ExData) {
				rankData.ExData[setData.Pos] = setData.Data
			}
		}
	}

	rs.rankDataExpire.PushOrRefreshExpireKey(rankData.Key, refreshTimestamp)
	rs.rankModule.OnChangeRankData(rs, rankData)

	return true
}
// UpsetRank updates a player's rank data and returns the type of change that occurred
func (rs *RankSkip) UpsetRank(upsetData *rpc.RankData, refreshTimestamp int64, fromLoad bool) RankDataChangeType {
	rankNode, ok := rs.mapRankData[upsetData.Key]
	if ok == true {
		// grow the extend data
		for i := 0; i < len(upsetData.ExData); i++ {
			if i < len(rankNode.ExData) {
				// grow in place
				rankNode.ExData[i] += upsetData.ExData[i].IncreaseValue
			} else {
				// the extend slot does not exist yet: append it, then grow it by IncreaseValue
				rankNode.ExData = append(rankNode.ExData, upsetData.ExData[i].InitValue+upsetData.ExData[i].IncreaseValue)
			}
		}

		// the key exists: if the sort data is unchanged, only update Data; if it changed, delete and re-insert
		if compareIsEqual(rankNode.SortData, upsetData.SortData) {
			rankNode.Data = upsetData.GetData()
			rankNode.refreshTimestamp = refreshTimestamp

			if fromLoad == false {
				rs.rankModule.OnChangeRankData(rs, rankNode)
			}
			rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
			return RankDataUpdate
		}

		if upsetData.Data == nil {
			upsetData.Data = rankNode.Data
		}

		// set the extra data
		for idx, exValue := range rankNode.ExData {
			currentIncreaseValue := int64(0)
			if idx < len(upsetData.ExData) {
				currentIncreaseValue = upsetData.ExData[idx].IncreaseValue
			}

			upsetData.ExData = append(upsetData.ExData, &rpc.ExtendIncData{
				InitValue:     exValue,
				IncreaseValue: currentIncreaseValue,
			})
		}

		rs.skipList.Delete(rankNode)
		ReleaseRankData(rankNode)

		newRankData := NewRankData(rs.isDes, upsetData, refreshTimestamp)
		rs.skipList.Insert(newRankData)
		rs.mapRankData[upsetData.Key] = newRankData

		// refresh the expiration
		rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)

		if fromLoad == false {
			rs.rankModule.OnChangeRankData(rs, newRankData)
		}
		return RankDataUpdate
	}

	if rs.checkInsertAndReplace(upsetData) {
		newRankData := NewRankData(rs.isDes, upsetData, refreshTimestamp)

		rs.skipList.Insert(newRankData)
		rs.mapRankData[upsetData.Key] = newRankData
		rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)

		if fromLoad == false {
			rs.rankModule.OnEnterRank(rs, newRankData)
		}

		return RankDataAdd
	}

	return RankDataNone
}
// DeleteRankData removes rank entries by key
func (rs *RankSkip) DeleteRankData(delKeys []uint64) int32 {
	var removeRankData int32
	// count the removals and fire the callbacks
	for _, key := range delKeys {
		rankData, ok := rs.mapRankData[key]
		if ok == false {
			continue
		}

		removeRankData += 1
		rs.skipList.Delete(rankData)
		delete(rs.mapRankData, rankData.Key)
		rs.rankDataExpire.RemoveExpireKey(rankData.Key)
		rs.rankModule.OnLeaveRank(rs, rankData)
		ReleaseRankData(rankData)
	}

	return removeRankData
}
// GetRankNodeData looks up by key and returns the rank node and its rank
func (rs *RankSkip) GetRankNodeData(findKey uint64) (*RankData, uint64) {
	rankNode, ok := rs.mapRankData[findKey]
	if ok == false {
		return nil, 0
	}

	rs.pickExpireKey()
	_, index := rs.skipList.GetWithPosition(rankNode)
	return rankNode, index + 1
}
// GetRankNodeDataByRank looks up by rank and returns the rank node and its rank
func (rs *RankSkip) GetRankNodeDataByRank(rank uint64) (*RankData, uint64) {
	rs.pickExpireKey()
	rankNode := rs.skipList.ByPosition(rank - 1)
	if rankNode == nil {
		return nil, 0
	}

	return rankNode.(*RankData), rank
}
// GetRankKeyPrevToLimit fetches the count entries ranked before the given key
func (rs *RankSkip) GetRankKeyPrevToLimit(findKey, count uint64, result *rpc.RankDataList) error {
	if rs.GetRankLen() <= 0 {
		return fmt.Errorf("rank[%d] no data", rs.rankId)
	}

	findData, ok := rs.mapRankData[findKey]
	if ok == false {
		return fmt.Errorf("rank[%d] no data", rs.rankId)
	}

	_, rankPos := rs.skipList.GetWithPosition(findData)
	iter := rs.skipList.Iter(findData)
	iterCount := uint64(0)
	for iter.Prev() && iterCount < count {
		rankData := iter.Value().(*RankData)
		result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
			Key:        rankData.Key,
			Rank:       rankPos - iterCount + 1,
			SortData:   rankData.SortData,
			Data:       rankData.Data,
			ExtendData: rankData.ExData,
		})
		iterCount++
	}

	return nil
}
// GetRankKeyNextToLimit fetches the count entries ranked after the given key
func (rs *RankSkip) GetRankKeyNextToLimit(findKey, count uint64, result *rpc.RankDataList) error {
	if rs.GetRankLen() <= 0 {
		return fmt.Errorf("rank[%d] no data", rs.rankId)
	}

	findData, ok := rs.mapRankData[findKey]
	if ok == false {
		return fmt.Errorf("rank[%d] no data", rs.rankId)
	}

	_, rankPos := rs.skipList.GetWithPosition(findData)
	iter := rs.skipList.Iter(findData)
	iterCount := uint64(0)
	for iter.Next() && iterCount < count {
		rankData := iter.Value().(*RankData)
		result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
			Key:        rankData.Key,
			Rank:       rankPos + iterCount + 1,
			SortData:   rankData.SortData,
			Data:       rankData.Data,
			ExtendData: rankData.ExData,
		})
		iterCount++
	}

	return nil
}
// GetRankDataFromToLimit fetches count entries of ranking data starting at startPos
func (rs *RankSkip) GetRankDataFromToLimit(startPos, count uint64, result *rpc.RankDataList) error {
	if rs.GetRankLen() <= 0 {
		// a freshly created ranking list may hold no data yet
		return nil
	}

	rs.pickExpireKey()
	if result.RankDataCount < startPos {
		startPos = result.RankDataCount - 1
	}

	iter := rs.skipList.IterAtPosition(startPos)
	iterCount := uint64(0)
	for iter.Next() && iterCount < count {
		rankData := iter.Value().(*RankData)
		result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
			Key:        rankData.Key,
			Rank:       iterCount + startPos + 1,
			SortData:   rankData.SortData,
			Data:       rankData.Data,
			ExtendData: rankData.ExData,
		})
		iterCount++
	}

	return nil
}
// checkInsertAndReplace checks whether the entry can be inserted, evicting the tail entry if necessary
func (rs *RankSkip) checkInsertAndReplace(upsetData *rpc.RankData) bool {
	// a maxLen of 0 means the length is unlimited
	if rs.maxLen == 0 {
		return true
	}

	// not full yet, so insert
	rankLen := rs.skipList.Len()
	if rs.maxLen > rankLen {
		return true
	}

	// the list is full, so compare against the last entry
	lastPosData := rs.skipList.ByPosition(rankLen - 1)
	lastRankData := lastPosData.(*RankData)
	moreThanFlag := compareMoreThan(upsetData.SortData, lastRankData.SortData)
	// descending order: smaller than the last entry cannot be inserted; ascending order: larger than the last entry cannot be inserted
	if (rs.isDes == true && moreThanFlag < 0) || (rs.isDes == false && moreThanFlag > 0) || moreThanFlag == 0 {
		return false
	}

	// remove the last entry
	// notify the module that this RankData left the ranking list
	rs.rankDataExpire.RemoveExpireKey(lastRankData.Key)
	rs.rankModule.OnLeaveRank(rs, lastRankData)
	rs.skipList.Delete(lastPosData)
	delete(rs.mapRankData, lastRankData.Key)
	ReleaseRankData(lastRankData)
	return true
}
@@ -21,9 +21,6 @@ type TcpService struct {
	mapClientLocker sync.RWMutex
	mapClient map[uint64] *Client
	process processor.IProcessor

	ReadDeadline time.Duration
	WriteDeadline time.Duration
}

type TcpPackType int8
@@ -34,14 +31,6 @@ const(
	TPT_UnknownPack TcpPackType = 3
)

const Default_MaxConnNum = 3000
const Default_PendingWriteNum = 10000
const Default_LittleEndian = false
const Default_MinMsgLen = 2
const Default_MaxMsgLen = 65535
const Default_ReadDeadline = 180 //180s
const Default_WriteDeadline = 180 //180s

const (
	MaxNodeId = 1<<14 - 1 // maximum value 16383
	MaxSeed = 1<<19 - 1 // maximum value 524287
@@ -89,14 +78,6 @@ func (tcpService *TcpService) OnInit() error{
	}

	tcpService.tcpServer.Addr = addr.(string)
	tcpService.tcpServer.MaxConnNum = Default_MaxConnNum
	tcpService.tcpServer.PendingWriteNum = Default_PendingWriteNum
	tcpService.tcpServer.LittleEndian = Default_LittleEndian
	tcpService.tcpServer.MinMsgLen = Default_MinMsgLen
	tcpService.tcpServer.MaxMsgLen = Default_MaxMsgLen
	tcpService.ReadDeadline = Default_ReadDeadline
	tcpService.WriteDeadline = Default_WriteDeadline

	MaxConnNum,ok := tcpCfg["MaxConnNum"]
	if ok == true {
		tcpService.tcpServer.MaxConnNum = int(MaxConnNum.(float64))
@@ -109,6 +90,10 @@ func (tcpService *TcpService) OnInit() error{
	if ok == true {
		tcpService.tcpServer.LittleEndian = LittleEndian.(bool)
	}
	LenMsgLen,ok := tcpCfg["LenMsgLen"]
	if ok == true {
		tcpService.tcpServer.LenMsgLen = int(LenMsgLen.(float64))
	}
	MinMsgLen,ok := tcpCfg["MinMsgLen"]
	if ok == true {
		tcpService.tcpServer.MinMsgLen = uint32(MinMsgLen.(float64))
@@ -120,12 +105,12 @@ func (tcpService *TcpService) OnInit() error{

	readDeadline,ok := tcpCfg["ReadDeadline"]
	if ok == true {
		tcpService.ReadDeadline = time.Second*time.Duration(readDeadline.(float64))
		tcpService.tcpServer.ReadDeadline = time.Second*time.Duration(readDeadline.(float64))
	}

	writeDeadline,ok := tcpCfg["WriteDeadline"]
	if ok == true {
		tcpService.WriteDeadline = time.Second*time.Duration(writeDeadline.(float64))
		tcpService.tcpServer.WriteDeadline = time.Second*time.Duration(writeDeadline.(float64))
	}

	tcpService.mapClient = make( map[uint64] *Client, tcpService.tcpServer.MaxConnNum)
@@ -195,7 +180,7 @@ func (slf *Client) Run() {
		break
	}

	slf.tcpConn.SetReadDeadline(slf.tcpService.ReadDeadline)
	slf.tcpConn.SetReadDeadline(slf.tcpService.tcpServer.ReadDeadline)
	bytes,err := slf.tcpConn.ReadMsg()
	if err != nil {
		log.SDebug("read client id ",slf.id," is error:",err.Error())
@@ -231,7 +216,6 @@ func (tcpService *TcpService) SendMsg(clientId uint64,msg interface{}) error{
	if err != nil {
		return err
	}
	client.tcpConn.SetWriteDeadline(tcpService.WriteDeadline)
	return client.tcpConn.WriteMsg(bytes)
}

@@ -271,7 +255,6 @@ func (tcpService *TcpService) SendRawMsg(clientId uint64,msg []byte) error{
		return fmt.Errorf("client %d is disconnect!",clientId)
	}
	tcpService.mapClientLocker.Unlock()
	client.tcpConn.SetWriteDeadline(tcpService.WriteDeadline)
	return client.tcpConn.WriteMsg(msg)
}

@@ -283,7 +266,6 @@ func (tcpService *TcpService) SendRawData(clientId uint64,data []byte) error{
		return fmt.Errorf("client %d is disconnect!",clientId)
	}
	tcpService.mapClientLocker.Unlock()
	client.tcpConn.SetWriteDeadline(tcpService.WriteDeadline)
	return client.tcpConn.WriteRawMsg(data)
}
@@ -1,6 +1,5 @@
package algorithms

type NumberType interface {
	int | int8 | int16 | int32 | int64 | string | float32 | float64 | uint | uint8 | uint16 | uint32 | uint64
}
@@ -9,8 +8,16 @@ type Element[ValueType NumberType] interface {
	GetValue() ValueType
}

// BiSearch binary search; the slice must be sorted. matchUp indicates whether to match the range upward. For example, with the sequence 10 20 30, passing a value of 25 returns 2, meaning the value falls into the third element's range
func BiSearch[ValueType NumberType, T Element[ValueType]](sElement []T, value ValueType, matchUp bool) int {
/*
	BiSearch binary search; the slice must be sorted.
	matchUp rules:
	when the parameter is 0, an exactly equal value must be found
	when it is -1, find the value to the left; e.g. for [10,20,30,40], a value of 9 returns -1, 11 returns 0, and 41 returns 3
	when it is 1, find the value to the right; e.g. for [10,20,30,40], a value of 9 returns 0, 11 returns 1, and 41 returns -1

	a return value of -1 means no index was found
*/
func BiSearch[ValueType NumberType, T Element[ValueType]](sElement []T, value ValueType, matchUp int) int {
	low, high := 0, len(sElement)-1
	if high == -1 {
		return -1
@@ -28,12 +35,31 @@ func BiSearch[ValueType NumberType, T Element[ValueType]](sElement []T, value Va
	}
}

	if matchUp == true {
		if (sElement[mid].GetValue()) < value &&
			(mid+1 < len(sElement)-1) {
	switch matchUp {
	case 1:
		if (sElement[mid].GetValue()) < value {
			if mid+1 >= len(sElement) {
				return -1
			}
			return mid + 1
		}
		return mid
	case -1:
		if (sElement[mid].GetValue()) > value {
			if mid-1 < 0 {
				return -1
			} else {
				return mid - 1
			}
		} else if (sElement[mid].GetValue()) < value {
			//if mid+1 < len(sElement)-1 {
			//	return mid + 1
			//} else {
			return mid
			//}
		} else {
			return mid
		}
	}

	return -1
61	util/algorithms/BitwiseOperation.go	Normal file
@@ -0,0 +1,61 @@
package algorithms

import (
	"errors"
	"unsafe"
)

type BitNumber interface {
	int | int8 | int16 | int32 | int64 | uint | uint8 | uint16 | uint32 | uint64 | uintptr
}

type UnsignedNumber interface {
	uint | uint8 | uint16 | uint32 | uint64 | uintptr
}

func getBitTagIndex[Number BitNumber, UNumber UnsignedNumber](bitBuff []Number, bitPositionIndex UNumber) (uintptr, uintptr, bool) {
	sliceIndex := uintptr(bitPositionIndex) / (8 * unsafe.Sizeof(bitBuff[0]))
	sliceBitIndex := uintptr(bitPositionIndex) % (8 * unsafe.Sizeof(bitBuff[0]))

	// the bit index must not go out of range
	if uintptr(bitPositionIndex) >= uintptr(len(bitBuff))*unsafe.Sizeof(bitBuff[0])*8 {
		return 0, 0, false
	}
	return sliceIndex, sliceBitIndex, true
}

func setBitTagByIndex[Number BitNumber, UNumber UnsignedNumber](bitBuff []Number, bitPositionIndex UNumber, setTag bool) bool {
	sliceIndex, sliceBitIndex, ret := getBitTagIndex(bitBuff, bitPositionIndex)
	if ret == false {
		return ret
	}

	if setTag {
		bitBuff[sliceIndex] = bitBuff[sliceIndex] | 1<<sliceBitIndex
	} else {
		bitBuff[sliceIndex] = bitBuff[sliceIndex] &^ (1 << sliceBitIndex)
	}

	return true
}

func GetBitwiseTag[Number BitNumber, UNumber UnsignedNumber](bitBuff []Number, bitPositionIndex UNumber) (bool, error) {
	sliceIndex, sliceBitIndex, ret := getBitTagIndex(bitBuff, bitPositionIndex)
	if ret == false {
		return false, errors.New("Invalid parameter")
	}

	return (bitBuff[sliceIndex] & (1 << sliceBitIndex)) > 0, nil
}

func SetBitwiseTag[Number BitNumber, UNumber UnsignedNumber](bitBuff []Number, bitPositionIndex UNumber) bool {
	return setBitTagByIndex(bitBuff, bitPositionIndex, true)
}

func ClearBitwiseTag[Number BitNumber, UNumber UnsignedNumber](bitBuff []Number, bitPositionIndex UNumber) bool {
	return setBitTagByIndex(bitBuff, bitPositionIndex, false)
}

func GetBitwiseNum[Number BitNumber](bitBuff []Number) int {
	return len(bitBuff) * int(unsafe.Sizeof(bitBuff[0])*8)
}
37	util/algorithms/BitwiseOperation_test.go	Normal file
@@ -0,0 +1,37 @@
package algorithms

import "testing"

func Test_Bitwise(t *testing.T) {
	// 1. pre-allocate a 10-byte slice to store the bit flags
	byteBuff := make([]byte, 10)

	// 2. get the total number of bits in the buffer
	bitNum := GetBitwiseNum(byteBuff)
	t.Log(bitNum)

	// 3. tag bit index 79; indexing starts at 0, so 79 is the last bit
	idx := uint(79)

	// 4. set the flag at index idx of byteBuff
	SetBitwiseTag(byteBuff, idx)

	// 5. read the flag at index idx
	isTag, ret := GetBitwiseTag(byteBuff, idx)
	t.Log("set index ", idx, " :", isTag, ret)
	if isTag != true {
		t.Fatal("error")
	}

	// 6. clear the flag at index idx
	ClearBitwiseTag(byteBuff, idx)

	// 7. read the flag at index idx again
	isTag, ret = GetBitwiseTag(byteBuff, idx)
	t.Log("get index ", idx, " :", isTag, ret)

	if isTag != false {
		t.Fatal("error")
	}
}
47	util/algorithms/skip/interface.go	Normal file
@@ -0,0 +1,47 @@
/*
Copyright 2014 Workiva, LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

	http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package skip

// Comparator is a generic interface that represents items that can
// be compared.
type Comparator interface {
	// Compare compares this interface with another. Returns a positive
	// number if this interface is greater, 0 if equal, negative number
	// if less.
	Compare(Comparator) int
}

// Comparators is a typed list of type Comparator.
type Comparators []Comparator

// Iterator defines an interface that allows a consumer to iterate
// all results of a query. All values will be visited in-order.
type Iterator interface {
	// Next returns a bool indicating if there is a future value
	// in the iterator and moves the iterator to that value.
	Next() bool
	// Prev returns a bool indicating if there is a previous value
	// in the iterator and moves the iterator to that value.
	Prev() bool
	// Value returns a Comparator representing the iterator's current
	// position. If there is no value, this returns nil.
	Value() Comparator
	// exhaust is a helper method that will iterate this iterator
	// to completion and return a list of resulting Entries
	// in order.
	exhaust() Comparators
}
86	util/algorithms/skip/iterator.go	Normal file
@@ -0,0 +1,86 @@
/*
Copyright 2014 Workiva, LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

	http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package skip

const iteratorExhausted = -2

// iterator represents an object that can be iterated. It will
// return false on Next and nil on Value if there are no further
// values to be iterated.
type iterator struct {
	first bool
	n     *node
}

// Next returns a bool indicating if there are any further values
// in this iterator.
func (iter *iterator) Next() bool {
	if iter.first {
		iter.first = false
		return iter.n != nil
	}

	if iter.n == nil {
		return false
	}

	iter.n = iter.n.forward[0]
	return iter.n != nil
}

// Prev returns a bool indicating if there are any previous values
// in this iterator.
func (iter *iterator) Prev() bool {
	if iter.first {
		iter.first = false
		return iter.n != nil
	}

	if iter.n == nil {
		return false
	}

	iter.n = iter.n.preNode
	return iter.n != nil && iter.n.entry != nil
}

// Value returns a Comparator representing the iterator's present
// position in the query. Returns nil if no values remain to iterate.
func (iter *iterator) Value() Comparator {
	if iter.n == nil {
		return nil
	}

	return iter.n.entry
}

// exhaust is a helper method to exhaust this iterator and return
// all remaining entries.
func (iter *iterator) exhaust() Comparators {
	entries := make(Comparators, 0, 10)
	for i := iter; i.Next(); {
		entries = append(entries, i.Value())
	}

	return entries
}

// nilIterator returns an iterator that will always return false
// for Next and nil for Value.
func nilIterator() *iterator {
	return &iterator{}
}
50	util/algorithms/skip/node.go	Normal file
@@ -0,0 +1,50 @@
/*
Copyright 2014 Workiva, LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

	http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package skip

type widths []uint64

type nodes []*node

type node struct {
	// forward denotes the forward pointing pointers in this
	// node.
	forward nodes
	// preNode is the previous node at level zero.
	preNode *node
	// widths keeps track of the distance between this pointer
	// and the forward pointers so we can access skip list
	// values by position in logarithmic time.
	widths widths
	// entry is the associated value with this node.
	entry Comparator
}

func (n *node) Compare(e Comparator) int {
	return n.entry.Compare(e)
}

// newNode will allocate and return a new node with the entry
// provided. maxLevels will determine the length of the forward
// pointer list associated with this node.
func newNode(cmp Comparator, maxLevels uint8) *node {
	return &node{
		entry:   cmp,
		forward: make(nodes, maxLevels),
		widths:  make(widths, maxLevels),
	}
}
494	util/algorithms/skip/skip.go	Normal file
@@ -0,0 +1,494 @@
/*
|
||||
Copyright 2014 Workiva, LLC
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Package skip defines a skiplist datastructure. That is, a data structure
|
||||
that probabilistically determines relationships between keys. By doing
|
||||
so, it becomes easier to program than a binary search tree but maintains
|
||||
similar speeds.
|
||||
|
||||
Performance characteristics:
|
||||
Insert: O(log n)
|
||||
Search: O(log n)
|
||||
Delete: O(log n)
|
||||
Space: O(n)
|
||||
|
||||
Recently added is the capability to address, insert, and replace an
|
||||
entry by position. This capability is acheived by saving the width
|
||||
of the "gap" between two nodes. Searching for an item by position is
|
||||
very similar to searching by value in that the same basic algorithm is
|
||||
used but we are searching for width instead of value. Because this avoids
|
||||
the overhead associated with Golang interfaces, operations by position
|
||||
are about twice as fast as operations by value. Time complexities listed
|
||||
below.
|
||||
|
||||
SearchByPosition: O(log n)
|
||||
InsertByPosition: O(log n)
|
||||
|
||||
More information here: http://cglab.ca/~morin/teaching/5408/refs/p90b.pdf
|
||||
|
||||
Benchmarks:
|
||||
BenchmarkInsert-8 2000000 930 ns/op
|
||||
BenchmarkGet-8 2000000 989 ns/op
|
||||
BenchmarkDelete-8 3000000 600 ns/op
|
||||
BenchmarkPrepend-8 1000000 1468 ns/op
|
||||
BenchmarkByPosition-8 10000000 202 ns/op
|
||||
BenchmarkInsertAtPosition-8 3000000 485 ns/op
|
||||
|
||||
CPU profiling has shown that the most expensive thing we do here
|
||||
is call Compare. A potential optimization for gets only is to
|
||||
do a binary search in the forward/width lists instead of visiting
|
||||
every value. We could also use generics if Golang had them and
|
||||
let the consumer specify primitive types, which would speed up
|
||||
these operation dramatically.
|
||||
*/
|
||||
package skip
|
||||
|
||||
import (
|
||||
"math/rand"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
)
|
||||
|
||||
const p = .5 // the p level defines the probability that a node
|
||||
// with a value at level i also has a value at i+1. This number
|
||||
// is also important in determining max level. Max level will
|
||||
// be defined as L(N) where L = log base (1/p) of n where n
|
||||
// is the number of items in the list and N is the number of possible
|
||||
// items in the universe. If p = .5 then maxlevel = 32 is appropriate
|
||||
// for uint32.

// lockedSource is an implementation of rand.Source that is safe for
// concurrent use by multiple goroutines. The code is modeled after
// https://golang.org/src/math/rand/rand.go.
type lockedSource struct {
	mu  sync.Mutex
	src rand.Source
}

// Int63 implements the rand.Source interface.
func (ls *lockedSource) Int63() (n int64) {
	ls.mu.Lock()
	n = ls.src.Int63()
	ls.mu.Unlock()
	return
}

// Seed implements the rand.Source interface.
func (ls *lockedSource) Seed(seed int64) {
	ls.mu.Lock()
	ls.src.Seed(seed)
	ls.mu.Unlock()
}

// generator will be the common generator to create random numbers. It
// is seeded with the unix nanosecond time when this line is executed at
// runtime, and only executed once, ensuring all random numbers come from
// the same randomly seeded generator.
var generator = rand.New(&lockedSource{src: rand.NewSource(time.Now().UnixNano())})

func generateLevel(maxLevel uint8) uint8 {
	var level uint8
	for level = uint8(1); level < maxLevel-1; level++ {
		if generator.Float64() >= p {
			return level
		}
	}

	return level
}

func insertNode(sl *SkipList, n *node, cmp Comparator, pos uint64, cache nodes, posCache widths, allowDuplicate bool) Comparator {
	if !allowDuplicate && n != nil && n.Compare(cmp) == 0 { // a simple update in this case
		oldEntry := n.entry
		n.entry = cmp
		return oldEntry
	}
	atomic.AddUint64(&sl.num, 1)

	nodeLevel := generateLevel(sl.maxLevel)
	if nodeLevel > sl.level {
		for i := sl.level; i < nodeLevel; i++ {
			cache[i] = sl.head
		}
		sl.level = nodeLevel
	}

	nn := newNode(cmp, nodeLevel)
	for i := uint8(0); i < nodeLevel; i++ {
		if i == 0 {
			nn.preNode = cache[i]
			if cache[i].forward[i] != nil {
				cache[i].forward[i].preNode = nn
			}
		}

		nn.forward[i] = cache[i].forward[i]
		cache[i].forward[i] = nn

		formerWidth := cache[i].widths[i]
		if formerWidth == 0 {
			nn.widths[i] = 0
		} else {
			nn.widths[i] = posCache[i] + formerWidth + 1 - pos
		}

		if cache[i].forward[i] != nil {
			cache[i].widths[i] = pos - posCache[i]
		}
	}

	for i := nodeLevel; i < sl.level; i++ {
		if cache[i].forward[i] == nil {
			continue
		}
		cache[i].widths[i]++
	}
	return nil
}

func splitAt(sl *SkipList, index uint64) (*SkipList, *SkipList) {
	right := &SkipList{}
	right.maxLevel = sl.maxLevel
	right.level = sl.level
	right.cache = make(nodes, sl.maxLevel)
	right.posCache = make(widths, sl.maxLevel)
	right.head = newNode(nil, sl.maxLevel)
	sl.searchByPosition(index, sl.cache, sl.posCache) // populate the cache that needs updating

	for i := uint8(0); i <= sl.level; i++ {
		right.head.forward[i] = sl.cache[i].forward[i]
		if sl.cache[i].forward[i] != nil {
			right.head.widths[i] = sl.cache[i].widths[i] - (index - sl.posCache[i])
		}
		sl.cache[i].widths[i] = 0
		sl.cache[i].forward[i] = nil
	}

	right.num = sl.Len() - index // right is not in user's hands yet
	atomic.AddUint64(&sl.num, -right.num)

	sl.resetMaxLevel()
	right.resetMaxLevel()

	return sl, right
}

// SkipList is a data structure that probabilistically determines
// relationships between nodes. This results in a structure
// that performs similarly to a BST but is much easier to build
// from a programmatic perspective (no rotations).
type SkipList struct {
	maxLevel, level uint8
	head            *node
	num             uint64
	// a list of nodes that can be reused, should reduce
	// the number of allocations in the insert/delete case.
	cache    nodes
	posCache widths
}

// init will initialize this skiplist. The parameter is expected
// to be of some uint type which will set this skiplist's maximum
// level.
func (sl *SkipList) init(ifc interface{}) {
	switch ifc.(type) {
	case uint8:
		sl.maxLevel = 8
	case uint16:
		sl.maxLevel = 16
	case uint32:
		sl.maxLevel = 32
	case uint64, uint:
		sl.maxLevel = 64
	}
	sl.cache = make(nodes, sl.maxLevel)
	sl.posCache = make(widths, sl.maxLevel)
	sl.head = newNode(nil, sl.maxLevel)
}

func (sl *SkipList) search(cmp Comparator, update nodes, widths widths) (*node, uint64) {
	if sl.Len() == 0 { // nothing in the list
		return nil, 1
	}

	var pos uint64 = 0
	var offset uint8
	var alreadyChecked *node
	n := sl.head
	for i := uint8(0); i <= sl.level; i++ {
		offset = sl.level - i
		for n.forward[offset] != nil && n.forward[offset] != alreadyChecked && n.forward[offset].Compare(cmp) < 0 {
			pos += n.widths[offset]
			n = n.forward[offset]
		}

		alreadyChecked = n
		if update != nil {
			update[offset] = n
			widths[offset] = pos
		}
	}

	return n.forward[0], pos + 1
}

func (sl *SkipList) resetMaxLevel() {
	if sl.level < 1 {
		sl.level = 1
		return
	}
	for sl.head.forward[sl.level-1] == nil && sl.level > 1 {
		sl.level--
	}
}

func (sl *SkipList) searchByPosition(position uint64, update nodes, widths widths) (*node, uint64) {
	if sl.Len() == 0 { // nothing in the list
		return nil, 1
	}

	if position > sl.Len() {
		return nil, 1
	}

	var pos uint64 = 0
	var offset uint8
	n := sl.head
	for i := uint8(0); i <= sl.level; i++ {
		offset = sl.level - i
		for n.forward[offset] != nil && pos+n.widths[offset] <= position {
			pos += n.widths[offset]
			n = n.forward[offset]
		}

		if update != nil {
			update[offset] = n
			widths[offset] = pos
		}
	}

	return n, pos + 1
}

// Get will retrieve values associated with the keys provided. If an
// associated value could not be found, a nil is returned in its place.
// This is an O(log n) operation.
func (sl *SkipList) Get(comparators ...Comparator) Comparators {
	result := make(Comparators, 0, len(comparators))

	var n *node
	for _, cmp := range comparators {
		n, _ = sl.search(cmp, nil, nil)
		if n != nil && n.Compare(cmp) == 0 {
			result = append(result, n.entry)
		} else {
			result = append(result, nil)
		}
	}

	return result
}

// GetWithPosition will retrieve the value with the provided key and
// return the position of that value within the list. Returns nil, 0
// if an associated value could not be found.
func (sl *SkipList) GetWithPosition(cmp Comparator) (Comparator, uint64) {
	n, pos := sl.search(cmp, nil, nil)
	if n == nil {
		return nil, 0
	}

	return n.entry, pos - 1
}

// ByPosition returns the Comparator at the given position.
func (sl *SkipList) ByPosition(position uint64) Comparator {
	n, _ := sl.searchByPosition(position+1, nil, nil)
	if n == nil {
		return nil
	}

	return n.entry
}

func (sl *SkipList) insert(cmp Comparator) Comparator {
	n, pos := sl.search(cmp, sl.cache, sl.posCache)
	return insertNode(sl, n, cmp, pos, sl.cache, sl.posCache, false)
}

// Insert will insert the provided comparators into the list. Returned
// is a list of comparators that were overwritten. This is expected to
// be an O(log n) operation.
func (sl *SkipList) Insert(comparators ...Comparator) Comparators {
	overwritten := make(Comparators, 0, len(comparators))
	for _, cmp := range comparators {
		overwritten = append(overwritten, sl.insert(cmp))
	}

	return overwritten
}

func (sl *SkipList) insertAtPosition(position uint64, cmp Comparator) {
	if position > sl.Len() {
		position = sl.Len()
	}
	n, pos := sl.searchByPosition(position, sl.cache, sl.posCache)
	insertNode(sl, n, cmp, pos, sl.cache, sl.posCache, true)
}

// InsertAtPosition will insert the provided Comparator at the provided position.
// If position is greater than the length of the skiplist, the Comparator
// is appended. This method bypasses order checks and checks for
// duplicates so use with caution.
func (sl *SkipList) InsertAtPosition(position uint64, cmp Comparator) {
	sl.insertAtPosition(position, cmp)
}

func (sl *SkipList) replaceAtPosition(position uint64, cmp Comparator) {
	n, _ := sl.searchByPosition(position+1, nil, nil)
	if n == nil {
		return
	}

	n.entry = cmp
}

// ReplaceAtPosition will replace the Comparator at the provided position
// with the provided Comparator. If the provided position does not exist,
// this operation is a no-op.
func (sl *SkipList) ReplaceAtPosition(position uint64, cmp Comparator) {
	sl.replaceAtPosition(position, cmp)
}

func (sl *SkipList) delete(cmp Comparator) Comparator {
	n, _ := sl.search(cmp, sl.cache, sl.posCache)

	if n == nil || n.Compare(cmp) != 0 {
		return nil
	}

	atomic.AddUint64(&sl.num, ^uint64(0)) // decrement

	for i := uint8(0); i <= sl.level; i++ {
		if sl.cache[i].forward[i] != n {
			if sl.cache[i].forward[i] != nil {
				sl.cache[i].widths[i]--
			}
			continue
		}

		if i == 0 {
			if n.forward[i] != nil {
				n.forward[i].preNode = sl.cache[i]
			}
			n.preNode = nil
		}

		sl.cache[i].widths[i] += n.widths[i] - 1
		sl.cache[i].forward[i] = n.forward[i]
	}

	for sl.level > 1 && sl.head.forward[sl.level-1] == nil {
		sl.head.widths[sl.level] = 0
		sl.level--
	}

	return n.entry
}

// Delete will remove the provided keys from the skiplist and return
// a list of in-order Comparators that were deleted. This is a no-op if
// an associated key could not be found. This is an O(log n) operation.
func (sl *SkipList) Delete(comparators ...Comparator) Comparators {
	deleted := make(Comparators, 0, len(comparators))

	for _, cmp := range comparators {
		deleted = append(deleted, sl.delete(cmp))
	}

	return deleted
}

// Len returns the number of items in this skiplist.
func (sl *SkipList) Len() uint64 {
	return atomic.LoadUint64(&sl.num)
}

func (sl *SkipList) iterAtPosition(pos uint64) *iterator {
	n, _ := sl.searchByPosition(pos, nil, nil)
	if n == nil || n.entry == nil {
		return nilIterator()
	}

	return &iterator{
		first: true,
		n:     n,
	}
}

// IterAtPosition is the sister method to Iter, except that the user defines
// a position in the skiplist to begin iteration instead of a value.
func (sl *SkipList) IterAtPosition(pos uint64) Iterator {
	return sl.iterAtPosition(pos + 1)
}

func (sl *SkipList) iter(cmp Comparator) *iterator {
	n, _ := sl.search(cmp, nil, nil)
	if n == nil {
		return nilIterator()
	}

	return &iterator{
		first: true,
		n:     n,
	}
}

// Iter will return an iterator that can be used to iterate
// over all the values with a key equal to or greater than
// the key provided.
func (sl *SkipList) Iter(cmp Comparator) Iterator {
	return sl.iter(cmp)
}

// SplitAt will split the current skiplist into two lists. The first
// skiplist returned is the "left" list and the second is the "right."
// The index defines the last item in the left list. If index is greater
// than the length of this list, only the left skiplist is returned
// and the right will be nil. This is a mutable operation and modifies
// the content of this list.
func (sl *SkipList) SplitAt(index uint64) (*SkipList, *SkipList) {
	index++ // 0-index offset
	if index >= sl.Len() {
		return sl, nil
	}
	return splitAt(sl, index)
}

// New will allocate, initialize, and return a new skiplist.
// The provided parameter should be of type uint and will determine
// the maximum possible level that will be created to ensure
// a random and quick distribution of levels. Parameter must
// be a uint type.
func New(ifc interface{}) *SkipList {
	sl := &SkipList{}
	sl.init(ifc)
	return sl
}
@@ -6,10 +6,15 @@ go tool nm ./originserver.exe |grep buildtime

// pass build-time information in at compile time
go build -ldflags "-X 'github.com/duanhf2012/origin/util/buildtime.BuildTime=20200101'"
go build -ldflags "-X github.com/duanhf2012/origin/util/buildtime.BuildTime=20200101 -X github.com/duanhf2012/origin/util/buildtime.BuildTag=debug"
*/
var BuildTime string

var BuildTag string

func GetBuildDateTime() string {
	return BuildTime
}

func GetBuildTag() string {
	return BuildTag
}

@@ -2,6 +2,7 @@ package coroutine

import (
	"fmt"
	"github.com/duanhf2012/origin/log"
	"reflect"
	"runtime/debug"
)
@@ -12,10 +13,11 @@ func F(callback interface{},recoverNum int, args ...interface{}) {
	var coreInfo string
	coreInfo = string(debug.Stack())
	coreInfo += "\n" + fmt.Sprintf("Core information is %v\n", r)
	fmt.Print(coreInfo)

	if recoverNum == -1 || recoverNum-1 >= 0 {
		log.SError(coreInfo)
		if recoverNum > 0 {
			recoverNum -= 1
		}
		if recoverNum == -1 || recoverNum > 0 {
			go F(callback, recoverNum, args...)
		}
	}
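The hunk above restarts a panicked callback while a retry budget remains, with -1 meaning retry forever. A minimal, synchronous sketch of that recover-and-restart pattern (names here are illustrative, not the origin API, and it restarts inline rather than in a new goroutine):

```go
package main

import "fmt"

// runWithRecover runs callback, recovers from any panic, and restarts it
// while the budget allows; a budget of -1 means retry forever. It returns
// how many panics were recovered.
func runWithRecover(callback func(), budget int) (panics int) {
	for {
		panicked := func() (p bool) {
			defer func() {
				if r := recover(); r != nil {
					p = true
				}
			}()
			callback()
			return
		}()
		if !panicked {
			return panics
		}
		panics++
		if budget > 0 {
			budget--
		}
		if budget == 0 {
			return panics // budget exhausted, stop restarting
		}
	}
}

func main() {
	calls := 0
	n := runWithRecover(func() {
		calls++
		if calls < 3 {
			panic("boom")
		}
	}, -1)
	fmt.Println(calls, n) // 3 2
}
```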

@@ -1,5 +1,7 @@
package math

import "github.com/duanhf2012/origin/log"

type NumberType interface {
	int | int8 | int16 | int32 | int64 | float32 | float64 | uint | uint8 | uint16 | uint32 | uint64
}
@@ -35,3 +37,42 @@ func Abs[NumType SignedNumberType](Num NumType) NumType {

	return Num
}

func Add[NumType NumberType](number1 NumType, number2 NumType) NumType {
	ret := number1 + number2
	if number2 > 0 && ret < number1 {
		log.SStack("Calculation overflow, number1 is ", number1, " number2 is ", number2)
	} else if number2 < 0 && ret > number1 {
		log.SStack("Calculation overflow, number1 is ", number1, " number2 is ", number2)
	}

	return ret
}

func Sub[NumType NumberType](number1 NumType, number2 NumType) NumType {
	ret := number1 - number2
	if number2 > 0 && ret > number1 {
		log.SStack("Calculation overflow, number1 is ", number1, " number2 is ", number2)
	} else if number2 < 0 && ret < number1 {
		log.SStack("Calculation overflow, number1 is ", number1, " number2 is ", number2)
	}

	return ret
}

func Mul[NumType NumberType](number1 NumType, number2 NumType) NumType {
	ret := number1 * number2
	if number1 == 0 || number2 == 0 {
		return ret
	}

	if ret/number2 == number1 {
		return ret
	}

	log.SStack("Calculation overflow, number1 is ", number1, " number2 is ", number2)
	return ret
}
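The overflow tests above rely on wraparound: after a wrapping add, a positive addend must not decrease the sum, and a wrapped product no longer divides back to its factor. A standalone sketch of the same checks for signed integers, reporting overflow instead of logging through origin's log package:

```go
package main

import "fmt"

// SignedInteger covers the signed types where these wraparound checks are
// meaningful (the origin version also admits floats and unsigned types).
type SignedInteger interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64
}

// checkedAdd uses the same test as Add above: a positive addend must not
// decrease the sum and a negative one must not increase it.
func checkedAdd[T SignedInteger](a, b T) (T, bool) {
	ret := a + b
	if (b > 0 && ret < a) || (b < 0 && ret > a) {
		return ret, false // overflowed
	}
	return ret, true
}

// checkedMul uses the same test as Mul above: dividing the product by one
// factor must give back the other unless the multiply wrapped.
func checkedMul[T SignedInteger](a, b T) (T, bool) {
	ret := a * b
	if a == 0 || b == 0 || ret/b == a {
		return ret, true
	}
	return ret, false
}

func main() {
	_, ok := checkedAdd(int8(120), int8(10)) // 130 does not fit in int8
	fmt.Println(ok)
	_, ok = checkedMul(int8(16), int8(16)) // 256 does not fit in int8
	fmt.Println(ok)
}
```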

413
util/queue/deque.go
Normal file
@@ -0,0 +1,413 @@
package queue

// minCapacity is the smallest capacity that deque may have. Must be power of 2
// for bitwise modulus: x % n == x & (n - 1).
const minCapacity = 16

// Deque represents a single instance of the deque data structure. A Deque
// instance contains items of the type specified by the type argument.
type Deque[T any] struct {
	buf    []T
	head   int
	tail   int
	count  int
	minCap int
}

// New creates a new Deque, optionally setting the current and minimum capacity
// when non-zero values are given for these. The Deque instance returned
// operates on items of the type specified by the type argument. For example,
// to create a Deque that contains strings,
//
//	stringDeque := deque.New[string]()
//
// To create a Deque with capacity to store 2048 ints without resizing, and
// that will not resize below space for 32 items when removing items:
//
//	d := deque.New[int](2048, 32)
//
// To create a Deque that has not yet allocated memory, but after it does will
// never resize to have space for less than 64 items:
//
//	d := deque.New[int](0, 64)
//
// Any size values supplied here are rounded up to the nearest power of 2.
func New[T any](size ...int) *Deque[T] {
	var capacity, minimum int
	if len(size) >= 1 {
		capacity = size[0]
		if len(size) >= 2 {
			minimum = size[1]
		}
	}

	minCap := minCapacity
	for minCap < minimum {
		minCap <<= 1
	}

	var buf []T
	if capacity != 0 {
		bufSize := minCap
		for bufSize < capacity {
			bufSize <<= 1
		}
		buf = make([]T, bufSize)
	}

	return &Deque[T]{
		buf:    buf,
		minCap: minCap,
	}
}

// Cap returns the current capacity of the Deque. If q is nil, q.Cap() is zero.
func (q *Deque[T]) Cap() int {
	if q == nil {
		return 0
	}
	return len(q.buf)
}

// Len returns the number of elements currently stored in the queue. If q is
// nil, q.Len() is zero.
func (q *Deque[T]) Len() int {
	if q == nil {
		return 0
	}
	return q.count
}

// PushBack appends an element to the back of the queue. Implements FIFO when
// elements are removed with PopFront(), and LIFO when elements are removed
// with PopBack().
func (q *Deque[T]) PushBack(elem T) {
	q.growIfFull()

	q.buf[q.tail] = elem
	// Calculate new tail position.
	q.tail = q.next(q.tail)
	q.count++
}

// PushFront prepends an element to the front of the queue.
func (q *Deque[T]) PushFront(elem T) {
	q.growIfFull()

	// Calculate new head position.
	q.head = q.prev(q.head)
	q.buf[q.head] = elem
	q.count++
}

// PopFront removes and returns the element from the front of the queue.
// Implements FIFO when used with PushBack(). If the queue is empty, the call
// panics.
func (q *Deque[T]) PopFront() T {
	if q.count <= 0 {
		panic("deque: PopFront() called on empty queue")
	}
	ret := q.buf[q.head]
	var zero T
	q.buf[q.head] = zero
	// Calculate new head position.
	q.head = q.next(q.head)
	q.count--

	q.shrinkIfExcess()
	return ret
}

// PopBack removes and returns the element from the back of the queue.
// Implements LIFO when used with PushBack(). If the queue is empty, the call
// panics.
func (q *Deque[T]) PopBack() T {
	if q.count <= 0 {
		panic("deque: PopBack() called on empty queue")
	}

	// Calculate new tail position
	q.tail = q.prev(q.tail)

	// Remove value at tail.
	ret := q.buf[q.tail]
	var zero T
	q.buf[q.tail] = zero
	q.count--

	q.shrinkIfExcess()
	return ret
}

// Front returns the element at the front of the queue. This is the element
// that would be returned by PopFront(). This call panics if the queue is
// empty.
func (q *Deque[T]) Front() T {
	if q.count <= 0 {
		panic("deque: Front() called when empty")
	}
	return q.buf[q.head]
}

// Back returns the element at the back of the queue. This is the element that
// would be returned by PopBack(). This call panics if the queue is empty.
func (q *Deque[T]) Back() T {
	if q.count <= 0 {
		panic("deque: Back() called when empty")
	}
	return q.buf[q.prev(q.tail)]
}

// At returns the element at index i in the queue without removing the element
// from the queue. This method accepts only non-negative index values. At(0)
// refers to the first element and is the same as Front(). At(Len()-1) refers
// to the last element and is the same as Back(). If the index is invalid, the
// call panics.
//
// The purpose of At is to allow Deque to serve as a more general purpose
// circular buffer, where items are only added to and removed from the ends of
// the deque, but may be read from any place within the deque. Consider the
// case of a fixed-size circular log buffer: A new entry is pushed onto one end
// and when full the oldest is popped from the other end. All the log entries
// in the buffer must be readable without altering the buffer contents.
func (q *Deque[T]) At(i int) T {
	if i < 0 || i >= q.count {
		panic("deque: At() called with index out of range")
	}
	// bitwise modulus
	return q.buf[(q.head+i)&(len(q.buf)-1)]
}

// Set puts the element at index i in the queue. Set shares the same purpose
// as At() but performs the opposite operation. The index i is the same index
// defined by At(). If the index is invalid, the call panics.
func (q *Deque[T]) Set(i int, elem T) {
	if i < 0 || i >= q.count {
		panic("deque: Set() called with index out of range")
	}
	// bitwise modulus
	q.buf[(q.head+i)&(len(q.buf)-1)] = elem
}

// Clear removes all elements from the queue, but retains the current capacity.
// This is useful when repeatedly reusing the queue at high frequency to avoid
// GC during reuse. The queue will not be resized smaller as long as items are
// only added. Only when items are removed is the queue subject to getting
// resized smaller.
func (q *Deque[T]) Clear() {
	// bitwise modulus
	modBits := len(q.buf) - 1
	var zero T
	for h := q.head; h != q.tail; h = (h + 1) & modBits {
		q.buf[h] = zero
	}
	q.head = 0
	q.tail = 0
	q.count = 0
}

// Rotate rotates the deque n steps front-to-back. If n is negative, rotates
// back-to-front. Having Deque provide Rotate() avoids resizing that could
// happen if implementing rotation using only Pop and Push methods. If q.Len()
// is one or less, or q is nil, then Rotate does nothing.
func (q *Deque[T]) Rotate(n int) {
	if q.Len() <= 1 {
		return
	}
	// Rotating a multiple of q.count is same as no rotation.
	n %= q.count
	if n == 0 {
		return
	}

	modBits := len(q.buf) - 1
	// If no empty space in buffer, only move head and tail indexes.
	if q.head == q.tail {
		// Calculate new head and tail using bitwise modulus.
		q.head = (q.head + n) & modBits
		q.tail = q.head
		return
	}

	var zero T

	if n < 0 {
		// Rotate back to front.
		for ; n < 0; n++ {
			// Calculate new head and tail using bitwise modulus.
			q.head = (q.head - 1) & modBits
			q.tail = (q.tail - 1) & modBits
			// Put tail value at head and remove value at tail.
			q.buf[q.head] = q.buf[q.tail]
			q.buf[q.tail] = zero
		}
		return
	}

	// Rotate front to back.
	for ; n > 0; n-- {
		// Put head value at tail and remove value at head.
		q.buf[q.tail] = q.buf[q.head]
		q.buf[q.head] = zero
		// Calculate new head and tail using bitwise modulus.
		q.head = (q.head + 1) & modBits
		q.tail = (q.tail + 1) & modBits
	}
}

// Index returns the index into the Deque of the first item satisfying f(item),
// or -1 if none do. If q is nil, then -1 is always returned. Search is linear
// starting with index 0.
func (q *Deque[T]) Index(f func(T) bool) int {
	if q.Len() > 0 {
		modBits := len(q.buf) - 1
		for i := 0; i < q.count; i++ {
			if f(q.buf[(q.head+i)&modBits]) {
				return i
			}
		}
	}
	return -1
}

// RIndex is the same as Index, but searches from Back to Front. The index
// returned is from Front to Back, where index 0 is the index of the item
// returned by Front().
func (q *Deque[T]) RIndex(f func(T) bool) int {
	if q.Len() > 0 {
		modBits := len(q.buf) - 1
		for i := q.count - 1; i >= 0; i-- {
			if f(q.buf[(q.head+i)&modBits]) {
				return i
			}
		}
	}
	return -1
}

// Insert is used to insert an element into the middle of the queue, before the
// element at the specified index. Insert(0,e) is the same as PushFront(e) and
// Insert(Len(),e) is the same as PushBack(e). Accepts only non-negative index
// values, and panics if index is out of range.
//
// Important: Deque is optimized for O(1) operations at the ends of the queue,
// not for operations in the middle. Complexity of this function is
// constant plus linear in the lesser of the distances between the index and
// either of the ends of the queue.
func (q *Deque[T]) Insert(at int, item T) {
	if at < 0 || at > q.count {
		panic("deque: Insert() called with index out of range")
	}
	if at*2 < q.count {
		q.PushFront(item)
		front := q.head
		for i := 0; i < at; i++ {
			next := q.next(front)
			q.buf[front], q.buf[next] = q.buf[next], q.buf[front]
			front = next
		}
		return
	}
	swaps := q.count - at
	q.PushBack(item)
	back := q.prev(q.tail)
	for i := 0; i < swaps; i++ {
		prev := q.prev(back)
		q.buf[back], q.buf[prev] = q.buf[prev], q.buf[back]
		back = prev
	}
}

// Remove removes and returns an element from the middle of the queue, at the
// specified index. Remove(0) is the same as PopFront() and Remove(Len()-1) is
// the same as PopBack(). Accepts only non-negative index values, and panics if
// index is out of range.
//
// Important: Deque is optimized for O(1) operations at the ends of the queue,
// not for operations in the middle. Complexity of this function is
// constant plus linear in the lesser of the distances between the index and
// either of the ends of the queue.
func (q *Deque[T]) Remove(at int) T {
	if at < 0 || at >= q.Len() {
		panic("deque: Remove() called with index out of range")
	}

	rm := (q.head + at) & (len(q.buf) - 1)
	if at*2 < q.count {
		for i := 0; i < at; i++ {
			prev := q.prev(rm)
			q.buf[prev], q.buf[rm] = q.buf[rm], q.buf[prev]
			rm = prev
		}
		return q.PopFront()
	}
	swaps := q.count - at - 1
	for i := 0; i < swaps; i++ {
		next := q.next(rm)
		q.buf[rm], q.buf[next] = q.buf[next], q.buf[rm]
		rm = next
	}
	return q.PopBack()
}

// SetMinCapacity sets a minimum capacity of 2^minCapacityExp. If the value of
// the minimum capacity is less than or equal to the minimum allowed, then
// capacity is set to the minimum allowed. This may be called at any time to
// set a new minimum capacity.
//
// Setting a larger minimum capacity may be used to prevent resizing when the
// number of stored items changes frequently across a wide range.
func (q *Deque[T]) SetMinCapacity(minCapacityExp uint) {
	if 1<<minCapacityExp > minCapacity {
		q.minCap = 1 << minCapacityExp
	} else {
		q.minCap = minCapacity
	}
}

// prev returns the previous buffer position wrapping around buffer.
func (q *Deque[T]) prev(i int) int {
	return (i - 1) & (len(q.buf) - 1) // bitwise modulus
}

// next returns the next buffer position wrapping around buffer.
func (q *Deque[T]) next(i int) int {
	return (i + 1) & (len(q.buf) - 1) // bitwise modulus
}

// growIfFull resizes up if the buffer is full.
func (q *Deque[T]) growIfFull() {
	if q.count != len(q.buf) {
		return
	}
	if len(q.buf) == 0 {
		if q.minCap == 0 {
			q.minCap = minCapacity
		}
		q.buf = make([]T, q.minCap)
		return
	}
	q.resize()
}

// shrinkIfExcess resizes down if the buffer is only 1/4 full.
func (q *Deque[T]) shrinkIfExcess() {
	if len(q.buf) > q.minCap && (q.count<<2) == len(q.buf) {
		q.resize()
	}
}

// resize resizes the deque to fit exactly twice its current contents. This is
// used to grow the queue when it is full, and also to shrink it when it is
// only a quarter full.
func (q *Deque[T]) resize() {
	newBuf := make([]T, q.count<<1)
	if q.tail > q.head {
		copy(newBuf, q.buf[q.head:q.tail])
	} else {
		n := copy(newBuf, q.buf[q.head:])
		copy(newBuf[n:], q.buf[:q.tail])
	}

	q.head = 0
	q.tail = q.count
	q.buf = newBuf
}
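The deque's index arithmetic leans entirely on the power-of-2 invariant noted at minCapacity: for a power-of-2 buffer length n, masking with n-1 behaves like modulus and, unlike Go's `%` on negative values, also wraps -1 around to n-1, which is what `prev` needs when stepping back from index 0. A standalone sketch:

```go
package main

import "fmt"

func main() {
	const n = 16 // power of 2, like minCapacity
	// For non-negative x, the mask agrees with %.
	for _, x := range []int{0, 5, 15, 16, 17, 100} {
		if x%n != x&(n-1) {
			panic("mask disagrees with modulus")
		}
	}
	// Stepping backward from index 0, as Deque.prev does:
	// -1 % 16 is -1 in Go, but -1 & 15 wraps to 15.
	i := 0
	fmt.Println((i - 1) & (n - 1)) // 15
}
```

This is why the buffer is never resized to a non-power-of-2 length: the wraparound would silently break.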

836
util/queue/deque_test.go
Normal file
@@ -0,0 +1,836 @@
package queue

import (
	"fmt"
	"testing"
	"unicode"
)

func TestEmpty(t *testing.T) {
	q := New[string]()
	if q.Len() != 0 {
		t.Error("q.Len() =", q.Len(), "expect 0")
	}
	if q.Cap() != 0 {
		t.Error("expected q.Cap() == 0")
	}
	idx := q.Index(func(item string) bool {
		return true
	})
	if idx != -1 {
		t.Error("should return -1 index for empty deque")
	}
	idx = q.RIndex(func(item string) bool {
		return true
	})
	if idx != -1 {
		t.Error("should return -1 index for empty deque")
	}
}

func TestNil(t *testing.T) {
	var q *Deque[int]
	if q.Len() != 0 {
		t.Error("expected q.Len() == 0")
	}
	if q.Cap() != 0 {
		t.Error("expected q.Cap() == 0")
	}
	q.Rotate(5)
	idx := q.Index(func(item int) bool {
		return true
	})
	if idx != -1 {
		t.Error("should return -1 index for nil deque")
	}
	idx = q.RIndex(func(item int) bool {
		return true
	})
	if idx != -1 {
		t.Error("should return -1 index for nil deque")
	}
}

func TestFrontBack(t *testing.T) {
	var q Deque[string]
	q.PushBack("foo")
	q.PushBack("bar")
	q.PushBack("baz")
	if q.Front() != "foo" {
		t.Error("wrong value at front of queue")
	}
	if q.Back() != "baz" {
		t.Error("wrong value at back of queue")
	}

	if q.PopFront() != "foo" {
		t.Error("wrong value removed from front of queue")
	}
	if q.Front() != "bar" {
		t.Error("wrong value remaining at front of queue")
	}
	if q.Back() != "baz" {
		t.Error("wrong value remaining at back of queue")
	}

	if q.PopBack() != "baz" {
		t.Error("wrong value removed from back of queue")
	}
	if q.Front() != "bar" {
		t.Error("wrong value remaining at front of queue")
	}
	if q.Back() != "bar" {
		t.Error("wrong value remaining at back of queue")
	}
}
|
||||
|
||||
func TestGrowShrinkBack(t *testing.T) {
|
||||
var q Deque[int]
|
||||
size := minCapacity * 2
|
||||
|
||||
for i := 0; i < size; i++ {
|
||||
if q.Len() != i {
|
||||
t.Error("q.Len() =", q.Len(), "expected", i)
|
||||
}
|
||||
q.PushBack(i)
|
||||
}
|
||||
bufLen := len(q.buf)
|
||||
|
||||
// Remove from back.
|
||||
for i := size; i > 0; i-- {
|
||||
if q.Len() != i {
|
||||
t.Error("q.Len() =", q.Len(), "expected", i)
|
||||
}
|
||||
x := q.PopBack()
|
||||
if x != i-1 {
|
||||
t.Error("q.PopBack() =", x, "expected", i-1)
|
||||
}
|
||||
}
|
||||
if q.Len() != 0 {
|
||||
t.Error("q.Len() =", q.Len(), "expected 0")
|
||||
}
|
||||
if len(q.buf) == bufLen {
|
||||
t.Error("queue buffer did not shrink")
|
||||
}
|
||||
}
|
||||
|
||||
func TestGrowShrinkFront(t *testing.T) {
|
||||
var q Deque[int]
|
||||
size := minCapacity * 2
|
||||
|
||||
for i := 0; i < size; i++ {
|
||||
if q.Len() != i {
|
||||
t.Error("q.Len() =", q.Len(), "expected", i)
|
||||
}
|
||||
q.PushBack(i)
|
||||
}
|
||||
bufLen := len(q.buf)
|
||||
|
||||
// Remove from Front
|
||||
for i := 0; i < size; i++ {
|
||||
if q.Len() != size-i {
|
||||
t.Error("q.Len() =", q.Len(), "expected", minCapacity*2-i)
|
||||
}
|
||||
x := q.PopFront()
|
||||
if x != i {
|
||||
t.Error("q.PopFront() =", x, "expected", i)
|
||||
}
|
||||
}
|
||||
if q.Len() != 0 {
|
||||
t.Error("q.Len() =", q.Len(), "expected 0")
|
||||
}
|
||||
if len(q.buf) == bufLen {
|
||||
t.Error("queue buffer did not shrink")
|
||||
}
|
||||
}
|
||||
|
||||
func TestSimple(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
if q.Front() != 0 {
|
||||
t.Fatalf("expected 0 at front, got %d", q.Front())
|
||||
}
|
||||
if q.Back() != minCapacity-1 {
|
||||
t.Fatalf("expected %d at back, got %d", minCapacity-1, q.Back())
|
||||
}
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
if q.Front() != i {
|
||||
t.Error("peek", i, "had value", q.Front())
|
||||
}
|
||||
x := q.PopFront()
|
||||
if x != i {
|
||||
t.Error("remove", i, "had value", x)
|
||||
}
|
||||
}
|
||||
|
||||
q.Clear()
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
q.PushFront(i)
|
||||
}
|
||||
for i := minCapacity - 1; i >= 0; i-- {
|
||||
x := q.PopFront()
|
||||
if x != i {
|
||||
t.Error("remove", i, "had value", x)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestBufferWrap(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
|
||||
for i := 0; i < 3; i++ {
|
||||
q.PopFront()
|
||||
q.PushBack(minCapacity + i)
|
||||
}
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
if q.Front() != i+3 {
|
||||
t.Error("peek", i, "had value", q.Front())
|
||||
}
|
||||
q.PopFront()
|
||||
}
|
||||
}
|
||||
|
||||
func TestBufferWrapReverse(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
q.PushFront(i)
|
||||
}
|
||||
for i := 0; i < 3; i++ {
|
||||
q.PopBack()
|
||||
q.PushFront(minCapacity + i)
|
||||
}
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
if q.Back() != i+3 {
|
||||
t.Error("peek", i, "had value", q.Back())
|
||||
}
|
||||
q.PopBack()
|
||||
}
|
||||
}
|
||||
|
||||
func TestLen(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
if q.Len() != 0 {
|
||||
t.Error("empty queue length not 0")
|
||||
}
|
||||
|
||||
for i := 0; i < 1000; i++ {
|
||||
q.PushBack(i)
|
||||
if q.Len() != i+1 {
|
||||
t.Error("adding: queue with", i, "elements has length", q.Len())
|
||||
}
|
||||
}
|
||||
for i := 0; i < 1000; i++ {
|
||||
q.PopFront()
|
||||
if q.Len() != 1000-i-1 {
|
||||
t.Error("removing: queue with", 1000-i-1, "elements has length", q.Len())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestBack(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < minCapacity+5; i++ {
|
||||
q.PushBack(i)
|
||||
if q.Back() != i {
|
||||
t.Errorf("Back returned wrong value")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNew(t *testing.T) {
|
||||
minCap := 64
|
||||
q := New[string](0, minCap)
|
||||
if q.Cap() != 0 {
|
||||
t.Fatal("should not have allocated memory yet")
|
||||
}
|
||||
q.PushBack("foo")
|
||||
q.PopFront()
|
||||
if q.Len() != 0 {
|
||||
t.Fatal("Len() should return 0")
|
||||
}
|
||||
if q.Cap() != minCap {
|
||||
t.Fatalf("wrong capacity: expected %d, got %d", minCap, q.Cap())
|
||||
}
|
||||
|
||||
curCap := 128
|
||||
q = New[string](curCap, minCap)
|
||||
if q.Cap() != curCap {
|
||||
t.Fatalf("Cap() should return %d, got %d", curCap, q.Cap())
|
||||
}
|
||||
if q.Len() != 0 {
|
||||
t.Fatalf("Len() should return 0")
|
||||
}
|
||||
q.PushBack("foo")
|
||||
if q.Cap() != curCap {
|
||||
t.Fatalf("Cap() should return %d, got %d", curCap, q.Cap())
|
||||
}
|
||||
}
|
||||
|
||||
func checkRotate(t *testing.T, size int) {
|
||||
var q Deque[int]
|
||||
for i := 0; i < size; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
|
||||
for i := 0; i < q.Len(); i++ {
|
||||
x := i
|
||||
for n := 0; n < q.Len(); n++ {
|
||||
if q.At(n) != x {
|
||||
t.Fatalf("a[%d] != %d after rotate and copy", n, x)
|
||||
}
|
||||
x++
|
||||
if x == q.Len() {
|
||||
x = 0
|
||||
}
|
||||
}
|
||||
q.Rotate(1)
|
||||
if q.Back() != i {
|
||||
t.Fatal("wrong value during rotation")
|
||||
}
|
||||
}
|
||||
for i := q.Len() - 1; i >= 0; i-- {
|
||||
q.Rotate(-1)
|
||||
if q.Front() != i {
|
||||
t.Fatal("wrong value during reverse rotation")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestRotate(t *testing.T) {
|
||||
checkRotate(t, 10)
|
||||
checkRotate(t, minCapacity)
|
||||
checkRotate(t, minCapacity+minCapacity/2)
|
||||
|
||||
var q Deque[int]
|
||||
for i := 0; i < 10; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
q.Rotate(11)
|
||||
if q.Front() != 1 {
|
||||
t.Error("rotating 11 places should have been the same as rotating 1")
|
||||
}
|
||||
q.Rotate(-21)
|
||||
if q.Front() != 0 {
|
||||
t.Error("rotating -21 places should have been the same as rotating -1")
|
||||
}
|
||||
q.Rotate(q.Len())
|
||||
if q.Front() != 0 {
|
||||
t.Error("should not have rotated")
|
||||
}
|
||||
q.Clear()
|
||||
q.PushBack(0)
|
||||
q.Rotate(13)
|
||||
if q.Front() != 0 {
|
||||
t.Error("should not have rotated")
|
||||
}
|
||||
}
|
||||
|
||||
func TestAt(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < 1000; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
|
||||
// Front to back.
|
||||
for j := 0; j < q.Len(); j++ {
|
||||
if q.At(j) != j {
|
||||
t.Errorf("index %d doesn't contain %d", j, j)
|
||||
}
|
||||
}
|
||||
|
||||
// Back to front
|
||||
for j := 1; j <= q.Len(); j++ {
|
||||
if q.At(q.Len()-j) != q.Len()-j {
|
||||
t.Errorf("index %d doesn't contain %d", q.Len()-j, q.Len()-j)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestSet(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < 1000; i++ {
|
||||
q.PushBack(i)
|
||||
q.Set(i, i+50)
|
||||
}
|
||||
|
||||
// Front to back.
|
||||
for j := 0; j < q.Len(); j++ {
|
||||
if q.At(j) != j+50 {
|
||||
t.Errorf("index %d doesn't contain %d", j, j+50)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestClear(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < 100; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
if q.Len() != 100 {
|
||||
t.Error("push: queue with 100 elements has length", q.Len())
|
||||
}
|
||||
cap := len(q.buf)
|
||||
q.Clear()
|
||||
if q.Len() != 0 {
|
||||
t.Error("empty queue length not 0 after clear")
|
||||
}
|
||||
if len(q.buf) != cap {
|
||||
t.Error("queue capacity changed after clear")
|
||||
}
|
||||
|
||||
// Check that there are no remaining references after Clear()
|
||||
for i := 0; i < len(q.buf); i++ {
|
||||
if q.buf[i] != 0 {
|
||||
t.Error("queue has non-nil deleted elements after Clear()")
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestIndex(t *testing.T) {
|
||||
var q Deque[rune]
|
||||
for _, x := range "Hello, 世界" {
|
||||
q.PushBack(x)
|
||||
}
|
||||
idx := q.Index(func(item rune) bool {
|
||||
c := item
|
||||
return unicode.Is(unicode.Han, c)
|
||||
})
|
||||
if idx != 7 {
|
||||
t.Fatal("Expected index 7, got", idx)
|
||||
}
|
||||
idx = q.Index(func(item rune) bool {
|
||||
c := item
|
||||
return c == 'H'
|
||||
})
|
||||
if idx != 0 {
|
||||
t.Fatal("Expected index 0, got", idx)
|
||||
}
|
||||
idx = q.Index(func(item rune) bool {
|
||||
return false
|
||||
})
|
||||
if idx != -1 {
|
||||
t.Fatal("Expected index -1, got", idx)
|
||||
}
|
||||
}
|
||||
|
||||
func TestRIndex(t *testing.T) {
|
||||
var q Deque[rune]
|
||||
for _, x := range "Hello, 世界" {
|
||||
q.PushBack(x)
|
||||
}
|
||||
idx := q.RIndex(func(item rune) bool {
|
||||
c := item
|
||||
return unicode.Is(unicode.Han, c)
|
||||
})
|
||||
if idx != 8 {
|
||||
t.Fatal("Expected index 8, got", idx)
|
||||
}
|
||||
idx = q.RIndex(func(item rune) bool {
|
||||
c := item
|
||||
return c == 'H'
|
||||
})
|
||||
if idx != 0 {
|
||||
t.Fatal("Expected index 0, got", idx)
|
||||
}
|
||||
idx = q.RIndex(func(item rune) bool {
|
||||
return false
|
||||
})
|
||||
if idx != -1 {
|
||||
t.Fatal("Expected index -1, got", idx)
|
||||
}
|
||||
}
|
||||
|
||||
func TestInsert(t *testing.T) {
|
||||
q := new(Deque[rune])
|
||||
for _, x := range "ABCDEFG" {
|
||||
q.PushBack(x)
|
||||
}
|
||||
q.Insert(4, 'x') // ABCDxEFG
|
||||
if q.At(4) != 'x' {
|
||||
t.Error("expected x at position 4, got", q.At(4))
|
||||
}
|
||||
|
||||
q.Insert(2, 'y') // AByCDxEFG
|
||||
if q.At(2) != 'y' {
|
||||
t.Error("expected y at position 2")
|
||||
}
|
||||
if q.At(5) != 'x' {
|
||||
t.Error("expected x at position 5")
|
||||
}
|
||||
|
||||
q.Insert(0, 'b') // bAByCDxEFG
|
||||
if q.Front() != 'b' {
|
||||
t.Error("expected b inserted at front, got", q.Front())
|
||||
}
|
||||
|
||||
q.Insert(q.Len(), 'e') // bAByCDxEFGe
|
||||
|
||||
for i, x := range "bAByCDxEFGe" {
|
||||
if q.PopFront() != x {
|
||||
t.Error("expected", x, "at position", i)
|
||||
}
|
||||
}
|
||||
|
||||
qs := New[string](16)
|
||||
|
||||
for i := 0; i < qs.Cap(); i++ {
|
||||
qs.PushBack(fmt.Sprint(i))
|
||||
}
|
||||
// deque: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
|
||||
// buffer: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
|
||||
for i := 0; i < qs.Cap()/2; i++ {
|
||||
qs.PopFront()
|
||||
}
|
||||
// deque: 8 9 10 11 12 13 14 15
|
||||
// buffer: [_,_,_,_,_,_,_,_,8,9,10,11,12,13,14,15]
|
||||
for i := 0; i < qs.Cap()/4; i++ {
|
||||
qs.PushBack(fmt.Sprint(qs.Cap() + i))
|
||||
}
|
||||
// deque: 8 9 10 11 12 13 14 15 16 17 18 19
|
||||
// buffer: [16,17,18,19,_,_,_,_,8,9,10,11,12,13,14,15]
|
||||
|
||||
at := qs.Len() - 2
|
||||
qs.Insert(at, "x")
|
||||
// deque: 8 9 10 11 12 13 14 15 16 17 x 18 19
|
||||
// buffer: [16,17,x,18,19,_,_,_,8,9,10,11,12,13,14,15]
|
||||
if qs.At(at) != "x" {
|
||||
t.Error("expected x at position", at)
|
||||
}
|
||||
|
||||
qs.Insert(2, "y")
|
||||
// deque: 8 9 y 10 11 12 13 14 15 16 17 x 18 19
|
||||
// buffer: [16,17,x,18,19,_,_,8,9,y,10,11,12,13,14,15]
|
||||
if qs.At(2) != "y" {
|
||||
t.Error("expected y at position 2")
|
||||
}
|
||||
if qs.At(at+1) != "x" {
|
||||
t.Error("expected x at position", at+1)
|
||||
}
|
||||
|
||||
qs.Insert(0, "b")
|
||||
// deque: b 8 9 y 10 11 12 13 14 15 16 17 x 18 19
|
||||
// buffer: [16,17,x,18,19,_,b,8,9,y,10,11,12,13,14,15]
|
||||
if qs.Front() != "b" {
|
||||
t.Error("expected b inserted at front, got", qs.Front())
|
||||
}
|
||||
|
||||
qs.Insert(qs.Len(), "e")
|
||||
if qs.Cap() != qs.Len() {
|
||||
t.Fatal("Expected full buffer")
|
||||
}
|
||||
// deque: b 8 9 y 10 11 12 13 14 15 16 17 x 18 19 e
|
||||
// buffer: [16,17,x,18,19,e,b,8,9,y,10,11,12,13,14,15]
|
||||
for i, x := range []string{"16", "17", "x", "18", "19", "e", "b", "8", "9", "y", "10", "11", "12", "13", "14", "15"} {
|
||||
if qs.buf[i] != x {
|
||||
t.Error("expected", x, "at buffer position", i)
|
||||
}
|
||||
}
|
||||
for i, x := range []string{"b", "8", "9", "y", "10", "11", "12", "13", "14", "15", "16", "17", "x", "18", "19", "e"} {
|
||||
if qs.Front() != x {
|
||||
t.Error("expected", x, "at position", i, "got", qs.Front())
|
||||
}
|
||||
qs.PopFront()
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemove(t *testing.T) {
|
||||
q := new(Deque[rune])
|
||||
for _, x := range "ABCDEFG" {
|
||||
q.PushBack(x)
|
||||
}
|
||||
|
||||
if q.Remove(4) != 'E' { // ABCDFG
|
||||
t.Error("expected E from position 4")
|
||||
}
|
||||
|
||||
if q.Remove(2) != 'C' { // ABDFG
|
||||
t.Error("expected C at position 2")
|
||||
}
|
||||
if q.Back() != 'G' {
|
||||
t.Error("expected G at back")
|
||||
}
|
||||
|
||||
if q.Remove(0) != 'A' { // BDFG
|
||||
t.Error("expected to remove A from front")
|
||||
}
|
||||
if q.Front() != 'B' {
|
||||
t.Error("expected B at front")
|
||||
}
|
||||
|
||||
if q.Remove(q.Len()-1) != 'G' { // BDF
|
||||
t.Error("expected to remove G from back")
|
||||
}
|
||||
if q.Back() != 'F' {
|
||||
t.Error("expected F at back")
|
||||
}
|
||||
|
||||
if q.Len() != 3 {
|
||||
t.Error("wrong length")
|
||||
}
|
||||
}
|
||||
|
||||
func TestFrontBackOutOfRangePanics(t *testing.T) {
|
||||
const msg = "should panic when peeking empty queue"
|
||||
var q Deque[int]
|
||||
assertPanics(t, msg, func() {
|
||||
q.Front()
|
||||
})
|
||||
assertPanics(t, msg, func() {
|
||||
q.Back()
|
||||
})
|
||||
|
||||
q.PushBack(1)
|
||||
q.PopFront()
|
||||
|
||||
assertPanics(t, msg, func() {
|
||||
q.Front()
|
||||
})
|
||||
assertPanics(t, msg, func() {
|
||||
q.Back()
|
||||
})
|
||||
}
|
||||
|
||||
func TestPopFrontOutOfRangePanics(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
assertPanics(t, "should panic when removing empty queue", func() {
|
||||
q.PopFront()
|
||||
})
|
||||
|
||||
q.PushBack(1)
|
||||
q.PopFront()
|
||||
|
||||
assertPanics(t, "should panic when removing emptied queue", func() {
|
||||
q.PopFront()
|
||||
})
|
||||
}
|
||||
|
||||
func TestPopBackOutOfRangePanics(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
assertPanics(t, "should panic when removing empty queue", func() {
|
||||
q.PopBack()
|
||||
})
|
||||
|
||||
q.PushBack(1)
|
||||
q.PopBack()
|
||||
|
||||
assertPanics(t, "should panic when removing emptied queue", func() {
|
||||
q.PopBack()
|
||||
})
|
||||
}
|
||||
|
||||
func TestAtOutOfRangePanics(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
q.PushBack(1)
|
||||
q.PushBack(2)
|
||||
q.PushBack(3)
|
||||
|
||||
assertPanics(t, "should panic when negative index", func() {
|
||||
q.At(-4)
|
||||
})
|
||||
|
||||
assertPanics(t, "should panic when index greater than length", func() {
|
||||
q.At(4)
|
||||
})
|
||||
}
|
||||
|
||||
func TestSetOutOfRangePanics(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
q.PushBack(1)
|
||||
q.PushBack(2)
|
||||
q.PushBack(3)
|
||||
|
||||
assertPanics(t, "should panic when negative index", func() {
|
||||
q.Set(-4, 1)
|
||||
})
|
||||
|
||||
assertPanics(t, "should panic when index greater than length", func() {
|
||||
q.Set(4, 1)
|
||||
})
|
||||
}
|
||||
|
||||
func TestInsertOutOfRangePanics(t *testing.T) {
|
||||
q := new(Deque[string])
|
||||
|
||||
assertPanics(t, "should panic when inserting out of range", func() {
|
||||
q.Insert(1, "X")
|
||||
})
|
||||
|
||||
q.PushBack("A")
|
||||
|
||||
assertPanics(t, "should panic when inserting at negative index", func() {
|
||||
q.Insert(-1, "Y")
|
||||
})
|
||||
|
||||
assertPanics(t, "should panic when inserting out of range", func() {
|
||||
q.Insert(2, "B")
|
||||
})
|
||||
}
|
||||
|
||||
func TestRemoveOutOfRangePanics(t *testing.T) {
|
||||
q := new(Deque[string])
|
||||
|
||||
assertPanics(t, "should panic when removing from empty queue", func() {
|
||||
q.Remove(0)
|
||||
})
|
||||
|
||||
q.PushBack("A")
|
||||
|
||||
assertPanics(t, "should panic when removing at negative index", func() {
|
||||
q.Remove(-1)
|
||||
})
|
||||
|
||||
assertPanics(t, "should panic when removing out of range", func() {
|
||||
q.Remove(1)
|
||||
})
|
||||
}
|
||||
|
||||
func TestSetMinCapacity(t *testing.T) {
|
||||
var q Deque[string]
|
||||
exp := uint(8)
|
||||
q.SetMinCapacity(exp)
|
||||
q.PushBack("A")
|
||||
if q.minCap != 1<<exp {
|
||||
t.Fatal("wrong minimum capacity")
|
||||
}
|
||||
if len(q.buf) != 1<<exp {
|
||||
t.Fatal("wrong buffer size")
|
||||
}
|
||||
q.PopBack()
|
||||
if q.minCap != 1<<exp {
|
||||
t.Fatal("wrong minimum capacity")
|
||||
}
|
||||
if len(q.buf) != 1<<exp {
|
||||
t.Fatal("wrong buffer size")
|
||||
}
|
||||
q.SetMinCapacity(0)
|
||||
if q.minCap != minCapacity {
|
||||
t.Fatal("wrong minimum capacity")
|
||||
}
|
||||
}
|
||||
|
||||
func assertPanics(t *testing.T, name string, f func()) {
|
||||
defer func() {
|
||||
if r := recover(); r == nil {
|
||||
t.Errorf("%s: didn't panic as expected", name)
|
||||
}
|
||||
}()
|
||||
|
||||
f()
|
||||
}
|
||||
|
||||
func BenchmarkPushFront(b *testing.B) {
|
||||
var q Deque[int]
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.PushFront(i)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkPushBack(b *testing.B) {
|
||||
var q Deque[int]
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkSerial(b *testing.B) {
|
||||
var q Deque[int]
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.PopFront()
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkSerialReverse(b *testing.B) {
|
||||
var q Deque[int]
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.PushFront(i)
|
||||
}
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.PopBack()
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkRotate(b *testing.B) {
|
||||
q := new(Deque[int])
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
b.ResetTimer()
|
||||
// N complete rotations on length N - 1.
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.Rotate(b.N - 1)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkInsert(b *testing.B) {
|
||||
q := new(Deque[int])
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.Insert(q.Len()/2, -i)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkRemove(b *testing.B) {
|
||||
q := new(Deque[int])
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
q.Remove(q.Len() / 2)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkYoyo(b *testing.B) {
|
||||
var q Deque[int]
|
||||
for i := 0; i < b.N; i++ {
|
||||
for j := 0; j < 65536; j++ {
|
||||
q.PushBack(j)
|
||||
}
|
||||
for j := 0; j < 65536; j++ {
|
||||
q.PopFront()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkYoyoFixed(b *testing.B) {
|
||||
var q Deque[int]
|
||||
q.SetMinCapacity(16)
|
||||
for i := 0; i < b.N; i++ {
|
||||
for j := 0; j < 65536; j++ {
|
||||
q.PushBack(j)
|
||||
}
|
||||
for j := 0; j < 65536; j++ {
|
||||
q.PopFront()
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -69,6 +69,13 @@ func (pq *PriorityQueue) Pop() *Item {
|
||||
return heap.Pop(&pq.priorityQueueSlice).(*Item)
|
||||
}
|
||||
|
||||
func (pq *PriorityQueue) GetHighest() *Item{
|
||||
if len(pq.priorityQueueSlice)>0 {
|
||||
return pq.priorityQueueSlice[0]
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
func (pq *PriorityQueue) Len() int {
|
||||
return len(pq.priorityQueueSlice)
|
||||
}
|
||||
|
||||
167
util/queue/squeue.go
Normal file
@@ -0,0 +1,167 @@
|
||||
package queue
|
||||
|
||||
import (
|
||||
"sync"
|
||||
)
|
||||
|
||||
/*
|
||||
This is a circular (ring) queue.
|
||||
*/
|
||||
type SQueue[ElementType any] struct {
|
||||
elements []ElementType
|
||||
head int
|
||||
tail int
|
||||
locker sync.RWMutex
|
||||
}
|
||||
|
||||
//Cursor used to read data from the queue
|
||||
type SCursor[ElementType any] struct {
|
||||
pos int
|
||||
squeue *SQueue[ElementType]
|
||||
}
|
||||
|
||||
func NewSQueue[ElementType any](maxElementNum int) *SQueue[ElementType]{
|
||||
queue := &SQueue[ElementType]{}
|
||||
queue.elements = make([]ElementType,maxElementNum+1)
|
||||
|
||||
return queue
|
||||
}
|
||||
|
||||
//Move the cursor to the front of the queue
|
||||
func (s *SCursor[ElementType]) First(){
|
||||
s.squeue.locker.RLock()
|
||||
defer s.squeue.locker.RUnlock()
|
||||
s.pos = s.squeue.head
|
||||
}
|
||||
|
||||
//Advance the cursor from its current position. Note: concurrent reads or Pops from other goroutines may invalidate the cursor.
|
||||
func (s *SCursor[ElementType]) Next() (elem ElementType,ret bool){
|
||||
s.squeue.locker.RLock()
|
||||
defer s.squeue.locker.RUnlock()
|
||||
|
||||
if s.pos == s.squeue.tail {
|
||||
return
|
||||
}
|
||||
|
||||
s.pos++
|
||||
s.pos = (s.pos)%(len(s.squeue.elements))
|
||||
return s.squeue.elements[s.pos],true
|
||||
}
|
||||
|
||||
//Get the number of elements in the queue
|
||||
func (s *SQueue[ElementType]) Len() int {
|
||||
s.locker.RLock()
|
||||
defer s.locker.RUnlock()
|
||||
|
||||
return s.len()
|
||||
}
|
||||
|
||||
func (s *SQueue[ElementType]) len() int {
|
||||
if s.head <= s.tail {
|
||||
return s.tail - s.head
|
||||
}
|
||||
|
||||
//(len(s.elements)-1-s.head)+(s.tail+1)
|
||||
return len(s.elements)-s.head+s.tail
|
||||
}
|
||||
|
||||
//Get a cursor, positioned at the front of the queue by default
|
||||
func (s *SQueue[ElementType]) GetCursor() (cur SCursor[ElementType]){
|
||||
s.locker.RLock()
|
||||
defer s.locker.RUnlock()
|
||||
|
||||
cur.squeue = s
|
||||
cur.pos = s.head
|
||||
return
|
||||
}
|
||||
|
||||
//Get a cursor at the specified position
|
||||
func (s *SQueue[ElementType]) GetPosCursor(pos int) (cur SCursor[ElementType],ret bool){
|
||||
s.locker.RLock()
|
||||
defer s.locker.RUnlock()
|
||||
|
||||
if s.head < s.tail {
|
||||
if pos<=s.head || pos>s.tail{
|
||||
return
|
||||
}
|
||||
|
||||
ret = true
|
||||
cur.squeue = s
|
||||
cur.pos = pos
|
||||
return
|
||||
}
|
||||
|
||||
if pos >s.tail && pos <=s.head {
|
||||
return
|
||||
}
|
||||
|
||||
ret = true
	cur.squeue = s
|
||||
cur.pos = pos
|
||||
return
|
||||
}
|
||||
|
||||
//Remove the given number of elements from the front of the queue
|
||||
func (s *SQueue[ElementType]) RemoveElement(elementNum int) (removeNum int) {
|
||||
s.locker.Lock()
|
||||
defer s.locker.Unlock()
|
||||
|
||||
lens := s.len()
|
||||
if elementNum > lens{
|
||||
removeNum = lens
|
||||
}else{
|
||||
removeNum = elementNum
|
||||
}
|
||||
|
||||
|
||||
s.head = (s.head + removeNum)%len(s.elements)
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
//Pop an element from the front of the queue
|
||||
func (s *SQueue[ElementType]) Pop() (elem ElementType,ret bool){
|
||||
s.locker.Lock()
|
||||
defer s.locker.Unlock()
|
||||
|
||||
if s.head == s.tail {
|
||||
return
|
||||
}
|
||||
|
||||
s.head++
|
||||
s.head = s.head%len(s.elements)
|
||||
return s.elements[s.head],true
|
||||
}
|
||||
|
||||
//Push an element at the back of the queue
|
||||
func (s *SQueue[ElementType]) Push(elem ElementType) bool {
|
||||
s.locker.Lock()
|
||||
defer s.locker.Unlock()
|
||||
|
||||
nextPos := (s.tail+1) % len(s.elements)
|
||||
if nextPos == s.head {
|
||||
//is full
|
||||
return false
|
||||
}
|
||||
|
||||
s.tail = nextPos
|
||||
s.elements[s.tail] = elem
|
||||
return true
|
||||
}
|
||||
|
||||
//Report whether the queue is empty
|
||||
func (s *SQueue[ElementType]) IsEmpty() bool{
|
||||
s.locker.RLock()
|
||||
defer s.locker.RUnlock()
|
||||
|
||||
return s.head == s.tail
|
||||
}
|
||||
|
||||
//Report whether the queue is full
|
||||
func (s *SQueue[ElementType]) IsFull() bool{
|
||||
s.locker.RLock()
|
||||
defer s.locker.RUnlock()
|
||||
|
||||
nextPos := (s.tail+1) % len(s.elements)
|
||||
return nextPos == s.head
|
||||
}
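squeue.go allocates `maxElementNum+1` slots and keeps `head` pointing at the slot just before the first element. That extra slot is what lets `head == tail` mean "empty" and `(tail+1)%len == head` mean "full" without a separate counter. A self-contained sketch of the same trick (the `ring` type below is illustrative, not part of this package):

```go
package main

import "fmt"

// ring demonstrates the one-spare-slot circular buffer used by SQueue:
// head is the slot just before the first element, tail is the last element.
type ring struct {
	buf  []int
	head int
	tail int
}

func newRing(capacity int) *ring {
	// One extra slot distinguishes full from empty.
	return &ring{buf: make([]int, capacity+1)}
}

func (r *ring) push(v int) bool {
	next := (r.tail + 1) % len(r.buf)
	if next == r.head { // full: the spare slot would collide with head
		return false
	}
	r.tail = next
	r.buf[r.tail] = v
	return true
}

func (r *ring) pop() (int, bool) {
	if r.head == r.tail { // empty
		return 0, false
	}
	r.head = (r.head + 1) % len(r.buf)
	return r.buf[r.head], true
}

func main() {
	r := newRing(2)
	fmt.Println(r.push(1), r.push(2), r.push(3)) // true true false
	v, ok := r.pop()
	fmt.Println(v, ok) // 1 true
}
```

Without the spare slot, a full buffer and an empty one would both satisfy `head == tail`, which is why `NewSQueue` sizes the slice as `maxElementNum+1`.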
|
||||
|
||||
66
util/queue/syncqueue_test.go
Normal file
@@ -0,0 +1,66 @@
|
||||
package queue
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func Test_Example(t *testing.T) {
|
||||
//1. Create the queue
|
||||
queue := NewSQueue[int](5)
|
||||
|
||||
//2. Check whether it is empty or full
|
||||
t.Log("is empty :", queue.IsEmpty())
|
||||
t.Log("is full :", queue.IsFull())
|
||||
|
||||
//3. Use a cursor to print all data
|
||||
cursor := queue.GetCursor()
|
||||
cursor.First()
|
||||
for {
|
||||
elem, ret := cursor.Next()
|
||||
if ret == false {
|
||||
break
|
||||
}
|
||||
t.Log("elem:", elem)
|
||||
}
|
||||
|
||||
//4. Push data until the queue is full
|
||||
for i := 0; i < 6; i++ {
|
||||
t.Log("push:", queue.Push(i))
|
||||
}
|
||||
|
||||
t.Log("is empty :", queue.IsEmpty())
|
||||
t.Log("is full :", queue.IsFull())
|
||||
|
||||
//5. Iterate over all data with the cursor
|
||||
cursor.First()
|
||||
for {
|
||||
elem, ret := cursor.Next()
|
||||
if ret == false {
|
||||
break
|
||||
}
|
||||
t.Log("elem:", elem)
|
||||
}
|
||||
|
||||
//6. Remove 2 elements
|
||||
removeNum := queue.RemoveElement(2)
|
||||
t.Log("Remove Num:", removeNum)
|
||||
|
||||
//7. Iterate again with the cursor
|
||||
cursor.First()
|
||||
for {
|
||||
elem, ret := cursor.Next()
|
||||
if ret == false {
|
||||
break
|
||||
}
|
||||
t.Log("elem:", elem)
|
||||
}
|
||||
|
||||
//8. Pop all data
|
||||
for i := 0; i < 6; i++ {
|
||||
elem, ret := queue.Pop()
|
||||
t.Log("pop:", elem, "-", ret, " len:", queue.Len())
|
||||
}
|
||||
|
||||
t.Log("is empty :", queue.IsEmpty())
|
||||
t.Log("is full :", queue.IsFull())
|
||||
}
|
||||
@@ -7,6 +7,7 @@ import (
|
||||
"reflect"
|
||||
"runtime"
|
||||
"time"
|
||||
"sync/atomic"
|
||||
)
|
||||
|
||||
// ITimer
|
||||
@@ -29,7 +30,7 @@ type OnAddTimer func(timer ITimer)
|
||||
// Timer
|
||||
type Timer struct {
|
||||
Id uint64
|
||||
cancelled bool //whether the timer has been cancelled
|
||||
cancelled int32 //whether the timer has been cancelled
|
||||
C chan ITimer //timer channel
|
||||
interval time.Duration // interval (used by repeating timers)
|
||||
fireTime time.Time // fire time
|
||||
@@ -171,12 +172,12 @@ func (t *Timer) GetInterval() time.Duration {
|
||||
}
|
||||
|
||||
func (t *Timer) Cancel() {
|
||||
t.cancelled = true
|
||||
atomic.StoreInt32(&t.cancelled,1)
|
||||
}
|
||||
|
||||
// IsActive reports whether the timer has not been cancelled
|
||||
func (t *Timer) IsActive() bool {
|
||||
return !t.cancelled
|
||||
return atomic.LoadInt32(&t.cancelled) == 0
|
||||
}
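The diff above changes `cancelled` from a plain `bool` to an `int32` accessed via `sync/atomic`. The reason: `Cancel()` can run on one goroutine while `IsActive()` runs on another, and an unsynchronized bool write/read is a data race under the Go memory model. A minimal sketch of the pattern (the `flagTimer` type is a hypothetical reduction, not the actual `Timer`):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// flagTimer reduces the change above to its essence: an int32 flag
// with atomic Store/Load is safe to touch from multiple goroutines,
// where a bare bool would race.
type flagTimer struct {
	cancelled int32
}

func (t *flagTimer) Cancel()        { atomic.StoreInt32(&t.cancelled, 1) }
func (t *flagTimer) IsActive() bool { return atomic.LoadInt32(&t.cancelled) == 0 }

func main() {
	t := &flagTimer{}
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // cancel from another goroutine, race-free
		defer wg.Done()
		t.Cancel()
	}()
	wg.Wait()
	fmt.Println(t.IsActive()) // false
}
```

Running such code with `go test -race` (or `go run -race`) would flag the old bool version and pass with the atomic one, which is presumably what motivated this commit.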
|
||||
|
||||
func (t *Timer) GetName() string {
|
||||
|
||||