Mirror of https://github.com/duanhf2012/origin.git (synced 2026-02-15 00:04:46 +08:00)

Compare commits (52 commits)
Commits (SHA1):

dfb6959843, dd4aaf9c57, 6ef98a2104, 1890b300ee, 6fea2226e1, ec1c2b4517, 4b84d9a1d5, 85a8ec58e5, 962016d476, a61979e985, 6de25d1c6d, b392617d6e, 92fdb7860c, f78d0d58be, 5675681ab1, ddeaaf7d77, 1174b47475, 18fff3b567, 7ab6c88f9c, 6b64de06a2, 95b153f8cf, f3ff09b90f, f9738fb9d0, 91e773aa8c, c9b96404f4, aaae63a674, 47dc21aee1, 4d09532801, d3ad7fc898, ba2b0568b2, 5a3600bd62, 4783d05e75, 8cc1b1afcb, 53d9392901, 8111b12da5, 0ebbe0e31d, e326e342f2, a7c6b45764, 541abd93b4, 8c8d681093, b8150cfc51, 3833884777, 60064cbba6, 66770f07a5, 76c8541b34, b1fee9bc57, 284d43dc71, fd43863b73, 1fcd870f1d, 11b78f84c4, 8c6ee24b16, ca23925796
README.md (203 changed lines)
@@ -1,10 +1,10 @@

origin game server engine: overview
=========================

origin is a distributed, open-source game server engine written in Go (golang). It is suitable for developing all kinds of game servers, including H5 (HTML5) game servers.

What origin solves:

* Like the design of the Go language itself, origin always tries to offer simple, easy-to-use patterns for fast development.
* Server architectures can be laid out quickly and flexibly according to business needs.
* It exploits multi-core hardware by placing different services on different nodes that cooperate efficiently.

@@ -12,12 +12,16 @@ What origin solves:

* It ships with a rich and robust utility library.

Hello world!
------------

Let's build an origin server step by step. First download the [origin engine](https://github.com/duanhf2012/origin "origin引擎"), or run:

```go
go get -v -u github.com/duanhf2012/origin
```

This downloads the engine into the GOPATH directory. Add a main.go under src with the following content:

```go
package main

@@ -29,16 +33,20 @@ func main() {
	node.Start()
}
```

The code above is only the skeleton; see Chapter 1 for the actual run-time arguments and configuration.

An origin process creates one node object and calls Start to run it. You can also download the origin engine example directly:

```
go get -v -u github.com/duanhf2012/originserver
```

All explanations in this document are based on that example.
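To show how those two fragments fit together, here is a minimal sketch of a complete main.go. It assumes the originserver convention that every service package registers itself with node.Setup from its own init function, so main only needs blank imports plus node.Start; the import path below is the one used later in this README and is illustrative.

```go
package main

import (
	// Blank imports run each package's init(), which is where the
	// originserver example registers its services (e.g. via node.Setup).
	_ "orginserver/simple_service"

	"github.com/duanhf2012/origin/node"
)

func main() {
	// Reads the command line (e.g. "originserver -start nodeid=1"),
	// loads that node's configuration and runs its installed services.
	node.Start()
}
```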

The three core objects of the origin engine
-------------------------------------------

* Node: each Node can be thought of as one origin process.
* Service: an independent service, i.e. a large functional module. It is a child of a Node and is installed into the Node object once created. A service can expose functionality such as RPC to the outside.
* Module: the smallest unit in origin. It is strongly recommended to split all business logic into small Modules; the engine monitors the running state of every service and Module, for example detecting slow handlers and dead-loop functions. Modules can form a tree, and a Service is itself a kind of Module.
@@ -46,7 +54,8 @@ The three core objects of the origin engine

The core cluster configuration lives in the cluster directory under config; for example, github.com/duanhf2012/originserver has cluster.json and service.json under config/cluster:

cluster.json:
-------------

```
{
"NodeList":[

@@ -70,21 +79,26 @@ cluster.json:
}
]
```

---

The configuration above defines two node server programs:

* NodeId: the node id of this origin process; must be unique.
* Private: whether the node is private. If true, other nodes will not discover it, but it can still run on its own.
* ListenAddr: listen address of the RPC service.
* MaxRpcParamLen: maximum length of an RPC parameter packet. Optional; by default a single RPC call supports up to 4294967295 bytes.
* NodeName: node name.
* remark: remark, optional.
* ServiceList: the services owned by this Node. Note that origin installs and initialises them in the configured order and stops them in the reverse order. (A full cluster.json sketch follows below.)

---
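Because the excerpt above elides the node entries themselves, the following is a hedged sketch of what a two-node cluster.json can look like, using only the fields described in the list above; the addresses, node names and service lists are placeholders, so compare it against config/cluster/cluster.json in originserver.

```
{
    "NodeList": [
        {
            "NodeId": 1,
            "Private": false,
            "ListenAddr": "127.0.0.1:8001",
            "MaxRpcParamLen": 409600,
            "NodeName": "Node_Test1",
            "remark": "first node",
            "ServiceList": ["TestService1", "TestService2"]
        },
        {
            "NodeId": 2,
            "Private": false,
            "ListenAddr": "127.0.0.1:8002",
            "NodeName": "Node_Test2",
            "ServiceList": ["TestService6"]
        }
    ]
}
```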

In the start command "originserver -start nodeid=1", nodeid selects which node from this configuration has its services loaded.
Run originserver -help to see the remaining command-line options.

service.json:
-------------

```
{
"Global": {
@@ -103,7 +117,7 @@ service.json:
"Keyfile":""
}
]

},
"TcpService":{
"ListenAddr":"0.0.0.0:9030",

@@ -160,10 +174,12 @@ service.json:
}
```

---

This configuration has a Global part plus Service and NodeService parts. Global is a global configuration that any service can read via cluster.GetCluster().GetGlobalCfg(). NodeService holds per-node service configuration: at start-up the configuration is looked up there for the given nodeid, and if it is not found, the shared Service part is used instead.

**HttpService configuration**

* ListenAddr: HTTP listen address.
* ReadTimeout: network read timeout in milliseconds.
* WriteTimeout: network write timeout in milliseconds.

@@ -172,6 +188,7 @@ service.json:
* CAFile: certificate file; it can be ignored if HTTPS is terminated by a web server proxy in front of your server.

**TcpService configuration**

* ListenAddr: listen address.
* MaxConnNum: maximum number of allowed connections.
* PendingWriteNum: maximum size of the outgoing send queue.
@@ -180,20 +197,21 @@ service.json:
* MaxMsgLen: maximum packet length.

**WSService configuration**

* ListenAddr: listen address.
* MaxConnNum: maximum number of allowed connections.
* PendingWriteNum: maximum size of the outgoing send queue.
* MaxMsgLen: maximum packet length.

---

Chapter 1: origin basics
------------------------

In simple_service of github.com/duanhf2012/originserver two services are created: TestService1.go and TestService2.go.

simple_service/TestService1.go:

```
package simple_service

@@ -223,7 +241,9 @@ func (slf *TestService1) OnInit() error {

```

simple_service/TestService2.go:

```
import (
	"github.com/duanhf2012/origin/node"

@@ -263,6 +283,7 @@ func main(){
```

* config/cluster/cluster.json:

```
{
"NodeList":[

@@ -279,6 +300,7 @@ func main(){
```

Compile and run; the output looks like this:

```
#originserver -start nodeid=1
TestService1 OnInit.
@@ -286,13 +308,15 @@ TestService2 OnInit.
```

Chapter 2: common Service features
----------------------------------

Timers
------

Scheduled tasks are among the most common needs in development; origin provides two timer styles.

The first is the AfterFunc function, which fires a callback after a given interval; see simple_service/TestService2.go:

```
func (slf *TestService2) OnInit() error {
	fmt.Printf("TestService2 OnInit.\n")

@@ -305,10 +329,11 @@ func (slf *TestService2) OnSecondTick(){
	slf.AfterFunc(time.Second*1,slf.OnSecondTick)
}
```

The log then prints "tick." once per second. If the callback should fire again, the timer has to be re-armed inside it.

The second style works like the Linux crontab command:

```

func (slf *TestService2) OnInit() error {

@@ -327,27 +352,29 @@ func (slf *TestService2) OnCron(cron *timer.Cron){
	fmt.Printf(":A minute passed!\n")
}
```

With this running, ":A minute passed!" is printed every time the minute changes.

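As a consolidated sketch of both timer styles in one hypothetical service: the AfterFunc call is taken from the example above, while the cron expression, timer.NewCronExpr and the CronFunc registration call are assumptions about the engine's timer package, so verify them against simple_service/TestService2.go.

```go
package simple_service

import (
	"fmt"
	"time"

	"github.com/duanhf2012/origin/service"
	"github.com/duanhf2012/origin/util/timer"
)

// TimerService is a hypothetical service used only for this sketch.
type TimerService struct {
	service.Service
}

func (slf *TimerService) OnInit() error {
	// One-shot timer: re-arm it inside the callback to get a periodic tick.
	slf.AfterFunc(time.Second*1, slf.OnSecondTick)

	// Cron-style timer (assumed API): fire at second 0 of every minute.
	if cronExpr, err := timer.NewCronExpr("0 * * * * *"); err == nil {
		slf.CronFunc(cronExpr, slf.OnCron)
	}
	return nil
}

func (slf *TimerService) OnSecondTick() {
	fmt.Printf("tick.\n")
	slf.AfterFunc(time.Second*1, slf.OnSecondTick)
}

func (slf *TimerService) OnCron(cron *timer.Cron) {
	fmt.Printf(":A minute passed!\n")
}
```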
Enabling multi-goroutine mode
-----------------------------

By design every origin service runs on a single goroutine, so business code does not have to worry about thread safety, which greatly reduces development effort. Some scenarios do not need that guarantee and do want concurrency: for example, a service that only handles database operations may block while waiting on the database, and with a single goroutine its requests can only be processed one at a time in a queue, which is too slow. In that case you can enable this mode and choose the number of worker goroutines:

```
func (slf *TestService1) OnInit() error {
	fmt.Printf("TestService1 OnInit.\n")

	//打开多线程处理模式,10个协程并发处理
	slf.SetGoRoutineNum(10)
	return nil
}
```

Performance monitoring
----------------------

When building a large system, code-quality problems regularly lead to slow handlers or dead loops; this feature detects them. It is used as follows:

```
@@ -382,6 +409,7 @@ func main(){
}

```

GetProfiler().SetOverTime and slf.GetProfiler().SetMaxOverTimer set the monitoring thresholds above, and main.go switches on the performance reporter so that it reports every 10 seconds. Because the timer in the example above contains a dead loop, a report like the following is produced:

@@ -390,10 +418,11 @@ process count 0,take time 0 Milliseconds,average 0 Milliseconds/per.
too slow process:Timer_orginserver/simple_service.(*TestService1).Loop-fm is take 38003 Milliseconds

which points straight at the Loop function of TestService1.

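A hedged sketch of how those two thresholds can be set inside a service's OnInit; the service type and the durations are illustrative, and the reporter itself is switched on in main.go (the README mentions a 10-second reporting interval), so take the exact reporter call from the originserver example.

```go
package simple_service

import (
	"time"

	"github.com/duanhf2012/origin/service"
)

// ProfiledService is a hypothetical service used only for this sketch.
type ProfiledService struct {
	service.Service
}

func (slf *ProfiledService) OnInit() error {
	// Flag any single handler call that runs longer than this (assumed value).
	slf.GetProfiler().SetOverTime(time.Second * 2)
	// Record the slowest call once it exceeds this bound (assumed value).
	slf.GetProfiler().SetMaxOverTimer(time.Second * 10)
	return nil
}
```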
Listening for node connect and disconnect events
------------------------------------------------

Some business logic needs to know whether a node has dropped its connection; register a callback like this:

```
func (ts *TestService) OnInit() error{
	ts.RegRpcListener(ts)

@@ -408,13 +437,14 @@ func (ts *TestService) OnNodeDisconnect(nodeId int){
}
```

Chapter 3: using Modules
------------------------

Creating and destroying Modules
-------------------------------

A Service can be regarded as a kind of Module and has all of a Module's capabilities. See originserver/simple_module/TestService3.go in the example code:

```
package simple_module

@@ -476,7 +506,9 @@ func (slf *TestService3) OnInit() error {
}

```

OnInit builds a linear module chain TestService3->module1->module2. AddModule returns the Module's id; automatically generated ids start at 10e17, and you can also assign ids yourself. When ReleaseModule is called to release module1, module2 is released along with it, and OnRelease is invoked automatically. The log order is:

```
Module1 OnInit.
Module2 OnInit.

@@ -484,14 +516,16 @@ module1 id is 100000000000000001, module2 id is 100000000000000002
Module2 Release.
Module1 Release.
```

Modules can use the timer facilities in the same way; see the timer section of Chapter 2.

Chapter 4: using events
-----------------------

Events are an important part of origin: within one node they carry notifications between services, or between a service and its modules. Built-in services such as TcpService and HttpService are implemented on top of this event mechanism. It is a classic observer pattern. The event package has two interfaces: event.IEventProcessor, which provides registration and unregistration, and event.IEventHandler, which provides broadcasting.

In simple_event/TestService4.go:

```
package simple_event

@@ -535,6 +569,7 @@ func (slf *TestService4) TriggerEvent(){
```

In simple_event/TestService5.go:

```
package simple_event

@@ -590,19 +625,24 @@ func (slf *TestService5) OnServiceEvent(ev event.IEvent){


```

Ten seconds after the program starts, slf.TriggerEvent broadcasts the event, and TestService5 receives:

```
OnServiceEvent type :1001 data:event data.
OnModuleEvent type :1001 data:event data.
```

The listeners registered in the TestModule above are unregistered automatically when that Module is released.

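The split between event.IEventProcessor (register/unregister) and event.IEventHandler (broadcast) is the usual observer shape. The following is a plain-Go illustration of that shape only, not origin's actual API; see TestService4/TestService5 above for the real calls.

```go
package main

import "fmt"

type Event struct {
	Type int
	Data interface{}
}

// Processor plays the IEventProcessor role: it owns the listener registrations.
type Processor struct {
	listeners map[int][]func(Event)
}

func (p *Processor) Register(eventType int, cb func(Event)) {
	if p.listeners == nil {
		p.listeners = make(map[int][]func(Event))
	}
	p.listeners[eventType] = append(p.listeners[eventType], cb)
}

// Notify plays the IEventHandler role: it broadcasts to every registered listener.
func (p *Processor) Notify(ev Event) {
	for _, cb := range p.listeners[ev.Type] {
		cb(ev)
	}
}

func main() {
	var p Processor
	p.Register(1001, func(ev Event) {
		fmt.Println("OnServiceEvent type:", ev.Type, "data:", ev.Data)
	})
	p.Notify(Event{Type: 1001, Data: "event data."})
}
```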
Chapter 5: using RPC
---------------

RPC is the main way services communicate. It lets services call each other across process and node boundaries, and a call can also be aimed at a specific nodeid. For example:

simple_rpc/TestService6.go:

```go
package simple_rpc

import (

@@ -627,6 +667,7 @@ type InputData struct {
	B int
}

// 注意RPC函数名的格式必需为RPC_FunctionName或者是RPCFunctionName,如下的RPC_Sum也可以写成RPCSum
func (slf *TestService6) RPC_Sum(input *InputData,output *int) error{
	*output = input.A+input.B
	return nil

@@ -635,6 +676,7 @@ func (slf *TestService6) RPC_Sum(input *InputData,output *int) error{
```

simple_rpc/TestService7.go:

```
package simple_rpc

@@ -709,11 +751,82 @@ func (slf *TestService7) GoTest(){
}

```

You can place TestService6 on another Node, for example the one with NodeId 2. As long as the nodes are in one subnet, origin calls it the same way; developers only have to think about the relationships between Services, which is also the core question of your server architecture design.

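As a sketch of the calling side, reusing InputData from TestService6.go above: the AsyncCallNode signature mirrors the one used elsewhere in this change set, while the synchronous Call helper and the "Service.Method" address format are assumptions, so take the authoritative version from TestService7.go in originserver.

```go
func (slf *TestService7) callSumSketch() {
	input := InputData{A: 1, B: 2}

	// Synchronous call by service name (assumed helper): blocks until the reply arrives.
	var sum int
	if err := slf.Call("TestService6.RPC_Sum", &input, &sum); err != nil {
		fmt.Println("call failed:", err)
		return
	}
	fmt.Println("sum =", sum)

	// Asynchronous call aimed at a specific node id; the callback runs later
	// on this service's own goroutine.
	slf.AsyncCallNode(2, "TestService6.RPC_Sum", &input, func(output *int, err error) {
		if err == nil {
			fmt.Println("async sum =", *output)
		}
	})
}
```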
Chapter 6: concurrent function calls
---------------

It is common to push some work out to other goroutines to run concurrently and then have a callback delivered back on the service's worker goroutine once the work finishes. Usage is simple; first switch the feature on:

```
//以下通过cpu数量来定开启协程并发数量,建议:(1)cpu密集型计算使用1.0 (2)i/o密集型使用2.0或者更高
slf.OpenConcurrentByNumCPU(1.0)

//以下通过函数打开并发协程数,以下协程数最小5,最大10,任务管道的cap数量1000000
//origin会根据任务的数量在最小与最大协程数间动态伸缩
//slf.OpenConcurrent(5, 10, 1000000)
```

Then use it like this:

```

func (slf *TestService13) testAsyncDo() {
	var context struct {
		data int64
	}

	//1.示例普通使用
	//参数一的函数在其他协程池中执行完成,将执行完成事件放入服务工作协程,
	//参数二的函数在服务协程中执行,是协程安全的。
	slf.AsyncDo(func() bool {
		//该函数回调在协程池中执行
		context.data = 100
		return true
	}, func(err error) {
		//函数将在服务协程中执行
		fmt.Print(context.data) //显示100
	})

	//2.示例按队列顺序
	//参数一传入队列Id,同一个队列Id将在协程池中被排队执行
	//以下进行两次调用,因为两次都传入参数queueId都为1,所以它们会都进入queueId为1的排队执行
	queueId := int64(1)
	for i := 0; i < 2; i++ {
		slf.AsyncDoByQueue(queueId, func() bool {
			//该函数会被2次调用,但是会排队执行
			return true
		}, func(err error) {
			//函数将在服务协程中执行
		})
	}

	//3.函数参数可以其中一个为空
	//参数二函数将被延迟执行
	slf.AsyncDo(nil, func(err error) {
		//将在下
	})

	//参数一函数在协程池中执行,但没有在服务协程中回调
	slf.AsyncDo(func() bool {
		return true
	}, nil)

	//4.函数返回值控制不进行回调
	slf.AsyncDo(func() bool {
		//返回false时,参数二函数将不会被执行; 为true时,则会被执行
		return false
	}, func(err error) {
		//该函数将不会被执行
	})
}
```

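One behaviour worth knowing from the concurrent package added in this change set: if the internal task channel is full, the work function is rejected and the completion callback still runs, but with a non-nil error ("tasks channel is full"). A short sketch of checking for that, written as a method on the TestService13 example above:

```go
func (slf *TestService13) asyncWithErrorCheck() {
	slf.AsyncDo(func() bool {
		// heavy work, executed in the goroutine pool
		return true
	}, func(err error) {
		if err != nil {
			// The work never ran (for example the task channel was full); retry or log here.
			fmt.Println("async task rejected:", err)
			return
		}
		// normal completion path, running on the service's own goroutine
	})
}
```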
Chapter 7: configuring service discovery
----------------------------------------

By default the origin engine works out which Services every node has by reading the configuration of all nodes. The engine also supports dynamic service discovery: the built-in DiscoveryMaster service acts as the centre, and DiscoveryClient asks the DiscoveryMaster for the node and service information of the whole origin network. See those two services for the implementation details. To use it, add the following to the cluster configuration:

```
{
"MasterDiscoveryNode": [{

@@ -727,8 +840,8 @@ By default the origin engine works out which Services every node has
"ListenAddr": "127.0.0.1:8801",
"MaxRpcParamLen": 409600
}],

"NodeList": [{
"NodeId": 1,
"ListenAddr": "127.0.0.1:8801",

@@ -741,6 +854,7 @@ By default the origin engine works out which Services every node has
}]
}
```

Two new fields appear here: MasterDiscoveryNode and DiscoveryService.

MasterDiscoveryNode configures the node with Id 1 as a service-discovery Master listening on 127.0.0.1:8801. Node 2 is also a discovery Master, except that it additionally has "NeighborService":["HttpGateService"]. When NeighborService lists concrete services, that node acts as a neighbour Master: the currently running node only pulls the HttpGateService services from it and does not push its own public services up to it, so the neighbour relationship is one-way.

@@ -748,14 +862,13 @@ MasterDiscoveryNode configures the node with Id 1 as a service-discovery Master
NeighborService is useful when there are several Master-centred networks and services have to be discovered across them.
DiscoveryService filters for the TestService8 service in the origin network; note that if DiscoveryService is not configured, no filtering takes place.

Chapter 8: using HttpService
----------------------------

HttpService is the engine's built-in HTTP service, covering the usual GET and POST handling and URL routing.

simple_http/TestHttpService.go:

```
package simple_http

@@ -825,15 +938,16 @@ func (slf *TestHttpService) HttpPost(session *sysservice.HttpSession){
}

```

Note: main.go must add import _ "orginserver/simple_service", and the service must be added to ServiceList in config/cluster/cluster.json.

Chapter 9: using the TcpService service
---------------------------------------

TcpService is the engine's built-in TCP service. Custom message formats are supported by re-implementing the network.Processor interface; a processor for the most common format, protobuf, is already built in.

simple_tcp/TestTcpService.go:

```
package simple_tcp

@@ -901,9 +1015,9 @@ func (slf *TestTcpService) OnRequest (clientid uint64,msg proto.Message){
}
```

Chapter 10: other system modules
--------------------------------

* sysservice/wsservice.go: WebSocket support, used much like TcpService.
* sysmodule/DBModule.go: MySQL database access.
* sysmodule/RedisModule.go: Redis access.

@@ -912,9 +1026,9 @@ func (slf *TestTcpService) OnRequest (clientid uint64,msg proto.Message){
* util: common utilities such as uuid, hash, md5 and goroutine wrappers.
* https://github.com/duanhf2012/originservice: additional services live in this project; it currently contains a wrapper for Firebase push.

Notes:
------

**If you find the project useful, please star it. Thanks!**

**You are welcome to join the origin server development QQ group 168306674; I will answer any questions promptly.**

@@ -924,6 +1038,7 @@ func (slf *TestTcpService) OnRequest (clientid uint64,msg proto.Message){
[The server is maintained by an individual; if this project helps you, you can click here to donate. Thank you!](http://www.cppblog.com/images/cppblog_com/API/21416/r_pay.jpg "Thanks!")

Special thanks to the following sponsors:

```
咕咕兽
_
@@ -26,7 +26,8 @@ type NodeInfo struct {
|
|||||||
Private bool
|
Private bool
|
||||||
ListenAddr string
|
ListenAddr string
|
||||||
MaxRpcParamLen uint32 //最大Rpc参数长度
|
MaxRpcParamLen uint32 //最大Rpc参数长度
|
||||||
ServiceList []string //所有的服务列表
|
CompressBytesLen int //超过字节进行压缩的长度
|
||||||
|
ServiceList []string //所有的有序服务列表
|
||||||
PublicServiceList []string //对外公开的服务列表
|
PublicServiceList []string //对外公开的服务列表
|
||||||
DiscoveryService []string //筛选发现的服务,如果不配置,不进行筛选
|
DiscoveryService []string //筛选发现的服务,如果不配置,不进行筛选
|
||||||
NeighborService []string
|
NeighborService []string
|
||||||
@@ -73,7 +74,7 @@ func SetServiceDiscovery(serviceDiscovery IServiceDiscovery) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (cls *Cluster) Start() {
|
func (cls *Cluster) Start() {
|
||||||
cls.rpcServer.Start(cls.localNodeInfo.ListenAddr, cls.localNodeInfo.MaxRpcParamLen)
|
cls.rpcServer.Start(cls.localNodeInfo.ListenAddr, cls.localNodeInfo.MaxRpcParamLen,cls.localNodeInfo.CompressBytesLen)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (cls *Cluster) Stop() {
|
func (cls *Cluster) Stop() {
|
||||||
@@ -110,15 +111,13 @@ func (cls *Cluster) DelNode(nodeId int, immediately bool) {
|
|||||||
break
|
break
|
||||||
}
|
}
|
||||||
|
|
||||||
rpc.client.Lock()
|
|
||||||
//正在连接中不主动断开,只断开没有连接中的
|
//正在连接中不主动断开,只断开没有连接中的
|
||||||
if rpc.client.IsConnected() {
|
if rpc.client.IsConnected() {
|
||||||
nodeInfo.status = Discard
|
nodeInfo.status = Discard
|
||||||
rpc.client.Unlock()
|
|
||||||
log.SRelease("Discard node ", nodeInfo.NodeId, " ", nodeInfo.ListenAddr)
|
log.SRelease("Discard node ", nodeInfo.NodeId, " ", nodeInfo.ListenAddr)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
rpc.client.Unlock()
|
|
||||||
break
|
break
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -194,20 +193,17 @@ func (cls *Cluster) serviceDiscoverySetNodeInfo(nodeInfo *NodeInfo) {
|
|||||||
if _, rpcInfoOK := cls.mapRpc[nodeInfo.NodeId]; rpcInfoOK == true {
|
if _, rpcInfoOK := cls.mapRpc[nodeInfo.NodeId]; rpcInfoOK == true {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
rpcInfo := NodeRpcInfo{}
|
rpcInfo := NodeRpcInfo{}
|
||||||
rpcInfo.nodeInfo = *nodeInfo
|
rpcInfo.nodeInfo = *nodeInfo
|
||||||
rpcInfo.client = &rpc.Client{}
|
rpcInfo.client =rpc.NewRClient(nodeInfo.NodeId, nodeInfo.ListenAddr, nodeInfo.MaxRpcParamLen,cls.localNodeInfo.CompressBytesLen,cls.triggerRpcEvent)
|
||||||
rpcInfo.client.TriggerRpcEvent = cls.triggerRpcEvent
|
|
||||||
rpcInfo.client.Connect(nodeInfo.NodeId, nodeInfo.ListenAddr, nodeInfo.MaxRpcParamLen)
|
|
||||||
cls.mapRpc[nodeInfo.NodeId] = rpcInfo
|
cls.mapRpc[nodeInfo.NodeId] = rpcInfo
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func (cls *Cluster) buildLocalRpc() {
|
func (cls *Cluster) buildLocalRpc() {
|
||||||
rpcInfo := NodeRpcInfo{}
|
rpcInfo := NodeRpcInfo{}
|
||||||
rpcInfo.nodeInfo = cls.localNodeInfo
|
rpcInfo.nodeInfo = cls.localNodeInfo
|
||||||
rpcInfo.client = &rpc.Client{}
|
rpcInfo.client = rpc.NewLClient(rpcInfo.nodeInfo.NodeId)
|
||||||
rpcInfo.client.Connect(rpcInfo.nodeInfo.NodeId, "", 0)
|
|
||||||
|
|
||||||
cls.mapRpc[cls.localNodeInfo.NodeId] = rpcInfo
|
cls.mapRpc[cls.localNodeInfo.NodeId] = rpcInfo
|
||||||
}
|
}
|
||||||
@@ -253,8 +249,9 @@ func (cls *Cluster) checkDynamicDiscovery(localNodeId int) (bool, bool) {
|
|||||||
return localMaster, hasMaster
|
return localMaster, hasMaster
|
||||||
}
|
}
|
||||||
|
|
||||||
func (cls *Cluster) appendService(serviceName string, bPublicService bool) {
|
func (cls *Cluster) AddDynamicDiscoveryService(serviceName string, bPublicService bool) {
|
||||||
cls.localNodeInfo.ServiceList = append(cls.localNodeInfo.ServiceList, serviceName)
|
addServiceList := append([]string{},serviceName)
|
||||||
|
cls.localNodeInfo.ServiceList = append(addServiceList,cls.localNodeInfo.ServiceList...)
|
||||||
if bPublicService {
|
if bPublicService {
|
||||||
cls.localNodeInfo.PublicServiceList = append(cls.localNodeInfo.PublicServiceList, serviceName)
|
cls.localNodeInfo.PublicServiceList = append(cls.localNodeInfo.PublicServiceList, serviceName)
|
||||||
}
|
}
|
||||||
@@ -298,11 +295,10 @@ func (cls *Cluster) SetupServiceDiscovery(localNodeId int, setupServiceFun Setup
|
|||||||
|
|
||||||
//2.如果为动态服务发现安装本地发现服务
|
//2.如果为动态服务发现安装本地发现服务
|
||||||
cls.serviceDiscovery = getDynamicDiscovery()
|
cls.serviceDiscovery = getDynamicDiscovery()
|
||||||
|
cls.AddDynamicDiscoveryService(DynamicDiscoveryClientName, true)
|
||||||
if localMaster == true {
|
if localMaster == true {
|
||||||
cls.appendService(DynamicDiscoveryMasterName, false)
|
cls.AddDynamicDiscoveryService(DynamicDiscoveryMasterName, false)
|
||||||
}
|
}
|
||||||
cls.appendService(DynamicDiscoveryClientName, true)
|
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func (cls *Cluster) FindRpcHandler(serviceName string) rpc.IRpcHandler {
|
func (cls *Cluster) FindRpcHandler(serviceName string) rpc.IRpcHandler {
|
||||||
@@ -358,10 +354,10 @@ func (cls *Cluster) IsNodeConnected(nodeId int) bool {
|
|||||||
return pClient != nil && pClient.IsConnected()
|
return pClient != nil && pClient.IsConnected()
|
||||||
}
|
}
|
||||||
|
|
||||||
func (cls *Cluster) triggerRpcEvent(bConnect bool, clientSeq uint32, nodeId int) {
|
func (cls *Cluster) triggerRpcEvent(bConnect bool, clientId uint32, nodeId int) {
|
||||||
cls.locker.Lock()
|
cls.locker.Lock()
|
||||||
nodeInfo, ok := cls.mapRpc[nodeId]
|
nodeInfo, ok := cls.mapRpc[nodeId]
|
||||||
if ok == false || nodeInfo.client == nil || nodeInfo.client.GetClientSeq() != clientSeq {
|
if ok == false || nodeInfo.client == nil || nodeInfo.client.GetClientId() != clientId {
|
||||||
cls.locker.Unlock()
|
cls.locker.Unlock()
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -383,7 +379,6 @@ func (cls *Cluster) triggerRpcEvent(bConnect bool, clientSeq uint32, nodeId int)
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
func (cls *Cluster) TriggerDiscoveryEvent(bDiscovery bool, nodeId int, serviceName []string) {
|
func (cls *Cluster) TriggerDiscoveryEvent(bDiscovery bool, nodeId int, serviceName []string) {
|
||||||
cls.rpcEventLocker.Lock()
|
cls.rpcEventLocker.Lock()
|
||||||
defer cls.rpcEventLocker.Unlock()
|
defer cls.rpcEventLocker.Unlock()
|
||||||
|
|||||||
@@ -5,6 +5,8 @@ import (
|
|||||||
"github.com/duanhf2012/origin/log"
|
"github.com/duanhf2012/origin/log"
|
||||||
"github.com/duanhf2012/origin/rpc"
|
"github.com/duanhf2012/origin/rpc"
|
||||||
"github.com/duanhf2012/origin/service"
|
"github.com/duanhf2012/origin/service"
|
||||||
|
"time"
|
||||||
|
"github.com/duanhf2012/origin/util/timer"
|
||||||
)
|
)
|
||||||
|
|
||||||
const DynamicDiscoveryMasterName = "DiscoveryMaster"
|
const DynamicDiscoveryMasterName = "DiscoveryMaster"
|
||||||
@@ -60,6 +62,21 @@ func (ds *DynamicDiscoveryMaster) addNodeInfo(nodeInfo *rpc.NodeInfo) {
|
|||||||
ds.nodeInfo = append(ds.nodeInfo, nodeInfo)
|
ds.nodeInfo = append(ds.nodeInfo, nodeInfo)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (ds *DynamicDiscoveryMaster) removeNodeInfo(nodeId int32) {
|
||||||
|
if _,ok:= ds.mapNodeInfo[nodeId];ok == false {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
for i:=0;i<len(ds.nodeInfo);i++ {
|
||||||
|
if ds.nodeInfo[i].NodeId == nodeId {
|
||||||
|
ds.nodeInfo = append(ds.nodeInfo[:i],ds.nodeInfo[i+1:]...)
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
delete(ds.mapNodeInfo,nodeId)
|
||||||
|
}
|
||||||
|
|
||||||
func (ds *DynamicDiscoveryMaster) OnInit() error {
|
func (ds *DynamicDiscoveryMaster) OnInit() error {
|
||||||
ds.mapNodeInfo = make(map[int32]struct{}, 20)
|
ds.mapNodeInfo = make(map[int32]struct{}, 20)
|
||||||
ds.RegRpcListener(ds)
|
ds.RegRpcListener(ds)
|
||||||
@@ -103,6 +120,8 @@ func (ds *DynamicDiscoveryMaster) OnNodeDisconnect(nodeId int) {
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
ds.removeNodeInfo(int32(nodeId))
|
||||||
|
|
||||||
var notifyDiscover rpc.SubscribeDiscoverNotify
|
var notifyDiscover rpc.SubscribeDiscoverNotify
|
||||||
notifyDiscover.MasterNodeId = int32(cluster.GetLocalNodeInfo().NodeId)
|
notifyDiscover.MasterNodeId = int32(cluster.GetLocalNodeInfo().NodeId)
|
||||||
notifyDiscover.DelNodeId = int32(nodeId)
|
notifyDiscover.DelNodeId = int32(nodeId)
|
||||||
@@ -324,6 +343,10 @@ func (dc *DynamicDiscoveryClient) isDiscoverNode(nodeId int) bool {
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (dc *DynamicDiscoveryClient) OnNodeConnected(nodeId int) {
|
func (dc *DynamicDiscoveryClient) OnNodeConnected(nodeId int) {
|
||||||
|
dc.regServiceDiscover(nodeId)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (dc *DynamicDiscoveryClient) regServiceDiscover(nodeId int){
|
||||||
nodeInfo := cluster.GetMasterDiscoveryNodeInfo(nodeId)
|
nodeInfo := cluster.GetMasterDiscoveryNodeInfo(nodeId)
|
||||||
if nodeInfo == nil {
|
if nodeInfo == nil {
|
||||||
return
|
return
|
||||||
@@ -347,6 +370,10 @@ func (dc *DynamicDiscoveryClient) OnNodeConnected(nodeId int) {
|
|||||||
err := dc.AsyncCallNode(nodeId, RegServiceDiscover, &req, func(res *rpc.Empty, err error) {
|
err := dc.AsyncCallNode(nodeId, RegServiceDiscover, &req, func(res *rpc.Empty, err error) {
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.SError("call ", RegServiceDiscover, " is fail :", err.Error())
|
log.SError("call ", RegServiceDiscover, " is fail :", err.Error())
|
||||||
|
dc.AfterFunc(time.Second*3, func(timer *timer.Timer) {
|
||||||
|
dc.regServiceDiscover(nodeId)
|
||||||
|
})
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
})
|
})
|
||||||
|
|||||||
concurrent/concurrent.go (new file, 93 lines)
@@ -0,0 +1,93 @@
|
|||||||
|
package concurrent
|
||||||
|
|
||||||
|
import (
|
||||||
|
"errors"
|
||||||
|
"runtime"
|
||||||
|
|
||||||
|
"github.com/duanhf2012/origin/log"
|
||||||
|
)
|
||||||
|
|
||||||
|
const defaultMaxTaskChannelNum = 1000000
|
||||||
|
|
||||||
|
type IConcurrent interface {
|
||||||
|
OpenConcurrentByNumCPU(cpuMul float32)
|
||||||
|
OpenConcurrent(minGoroutineNum int32, maxGoroutineNum int32, maxTaskChannelNum int)
|
||||||
|
AsyncDoByQueue(queueId int64, fn func() bool, cb func(err error))
|
||||||
|
AsyncDo(f func() bool, cb func(err error))
|
||||||
|
}
|
||||||
|
|
||||||
|
type Concurrent struct {
|
||||||
|
dispatch
|
||||||
|
|
||||||
|
tasks chan task
|
||||||
|
cbChannel chan func(error)
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
cpuMul 表示cpu的倍数
|
||||||
|
建议:(1)cpu密集型 使用1 (2)i/o密集型使用2或者更高
|
||||||
|
*/
|
||||||
|
func (c *Concurrent) OpenConcurrentByNumCPU(cpuNumMul float32) {
|
||||||
|
goroutineNum := int32(float32(runtime.NumCPU())*cpuNumMul + 1)
|
||||||
|
c.OpenConcurrent(goroutineNum, goroutineNum, defaultMaxTaskChannelNum)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *Concurrent) OpenConcurrent(minGoroutineNum int32, maxGoroutineNum int32, maxTaskChannelNum int) {
|
||||||
|
c.tasks = make(chan task, maxTaskChannelNum)
|
||||||
|
c.cbChannel = make(chan func(error), maxTaskChannelNum)
|
||||||
|
|
||||||
|
//打开dispach
|
||||||
|
c.dispatch.open(minGoroutineNum, maxGoroutineNum, c.tasks, c.cbChannel)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *Concurrent) AsyncDo(f func() bool, cb func(err error)) {
|
||||||
|
c.AsyncDoByQueue(0, f, cb)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *Concurrent) AsyncDoByQueue(queueId int64, fn func() bool, cb func(err error)) {
|
||||||
|
if cap(c.tasks) == 0 {
|
||||||
|
panic("not open concurrent")
|
||||||
|
}
|
||||||
|
|
||||||
|
if fn == nil && cb == nil {
|
||||||
|
log.SStack("fn and cb is nil")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if fn == nil {
|
||||||
|
c.pushAsyncDoCallbackEvent(cb)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if queueId != 0 {
|
||||||
|
queueId = queueId % maxTaskQueueSessionId+1
|
||||||
|
}
|
||||||
|
|
||||||
|
select {
|
||||||
|
case c.tasks <- task{queueId, fn, cb}:
|
||||||
|
default:
|
||||||
|
log.SError("tasks channel is full")
|
||||||
|
if cb != nil {
|
||||||
|
c.pushAsyncDoCallbackEvent(func(err error) {
|
||||||
|
cb(errors.New("tasks channel is full"))
|
||||||
|
})
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *Concurrent) Close() {
|
||||||
|
if cap(c.tasks) == 0 {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
log.SRelease("wait close concurrent")
|
||||||
|
|
||||||
|
c.dispatch.close()
|
||||||
|
|
||||||
|
log.SRelease("concurrent has successfully exited")
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *Concurrent) GetCallBackChannel() chan func(error) {
|
||||||
|
return c.cbChannel
|
||||||
|
}
|
||||||
concurrent/dispatch.go (new file, 196 lines)
@@ -0,0 +1,196 @@
|
|||||||
|
package concurrent
|
||||||
|
|
||||||
|
import (
|
||||||
|
"sync"
|
||||||
|
"sync/atomic"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"fmt"
|
||||||
|
"runtime"
|
||||||
|
|
||||||
|
"github.com/duanhf2012/origin/log"
|
||||||
|
"github.com/duanhf2012/origin/util/queue"
|
||||||
|
)
|
||||||
|
|
||||||
|
var idleTimeout = int64(2 * time.Second)
|
||||||
|
const maxTaskQueueSessionId = 10000
|
||||||
|
|
||||||
|
type dispatch struct {
|
||||||
|
minConcurrentNum int32
|
||||||
|
maxConcurrentNum int32
|
||||||
|
|
||||||
|
queueIdChannel chan int64
|
||||||
|
workerQueue chan task
|
||||||
|
tasks chan task
|
||||||
|
idle bool
|
||||||
|
workerNum int32
|
||||||
|
cbChannel chan func(error)
|
||||||
|
|
||||||
|
mapTaskQueueSession map[int64]*queue.Deque[task]
|
||||||
|
|
||||||
|
waitWorker sync.WaitGroup
|
||||||
|
waitDispatch sync.WaitGroup
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *dispatch) open(minGoroutineNum int32, maxGoroutineNum int32, tasks chan task, cbChannel chan func(error)) {
|
||||||
|
d.minConcurrentNum = minGoroutineNum
|
||||||
|
d.maxConcurrentNum = maxGoroutineNum
|
||||||
|
d.tasks = tasks
|
||||||
|
d.mapTaskQueueSession = make(map[int64]*queue.Deque[task], maxTaskQueueSessionId)
|
||||||
|
d.workerQueue = make(chan task)
|
||||||
|
d.cbChannel = cbChannel
|
||||||
|
d.queueIdChannel = make(chan int64, cap(tasks))
|
||||||
|
|
||||||
|
d.waitDispatch.Add(1)
|
||||||
|
go d.run()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *dispatch) run() {
|
||||||
|
defer d.waitDispatch.Done()
|
||||||
|
timeout := time.NewTimer(time.Duration(atomic.LoadInt64(&idleTimeout)))
|
||||||
|
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case queueId := <-d.queueIdChannel:
|
||||||
|
d.processqueueEvent(queueId)
|
||||||
|
default:
|
||||||
|
select {
|
||||||
|
case t, ok := <-d.tasks:
|
||||||
|
if ok == false {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
d.processTask(&t)
|
||||||
|
case queueId := <-d.queueIdChannel:
|
||||||
|
d.processqueueEvent(queueId)
|
||||||
|
case <-timeout.C:
|
||||||
|
d.processTimer()
|
||||||
|
if atomic.LoadInt32(&d.minConcurrentNum) == -1 && len(d.tasks) == 0 {
|
||||||
|
atomic.StoreInt64(&idleTimeout,int64(time.Millisecond * 10))
|
||||||
|
}
|
||||||
|
timeout.Reset(time.Duration(atomic.LoadInt64(&idleTimeout)))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if atomic.LoadInt32(&d.minConcurrentNum) == -1 && d.workerNum == 0 {
|
||||||
|
d.waitWorker.Wait()
|
||||||
|
d.cbChannel <- nil
|
||||||
|
return
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *dispatch) processTimer() {
|
||||||
|
if d.idle == true && d.workerNum > atomic.LoadInt32(&d.minConcurrentNum) {
|
||||||
|
d.processIdle()
|
||||||
|
}
|
||||||
|
|
||||||
|
d.idle = true
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *dispatch) processqueueEvent(queueId int64) {
|
||||||
|
d.idle = false
|
||||||
|
|
||||||
|
queueSession := d.mapTaskQueueSession[queueId]
|
||||||
|
if queueSession == nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
queueSession.PopFront()
|
||||||
|
if queueSession.Len() == 0 {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
t := queueSession.Front()
|
||||||
|
d.executeTask(&t)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *dispatch) executeTask(t *task) {
|
||||||
|
select {
|
||||||
|
case d.workerQueue <- *t:
|
||||||
|
return
|
||||||
|
default:
|
||||||
|
if d.workerNum < d.maxConcurrentNum {
|
||||||
|
var work worker
|
||||||
|
work.start(&d.waitWorker, t, d)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
d.workerQueue <- *t
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *dispatch) processTask(t *task) {
|
||||||
|
d.idle = false
|
||||||
|
|
||||||
|
//处理有排队任务
|
||||||
|
if t.queueId != 0 {
|
||||||
|
queueSession := d.mapTaskQueueSession[t.queueId]
|
||||||
|
if queueSession == nil {
|
||||||
|
queueSession = &queue.Deque[task]{}
|
||||||
|
d.mapTaskQueueSession[t.queueId] = queueSession
|
||||||
|
}
|
||||||
|
|
||||||
|
//没有正在执行的任务,则直接执行
|
||||||
|
if queueSession.Len() == 0 {
|
||||||
|
d.executeTask(t)
|
||||||
|
}
|
||||||
|
|
||||||
|
queueSession.PushBack(*t)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
//普通任务
|
||||||
|
d.executeTask(t)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *dispatch) processIdle() {
|
||||||
|
select {
|
||||||
|
case d.workerQueue <- task{}:
|
||||||
|
d.workerNum--
|
||||||
|
default:
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *dispatch) pushQueueTaskFinishEvent(queueId int64) {
|
||||||
|
d.queueIdChannel <- queueId
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *dispatch) pushAsyncDoCallbackEvent(cb func(err error)) {
|
||||||
|
if cb == nil {
|
||||||
|
//不需要回调的情况
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
c.cbChannel <- cb
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *dispatch) close() {
|
||||||
|
atomic.StoreInt32(&d.minConcurrentNum, -1)
|
||||||
|
|
||||||
|
breakFor:
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case cb := <-d.cbChannel:
|
||||||
|
if cb == nil {
|
||||||
|
break breakFor
|
||||||
|
}
|
||||||
|
cb(nil)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
d.waitDispatch.Wait()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *dispatch) DoCallback(cb func(err error)) {
|
||||||
|
defer func() {
|
||||||
|
if r := recover(); r != nil {
|
||||||
|
buf := make([]byte, 4096)
|
||||||
|
l := runtime.Stack(buf, false)
|
||||||
|
errString := fmt.Sprint(r)
|
||||||
|
|
||||||
|
log.SError("core dump info[", errString, "]\n", string(buf[:l]))
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
|
||||||
|
cb(nil)
|
||||||
|
}
|
||||||
concurrent/worker.go (new file, 79 lines)
@@ -0,0 +1,79 @@
|
|||||||
|
package concurrent
|
||||||
|
|
||||||
|
import (
|
||||||
|
"sync"
|
||||||
|
|
||||||
|
"errors"
|
||||||
|
"fmt"
|
||||||
|
"runtime"
|
||||||
|
|
||||||
|
"github.com/duanhf2012/origin/log"
|
||||||
|
)
|
||||||
|
|
||||||
|
type task struct {
|
||||||
|
queueId int64
|
||||||
|
fn func() bool
|
||||||
|
cb func(err error)
|
||||||
|
}
|
||||||
|
|
||||||
|
type worker struct {
|
||||||
|
*dispatch
|
||||||
|
}
|
||||||
|
|
||||||
|
func (t *task) isExistTask() bool {
|
||||||
|
return t.fn == nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (w *worker) start(waitGroup *sync.WaitGroup, t *task, d *dispatch) {
|
||||||
|
w.dispatch = d
|
||||||
|
d.workerNum += 1
|
||||||
|
waitGroup.Add(1)
|
||||||
|
go w.run(waitGroup, *t)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (w *worker) run(waitGroup *sync.WaitGroup, t task) {
|
||||||
|
defer waitGroup.Done()
|
||||||
|
|
||||||
|
w.exec(&t)
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case tw := <-w.workerQueue:
|
||||||
|
if tw.isExistTask() {
|
||||||
|
//exit goroutine
|
||||||
|
log.SRelease("worker goroutine exit")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
w.exec(&tw)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (w *worker) exec(t *task) {
|
||||||
|
defer func() {
|
||||||
|
if r := recover(); r != nil {
|
||||||
|
buf := make([]byte, 4096)
|
||||||
|
l := runtime.Stack(buf, false)
|
||||||
|
errString := fmt.Sprint(r)
|
||||||
|
|
||||||
|
cb := t.cb
|
||||||
|
t.cb = func(err error) {
|
||||||
|
cb(errors.New(errString))
|
||||||
|
}
|
||||||
|
|
||||||
|
w.endCallFun(true,t)
|
||||||
|
log.SError("core dump info[", errString, "]\n", string(buf[:l]))
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
|
||||||
|
w.endCallFun(t.fn(),t)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (w *worker) endCallFun(isDocallBack bool,t *task) {
|
||||||
|
if isDocallBack {
|
||||||
|
w.pushAsyncDoCallbackEvent(t.cb)
|
||||||
|
}
|
||||||
|
|
||||||
|
if t.queueId != 0 {
|
||||||
|
w.pushQueueTaskFinishEvent(t.queueId)
|
||||||
|
}
|
||||||
|
}
|
||||||
@@ -7,7 +7,6 @@ import (
|
|||||||
"sync"
|
"sync"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
//事件接受器
|
//事件接受器
|
||||||
type EventCallBack func(event IEvent)
|
type EventCallBack func(event IEvent)
|
||||||
|
|
||||||
@@ -229,7 +228,6 @@ func (processor *EventProcessor) EventHandler(ev IEvent) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
func (processor *EventProcessor) castEvent(event IEvent){
|
func (processor *EventProcessor) castEvent(event IEvent){
|
||||||
if processor.mapListenerEvent == nil {
|
if processor.mapListenerEvent == nil {
|
||||||
log.SError("mapListenerEvent not init!")
|
log.SError("mapListenerEvent not init!")
|
||||||
@@ -246,3 +244,4 @@ func (processor *EventProcessor) castEvent(event IEvent){
|
|||||||
proc.PushEvent(event)
|
proc.PushEvent(event)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
event/eventpool.go (new file, 24 lines)
@@ -0,0 +1,24 @@
|
|||||||
|
package event
|
||||||
|
|
||||||
|
import "github.com/duanhf2012/origin/util/sync"
|
||||||
|
|
||||||
|
// eventPool的内存池,缓存Event
|
||||||
|
const defaultMaxEventChannelNum = 2000000
|
||||||
|
|
||||||
|
var eventPool = sync.NewPoolEx(make(chan sync.IPoolData, defaultMaxEventChannelNum), func() sync.IPoolData {
|
||||||
|
return &Event{}
|
||||||
|
})
|
||||||
|
|
||||||
|
func NewEvent() *Event{
|
||||||
|
return eventPool.Get().(*Event)
|
||||||
|
}
|
||||||
|
|
||||||
|
func DeleteEvent(event IEvent){
|
||||||
|
eventPool.Put(event.(sync.IPoolData))
|
||||||
|
}
|
||||||
|
|
||||||
|
func SetEventPoolSize(eventPoolSize int){
|
||||||
|
eventPool = sync.NewPoolEx(make(chan sync.IPoolData, eventPoolSize), func() sync.IPoolData {
|
||||||
|
return &Event{}
|
||||||
|
})
|
||||||
|
}
|
||||||
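The new event/eventpool.go adds a pooled allocator for Event objects so that high-frequency events do not allocate each time. A hedged usage sketch (the helper names and package are illustrative; the event's type and payload are filled in the same way as in the event chapter of the README):

```go
package simple_event

import "github.com/duanhf2012/origin/event"

func sendPooledEvent() {
	// Take an *Event from the pool instead of allocating a fresh one,
	// and hand it back once it has been dispatched.
	ev := event.NewEvent()
	defer event.DeleteEvent(ev)
	// ... fill in the event's type and data and push it to the target handler ...
}

func init() {
	// Optional: resize the pool, typically once at start-up before heavy load.
	event.SetEventPoolSize(500000)
}
```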
@@ -12,7 +12,11 @@ const (
|
|||||||
Sys_Event_WebSocket EventType = -5
|
Sys_Event_WebSocket EventType = -5
|
||||||
Sys_Event_Node_Event EventType = -6
|
Sys_Event_Node_Event EventType = -6
|
||||||
Sys_Event_DiscoverService EventType = -7
|
Sys_Event_DiscoverService EventType = -7
|
||||||
|
Sys_Event_DiscardGoroutine EventType = -8
|
||||||
|
Sys_Event_QueueTaskFinish EventType = -9
|
||||||
|
|
||||||
Sys_Event_User_Define EventType = 1
|
Sys_Event_User_Define EventType = 1
|
||||||
|
|
||||||
|
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|||||||
go.mod (4 lines changed)
@@ -23,8 +23,8 @@ require (
|
|||||||
github.com/xdg-go/scram v1.0.2 // indirect
|
github.com/xdg-go/scram v1.0.2 // indirect
|
||||||
github.com/xdg-go/stringprep v1.0.2 // indirect
|
github.com/xdg-go/stringprep v1.0.2 // indirect
|
||||||
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d // indirect
|
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d // indirect
|
||||||
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f // indirect
|
golang.org/x/crypto v0.1.0 // indirect
|
||||||
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 // indirect
|
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 // indirect
|
||||||
golang.org/x/text v0.3.6 // indirect
|
golang.org/x/text v0.4.0 // indirect
|
||||||
gopkg.in/yaml.v2 v2.4.0 // indirect
|
gopkg.in/yaml.v2 v2.4.0 // indirect
|
||||||
)
|
)
|
||||||
|
|||||||
go.sum (7 lines changed)
@@ -58,8 +58,9 @@ go.mongodb.org/mongo-driver v1.9.1/go.mod h1:0sQWfOeY63QTntERDJJ/0SuKK0T1uVSgKCu
|
|||||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||||
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||||
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f h1:aZp0e2vLN4MToVqnjNEYEtrEA8RH8U8FN1CU7JgqsPU=
|
|
||||||
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
|
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
|
||||||
|
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
|
||||||
|
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
|
||||||
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||||
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||||
@@ -79,8 +80,8 @@ golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXR
|
|||||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||||
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||||
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
|
golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
|
||||||
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
|
||||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||||
golang.org/x/tools v0.0.0-20190531172133-b3315ee88b7d/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
golang.org/x/tools v0.0.0-20190531172133-b3315ee88b7d/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||||
|
|||||||
@@ -68,6 +68,11 @@ func (pbProcessor *PBProcessor) MsgRoute(clientId uint64, msg interface{}) error
|
|||||||
// must goroutine safe
|
// must goroutine safe
|
||||||
func (pbProcessor *PBProcessor) Unmarshal(clientId uint64, data []byte) (interface{}, error) {
|
func (pbProcessor *PBProcessor) Unmarshal(clientId uint64, data []byte) (interface{}, error) {
|
||||||
defer pbProcessor.ReleaseByteSlice(data)
|
defer pbProcessor.ReleaseByteSlice(data)
|
||||||
|
return pbProcessor.UnmarshalWithOutRelease(clientId, data)
|
||||||
|
}
|
||||||
|
|
||||||
|
// unmarshal but not release data
|
||||||
|
func (pbProcessor *PBProcessor) UnmarshalWithOutRelease(clientId uint64, data []byte) (interface{}, error) {
|
||||||
var msgType uint16
|
var msgType uint16
|
||||||
if pbProcessor.LittleEndian == true {
|
if pbProcessor.LittleEndian == true {
|
||||||
msgType = binary.LittleEndian.Uint16(data[:2])
|
msgType = binary.LittleEndian.Uint16(data[:2])
|
||||||
|
|||||||
@@ -78,7 +78,6 @@ func (pbRawProcessor *PBRawProcessor) SetRawMsgHandler(handle RawMessageHandler)
|
|||||||
func (pbRawProcessor *PBRawProcessor) MakeRawMsg(msgType uint16,msg []byte,pbRawPackInfo *PBRawPackInfo) {
|
func (pbRawProcessor *PBRawProcessor) MakeRawMsg(msgType uint16,msg []byte,pbRawPackInfo *PBRawPackInfo) {
|
||||||
pbRawPackInfo.typ = msgType
|
pbRawPackInfo.typ = msgType
|
||||||
pbRawPackInfo.rawMsg = msg
|
pbRawPackInfo.rawMsg = msg
|
||||||
//return &PBRawPackInfo{typ:msgType,rawMsg:msg}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func (pbRawProcessor *PBRawProcessor) UnknownMsgRoute(clientId uint64,msg interface{}){
|
func (pbRawProcessor *PBRawProcessor) UnknownMsgRoute(clientId uint64,msg interface{}){
|
||||||
|
|||||||
@@ -17,17 +17,11 @@ type IProcessor interface {
|
|||||||
}
|
}
|
||||||
|
|
||||||
type IRawProcessor interface {
|
type IRawProcessor interface {
|
||||||
SetByteOrder(littleEndian bool)
|
IProcessor
|
||||||
MsgRoute(clientId uint64,msg interface{}) error
|
|
||||||
Unmarshal(clientId uint64,data []byte) (interface{}, error)
|
|
||||||
Marshal(clientId uint64,msg interface{}) ([]byte, error)
|
|
||||||
|
|
||||||
|
SetByteOrder(littleEndian bool)
|
||||||
SetRawMsgHandler(handle RawMessageHandler)
|
SetRawMsgHandler(handle RawMessageHandler)
|
||||||
MakeRawMsg(msgType uint16,msg []byte,pbRawPackInfo *PBRawPackInfo)
|
MakeRawMsg(msgType uint16,msg []byte,pbRawPackInfo *PBRawPackInfo)
|
||||||
UnknownMsgRoute(clientId uint64,msg interface{})
|
|
||||||
ConnectedRoute(clientId uint64)
|
|
||||||
DisConnectedRoute(clientId uint64)
|
|
||||||
|
|
||||||
SetUnknownMsgHandler(unknownMessageHandler UnknownRawMessageHandler)
|
SetUnknownMsgHandler(unknownMessageHandler UnknownRawMessageHandler)
|
||||||
SetConnectedHandler(connectHandler RawConnectHandler)
|
SetConnectedHandler(connectHandler RawConnectHandler)
|
||||||
SetDisConnectedHandler(disconnectHandler RawConnectHandler)
|
SetDisConnectedHandler(disconnectHandler RawConnectHandler)
|
||||||
|
|||||||
@@ -16,7 +16,7 @@ type memAreaPool struct {
|
|||||||
pool []sync.Pool
|
pool []sync.Pool
|
||||||
}
|
}
|
||||||
|
|
||||||
var memAreaPoolList = [3]*memAreaPool{&memAreaPool{minAreaValue: 1, maxAreaValue: 4096, growthValue: 512}, &memAreaPool{minAreaValue: 4097, maxAreaValue: 40960, growthValue: 4096}, &memAreaPool{minAreaValue: 40961, maxAreaValue: 417792, growthValue: 16384}}
|
var memAreaPoolList = [4]*memAreaPool{&memAreaPool{minAreaValue: 1, maxAreaValue: 4096, growthValue: 512}, &memAreaPool{minAreaValue: 4097, maxAreaValue: 40960, growthValue: 4096}, &memAreaPool{minAreaValue: 40961, maxAreaValue: 417792, growthValue: 16384}, &memAreaPool{minAreaValue: 417793, maxAreaValue: 1925120, growthValue: 65536}}
|
||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
for i := 0; i < len(memAreaPoolList); i++ {
|
for i := 0; i < len(memAreaPoolList); i++ {
|
||||||
@@ -34,7 +34,6 @@ func (areaPool *memAreaPool) makePool() {
|
|||||||
for i := 0; i < poolLen; i++ {
|
for i := 0; i < poolLen; i++ {
|
||||||
memSize := (areaPool.minAreaValue - 1) + (i+1)*areaPool.growthValue
|
memSize := (areaPool.minAreaValue - 1) + (i+1)*areaPool.growthValue
|
||||||
areaPool.pool[i] = sync.Pool{New: func() interface{} {
|
areaPool.pool[i] = sync.Pool{New: func() interface{} {
|
||||||
//fmt.Println("make memsize:",memSize)
|
|
||||||
return make([]byte, memSize)
|
return make([]byte, memSize)
|
||||||
}}
|
}}
|
||||||
}
|
}
|
||||||
|
|||||||
```diff
@@ -22,11 +22,7 @@ type TCPClient struct {
 	closeFlag bool
 
 	// msg parser
-	LenMsgLen int
-	MinMsgLen uint32
-	MaxMsgLen uint32
-	LittleEndian bool
-	msgParser *MsgParser
+	MsgParser
 }
 
 func (client *TCPClient) Start() {
@@ -69,14 +65,24 @@ func (client *TCPClient) init() {
 		log.SFatal("client is running")
 	}
 
+	if client.MinMsgLen == 0 {
+		client.MinMsgLen = Default_MinMsgLen
+	}
+	if client.MaxMsgLen == 0 {
+		client.MaxMsgLen = Default_MaxMsgLen
+	}
+	if client.LenMsgLen ==0 {
+		client.LenMsgLen = Default_LenMsgLen
+	}
+	maxMsgLen := client.MsgParser.getMaxMsgLen(client.LenMsgLen)
+	if client.MaxMsgLen > maxMsgLen {
+		client.MaxMsgLen = maxMsgLen
+		log.SRelease("invalid MaxMsgLen, reset to ", maxMsgLen)
+	}
 
 	client.cons = make(ConnSet)
 	client.closeFlag = false
-	// msg parser
-	msgParser := NewMsgParser()
-	msgParser.SetMsgLen(client.LenMsgLen, client.MinMsgLen, client.MaxMsgLen)
-	msgParser.SetByteOrder(client.LittleEndian)
-	client.msgParser = msgParser
+	client.MsgParser.init()
 }
 
 func (client *TCPClient) GetCloseFlag() bool{
@@ -120,7 +126,7 @@ reconnect:
 	client.cons[conn] = struct{}{}
 	client.Unlock()
 
-	tcpConn := newTCPConn(conn, client.PendingWriteNum, client.msgParser,client.WriteDeadline)
+	tcpConn := newTCPConn(conn, client.PendingWriteNum, &client.MsgParser,client.WriteDeadline)
 	agent := client.NewAgent(tcpConn)
 	agent.Run()
 
```
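Reading the TCPClient hunks together: the separately constructed `*MsgParser` is replaced by an embedded `MsgParser`, so the framing options (`LenMsgLen`, `MinMsgLen`, `MaxMsgLen`, `LittleEndian`) are now promoted fields set directly on the client. `init()` fills in the `Default_*` values when they are left at zero and clamps `MaxMsgLen` to what the chosen length header can actually express via `getMaxMsgLen`.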
```diff
@@ -1,11 +1,12 @@
 package network
 
 import (
+	"errors"
 	"github.com/duanhf2012/origin/log"
 	"net"
 	"sync"
+	"sync/atomic"
 	"time"
-	"errors"
 )
 
 type ConnSet map[net.Conn]struct{}
@@ -14,7 +15,7 @@ type TCPConn struct {
 	sync.Mutex
 	conn net.Conn
 	writeChan chan []byte
-	closeFlag bool
+	closeFlag int32
 	msgParser *MsgParser
 }
 
@@ -49,7 +50,7 @@ func newTCPConn(conn net.Conn, pendingWriteNum int, msgParser *MsgParser,writeDe
 		conn.Close()
 		tcpConn.Lock()
 		freeChannel(tcpConn)
-		tcpConn.closeFlag = true
+		atomic.StoreInt32(&tcpConn.closeFlag,1)
 		tcpConn.Unlock()
 	}()
 
@@ -60,9 +61,9 @@ func (tcpConn *TCPConn) doDestroy() {
 	tcpConn.conn.(*net.TCPConn).SetLinger(0)
 	tcpConn.conn.Close()
 
-	if !tcpConn.closeFlag {
+	if atomic.LoadInt32(&tcpConn.closeFlag)==0 {
 		close(tcpConn.writeChan)
-		tcpConn.closeFlag = true
+		atomic.StoreInt32(&tcpConn.closeFlag,1)
 	}
 }
 
@@ -76,12 +77,12 @@ func (tcpConn *TCPConn) Destroy() {
 func (tcpConn *TCPConn) Close() {
 	tcpConn.Lock()
 	defer tcpConn.Unlock()
-	if tcpConn.closeFlag {
+	if atomic.LoadInt32(&tcpConn.closeFlag)==1 {
 		return
 	}
 
 	tcpConn.doWrite(nil)
-	tcpConn.closeFlag = true
+	atomic.StoreInt32(&tcpConn.closeFlag,1)
 }
 
 func (tcpConn *TCPConn) GetRemoteIp() string {
@@ -104,7 +105,7 @@ func (tcpConn *TCPConn) doWrite(b []byte) error{
 func (tcpConn *TCPConn) Write(b []byte) error{
 	tcpConn.Lock()
 	defer tcpConn.Unlock()
-	if tcpConn.closeFlag || b == nil {
+	if atomic.LoadInt32(&tcpConn.closeFlag)==1 || b == nil {
 		tcpConn.ReleaseReadMsg(b)
 		return errors.New("conn is close")
 	}
@@ -133,14 +134,14 @@ func (tcpConn *TCPConn) ReleaseReadMsg(byteBuff []byte){
 }
 
 func (tcpConn *TCPConn) WriteMsg(args ...[]byte) error {
-	if tcpConn.closeFlag == true {
+	if atomic.LoadInt32(&tcpConn.closeFlag) == 1 {
 		return errors.New("conn is close")
 	}
 	return tcpConn.msgParser.Write(tcpConn, args...)
 }
 
 func (tcpConn *TCPConn) WriteRawMsg(args []byte) error {
-	if tcpConn.closeFlag == true {
+	if atomic.LoadInt32(&tcpConn.closeFlag) == 1 {
 		return errors.New("conn is close")
 	}
 
@@ -149,7 +150,7 @@ func (tcpConn *TCPConn) WriteRawMsg(args []byte) error {
 
 
 func (tcpConn *TCPConn) IsConnected() bool {
-	return tcpConn.closeFlag == false
+	return atomic.LoadInt32(&tcpConn.closeFlag) == 0
 }
 
 func (tcpConn *TCPConn) SetReadDeadline(d time.Duration) {
```
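The TCPConn change replaces a mutex-guarded `bool` close flag with an `int32` read and written through `sync/atomic`, so methods such as `IsConnected` and `WriteMsg` can check the flag without taking the connection lock. A minimal, self-contained sketch of the same pattern (not origin's code, just the idiom it adopts):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// conn illustrates the bool -> int32 + atomic pattern used above.
type conn struct {
	closeFlag int32
}

// Close marks the connection closed without needing a mutex.
func (c *conn) Close() { atomic.StoreInt32(&c.closeFlag, 1) }

// IsConnected can be called concurrently with Close and stays race-free.
func (c *conn) IsConnected() bool { return atomic.LoadInt32(&c.closeFlag) == 0 }

func main() {
	c := &conn{}
	fmt.Println(c.IsConnected()) // true
	c.Close()
	fmt.Println(c.IsConnected()) // false
}
```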
```diff
@@ -11,62 +11,36 @@ import (
 // | len | data |
 // --------------
 type MsgParser struct {
-	lenMsgLen int
-	minMsgLen uint32
-	maxMsgLen uint32
-	littleEndian bool
+	LenMsgLen int
+	MinMsgLen uint32
+	MaxMsgLen uint32
+	LittleEndian bool
 
 	INetMempool
 }
 
-func NewMsgParser() *MsgParser {
-	p := new(MsgParser)
-	p.lenMsgLen = 2
-	p.minMsgLen = 1
-	p.maxMsgLen = 4096
-	p.littleEndian = false
-	p.INetMempool = NewMemAreaPool()
-	return p
-}
-
-// It's dangerous to call the method on reading or writing
-func (p *MsgParser) SetMsgLen(lenMsgLen int, minMsgLen uint32, maxMsgLen uint32) {
-	if lenMsgLen == 1 || lenMsgLen == 2 || lenMsgLen == 4 {
-		p.lenMsgLen = lenMsgLen
-	}
-	if minMsgLen != 0 {
-		p.minMsgLen = minMsgLen
-	}
-	if maxMsgLen != 0 {
-		p.maxMsgLen = maxMsgLen
-	}
-
-	var max uint32
-	switch p.lenMsgLen {
+func (p *MsgParser) getMaxMsgLen(lenMsgLen int) uint32 {
+	switch p.LenMsgLen {
 	case 1:
-		max = math.MaxUint8
+		return math.MaxUint8
 	case 2:
-		max = math.MaxUint16
+		return math.MaxUint16
 	case 4:
-		max = math.MaxUint32
-	}
-	if p.minMsgLen > max {
-		p.minMsgLen = max
-	}
-	if p.maxMsgLen > max {
-		p.maxMsgLen = max
+		return math.MaxUint32
+	default:
+		panic("LenMsgLen value must be 1 or 2 or 4")
 	}
 }
 
-// It's dangerous to call the method on reading or writing
-func (p *MsgParser) SetByteOrder(littleEndian bool) {
-	p.littleEndian = littleEndian
+func (p *MsgParser) init(){
+	p.INetMempool = NewMemAreaPool()
 }
 
 // goroutine safe
 func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
 	var b [4]byte
-	bufMsgLen := b[:p.lenMsgLen]
+	bufMsgLen := b[:p.LenMsgLen]
 
 	// read len
 	if _, err := io.ReadFull(conn, bufMsgLen); err != nil {
@@ -75,17 +49,17 @@ func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
 
 	// parse len
 	var msgLen uint32
-	switch p.lenMsgLen {
+	switch p.LenMsgLen {
 	case 1:
 		msgLen = uint32(bufMsgLen[0])
 	case 2:
-		if p.littleEndian {
+		if p.LittleEndian {
 			msgLen = uint32(binary.LittleEndian.Uint16(bufMsgLen))
 		} else {
 			msgLen = uint32(binary.BigEndian.Uint16(bufMsgLen))
 		}
 	case 4:
-		if p.littleEndian {
+		if p.LittleEndian {
 			msgLen = binary.LittleEndian.Uint32(bufMsgLen)
 		} else {
 			msgLen = binary.BigEndian.Uint32(bufMsgLen)
@@ -93,9 +67,9 @@ func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
 	}
 
 	// check len
-	if msgLen > p.maxMsgLen {
+	if msgLen > p.MaxMsgLen {
 		return nil, errors.New("message too long")
-	} else if msgLen < p.minMsgLen {
+	} else if msgLen < p.MinMsgLen {
 		return nil, errors.New("message too short")
 	}
 
@@ -118,26 +92,26 @@ func (p *MsgParser) Write(conn *TCPConn, args ...[]byte) error {
 	}
 
 	// check len
-	if msgLen > p.maxMsgLen {
+	if msgLen > p.MaxMsgLen {
 		return errors.New("message too long")
-	} else if msgLen < p.minMsgLen {
+	} else if msgLen < p.MinMsgLen {
 		return errors.New("message too short")
 	}
 
 	//msg := make([]byte, uint32(p.lenMsgLen)+msgLen)
-	msg := p.MakeByteSlice(p.lenMsgLen+int(msgLen))
+	msg := p.MakeByteSlice(p.LenMsgLen+int(msgLen))
 	// write len
-	switch p.lenMsgLen {
+	switch p.LenMsgLen {
 	case 1:
 		msg[0] = byte(msgLen)
 	case 2:
-		if p.littleEndian {
+		if p.LittleEndian {
 			binary.LittleEndian.PutUint16(msg, uint16(msgLen))
 		} else {
 			binary.BigEndian.PutUint16(msg, uint16(msgLen))
 		}
 	case 4:
-		if p.littleEndian {
+		if p.LittleEndian {
 			binary.LittleEndian.PutUint32(msg, msgLen)
 		} else {
 			binary.BigEndian.PutUint32(msg, msgLen)
@@ -145,7 +119,7 @@ func (p *MsgParser) Write(conn *TCPConn, args ...[]byte) error {
 	}
 
 	// write data
-	l := p.lenMsgLen
+	l := p.LenMsgLen
 	for i := 0; i < len(args); i++ {
 		copy(msg[l:], args[i])
 		l += len(args[i])
```
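For readers new to the parser: the wire format it implements is the `| len | data |` frame named in the comment above — a length header of `LenMsgLen` bytes (1, 2 or 4) followed by the payload, in either byte order. The sketch below encodes and decodes one such frame with a 2-byte big-endian header; the function names are illustrative and not part of origin's API.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// pack builds a "| len | data |" frame with a 2-byte big-endian length header.
func pack(payload []byte) []byte {
	msg := make([]byte, 2+len(payload))
	binary.BigEndian.PutUint16(msg, uint16(len(payload)))
	copy(msg[2:], payload)
	return msg
}

// unpack reads the length header and returns the payload it describes.
func unpack(frame []byte) []byte {
	n := binary.BigEndian.Uint16(frame)
	return frame[2 : 2+n]
}

func main() {
	frame := pack([]byte("hello"))
	fmt.Printf("% x\n", frame)         // 00 05 68 65 6c 6c 6f
	fmt.Println(string(unpack(frame))) // hello
}
```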
```diff
@@ -7,14 +7,16 @@ import (
 	"time"
 )
 
-const Default_ReadDeadline = time.Second*30 //30s
-const Default_WriteDeadline = time.Second*30 //30s
-const Default_MaxConnNum = 3000
-const Default_PendingWriteNum = 10000
-const Default_LittleEndian = false
-const Default_MinMsgLen = 2
-const Default_MaxMsgLen = 65535
+const(
+	Default_ReadDeadline = time.Second*30  //默认读超时30s
+	Default_WriteDeadline = time.Second*30 //默认写超时30s
+	Default_MaxConnNum = 1000000           //默认最大连接数
+	Default_PendingWriteNum = 100000       //单连接写消息Channel容量
+	Default_LittleEndian = false           //默认大小端
+	Default_MinMsgLen = 2                  //最小消息长度2byte
+	Default_LenMsgLen = 2                  //包头字段长度占用2byte
+	Default_MaxMsgLen = 65535              //最大消息长度
+)
 
 type TCPServer struct {
 	Addr string
@@ -22,6 +24,7 @@ type TCPServer struct {
 	PendingWriteNum int
 	ReadDeadline time.Duration
 	WriteDeadline time.Duration
 
 	NewAgent func(*TCPConn) Agent
 	ln net.Listener
 	conns ConnSet
@@ -29,14 +32,7 @@ type TCPServer struct {
 	wgLn sync.WaitGroup
 	wgConns sync.WaitGroup
 
-	// msg parser
-	LenMsgLen int
-	MinMsgLen uint32
-	MaxMsgLen uint32
-	LittleEndian bool
-	msgParser *MsgParser
-	netMemPool INetMempool
+	MsgParser
 }
 
 func (server *TCPServer) Start() {
@@ -54,14 +50,15 @@ func (server *TCPServer) init() {
 		server.MaxConnNum = Default_MaxConnNum
 		log.SRelease("invalid MaxConnNum, reset to ", server.MaxConnNum)
 	}
 
 	if server.PendingWriteNum <= 0 {
 		server.PendingWriteNum = Default_PendingWriteNum
 		log.SRelease("invalid PendingWriteNum, reset to ", server.PendingWriteNum)
 	}
 
-	if server.MinMsgLen <= 0 {
-		server.MinMsgLen = Default_MinMsgLen
-		log.SRelease("invalid MinMsgLen, reset to ", server.MinMsgLen)
+	if server.LenMsgLen <= 0 {
+		server.LenMsgLen = Default_LenMsgLen
+		log.SRelease("invalid LenMsgLen, reset to ", server.LenMsgLen)
 	}
 
 	if server.MaxMsgLen <= 0 {
@@ -69,10 +66,22 @@ func (server *TCPServer) init() {
 		log.SRelease("invalid MaxMsgLen, reset to ", server.MaxMsgLen)
 	}
 
+	maxMsgLen := server.MsgParser.getMaxMsgLen(server.LenMsgLen)
+	if server.MaxMsgLen > maxMsgLen {
+		server.MaxMsgLen = maxMsgLen
+		log.SRelease("invalid MaxMsgLen, reset to ", maxMsgLen)
+	}
+
+	if server.MinMsgLen <= 0 {
+		server.MinMsgLen = Default_MinMsgLen
+		log.SRelease("invalid MinMsgLen, reset to ", server.MinMsgLen)
+	}
+
 	if server.WriteDeadline == 0 {
 		server.WriteDeadline = Default_WriteDeadline
 		log.SRelease("invalid WriteDeadline, reset to ", server.WriteDeadline.Seconds(),"s")
 	}
 
 	if server.ReadDeadline == 0 {
 		server.ReadDeadline = Default_ReadDeadline
 		log.SRelease("invalid ReadDeadline, reset to ", server.ReadDeadline.Seconds(),"s")
@@ -84,24 +93,15 @@ func (server *TCPServer) init() {
 
 	server.ln = ln
 	server.conns = make(ConnSet)
-	// msg parser
-	msgParser := NewMsgParser()
-	if msgParser.INetMempool == nil {
-		msgParser.INetMempool = NewMemAreaPool()
-	}
-
-	msgParser.SetMsgLen(server.LenMsgLen, server.MinMsgLen, server.MaxMsgLen)
-	msgParser.SetByteOrder(server.LittleEndian)
-	server.msgParser = msgParser
+	server.MsgParser.init()
 }
 
 func (server *TCPServer) SetNetMempool(mempool INetMempool){
-	server.msgParser.INetMempool = mempool
+	server.INetMempool = mempool
 }
 
 func (server *TCPServer) GetNetMempool() INetMempool{
-	return server.msgParser.INetMempool
+	return server.INetMempool
 }
 
 func (server *TCPServer) run() {
@@ -127,6 +127,7 @@ func (server *TCPServer) run() {
 		}
 		return
 	}
 
 	conn.(*net.TCPConn).SetNoDelay(true)
 	tempDelay = 0
 
@@ -137,16 +138,16 @@ func (server *TCPServer) run() {
 		log.SWarning("too many connections")
 		continue
 	}
 
 	server.conns[conn] = struct{}{}
 	server.mutexConns.Unlock()
 
 	server.wgConns.Add(1)
 
-	tcpConn := newTCPConn(conn, server.PendingWriteNum, server.msgParser,server.WriteDeadline)
+	tcpConn := newTCPConn(conn, server.PendingWriteNum, &server.MsgParser,server.WriteDeadline)
 	agent := server.NewAgent(tcpConn)
 
 	go func() {
 		agent.Run()
 
 		// cleanup
 		tcpConn.Close()
 		server.mutexConns.Lock()
```
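The TCPServer side of the same refactor: the scattered `const` declarations become one block with Chinese doc comments, `Default_MaxConnNum` is raised from 3000 to 1000000, `Default_PendingWriteNum` from 10000 to 100000, and a new `Default_LenMsgLen` (2 bytes) is introduced for the length header. With `MsgParser` embedded, the server no longer builds a separate parser in `init()`; the framing fields are validated in place and then `MsgParser.init()` just attaches the default memory pool.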
39 node/node.go

```diff
@@ -22,7 +22,6 @@ import (
 	"time"
 )
 
-var closeSig chan bool
 var sig chan os.Signal
 var nodeId int
 var preSetupService []service.IService //预安装
@@ -40,8 +39,6 @@ const(
 )
 
 func init() {
-
-	closeSig = make(chan bool, 1)
 	sig = make(chan os.Signal, 3)
 	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM, syscall.Signal(10))
 
@@ -155,21 +152,28 @@ func initNode(id int) {
 		return
 	}
 
-	//2.setup service
-	for _, s := range preSetupService {
-		//是否配置的service
-		if cluster.GetCluster().IsConfigService(s.GetName()) == false {
-			continue
-		}
-
-		pServiceCfg := cluster.GetCluster().GetServiceCfg(s.GetName())
-		s.Init(s, cluster.GetRpcClient, cluster.GetRpcServer, pServiceCfg)
-		service.Setup(s)
+	//2.顺序安装服务
+	serviceOrder := cluster.GetCluster().GetLocalNodeInfo().ServiceList
+	for _,serviceName:= range serviceOrder{
+		bSetup := false
+		for _, s := range preSetupService {
+			if s.GetName() != serviceName {
+				continue
+			}
+			bSetup = true
+			pServiceCfg := cluster.GetCluster().GetServiceCfg(s.GetName())
+			s.Init(s, cluster.GetRpcClient, cluster.GetRpcServer, pServiceCfg)
+
+			service.Setup(s)
+		}
+
+		if bSetup == false {
+			log.SFatal("Service name "+serviceName+" configuration error")
+		}
 	}
 
 	//3.service初始化
-	service.Init(closeSig)
+	service.Init()
 }
 
 func initLog() error {
@@ -274,8 +278,7 @@ func startNode(args interface{}) error {
 	}
 	cluster.GetCluster().Stop()
 	//7.退出
-	close(closeSig)
-	service.WaitStop()
+	service.StopAllService()
 
 	log.SRelease("Server is stop.")
 	return nil
@@ -292,9 +295,9 @@ func GetService(serviceName string) service.IService {
 	return service.GetService(serviceName)
 }
 
-func SetConfigDir(configDir string) {
-	configDir = configDir
-	cluster.SetConfigDir(configDir)
+func SetConfigDir(cfgDir string) {
+	configDir = cfgDir
+	cluster.SetConfigDir(cfgDir)
 }
 
 func GetConfigDir() string {
```
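Two points worth noting in the node.go diff: service setup is now driven by the node's configured `ServiceList`, so services are installed in the order the cluster config lists them, and a configured name that was never registered through `preSetupService` now triggers `log.SFatal` instead of being silently skipped; shutdown likewise moves from closing the shared `closeSig` channel to an explicit `service.StopAllService()`. Renaming the `SetConfigDir` parameter to `cfgDir` also fixes the earlier self-assignment (`configDir = configDir`) that left the package-level `configDir` unset.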
```diff
@@ -193,9 +193,11 @@ func Report() {
 
 		record = prof.record
 		prof.record = list.New()
+		callNum := prof.callNum
+		totalCostTime := prof.totalCostTime
 		prof.stackLocker.RUnlock()
 
-		DefaultReportFunction(name,prof.callNum,prof.totalCostTime,record)
+		DefaultReportFunction(name,callNum,totalCostTime,record)
 	}
 }
 
```
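Copying `callNum` and `totalCostTime` into locals while the read lock is still held means `DefaultReportFunction` now reports a consistent snapshot, rather than reading the profiler fields after `RUnlock` when another goroutine may already have modified them.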
418 rpc/client.go

```diff
@@ -1,93 +1,64 @@
 package rpc
 
 import (
-	"container/list"
 	"errors"
-	"fmt"
-	"github.com/duanhf2012/origin/log"
 	"github.com/duanhf2012/origin/network"
-	"math"
 	"reflect"
-	"runtime"
 	"strconv"
 	"sync"
 	"sync/atomic"
 	"time"
+	"github.com/duanhf2012/origin/log"
 )
 
-type Client struct {
-	clientSeq uint32
-	id int
-	bSelfNode bool
-	network.TCPClient
-	conn *network.TCPConn
-
-	pendingLock sync.RWMutex
-	startSeq uint64
-	pending map[uint64]*list.Element
-	pendingTimer *list.List
-	callRpcTimeout time.Duration
-	maxCheckCallRpcCount int
-	TriggerRpcEvent
-}
-
-const MaxCheckCallRpcCount = 1000
-const MaxPendingWriteNum = 200000
-const ConnectInterval = 2*time.Second
+const(
+	DefaultRpcConnNum = 1
+	DefaultRpcLenMsgLen = 4
+	DefaultRpcMinMsgLen = 2
+	DefaultMaxCheckCallRpcCount = 1000
+	DefaultMaxPendingWriteNum = 200000
+
+	DefaultConnectInterval = 2*time.Second
+	DefaultCheckRpcCallTimeoutInterval = 1*time.Second
+	DefaultRpcTimeout = 15*time.Second
+)
 
 var clientSeq uint32
 
+type IRealClient interface {
+	SetConn(conn *network.TCPConn)
+	Close(waitDone bool)
+
+	AsyncCall(timeout time.Duration,rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{},cancelable bool) (CancelRpc,error)
+	Go(timeout time.Duration,rpcHandler IRpcHandler, noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call
+	RawGo(timeout time.Duration,rpcHandler IRpcHandler,processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, rawArgs []byte, reply interface{}) *Call
+	IsConnected() bool
+
+	Run()
+	OnClose()
+}
+
+type Client struct {
+	clientId uint32
+	nodeId int
+	pendingLock sync.RWMutex
+	startSeq uint64
+	pending map[uint64]*Call
+	callRpcTimeout time.Duration
+	maxCheckCallRpcCount int
+
+	callTimerHeap CallTimerHeap
+	IRealClient
+}
 
 func (client *Client) NewClientAgent(conn *network.TCPConn) network.Agent {
-	client.conn = conn
-	client.ResetPending()
+	client.SetConn(conn)
 
 	return client
 }
 
-func (client *Client) Connect(id int, addr string, maxRpcParamLen uint32) error {
-	client.clientSeq = atomic.AddUint32(&clientSeq, 1)
-	client.id = id
-	client.Addr = addr
-	client.maxCheckCallRpcCount = MaxCheckCallRpcCount
-	client.callRpcTimeout = 15 * time.Second
-	client.ConnectInterval = ConnectInterval
-	client.PendingWriteNum = MaxPendingWriteNum
-	client.AutoReconnect = true
-
-	client.ConnNum = 1
-	client.LenMsgLen = 4
-	client.MinMsgLen = 2
-	client.ReadDeadline = Default_ReadWriteDeadline
-	client.WriteDeadline = Default_ReadWriteDeadline
-
-	if maxRpcParamLen > 0 {
-		client.MaxMsgLen = maxRpcParamLen
-	} else {
-		client.MaxMsgLen = math.MaxUint32
-	}
-
-	client.NewAgent = client.NewClientAgent
-	client.LittleEndian = LittleEndian
-	client.ResetPending()
-	go client.startCheckRpcCallTimer()
-	if addr == "" {
-		client.bSelfNode = true
-		return nil
-	}
-
-	client.Start()
-	return nil
-}
-
-func (client *Client) startCheckRpcCallTimer() {
-	for {
-		time.Sleep(5 * time.Second)
-		client.checkRpcCallTimeout()
-	}
-}
-
-func (client *Client) makeCallFail(call *Call) {
-	client.removePending(call.Seq)
+func (bc *Client) makeCallFail(call *Call) {
 	if call.callback != nil && call.callback.IsValid() {
 		call.rpcHandler.PushRpcResponse(call)
 	} else {
@@ -95,271 +66,120 @@ func (client *Client) makeCallFail(call *Call) {
 	}
 }
 
-func (client *Client) checkRpcCallTimeout() {
-	now := time.Now()
-	for i := 0; i < client.maxCheckCallRpcCount; i++ {
-		client.pendingLock.Lock()
-		pElem := client.pendingTimer.Front()
-		if pElem == nil {
-			client.pendingLock.Unlock()
-			break
-		}
-		pCall := pElem.Value.(*Call)
-		if now.Sub(pCall.callTime) > client.callRpcTimeout {
-			strTimeout := strconv.FormatInt(int64(client.callRpcTimeout/time.Second), 10)
-			pCall.Err = errors.New("RPC call takes more than " + strTimeout + " seconds")
-			client.makeCallFail(pCall)
-			client.pendingLock.Unlock()
-			continue
-		}
-		client.pendingLock.Unlock()
-	}
-}
+func (bc *Client) checkRpcCallTimeout() {
+	for{
+		time.Sleep(DefaultCheckRpcCallTimeoutInterval)
+		for i := 0; i < bc.maxCheckCallRpcCount; i++ {
+			bc.pendingLock.Lock()
+
+			callSeq := bc.callTimerHeap.PopTimeout()
+			if callSeq == 0 {
+				bc.pendingLock.Unlock()
+				break
+			}
+
+			pCall := bc.pending[callSeq]
+			if pCall == nil {
+				bc.pendingLock.Unlock()
+				log.SError("callSeq ",callSeq," is not find")
+				continue
+			}
+
+			delete(bc.pending,callSeq)
+			strTimeout := strconv.FormatInt(int64(pCall.TimeOut.Seconds()), 10)
+			pCall.Err = errors.New("RPC call takes more than " + strTimeout + " seconds,method is "+pCall.ServiceMethod)
+			log.SError(pCall.Err.Error())
+			bc.makeCallFail(pCall)
+			bc.pendingLock.Unlock()
+			continue
+		}
+	}
+}
 
-func (client *Client) ResetPending() {
+func (client *Client) InitPending() {
 	client.pendingLock.Lock()
-	if client.pending != nil {
-		for _, v := range client.pending {
-			v.Value.(*Call).Err = errors.New("node is disconnect")
-			v.Value.(*Call).done <- v.Value.(*Call)
-		}
-	}
-
-	client.pending = make(map[uint64]*list.Element, 4096)
-	client.pendingTimer = list.New()
+	client.callTimerHeap.Init()
+	client.pending = make(map[uint64]*Call,4096)
 	client.pendingLock.Unlock()
 }
 
-func (client *Client) AddPending(call *Call) {
-	client.pendingLock.Lock()
-	call.callTime = time.Now()
-	elemTimer := client.pendingTimer.PushBack(call)
-	client.pending[call.Seq] = elemTimer //如果下面发送失败,将会一一直存在这里
-	client.pendingLock.Unlock()
-}
+func (bc *Client) AddPending(call *Call) {
+	bc.pendingLock.Lock()
+
+	if call.Seq == 0 {
+		bc.pendingLock.Unlock()
+		log.SStack("call is error.")
+		return
+	}
+
+	bc.pending[call.Seq] = call
+	bc.callTimerHeap.AddTimer(call.Seq,call.TimeOut)
+
+	bc.pendingLock.Unlock()
+}
 
-func (client *Client) RemovePending(seq uint64) *Call {
+func (bc *Client) RemovePending(seq uint64) *Call {
 	if seq == 0 {
 		return nil
 	}
-	client.pendingLock.Lock()
-	call := client.removePending(seq)
-	client.pendingLock.Unlock()
+	bc.pendingLock.Lock()
+	call := bc.removePending(seq)
+	bc.pendingLock.Unlock()
 	return call
 }
 
-func (client *Client) removePending(seq uint64) *Call {
-	v, ok := client.pending[seq]
+func (bc *Client) removePending(seq uint64) *Call {
+	v, ok := bc.pending[seq]
 	if ok == false {
 		return nil
 	}
-	call := v.Value.(*Call)
-	client.pendingTimer.Remove(v)
-	delete(client.pending, seq)
-	return call
+	bc.callTimerHeap.Cancel(seq)
+	delete(bc.pending, seq)
+	return v
 }
 
-func (client *Client) FindPending(seq uint64) *Call {
+func (bc *Client) FindPending(seq uint64) (pCall *Call) {
 	if seq == 0 {
 		return nil
 	}
 
-	client.pendingLock.Lock()
-	v, ok := client.pending[seq]
-	if ok == false {
-		client.pendingLock.Unlock()
-		return nil
-	}
-
-	pCall := v.Value.(*Call)
-	client.pendingLock.Unlock()
-
+	bc.pendingLock.Lock()
+	pCall = bc.pending[seq]
+	bc.pendingLock.Unlock()
 	return pCall
 }
 
-func (client *Client) generateSeq() uint64 {
-	return atomic.AddUint64(&client.startSeq, 1)
-}
-
-func (client *Client) AsyncCall(rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{}) error {
-	processorType, processor := GetProcessorType(args)
-	InParam, herr := processor.Marshal(args)
-	if herr != nil {
-		return herr
-	}
-
-	seq := client.generateSeq()
-	request := MakeRpcRequest(processor, seq, 0, serviceMethod, false, InParam)
-	bytes, err := processor.Marshal(request.RpcRequestData)
-	ReleaseRpcRequest(request)
-	if err != nil {
-		return err
-	}
-
-	if client.conn == nil {
-		return errors.New("Rpc server is disconnect,call " + serviceMethod)
-	}
-
-	call := MakeCall()
-	call.Reply = replyParam
-	call.callback = &callback
-	call.rpcHandler = rpcHandler
-	call.ServiceMethod = serviceMethod
-	call.Seq = seq
-	client.AddPending(call)
-
-	err = client.conn.WriteMsg([]byte{uint8(processorType)}, bytes)
-	if err != nil {
-		client.RemovePending(call.Seq)
-		ReleaseCall(call)
-		return err
-	}
-
-	return nil
-}
-
-func (client *Client) RawGo(processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, args []byte, reply interface{}) *Call {
-	call := MakeCall()
-	call.ServiceMethod = serviceMethod
-	call.Reply = reply
-	call.Seq = client.generateSeq()
-
-	request := MakeRpcRequest(processor, call.Seq, rpcMethodId, serviceMethod, noReply, args)
-	bytes, err := processor.Marshal(request.RpcRequestData)
-	ReleaseRpcRequest(request)
-	if err != nil {
-		call.Seq = 0
-		call.Err = err
-		return call
-	}
-
-	if client.conn == nil {
-		call.Seq = 0
-		call.Err = errors.New(serviceMethod + " was called failed,rpc client is disconnect")
-		return call
-	}
-
-	if noReply == false {
-		client.AddPending(call)
-	}
-
-	err = client.conn.WriteMsg([]byte{uint8(processor.GetProcessorType())}, bytes)
-	if err != nil {
-		client.RemovePending(call.Seq)
-		call.Seq = 0
-		call.Err = err
-	}
-
-	return call
-}
-
-func (client *Client) Go(noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
-	_, processor := GetProcessorType(args)
-	InParam, err := processor.Marshal(args)
-	if err != nil {
-		call := MakeCall()
-		call.Err = err
-		return call
-	}
-
-	return client.RawGo(processor, noReply, 0, serviceMethod, InParam, reply)
-}
-
-func (client *Client) Run() {
-	defer func() {
-		if r := recover(); r != nil {
-			buf := make([]byte, 4096)
-			l := runtime.Stack(buf, false)
-			errString := fmt.Sprint(r)
-			log.SError("core dump info[", errString, "]\n", string(buf[:l]))
-		}
-	}()
-
-	client.TriggerRpcEvent(true, client.GetClientSeq(), client.GetId())
-	for {
-		bytes, err := client.conn.ReadMsg()
-		if err != nil {
-			log.SError("rpcClient ", client.Addr, " ReadMsg error:", err.Error())
-			return
-		}
-
-		processor := GetProcessor(bytes[0])
-		if processor == nil {
-			client.conn.ReleaseReadMsg(bytes)
-			log.SError("rpcClient ", client.Addr, " ReadMsg head error:", err.Error())
-			return
-		}
-
-		//1.解析head
-		response := RpcResponse{}
-		response.RpcResponseData = processor.MakeRpcResponse(0, "", nil)
-
-		err = processor.Unmarshal(bytes[1:], response.RpcResponseData)
-		client.conn.ReleaseReadMsg(bytes)
-		if err != nil {
-			processor.ReleaseRpcResponse(response.RpcResponseData)
-			log.SError("rpcClient Unmarshal head error:", err.Error())
-			continue
-		}
-
-		v := client.RemovePending(response.RpcResponseData.GetSeq())
-		if v == nil {
-			log.SError("rpcClient cannot find seq ", response.RpcResponseData.GetSeq(), " in pending")
-		} else {
-			v.Err = nil
-			if len(response.RpcResponseData.GetReply()) > 0 {
-				err = processor.Unmarshal(response.RpcResponseData.GetReply(), v.Reply)
-				if err != nil {
-					log.SError("rpcClient Unmarshal body error:", err.Error())
-					v.Err = err
-				}
-			}
-
-			if response.RpcResponseData.GetErr() != nil {
-				v.Err = response.RpcResponseData.GetErr()
-			}
-
-			if v.callback != nil && v.callback.IsValid() {
-				v.rpcHandler.PushRpcResponse(v)
-			} else {
-				v.done <- v
-			}
-		}
-
-		processor.ReleaseRpcResponse(response.RpcResponseData)
-	}
-}
-
-func (client *Client) OnClose() {
-	client.TriggerRpcEvent(false, client.GetClientSeq(), client.GetId())
-}
-
-func (client *Client) IsConnected() bool {
-	return client.bSelfNode || (client.conn != nil && client.conn.IsConnected() == true)
-}
-
-func (client *Client) GetId() int {
-	return client.id
-}
-
-func (client *Client) Close(waitDone bool) {
-	client.TCPClient.Close(waitDone)
-
-	client.pendingLock.Lock()
-	for {
-		pElem := client.pendingTimer.Front()
-		if pElem == nil {
-			break
-		}
-
-		pCall := pElem.Value.(*Call)
-		pCall.Err = errors.New("nodeid is disconnect ")
-		client.makeCallFail(pCall)
-	}
-	client.pendingLock.Unlock()
-}
-
-func (client *Client) GetClientSeq() uint32 {
-	return client.clientSeq
+func (bc *Client) cleanPending(){
+	bc.pendingLock.Lock()
+	for {
+		callSeq := bc.callTimerHeap.PopFirst()
+		if callSeq == 0 {
+			break
+		}
+		pCall := bc.pending[callSeq]
+		if pCall == nil {
+			log.SError("callSeq ",callSeq," is not find")
+			continue
+		}
+		delete(bc.pending,callSeq)
+		pCall.Err = errors.New("nodeid is disconnect ")
+		bc.makeCallFail(pCall)
+	}
+	bc.pendingLock.Unlock()
+}
+
+func (bc *Client) generateSeq() uint64 {
+	return atomic.AddUint64(&bc.startSeq, 1)
+}
+
+func (client *Client) GetNodeId() int {
+	return client.nodeId
+}
+
+func (client *Client) GetClientId() uint32 {
+	return client.clientId
 }
```
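The client.go rewrite splits the old all-in-one RPC client: pending calls are now tracked in a `map[uint64]*Call` keyed by sequence number plus a `CallTimerHeap` for per-call timeouts (replacing the `container/list` pending timer and the fixed 15-second sweep), and the transport-specific behaviour moves behind the new `IRealClient` interface so the local-node client (`LClient`, added below) and the remote-node client (`RClient`, added below) can share the same pending and timeout bookkeeping.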
102 rpc/compressor.go (new file)

```go
package rpc

import (
	"runtime"
	"errors"
	"github.com/pierrec/lz4/v4"
	"fmt"
	"github.com/duanhf2012/origin/network"
)

var memPool network.INetMempool = network.NewMemAreaPool()

type ICompressor interface {
	CompressBlock(src []byte) ([]byte, error)   //dst如果有预申请使用dst内存,传入nil时内部申请
	UncompressBlock(src []byte) ([]byte, error) //dst如果有预申请使用dst内存,传入nil时内部申请

	CompressBufferCollection(buffer []byte)   //压缩的Buffer内存回收
	UnCompressBufferCollection(buffer []byte) //解压缩的Buffer内存回收
}

var compressor ICompressor

func init(){
	SetCompressor(&Lz4Compressor{})
}

func SetCompressor(cp ICompressor){
	compressor = cp
}

type Lz4Compressor struct {
}

func (lc *Lz4Compressor) CompressBlock(src []byte) (dest []byte, err error) {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 4096)
			l := runtime.Stack(buf, false)
			errString := fmt.Sprint(r)
			err = errors.New("core dump info[" + errString + "]\n" + string(buf[:l]))
		}
	}()

	var c lz4.Compressor
	var cnt int
	dest = memPool.MakeByteSlice(lz4.CompressBlockBound(len(src)) + 1)
	cnt, err = c.CompressBlock(src, dest[1:])
	if err != nil {
		memPool.ReleaseByteSlice(dest)
		return nil, err
	}

	ratio := len(src) / cnt
	if len(src)%cnt > 0 {
		ratio += 1
	}

	if ratio > 255 {
		memPool.ReleaseByteSlice(dest)
		return nil, fmt.Errorf("Impermissible errors")
	}

	dest[0] = uint8(ratio)
	dest = dest[:cnt+1]
	return
}

func (lc *Lz4Compressor) UncompressBlock(src []byte) (dest []byte, err error) {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 4096)
			l := runtime.Stack(buf, false)
			errString := fmt.Sprint(r)
			err = errors.New("core dump info[" + errString + "]\n" + string(buf[:l]))
		}
	}()

	radio := uint8(src[0])
	if radio == 0 {
		return nil, fmt.Errorf("Impermissible errors")
	}

	dest = memPool.MakeByteSlice(len(src) * int(radio))
	cnt, err := lz4.UncompressBlock(src[1:], dest)
	if err != nil {
		memPool.ReleaseByteSlice(dest)
		return nil, err
	}

	return dest[:cnt], nil
}

func (lc *Lz4Compressor) compressBlockBound(n int) int{
	return lz4.CompressBlockBound(n)
}

func (lc *Lz4Compressor) CompressBufferCollection(buffer []byte){
	memPool.ReleaseByteSlice(buffer)
}

func (lc *Lz4Compressor) UnCompressBufferCollection(buffer []byte) {
	memPool.ReleaseByteSlice(buffer)
}
```
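The new compressor wraps the `github.com/pierrec/lz4/v4` block API, prefixing each compressed block with one byte recording the rounded-up compression ratio so the decompressor knows how large a destination buffer to take from the memory pool. The following minimal sketch shows plain usage of the underlying lz4 block API; the leading ratio byte is an origin-specific detail and is omitted here.

```go
package main

import (
	"fmt"

	"github.com/pierrec/lz4/v4"
)

func main() {
	src := []byte("origin origin origin origin origin origin origin")

	// Compress into a buffer sized by CompressBlockBound.
	dst := make([]byte, lz4.CompressBlockBound(len(src)))
	var c lz4.Compressor
	n, err := c.CompressBlock(src, dst)
	if err != nil {
		panic(err)
	}
	fmt.Println("compressed", len(src), "->", n, "bytes")

	// Decompress back into a buffer of the original size.
	out := make([]byte, len(src))
	m, err := lz4.UncompressBlock(dst[:n], out)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out[:m]))
}
```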
```diff
@@ -3,6 +3,7 @@ package rpc
 import (
 	"github.com/duanhf2012/origin/util/sync"
 	"github.com/gogo/protobuf/proto"
+	"fmt"
 )
 
 type GoGoPBProcessor struct {
@@ -40,7 +41,10 @@ func (slf *GoGoPBProcessor) Marshal(v interface{}) ([]byte, error){
 }
 
 func (slf *GoGoPBProcessor) Unmarshal(data []byte, msg interface{}) error{
-	protoMsg := msg.(proto.Message)
+	protoMsg,ok := msg.(proto.Message)
+	if ok == false {
+		return fmt.Errorf("%+v is not of proto.Message type",msg)
+	}
 	return proto.Unmarshal(data, protoMsg)
 }
 
@@ -73,6 +77,15 @@ func (slf *GoGoPBProcessor) GetProcessorType() RpcProcessorType{
 	return RpcProcessorGoGoPB
 }
 
+func (slf *GoGoPBProcessor) Clone(src interface{}) (interface{},error){
+	srcMsg,ok := src.(proto.Message)
+	if ok == false {
+		return nil,fmt.Errorf("param is not of proto.message type")
+	}
+
+	return proto.Clone(srcMsg),nil
+}
+
 func (slf *GoGoPBRpcRequestData) IsNoReply() bool{
 	return slf.GetNoReply()
 }
@@ -91,5 +104,3 @@ func (slf *GoGoPBRpcResponseData) GetErr() *RpcError {
 
 
-
-
```
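Using the two-value form of the type assertion (`protoMsg, ok := msg.(proto.Message)`) turns what used to be a panic on a non-protobuf argument into an ordinary error, and the new `Clone` method gives the processor a way to deep-copy messages via `proto.Clone`, matching the `Clone` method added to `IRpcProcessor` further down.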
```diff
@@ -3,6 +3,7 @@ package rpc
 import (
 	"github.com/duanhf2012/origin/util/sync"
 	jsoniter "github.com/json-iterator/go"
+	"reflect"
 )
 
 var json = jsoniter.ConfigCompatibleWithStandardLibrary
@@ -119,6 +120,22 @@ func (jsonRpcResponseData *JsonRpcResponseData) GetReply() []byte{
 }
 
+
+func (jsonProcessor *JsonProcessor) Clone(src interface{}) (interface{},error){
+	dstValue := reflect.New(reflect.ValueOf(src).Type().Elem())
+	bytes,err := json.Marshal(src)
+	if err != nil {
+		return nil,err
+	}
+
+	dst := dstValue.Interface()
+	err = json.Unmarshal(bytes,dst)
+	if err != nil {
+		return nil,err
+	}
+
+	return dst,nil
+}
+
```
135 rpc/lclient.go (new file)

```go
package rpc

import (
	"errors"
	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/network"
	"reflect"
	"strings"
	"sync/atomic"
	"time"
)

//本结点的Client
type LClient struct {
	selfClient *Client
}

func (rc *LClient) Lock(){
}

func (rc *LClient) Unlock(){
}

func (lc *LClient) Run(){
}

func (lc *LClient) OnClose(){
}

func (lc *LClient) IsConnected() bool {
	return true
}

func (lc *LClient) SetConn(conn *network.TCPConn){
}

func (lc *LClient) Close(waitDone bool){
}

func (lc *LClient) Go(timeout time.Duration,rpcHandler IRpcHandler,noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
	pLocalRpcServer := rpcHandler.GetRpcServer()()
	//判断是否是同一服务
	findIndex := strings.Index(serviceMethod, ".")
	if findIndex == -1 {
		sErr := errors.New("Call serviceMethod " + serviceMethod + " is error!")
		log.SError(sErr.Error())
		call := MakeCall()
		call.DoError(sErr)

		return call
	}

	serviceName := serviceMethod[:findIndex]
	if serviceName == rpcHandler.GetName() { //自己服务调用
		//调用自己rpcHandler处理器
		err := pLocalRpcServer.myselfRpcHandlerGo(lc.selfClient,serviceName, serviceMethod, args, requestHandlerNull,reply)
		call := MakeCall()

		if err != nil {
			call.DoError(err)
			return call
		}

		call.DoOK()
		return call
	}

	//其他的rpcHandler的处理器
	return pLocalRpcServer.selfNodeRpcHandlerGo(timeout,nil, lc.selfClient, noReply, serviceName, 0, serviceMethod, args, reply, nil)
}

func (rc *LClient) RawGo(timeout time.Duration,rpcHandler IRpcHandler,processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceName string, rawArgs []byte, reply interface{}) *Call {
	pLocalRpcServer := rpcHandler.GetRpcServer()()

	//服务自我调用
	if serviceName == rpcHandler.GetName() {
		call := MakeCall()
		call.ServiceMethod = serviceName
		call.Reply = reply
		call.TimeOut = timeout

		err := pLocalRpcServer.myselfRpcHandlerGo(rc.selfClient,serviceName, serviceName, rawArgs, requestHandlerNull,nil)
		call.Err = err
		call.done <- call

		return call
	}

	//其他的rpcHandler的处理器
	return pLocalRpcServer.selfNodeRpcHandlerGo(timeout,processor,rc.selfClient, true, serviceName, rpcMethodId, serviceName, nil, nil, rawArgs)
}

func (lc *LClient) AsyncCall(timeout time.Duration,rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, reply interface{},cancelable bool) (CancelRpc,error) {
	pLocalRpcServer := rpcHandler.GetRpcServer()()

	//判断是否是同一服务
	findIndex := strings.Index(serviceMethod, ".")
	if findIndex == -1 {
		err := errors.New("Call serviceMethod " + serviceMethod + " is error!")
		callback.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
		log.SError(err.Error())
		return emptyCancelRpc,nil
	}

	serviceName := serviceMethod[:findIndex]
	//调用自己rpcHandler处理器
	if serviceName == rpcHandler.GetName() { //自己服务调用
		return emptyCancelRpc,pLocalRpcServer.myselfRpcHandlerGo(lc.selfClient,serviceName, serviceMethod, args,callback ,reply)
	}

	//其他的rpcHandler的处理器
	calcelRpc,err := pLocalRpcServer.selfNodeRpcHandlerAsyncGo(timeout,lc.selfClient, rpcHandler, false, serviceName, serviceMethod, args, reply, callback,cancelable)
	if err != nil {
		callback.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
	}

	return calcelRpc,nil
}

func NewLClient(nodeId int) *Client{
	client := &Client{}
	client.clientId = atomic.AddUint32(&clientSeq, 1)
	client.nodeId = nodeId
	client.maxCheckCallRpcCount = DefaultMaxCheckCallRpcCount
	client.callRpcTimeout = DefaultRpcTimeout

	lClient := &LClient{}
	lClient.selfClient = client
	client.IRealClient = lClient
	client.InitPending()
	go client.checkRpcCallTimeout()
	return client
}
```
```diff
@@ -1,6 +1,7 @@
 package rpc
 
 type IRpcProcessor interface {
+	Clone(src interface{}) (interface{},error)
 	Marshal(v interface{}) ([]byte, error) //b表示自定义缓冲区,可以填nil,由系统自动分配
 	Unmarshal(data []byte, v interface{}) error
 	MakeRpcRequest(seq uint64,rpcMethodId uint32,serviceMethod string,noReply bool,inParam []byte) IRpcRequestData
```
2497 rpc/rank.pb.go (diff suppressed because it is too large)
```diff
@@ -2,19 +2,48 @@ syntax = "proto3";
 package rpc;
 option go_package = ".;rpc";
 
-// RankData 排行数据
-message RankData {
-	uint64 Key = 1;              //数据主建
-	repeated int64 SortData = 2; //参与排行的数据
-	bytes Data = 3;              //不参与排行的数据
+message SetSortAndExtendData{
+	bool IsSortData = 1; //是否为排序字段,为true时,修改Sort字段,否则修改Extend数据
+	int32 Pos = 2;       //排序位置
+	int64 Data = 3;      //排序值
+}
+
+//自增值
+message IncreaseRankData {
+	uint64 RankId = 1;                                      //排行榜的ID
+	uint64 Key = 2;                                         //数据主建
+	repeated ExtendIncData Extend = 3;                      //扩展数据
+	repeated int64 IncreaseSortData = 4;                    //自增排行数值
+	repeated SetSortAndExtendData SetSortAndExtendData = 5; //设置排序数据值
+	bool ReturnRankData = 6;                                //是否查找最新排名,否则不返回排行Rank字段
+
+	bool InsertDataOnNonExistent = 7; //为true时:存在不进行更新,不存在则插入InitData与InitSortData数据。为false时:忽略不对InitData与InitSortData数据
+	bytes InitData = 8;               //不参与排行的数据
+	repeated int64 InitSortData = 9;  //参与排行的数据
+}
+
+message IncreaseRankDataRet{
+	RankPosData PosData = 1;
+}
+
+//用于单独刷新排行榜数据
+message UpdateRankData {
+	uint64 RankId = 1; //排行榜的ID
+	uint64 Key = 2;    //数据主建
+	bytes Data = 3;    //数据部分
+}
+
+message UpdateRankDataRet {
+	bool Ret = 1;
 }
 
 // RankPosData 排行数据——查询返回
 message RankPosData {
 	uint64 Key = 1;              //数据主建
 	uint64 Rank = 2;             //名次
 	repeated int64 SortData = 3; //参与排行的数据
 	bytes Data = 4;              //不参与排行的数据
+	repeated int64 ExtendData = 5; //扩展数据
 }
 
 // RankList 排行榜数据
@@ -31,6 +60,22 @@ message RankList {
 message UpsetRankData {
 	uint64 RankId = 1;                 //排行榜的ID
 	repeated RankData RankDataList = 2; //排行数据
+	bool FindNewRank = 3;               //是否查找最新排名
+}
+
+message ExtendIncData {
+	int64 InitValue = 1;
+	int64 IncreaseValue = 2;
+}
+
+// RankData 排行数据
+message RankData {
+	uint64 Key = 1;              //数据主建
+	repeated int64 SortData = 2; //参与排行的数据
+
+	bytes Data = 3;              //不参与排行的数据
+
+	repeated ExtendIncData ExData = 4; //扩展增量数据
 }
 
 // DeleteByKey 删除排行榜数据
@@ -71,9 +116,15 @@ message RankDataList {
 	RankPosData KeyRank = 3; //附带的Key查询排行结果信息
 }
 
+message RankInfo{
+	uint64 Key = 1;
+	uint64 Rank = 2;
+}
+
 // RankResult
 message RankResult {
 	int32 AddCount = 1;    //新增数量
 	int32 ModifyCount = 2; //修改数量
 	int32 RemoveCount = 3; //删除数量
+	repeated RankInfo NewRank = 4; //新的排名名次,只有UpsetRankData.FindNewRank为true时才生效
 }
```
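In short, the rank protocol gains increment-style updates: `IncreaseRankData` can bump sort values and extend fields in one call, optionally insert `InitData`/`InitSortData` when the key does not yet exist, and ask for the resulting rank back; `UpdateRankData` refreshes only the opaque data part; `UpsetRankData` gains `FindNewRank`, with the new ranks returned through `RankInfo` entries in `RankResult`.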
323 rpc/rclient.go (new file)

```go
package rpc

import (
	"errors"
	"fmt"
	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/network"
	"math"
	"reflect"
	"runtime"
	"sync/atomic"
	"time"
)

//跨结点连接的Client
type RClient struct {
	compressBytesLen int
	selfClient *Client
	network.TCPClient
	conn *network.TCPConn
	TriggerRpcConnEvent
}

func (rc *RClient) IsConnected() bool {
	rc.Lock()
	defer rc.Unlock()

	return rc.conn != nil && rc.conn.IsConnected() == true
}

func (rc *RClient) GetConn() *network.TCPConn{
	rc.Lock()
	conn := rc.conn
	rc.Unlock()

	return conn
}

func (rc *RClient) SetConn(conn *network.TCPConn){
	rc.Lock()
	rc.conn = conn
	rc.Unlock()
}

func (rc *RClient) Go(timeout time.Duration,rpcHandler IRpcHandler,noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
	_, processor := GetProcessorType(args)
	InParam, err := processor.Marshal(args)
	if err != nil {
		log.SError(err.Error())
		call := MakeCall()
		call.DoError(err)
		return call
	}

	return rc.RawGo(timeout,rpcHandler,processor, noReply, 0, serviceMethod, InParam, reply)
}

func (rc *RClient) RawGo(timeout time.Duration,rpcHandler IRpcHandler,processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, rawArgs []byte, reply interface{}) *Call {
	call := MakeCall()
	call.ServiceMethod = serviceMethod
	call.Reply = reply
	call.Seq = rc.selfClient.generateSeq()
	call.TimeOut = timeout

	request := MakeRpcRequest(processor, call.Seq, rpcMethodId, serviceMethod, noReply, rawArgs)
	bytes, err := processor.Marshal(request.RpcRequestData)
	ReleaseRpcRequest(request)

	if err != nil {
		call.Seq = 0
		log.SError(err.Error())
		call.DoError(err)
		return call
	}

	conn := rc.GetConn()
	if conn == nil || conn.IsConnected()==false {
		call.Seq = 0
		sErr := errors.New(serviceMethod + " was called failed,rpc client is disconnect")
		log.SError(sErr.Error())
		call.DoError(sErr)
		return call
	}

	var compressBuff []byte
	bCompress := uint8(0)
	if rc.compressBytesLen > 0 && len(bytes) >= rc.compressBytesLen {
		var cErr error
		compressBuff,cErr = compressor.CompressBlock(bytes)
		if cErr != nil {
			call.Seq = 0
			log.SError(cErr.Error())
			call.DoError(cErr)
			return call
		}
		if len(compressBuff) < len(bytes) {
			bytes = compressBuff
			bCompress = 1<<7
		}
	}

	if noReply == false {
		rc.selfClient.AddPending(call)
	}

	err = conn.WriteMsg([]byte{uint8(processor.GetProcessorType())|bCompress}, bytes)
	if cap(compressBuff) >0 {
		compressor.CompressBufferCollection(compressBuff)
	}
	if err != nil {
		rc.selfClient.RemovePending(call.Seq)

		log.SError(err.Error())

		call.Seq = 0
		call.DoError(err)
	}

	return call
}

func (rc *RClient) AsyncCall(timeout time.Duration,rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{},cancelable bool) (CancelRpc,error) {
	cancelRpc,err := rc.asyncCall(timeout,rpcHandler, serviceMethod, callback, args, replyParam,cancelable)
	if err != nil {
		callback.Call([]reflect.Value{reflect.ValueOf(replyParam), reflect.ValueOf(err)})
	}

	return cancelRpc,nil
}

func (rc *RClient) asyncCall(timeout time.Duration,rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{},cancelable bool) (CancelRpc,error) {
	processorType, processor := GetProcessorType(args)
	InParam, herr := processor.Marshal(args)
	if herr != nil {
		return emptyCancelRpc,herr
	}

	seq := rc.selfClient.generateSeq()
	request := MakeRpcRequest(processor, seq, 0, serviceMethod, false, InParam)
	bytes, err := processor.Marshal(request.RpcRequestData)
	ReleaseRpcRequest(request)
	if err != nil {
		return emptyCancelRpc,err
	}

	conn := rc.GetConn()
	if conn == nil || conn.IsConnected()==false {
		return emptyCancelRpc,errors.New("Rpc server is disconnect,call " + serviceMethod)
	}

	var compressBuff []byte
	bCompress := uint8(0)
	if rc.compressBytesLen>0 &&len(bytes) >= rc.compressBytesLen {
		var cErr error
		compressBuff,cErr = compressor.CompressBlock(bytes)
		if cErr != nil {
			return emptyCancelRpc,cErr
		}

		if len(compressBuff) < len(bytes) {
			bytes = compressBuff
			bCompress = 1<<7
		}
	}

	call := MakeCall()
	call.Reply = replyParam
	call.callback = &callback
	call.rpcHandler = rpcHandler
	call.ServiceMethod = serviceMethod
	call.Seq = seq
	call.TimeOut = timeout
	rc.selfClient.AddPending(call)

	err = conn.WriteMsg([]byte{uint8(processorType)|bCompress}, bytes)
	if cap(compressBuff) >0 {
		compressor.CompressBufferCollection(compressBuff)
	}
	if err != nil {
		rc.selfClient.RemovePending(call.Seq)
		ReleaseCall(call)
		return emptyCancelRpc,err
	}

	if cancelable {
		rpcCancel := RpcCancel{CallSeq:seq,Cli: rc.selfClient}
		return rpcCancel.CancelRpc,nil
	}

	return emptyCancelRpc,nil
}

func (rc *RClient) Run() {
	defer func() {
		if r := recover(); r != nil {
			buf := make([]byte, 4096)
			l := runtime.Stack(buf, false)
			errString := fmt.Sprint(r)
			log.SError("core dump info[", errString, "]\n", string(buf[:l]))
		}
	}()

	rc.TriggerRpcConnEvent(true, rc.selfClient.GetClientId(), rc.selfClient.GetNodeId())
```
|
||||||
|
for {
|
||||||
|
bytes, err := rc.conn.ReadMsg()
|
||||||
|
if err != nil {
|
||||||
|
log.SError("rpcClient ", rc.Addr, " ReadMsg error:", err.Error())
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
bCompress := (bytes[0]>>7) > 0
|
||||||
|
processor := GetProcessor(bytes[0]&0x7f)
|
||||||
|
if processor == nil {
|
||||||
|
rc.conn.ReleaseReadMsg(bytes)
|
||||||
|
log.SError("rpcClient ", rc.Addr, " ReadMsg head error:", err.Error())
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
//1.解析head
|
||||||
|
response := RpcResponse{}
|
||||||
|
response.RpcResponseData = processor.MakeRpcResponse(0, "", nil)
|
||||||
|
|
||||||
|
//解压缩
|
||||||
|
byteData := bytes[1:]
|
||||||
|
var compressBuff []byte
|
||||||
|
|
||||||
|
if bCompress == true {
|
||||||
|
var unCompressErr error
|
||||||
|
compressBuff,unCompressErr = compressor.UncompressBlock(byteData)
|
||||||
|
if unCompressErr!= nil {
|
||||||
|
rc.conn.ReleaseReadMsg(bytes)
|
||||||
|
log.SError("rpcClient ", rc.Addr, " ReadMsg head error:", unCompressErr.Error())
|
||||||
|
return
|
||||||
|
}
|
||||||
|
byteData = compressBuff
|
||||||
|
}
|
||||||
|
|
||||||
|
err = processor.Unmarshal(byteData, response.RpcResponseData)
|
||||||
|
if cap(compressBuff) > 0 {
|
||||||
|
compressor.UnCompressBufferCollection(compressBuff)
|
||||||
|
}
|
||||||
|
|
||||||
|
rc.conn.ReleaseReadMsg(bytes)
|
||||||
|
if err != nil {
|
||||||
|
processor.ReleaseRpcResponse(response.RpcResponseData)
|
||||||
|
log.SError("rpcClient Unmarshal head error:", err.Error())
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
v := rc.selfClient.RemovePending(response.RpcResponseData.GetSeq())
|
||||||
|
if v == nil {
|
||||||
|
log.SError("rpcClient cannot find seq ", response.RpcResponseData.GetSeq(), " in pending")
|
||||||
|
} else {
|
||||||
|
v.Err = nil
|
||||||
|
if len(response.RpcResponseData.GetReply()) > 0 {
|
||||||
|
err = processor.Unmarshal(response.RpcResponseData.GetReply(), v.Reply)
|
||||||
|
if err != nil {
|
||||||
|
log.SError("rpcClient Unmarshal body error:", err.Error())
|
||||||
|
v.Err = err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if response.RpcResponseData.GetErr() != nil {
|
||||||
|
v.Err = response.RpcResponseData.GetErr()
|
||||||
|
}
|
||||||
|
|
||||||
|
if v.callback != nil && v.callback.IsValid() {
|
||||||
|
v.rpcHandler.PushRpcResponse(v)
|
||||||
|
} else {
|
||||||
|
v.done <- v
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
processor.ReleaseRpcResponse(response.RpcResponseData)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (rc *RClient) OnClose() {
|
||||||
|
rc.TriggerRpcConnEvent(false, rc.selfClient.GetClientId(), rc.selfClient.GetNodeId())
|
||||||
|
}
|
||||||
|
|
||||||
|
func NewRClient(nodeId int, addr string, maxRpcParamLen uint32,compressBytesLen int,triggerRpcConnEvent TriggerRpcConnEvent) *Client{
|
||||||
|
client := &Client{}
|
||||||
|
client.clientId = atomic.AddUint32(&clientSeq, 1)
|
||||||
|
client.nodeId = nodeId
|
||||||
|
client.maxCheckCallRpcCount = DefaultMaxCheckCallRpcCount
|
||||||
|
client.callRpcTimeout = DefaultRpcTimeout
|
||||||
|
c:= &RClient{}
|
||||||
|
c.compressBytesLen = compressBytesLen
|
||||||
|
c.selfClient = client
|
||||||
|
c.Addr = addr
|
||||||
|
c.ConnectInterval = DefaultConnectInterval
|
||||||
|
c.PendingWriteNum = DefaultMaxPendingWriteNum
|
||||||
|
c.AutoReconnect = true
|
||||||
|
c.TriggerRpcConnEvent = triggerRpcConnEvent
|
||||||
|
c.ConnNum = DefaultRpcConnNum
|
||||||
|
c.LenMsgLen = DefaultRpcLenMsgLen
|
||||||
|
c.MinMsgLen = DefaultRpcMinMsgLen
|
||||||
|
c.ReadDeadline = Default_ReadWriteDeadline
|
||||||
|
c.WriteDeadline = Default_ReadWriteDeadline
|
||||||
|
c.LittleEndian = LittleEndian
|
||||||
|
c.NewAgent = client.NewClientAgent
|
||||||
|
|
||||||
|
if maxRpcParamLen > 0 {
|
||||||
|
c.MaxMsgLen = maxRpcParamLen
|
||||||
|
} else {
|
||||||
|
c.MaxMsgLen = math.MaxUint32
|
||||||
|
}
|
||||||
|
client.IRealClient = c
|
||||||
|
client.InitPending()
|
||||||
|
go client.checkRpcCallTimeout()
|
||||||
|
c.Start()
|
||||||
|
return client
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
func (rc *RClient) Close(waitDone bool) {
|
||||||
|
rc.TCPClient.Close(waitDone)
|
||||||
|
rc.selfClient.cleanPending()
|
||||||
|
}
|
||||||
|
|
||||||
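For readers tracing the wire format above: RawGo and Run agree on a single head byte, whose low 7 bits carry the processor type and whose high bit marks a compressed payload. The same masking (`bytes[0]&0x7f`, `bytes[0]>>7`) appears on the server side in RpcAgent.Run. A small standalone sketch of that bit layout (packHead is illustrative only, not an engine API):

```go
package main

import "fmt"

// packHead mirrors how RClient builds the one-byte frame head:
// the low 7 bits select the processor, the high bit flags a compressed body.
func packHead(processorType uint8, compressed bool) byte {
	head := processorType & 0x7f
	if compressed {
		head |= 1 << 7
	}
	return head
}

func main() {
	head := packHead(1, true)
	// unpack the same way RClient.Run and RpcAgent.Run do
	fmt.Println("compressed:", head>>7 > 0, "processor:", head&0x7f)
}
```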
rpc/rpc.go (28 changes)

@@ -51,12 +51,6 @@ type IRpcResponseData interface {
 	GetReply() []byte
 }
 
-type IRawInputArgs interface {
-	GetRawData() []byte //获取原始数据
-	DoFree() //处理完成,回收内存
-	DoEscape() //逃逸,GC自动回收
-}
-
 type RpcHandleFinder interface {
 	FindRpcHandler(serviceMethod string) IRpcHandler
 }
@@ -74,7 +68,16 @@ type Call struct {
 	connId int
 	callback *reflect.Value
 	rpcHandler IRpcHandler
-	callTime time.Time
+	TimeOut time.Duration
+}
+
+type RpcCancel struct {
+	Cli *Client
+	CallSeq uint64
+}
+
+func (rc *RpcCancel) CancelRpc(){
+	rc.Cli.RemovePending(rc.CallSeq)
 }
 
 func (slf *RpcRequest) Clear() *RpcRequest{
@@ -108,6 +111,15 @@ func (rpcResponse *RpcResponse) Clear() *RpcResponse{
 	return rpcResponse
 }
 
+func (call *Call) DoError(err error){
+	call.Err = err
+	call.done <- call
+}
+
+func (call *Call) DoOK(){
+	call.done <- call
+}
+
 func (call *Call) Clear() *Call{
 	call.Seq = 0
 	call.ServiceMethod = ""
@@ -121,6 +133,8 @@ func (call *Call) Clear() *Call{
 	call.connId = 0
 	call.callback = nil
 	call.rpcHandler = nil
+	call.TimeOut = 0
+
 	return call
 }
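The new DoError and DoOK helpers standardize how a Call completes: record the error (or leave it nil) and push the call onto its done channel, which a blocking Done() later receives from; RpcCancel simply removes the pending call so a late response is dropped. A toy, self-contained illustration of that completion pattern (the call type below is a stand-in, not the real rpc.Call):

```go
package main

import (
	"errors"
	"fmt"
)

// call is a stripped-down stand-in for rpc.Call: an error slot plus a done channel.
type call struct {
	Err  error
	done chan *call
}

// doError mirrors Call.DoError: record the error, then signal completion.
func (c *call) doError(err error) {
	c.Err = err
	c.done <- c
}

// doOK mirrors Call.DoOK: signal completion with no error.
func (c *call) doOK() {
	c.done <- c
}

func main() {
	failed := &call{done: make(chan *call, 1)}
	go failed.doError(errors.New("rpc client is disconnect"))
	fmt.Println("finished with:", (<-failed.done).Err)

	ok := &call{done: make(chan *call, 1)}
	go ok.doOK()
	fmt.Println("finished with:", (<-ok.done).Err)
}
```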
@@ -6,10 +6,10 @@ import (
 	"github.com/duanhf2012/origin/log"
 	"reflect"
 	"runtime"
-	"strconv"
 	"strings"
 	"unicode"
 	"unicode/utf8"
+	"time"
 )
 
 const maxClusterNode int = 128
@@ -17,6 +17,7 @@ const maxClusterNode int = 128
 type FuncRpcClient func(nodeId int, serviceMethod string, client []*Client) (error, int)
 type FuncRpcServer func() *Server
 
+
 var nilError = reflect.Zero(reflect.TypeOf((*error)(nil)).Elem())
 
 type RpcError string
@@ -45,10 +46,7 @@ type RpcMethodInfo struct {
 	rpcProcessorType RpcProcessorType
 }
 
-type RawRpcCallBack interface {
-	Unmarshal(data []byte) (interface{}, error)
-	CB(data interface{})
-}
+type RawRpcCallBack func(rawData []byte)
 
 type IRpcHandlerChannel interface {
 	PushRpcResponse(call *Call) error
@@ -67,7 +65,7 @@ type RpcHandler struct {
 	pClientList []*Client
 }
 
-type TriggerRpcEvent func(bConnect bool, clientSeq uint32, nodeId int)
+type TriggerRpcConnEvent func(bConnect bool, clientSeq uint32, nodeId int)
 type INodeListener interface {
 	OnNodeConnected(nodeId int)
 	OnNodeDisconnect(nodeId int)
@@ -78,6 +76,9 @@ type IDiscoveryServiceListener interface {
 	OnUnDiscoveryService(nodeId int, serviceName []string)
 }
 
+type CancelRpc func()
+func emptyCancelRpc(){}
+
 type IRpcHandler interface {
 	IRpcHandlerChannel
 	GetName() string
@@ -86,16 +87,24 @@ type IRpcHandler interface {
 	HandlerRpcRequest(request *RpcRequest)
 	HandlerRpcResponseCB(call *Call)
 	CallMethod(client *Client,ServiceMethod string, param interface{},callBack reflect.Value, reply interface{}) error
-	AsyncCall(serviceMethod string, args interface{}, callback interface{}) error
 	Call(serviceMethod string, args interface{}, reply interface{}) error
-	Go(serviceMethod string, args interface{}) error
-	AsyncCallNode(nodeId int, serviceMethod string, args interface{}, callback interface{}) error
 	CallNode(nodeId int, serviceMethod string, args interface{}, reply interface{}) error
+	AsyncCall(serviceMethod string, args interface{}, callback interface{}) error
+	AsyncCallNode(nodeId int, serviceMethod string, args interface{}, callback interface{}) error
+
+	CallWithTimeout(timeout time.Duration,serviceMethod string, args interface{}, reply interface{}) error
+	CallNodeWithTimeout(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, reply interface{}) error
+	AsyncCallWithTimeout(timeout time.Duration,serviceMethod string, args interface{}, callback interface{}) (CancelRpc,error)
+	AsyncCallNodeWithTimeout(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, callback interface{}) (CancelRpc,error)
+
+	Go(serviceMethod string, args interface{}) error
 	GoNode(nodeId int, serviceMethod string, args interface{}) error
-	RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs IRawInputArgs) error
+	RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs []byte) error
 	CastGo(serviceMethod string, args interface{}) error
 	IsSingleCoroutine() bool
 	UnmarshalInParam(rpcProcessor IRpcProcessor, serviceMethod string, rawRpcMethodId uint32, inParam []byte) (interface{}, error)
+	GetRpcServer() FuncRpcServer
 }
 
 func reqHandlerNull(Returns interface{}, Err RpcError) {
@@ -140,7 +149,7 @@ func (handler *RpcHandler) isExportedOrBuiltinType(t reflect.Type) bool {
 
 func (handler *RpcHandler) suitableMethods(method reflect.Method) error {
 	//只有RPC_开头的才能被调用
-	if strings.Index(method.Name, "RPC_") != 0 {
+	if strings.Index(method.Name, "RPC_") != 0 && strings.Index(method.Name, "RPC") != 0 {
 		return nil
 	}
@@ -244,8 +253,13 @@ func (handler *RpcHandler) HandlerRpcRequest(request *RpcRequest) {
 			log.SError("RpcHandler cannot find request rpc id", rawRpcId)
 			return
 		}
+		rawData,ok := request.inParam.([]byte)
+		if ok == false {
+			log.SError("RpcHandler " + handler.rpcHandler.GetName()," cannot convert in param to []byte", rawRpcId)
+			return
+		}
 
-		v.CB(request.inParam)
+		v(rawData)
 		return
 	}
@@ -288,14 +302,16 @@ func (handler *RpcHandler) HandlerRpcRequest(request *RpcRequest) {
 		request.requestHandle(nil, RpcError(rErr))
 		return
 	}
 
+	requestHanle := request.requestHandle
 	returnValues := v.method.Func.Call(paramList)
 	errInter := returnValues[0].Interface()
 	if errInter != nil {
 		err = errInter.(error)
 	}
 
-	if request.requestHandle != nil && v.hasResponder == false {
-		request.requestHandle(oParam.Interface(), ConvertError(err))
+	if v.hasResponder == false && requestHanle != nil {
+		requestHanle(oParam.Interface(), ConvertError(err))
 	}
 }
@@ -427,36 +443,8 @@ func (handler *RpcHandler) goRpc(processor IRpcProcessor, bCast bool, nodeId int
 	}
 
 	//2.rpcClient调用
-	//如果调用本结点服务
 	for i := 0; i < count; i++ {
-		if pClientList[i].bSelfNode == true {
-			pLocalRpcServer := handler.funcRpcServer()
-			//判断是否是同一服务
-			findIndex := strings.Index(serviceMethod, ".")
-			if findIndex == -1 {
-				sErr := errors.New("Call serviceMethod " + serviceMethod + " is error!")
-				log.SError(sErr.Error())
-				err = sErr
-
-				continue
-			}
-			serviceName := serviceMethod[:findIndex]
-			if serviceName == handler.rpcHandler.GetName() { //自己服务调用
-				//调用自己rpcHandler处理器
-				return pLocalRpcServer.myselfRpcHandlerGo(pClientList[i],serviceName, serviceMethod, args, requestHandlerNull,nil)
-			}
-			//其他的rpcHandler的处理器
-			pCall := pLocalRpcServer.selfNodeRpcHandlerGo(processor, pClientList[i], true, serviceName, 0, serviceMethod, args, nil, nil)
-			if pCall.Err != nil {
-				err = pCall.Err
-			}
-			pClientList[i].RemovePending(pCall.Seq)
-			ReleaseCall(pCall)
-			continue
-		}
-
-		//跨node调用
-		pCall := pClientList[i].Go(true, serviceMethod, args, nil)
+		pCall := pClientList[i].Go(DefaultRpcTimeout,handler.rpcHandler,true, serviceMethod, args, nil)
 		if pCall.Err != nil {
 			err = pCall.Err
 		}
@@ -467,7 +455,7 @@ func (handler *RpcHandler) goRpc(processor IRpcProcessor, bCast bool, nodeId int
 	return err
 }
 
-func (handler *RpcHandler) callRpc(nodeId int, serviceMethod string, args interface{}, reply interface{}) error {
+func (handler *RpcHandler) callRpc(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, reply interface{}) error {
 	var pClientList [maxClusterNode]*Client
 	err, count := handler.funcRpcClient(nodeId, serviceMethod, pClientList[:])
 	if err != nil {
@@ -482,117 +470,61 @@ func (handler *RpcHandler) callRpc(nodeId int, serviceMethod string, args interf
 		return errors.New("cannot call more then 1 node")
 	}
 
-	//2.rpcClient调用
-	//如果调用本结点服务
 	pClient := pClientList[0]
-	if pClient.bSelfNode == true {
-		pLocalRpcServer := handler.funcRpcServer()
-		//判断是否是同一服务
-		findIndex := strings.Index(serviceMethod, ".")
-		if findIndex == -1 {
-			err := errors.New("Call serviceMethod " + serviceMethod + "is error!")
-			log.SError(err.Error())
-			return err
-		}
-		serviceName := serviceMethod[:findIndex]
-		if serviceName == handler.rpcHandler.GetName() { //自己服务调用
-			//调用自己rpcHandler处理器
-			return pLocalRpcServer.myselfRpcHandlerGo(pClient,serviceName, serviceMethod, args,requestHandlerNull, reply)
-		}
-		//其他的rpcHandler的处理器
-		pCall := pLocalRpcServer.selfNodeRpcHandlerGo(nil, pClient, false, serviceName, 0, serviceMethod, args, reply, nil)
-		err = pCall.Done().Err
-		pClient.RemovePending(pCall.Seq)
-		ReleaseCall(pCall)
-		return err
-	}
-
-	//跨node调用
-	pCall := pClient.Go(false, serviceMethod, args, reply)
-	if pCall.Err != nil {
-		err = pCall.Err
-		ReleaseCall(pCall)
-		return err
-	}
+	pCall := pClient.Go(timeout,handler.rpcHandler,false, serviceMethod, args, reply)
 	err = pCall.Done().Err
 	pClient.RemovePending(pCall.Seq)
 	ReleaseCall(pCall)
 	return err
 }
 
-func (handler *RpcHandler) asyncCallRpc(nodeId int, serviceMethod string, args interface{}, callback interface{}) error {
+func (handler *RpcHandler) asyncCallRpc(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, callback interface{}) (CancelRpc,error) {
 	fVal := reflect.ValueOf(callback)
 	if fVal.Kind() != reflect.Func {
 		err := errors.New("call " + serviceMethod + " input callback param is error!")
 		log.SError(err.Error())
-		return err
+		return emptyCancelRpc,err
 	}
 
 	if fVal.Type().NumIn() != 2 {
 		err := errors.New("call " + serviceMethod + " callback param function is error!")
 		log.SError(err.Error())
-		return err
+		return emptyCancelRpc,err
 	}
 
 	if fVal.Type().In(0).Kind() != reflect.Ptr || fVal.Type().In(1).String() != "error" {
 		err := errors.New("call " + serviceMethod + " callback param function is error!")
 		log.SError(err.Error())
-		return err
+		return emptyCancelRpc,err
 	}
 
 	reply := reflect.New(fVal.Type().In(0).Elem()).Interface()
-	var pClientList [maxClusterNode]*Client
+	var pClientList [2]*Client
 	err, count := handler.funcRpcClient(nodeId, serviceMethod, pClientList[:])
 	if count == 0 || err != nil {
-		strNodeId := strconv.Itoa(nodeId)
 		if err == nil {
-			err = errors.New("cannot find rpcClient from nodeId " + strNodeId + " " + serviceMethod)
+			if nodeId > 0 {
+				err = fmt.Errorf("cannot find %s from nodeId %d",serviceMethod,nodeId)
+			}else {
+				err = fmt.Errorf("No %s service found in the origin network",serviceMethod)
+			}
 		}
 		fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
 		log.SError("Call serviceMethod is error:", err.Error())
-		return nil
+		return emptyCancelRpc,nil
 	}
 
 	if count > 1 {
 		err := errors.New("cannot call more then 1 node")
 		fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
 		log.SError(err.Error())
-		return nil
+		return emptyCancelRpc,nil
 	}
 
 	//2.rpcClient调用
 	//如果调用本结点服务
-	pClient := pClientList[0]
-	if pClient.bSelfNode == true {
-		pLocalRpcServer := handler.funcRpcServer()
-		//判断是否是同一服务
-		findIndex := strings.Index(serviceMethod, ".")
-		if findIndex == -1 {
-			err := errors.New("Call serviceMethod " + serviceMethod + " is error!")
-			fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
-			log.SError(err.Error())
-			return nil
-		}
-		serviceName := serviceMethod[:findIndex]
-		//调用自己rpcHandler处理器
-		if serviceName == handler.rpcHandler.GetName() { //自己服务调用
-			return pLocalRpcServer.myselfRpcHandlerGo(pClient,serviceName, serviceMethod, args,fVal ,reply)
-		}
-
-		//其他的rpcHandler的处理器
-		err = pLocalRpcServer.selfNodeRpcHandlerAsyncGo(pClient, handler, false, serviceName, serviceMethod, args, reply, fVal)
-		if err != nil {
-			fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
-		}
-		return nil
-	}
-
-	//跨node调用
-	err = pClient.AsyncCall(handler, serviceMethod, fVal, args, reply)
-	if err != nil {
-		fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
-	}
-	return nil
+	return pClientList[0].AsyncCall(timeout,handler.rpcHandler, serviceMethod, fVal, args, reply,false)
 }
 
 func (handler *RpcHandler) GetName() string {
@@ -603,12 +535,29 @@ func (handler *RpcHandler) IsSingleCoroutine() bool {
 	return handler.rpcHandler.IsSingleCoroutine()
 }
 
+func (handler *RpcHandler) CallWithTimeout(timeout time.Duration,serviceMethod string, args interface{}, reply interface{}) error {
+	return handler.callRpc(timeout,0, serviceMethod, args, reply)
+}
+
+func (handler *RpcHandler) CallNodeWithTimeout(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, reply interface{}) error{
+	return handler.callRpc(timeout,nodeId, serviceMethod, args, reply)
+}
+
+func (handler *RpcHandler) AsyncCallWithTimeout(timeout time.Duration,serviceMethod string, args interface{}, callback interface{}) (CancelRpc,error){
+	return handler.asyncCallRpc(timeout,0, serviceMethod, args, callback)
+}
+
+func (handler *RpcHandler) AsyncCallNodeWithTimeout(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, callback interface{}) (CancelRpc,error){
+	return handler.asyncCallRpc(timeout,nodeId, serviceMethod, args, callback)
+}
+
 func (handler *RpcHandler) AsyncCall(serviceMethod string, args interface{}, callback interface{}) error {
-	return handler.asyncCallRpc(0, serviceMethod, args, callback)
+	_,err := handler.asyncCallRpc(DefaultRpcTimeout,0, serviceMethod, args, callback)
+	return err
 }
 
 func (handler *RpcHandler) Call(serviceMethod string, args interface{}, reply interface{}) error {
-	return handler.callRpc(0, serviceMethod, args, reply)
+	return handler.callRpc(DefaultRpcTimeout,0, serviceMethod, args, reply)
 }
 
 func (handler *RpcHandler) Go(serviceMethod string, args interface{}) error {
@@ -616,11 +565,13 @@ func (handler *RpcHandler) Go(serviceMethod string, args interface{}) error {
 }
 
 func (handler *RpcHandler) AsyncCallNode(nodeId int, serviceMethod string, args interface{}, callback interface{}) error {
-	return handler.asyncCallRpc(nodeId, serviceMethod, args, callback)
+	_,err:= handler.asyncCallRpc(DefaultRpcTimeout,nodeId, serviceMethod, args, callback)
+
+	return err
 }
 
 func (handler *RpcHandler) CallNode(nodeId int, serviceMethod string, args interface{}, reply interface{}) error {
-	return handler.callRpc(nodeId, serviceMethod, args, reply)
+	return handler.callRpc(DefaultRpcTimeout,nodeId, serviceMethod, args, reply)
 }
 
 func (handler *RpcHandler) GoNode(nodeId int, serviceMethod string, args interface{}) error {
@@ -631,16 +582,14 @@ func (handler *RpcHandler) CastGo(serviceMethod string, args interface{}) error
 	return handler.goRpc(nil, true, 0, serviceMethod, args)
 }
 
-func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs IRawInputArgs) error {
+func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs []byte) error {
 	processor := GetProcessor(uint8(rpcProcessorType))
 	err, count := handler.funcRpcClient(nodeId, serviceName, handler.pClientList)
 	if count == 0 || err != nil {
-		//args.DoGc()
 		log.SError("Call serviceMethod is error:", err.Error())
 		return err
 	}
 	if count > 1 {
-		//args.DoGc()
 		err := errors.New("cannot call more then 1 node")
 		log.SError(err.Error())
 		return err
@@ -649,32 +598,12 @@ func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId i
 	//2.rpcClient调用
 	//如果调用本结点服务
 	for i := 0; i < count; i++ {
-		if handler.pClientList[i].bSelfNode == true {
-			pLocalRpcServer := handler.funcRpcServer()
-			//调用自己rpcHandler处理器
-			if serviceName == handler.rpcHandler.GetName() { //自己服务调用
-				err := pLocalRpcServer.myselfRpcHandlerGo(handler.pClientList[i],serviceName, serviceName, rawArgs.GetRawData(), requestHandlerNull,nil)
-				//args.DoGc()
-				return err
-			}
-
-			//其他的rpcHandler的处理器
-			pCall := pLocalRpcServer.selfNodeRpcHandlerGo(processor, handler.pClientList[i], true, serviceName, rpcMethodId, serviceName, nil, nil, rawArgs.GetRawData())
-			rawArgs.DoEscape()
-			if pCall.Err != nil {
-				err = pCall.Err
-			}
-			handler.pClientList[i].RemovePending(pCall.Seq)
-			ReleaseCall(pCall)
-			continue
-		}
-
 		//跨node调用
-		pCall := handler.pClientList[i].RawGo(processor, true, rpcMethodId, serviceName, rawArgs.GetRawData(), nil)
-		rawArgs.DoFree()
+		pCall := handler.pClientList[i].RawGo(DefaultRpcTimeout,handler.rpcHandler,processor, true, rpcMethodId, serviceName, rawArgs, nil)
 		if pCall.Err != nil {
 			err = pCall.Err
 		}
 
 		handler.pClientList[i].RemovePending(pCall.Seq)
 		ReleaseCall(pCall)
 	}
@@ -688,23 +617,7 @@ func (handler *RpcHandler) RegRawRpc(rpcMethodId uint32, rawRpcCB RawRpcCallBack
 
 func (handler *RpcHandler) UnmarshalInParam(rpcProcessor IRpcProcessor, serviceMethod string, rawRpcMethodId uint32, inParam []byte) (interface{}, error) {
 	if rawRpcMethodId > 0 {
-		v, ok := handler.mapRawFunctions[rawRpcMethodId]
-		if ok == false {
-			strRawRpcMethodId := strconv.FormatUint(uint64(rawRpcMethodId), 10)
-			err := errors.New("RpcHandler cannot find request rpc id " + strRawRpcMethodId)
-			log.SError(err.Error())
-			return nil, err
-		}
-
-		msg, err := v.Unmarshal(inParam)
-		if err != nil {
-			strRawRpcMethodId := strconv.FormatUint(uint64(rawRpcMethodId), 10)
-			err := errors.New("RpcHandler cannot Unmarshal rpc id " + strRawRpcMethodId)
-			log.SError(err.Error())
-			return nil, err
-		}
-
-		return msg, err
+		return inParam,nil
 	}
 
 	v, ok := handler.mapFunctions[serviceMethod]
@@ -717,3 +630,8 @@ func (handler *RpcHandler) UnmarshalInParam(rpcProcessor IRpcProcessor, serviceM
 	err = rpcProcessor.Unmarshal(inParam, param)
 	return param, err
 }
+
+
+func (handler *RpcHandler) GetRpcServer() FuncRpcServer{
+	return handler.funcRpcServer
+}
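asyncCallRpc only accepts callbacks shaped like func(*Reply, error); the check is done with reflection before anything is sent. A self-contained sketch of the same shape test (validCallback is illustrative, not part of the engine):

```go
package main

import (
	"fmt"
	"reflect"
)

// validCallback reproduces the shape check asyncCallRpc applies to callbacks:
// a function with exactly two inputs, a pointer reply followed by an error.
func validCallback(cb interface{}) bool {
	fVal := reflect.ValueOf(cb)
	if fVal.Kind() != reflect.Func || fVal.Type().NumIn() != 2 {
		return false
	}
	return fVal.Type().In(0).Kind() == reflect.Ptr && fVal.Type().In(1).String() == "error"
}

type sumReply struct{ Sum int }

func main() {
	fmt.Println(validCallback(func(r *sumReply, err error) {})) // true
	fmt.Println(validCallback(func(r sumReply, err error) {}))  // false: reply must be a pointer
}
```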
rpc/rpctimer.go (new file, 89 lines)

@@ -0,0 +1,89 @@
package rpc

import (
	"container/heap"
	"time"
)

type CallTimer struct {
	SeqId uint64
	FireTime int64
}

type CallTimerHeap struct {
	callTimer []CallTimer
	mapSeqIndex map[uint64]int
}

func (h *CallTimerHeap) Init() {
	h.mapSeqIndex = make(map[uint64]int, 4096)
	h.callTimer = make([]CallTimer, 0, 4096)
}

func (h *CallTimerHeap) Len() int {
	return len(h.callTimer)
}

func (h *CallTimerHeap) Less(i, j int) bool {
	return h.callTimer[i].FireTime < h.callTimer[j].FireTime
}

func (h *CallTimerHeap) Swap(i, j int) {
	h.callTimer[i], h.callTimer[j] = h.callTimer[j], h.callTimer[i]
	h.mapSeqIndex[h.callTimer[i].SeqId] = i
	h.mapSeqIndex[h.callTimer[j].SeqId] = j
}

func (h *CallTimerHeap) Push(t any) {
	callTimer := t.(CallTimer)
	h.mapSeqIndex[callTimer.SeqId] = len(h.callTimer)
	h.callTimer = append(h.callTimer, callTimer)
}

func (h *CallTimerHeap) Pop() any {
	l := len(h.callTimer)
	seqId := h.callTimer[l-1].SeqId

	h.callTimer = h.callTimer[:l-1]
	delete(h.mapSeqIndex, seqId)
	return seqId
}

func (h *CallTimerHeap) Cancel(seq uint64) bool {
	index, ok := h.mapSeqIndex[seq]
	if ok == false {
		return false
	}

	heap.Remove(h, index)
	return true
}

func (h *CallTimerHeap) AddTimer(seqId uint64, d time.Duration) {
	heap.Push(h, CallTimer{
		SeqId: seqId,
		FireTime: time.Now().Add(d).UnixNano(),
	})
}

func (h *CallTimerHeap) PopTimeout() uint64 {
	if h.Len() == 0 {
		return 0
	}

	nextFireTime := h.callTimer[0].FireTime
	if nextFireTime > time.Now().UnixNano() {
		return 0
	}

	return heap.Pop(h).(uint64)
}

func (h *CallTimerHeap) PopFirst() uint64 {
	if h.Len() == 0 {
		return 0
	}

	return heap.Pop(h).(uint64)
}
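CallTimerHeap is what makes call-timeout scanning cheap: the soonest deadline always sits at the heap root, so PopTimeout can stop as soon as the root has not fired yet. A simplified, self-contained sketch of the same idea (timerHeap and popTimeout are stand-ins, not the engine types):

```go
package main

import (
	"container/heap"
	"fmt"
	"time"
)

// item pairs a call sequence with its absolute fire time in nanoseconds.
type item struct {
	seq  uint64
	fire int64
}

// timerHeap is a min-heap ordered by fire time, like CallTimerHeap.
type timerHeap []item

func (h timerHeap) Len() int           { return len(h) }
func (h timerHeap) Less(i, j int) bool { return h[i].fire < h[j].fire }
func (h timerHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *timerHeap) Push(x any)        { *h = append(*h, x.(item)) }
func (h *timerHeap) Pop() any {
	old := *h
	n := len(old)
	it := old[n-1]
	*h = old[:n-1]
	return it
}

// popTimeout mirrors CallTimerHeap.PopTimeout: return the seq of an expired
// timer, or 0 when nothing has fired yet.
func popTimeout(h *timerHeap) uint64 {
	if h.Len() == 0 || (*h)[0].fire > time.Now().UnixNano() {
		return 0
	}
	return heap.Pop(h).(item).seq
}

func main() {
	h := &timerHeap{}
	heap.Push(h, item{seq: 1, fire: time.Now().Add(10 * time.Millisecond).UnixNano()})
	heap.Push(h, item{seq: 2, fire: time.Now().Add(50 * time.Millisecond).UnixNano()})

	time.Sleep(20 * time.Millisecond)
	fmt.Println(popTimeout(h)) // 1: first deadline has passed
	fmt.Println(popTimeout(h)) // 0: seq 2 is still pending
}
```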
rpc/server.go (234 changes)

@@ -19,7 +19,6 @@ const (
 	RpcProcessorGoGoPB RpcProcessorType = 1
 )
 
-//var processor IRpcProcessor = &JsonProcessor{}
 var arrayProcessor = []IRpcProcessor{&JsonProcessor{}, &GoGoPBProcessor{}}
 var arrayProcessorLen uint8 = 2
 var LittleEndian bool
@@ -28,6 +27,8 @@ type Server struct {
 	functions map[interface{}]interface{}
 	rpcHandleFinder RpcHandleFinder
 	rpcServer *network.TCPServer
+
+	compressBytesLen int
 }
 
 type RpcAgent struct {
@@ -65,15 +66,15 @@ func (server *Server) Init(rpcHandleFinder RpcHandleFinder) {
 
 const Default_ReadWriteDeadline = 15*time.Second
 
-func (server *Server) Start(listenAddr string, maxRpcParamLen uint32) {
+func (server *Server) Start(listenAddr string, maxRpcParamLen uint32,compressBytesLen int) {
 	splitAddr := strings.Split(listenAddr, ":")
 	if len(splitAddr) != 2 {
 		log.SFatal("listen addr is error :", listenAddr)
 	}
 
 	server.rpcServer.Addr = ":" + splitAddr[1]
-	server.rpcServer.LenMsgLen = 4 //uint16
 	server.rpcServer.MinMsgLen = 2
+	server.compressBytesLen = compressBytesLen
 	if maxRpcParamLen > 0 {
 		server.rpcServer.MaxMsgLen = maxRpcParamLen
 	} else {
@@ -86,6 +87,8 @@ func (server *Server) Start(listenAddr string, maxRpcParamLen uint32,compressByt
 	server.rpcServer.LittleEndian = LittleEndian
 	server.rpcServer.WriteDeadline = Default_ReadWriteDeadline
 	server.rpcServer.ReadDeadline = Default_ReadWriteDeadline
+	server.rpcServer.LenMsgLen = DefaultRpcLenMsgLen
+
 	server.rpcServer.Start()
 }
@@ -112,7 +115,26 @@ func (agent *RpcAgent) WriteResponse(processor IRpcProcessor, serviceMethod stri
 		return
 	}
 
-	errM = agent.conn.WriteMsg([]byte{uint8(processor.GetProcessorType())}, bytes)
+	var compressBuff []byte
+	bCompress := uint8(0)
+	if agent.rpcServer.compressBytesLen >0 && len(bytes) >= agent.rpcServer.compressBytesLen {
+		var cErr error
+
+		compressBuff,cErr = compressor.CompressBlock(bytes)
+		if cErr != nil {
+			log.SError("service method ", serviceMethod, " CompressBlock error:", cErr.Error())
+			return
+		}
+		if len(compressBuff) < len(bytes) {
+			bytes = compressBuff
+			bCompress = 1<<7
+		}
+	}
+
+	errM = agent.conn.WriteMsg([]byte{uint8(processor.GetProcessorType())|bCompress}, bytes)
+	if cap(compressBuff) >0 {
+		compressor.CompressBufferCollection(compressBuff)
+	}
 	if errM != nil {
 		log.SError("Rpc ", serviceMethod, " return is error:", errM.Error())
 	}
@@ -127,16 +149,34 @@ func (agent *RpcAgent) Run() {
 			break
 		}
 
-		processor := GetProcessor(data[0])
+		bCompress := (data[0]>>7) > 0
+		processor := GetProcessor(data[0]&0x7f)
 		if processor == nil {
 			agent.conn.ReleaseReadMsg(data)
-			log.SError("remote rpc ", agent.conn.RemoteAddr(), " cannot find processor:", data[0])
+			log.SError("remote rpc ", agent.conn.RemoteAddr().String(), " cannot find processor:", data[0])
 			return
 		}
 
 		//解析head
+		var compressBuff []byte
+		byteData := data[1:]
+		if bCompress == true {
+			var unCompressErr error
+
+			compressBuff,unCompressErr = compressor.UncompressBlock(byteData)
+			if unCompressErr!= nil {
+				agent.conn.ReleaseReadMsg(data)
+				log.SError("rpcClient ", agent.conn.RemoteAddr().String(), " ReadMsg head error:", unCompressErr.Error())
+				return
+			}
+			byteData = compressBuff
+		}
+
 		req := MakeRpcRequest(processor, 0, 0, "", false, nil)
-		err = processor.Unmarshal(data[1:], req.RpcRequestData)
+		err = processor.Unmarshal(byteData, req.RpcRequestData)
+		if cap(compressBuff) > 0 {
+			compressor.UnCompressBufferCollection(compressBuff)
+		}
 		agent.conn.ReleaseReadMsg(data)
 		if err != nil {
 			log.SError("rpc Unmarshal request is error:", err.Error())
@@ -148,7 +188,6 @@ func (agent *RpcAgent) Run() {
 				ReleaseRpcRequest(req)
 				continue
 			} else {
-				//will close tcpconn
 				ReleaseRpcRequest(req)
 				break
 			}
@@ -245,39 +284,54 @@ func (server *Server) myselfRpcHandlerGo(client *Client,handlerName string, serv
 		log.SError(err.Error())
 		return err
 	}
 
 	return rpcHandler.CallMethod(client,serviceMethod, args,callBack, reply)
 }
 
-func (server *Server) selfNodeRpcHandlerGo(processor IRpcProcessor, client *Client, noReply bool, handlerName string, rpcMethodId uint32, serviceMethod string, args interface{}, reply interface{}, rawArgs []byte) *Call {
+func (server *Server) selfNodeRpcHandlerGo(timeout time.Duration,processor IRpcProcessor, client *Client, noReply bool, handlerName string, rpcMethodId uint32, serviceMethod string, args interface{}, reply interface{}, rawArgs []byte) *Call {
 	pCall := MakeCall()
 	pCall.Seq = client.generateSeq()
+	pCall.TimeOut = timeout
+
 	rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
 	if rpcHandler == nil {
+		err := errors.New("service method " + serviceMethod + " not config!")
+		log.SError(err.Error())
 		pCall.Seq = 0
-		pCall.Err = errors.New("service method " + serviceMethod + " not config!")
-		pCall.done <- pCall
-		log.SError(pCall.Err.Error())
+		pCall.DoError(err)
 
 		return pCall
 	}
 
+	var iParam interface{}
 	if processor == nil {
 		_, processor = GetProcessorType(args)
 	}
 
+	if args != nil {
+		var err error
+		iParam,err = processor.Clone(args)
+		if err != nil {
+			sErr := errors.New("RpcHandler " + handlerName + "."+serviceMethod+" deep copy inParam is error:" + err.Error())
+			log.SError(sErr.Error())
+			pCall.Seq = 0
+			pCall.DoError(sErr)
+
+			return pCall
+		}
+	}
+
 	req := MakeRpcRequest(processor, 0, rpcMethodId, serviceMethod, noReply, nil)
-	req.inParam = args
+	req.inParam = iParam
 	req.localReply = reply
 	if rawArgs != nil {
 		var err error
 		req.inParam, err = rpcHandler.UnmarshalInParam(processor, serviceMethod, rpcMethodId, rawArgs)
 		if err != nil {
+			log.SError(err.Error())
+			pCall.Seq = 0
+			pCall.DoError(err)
 			ReleaseRpcRequest(req)
-			pCall.Err = err
-			pCall.done <- pCall
 			return pCall
 		}
 	}
@@ -289,20 +343,85 @@ func (server *Server) selfNodeRpcHandlerGo(timeout time.Duration,processor IRpcP
 		if reply != nil && Returns != reply && Returns != nil {
 			byteReturns, err := req.rpcProcessor.Marshal(Returns)
 			if err != nil {
-				log.SError("returns data cannot be marshal ", callSeq)
-				ReleaseRpcRequest(req)
-			}
-
-			err = req.rpcProcessor.Unmarshal(byteReturns, reply)
-			if err != nil {
-				log.SError("returns data cannot be Unmarshal ", callSeq)
-				ReleaseRpcRequest(req)
+				Err = ConvertError(err)
+				log.SError("returns data cannot be marshal,callSeq is ", callSeq," error is ",err.Error())
+			}else{
+				err = req.rpcProcessor.Unmarshal(byteReturns, reply)
+				if err != nil {
+					Err = ConvertError(err)
+					log.SError("returns data cannot be Unmarshal,callSeq is ", callSeq," error is ",err.Error())
+				}
 			}
 		}
 
+		ReleaseRpcRequest(req)
 		v := client.RemovePending(callSeq)
 		if v == nil {
 			log.SError("rpcClient cannot find seq ",callSeq, " in pending")
+
+			return
+		}
+
+		if len(Err) == 0 {
+			v.Err = nil
+			v.DoOK()
+		} else {
+			log.SError(Err.Error())
+			v.DoError(Err)
+		}
+	}
+
+	err := rpcHandler.PushRpcRequest(req)
+	if err != nil {
+		log.SError(err.Error())
+		pCall.DoError(err)
+		ReleaseRpcRequest(req)
+	}
+
+	return pCall
+}
+
+func (server *Server) selfNodeRpcHandlerAsyncGo(timeout time.Duration,client *Client, callerRpcHandler IRpcHandler, noReply bool, handlerName string, serviceMethod string, args interface{}, reply interface{}, callback reflect.Value,cancelable bool) (CancelRpc,error) {
+	rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
+	if rpcHandler == nil {
+		err := errors.New("service method " + serviceMethod + " not config!")
+		log.SError(err.Error())
+		return emptyCancelRpc,err
+	}
+
+	_, processor := GetProcessorType(args)
+	iParam,err := processor.Clone(args)
+	if err != nil {
+		errM := errors.New("RpcHandler " + handlerName + "."+serviceMethod+" deep copy inParam is error:" + err.Error())
+		log.SError(errM.Error())
+		return emptyCancelRpc,errM
+	}
+
+	req := MakeRpcRequest(processor, 0, 0, serviceMethod, noReply, nil)
+	req.inParam = iParam
+	req.localReply = reply
+
+	cancelRpc := emptyCancelRpc
+	var callSeq uint64
+	if noReply == false {
+		callSeq = client.generateSeq()
+		pCall := MakeCall()
+		pCall.Seq = callSeq
+		pCall.rpcHandler = callerRpcHandler
+		pCall.callback = &callback
+		pCall.Reply = reply
+		pCall.ServiceMethod = serviceMethod
+		pCall.TimeOut = timeout
+		client.AddPending(pCall)
+		rpcCancel := RpcCancel{CallSeq: callSeq,Cli: client}
+		cancelRpc = rpcCancel.CancelRpc
+
+		req.requestHandle = func(Returns interface{}, Err RpcError) {
+			v := client.RemovePending(callSeq)
+			if v == nil {
+				log.SError("rpcClient cannot find seq ", callSeq, " in pending, service method is ",serviceMethod)
+				//ReleaseCall(pCall)
 				ReleaseRpcRequest(req)
 				return
 			}
@@ -311,70 +430,23 @@ func (server *Server) selfNodeRpcHandlerAsyncGo(timeout time.Duration,client *Cl
 			} else {
 				v.Err = Err
 			}
-			v.done <- v
-			ReleaseRpcRequest(req)
-		}
-	}
-
-	err := rpcHandler.PushRpcRequest(req)
-	if err != nil {
-		ReleaseRpcRequest(req)
-		pCall.Err = err
-		pCall.done <- pCall
-	}
-
-	return pCall
-}
-
-func (server *Server) selfNodeRpcHandlerAsyncGo(client *Client, callerRpcHandler IRpcHandler, noReply bool, handlerName string, serviceMethod string, args interface{}, reply interface{}, callback reflect.Value) error {
-	rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
-	if rpcHandler == nil {
-		err := errors.New("service method " + serviceMethod + " not config!")
-		log.SError(err.Error())
-		return err
-	}
-
-	_, processor := GetProcessorType(args)
-	req := MakeRpcRequest(processor, 0, 0, serviceMethod, noReply, nil)
-	req.inParam = args
-	req.localReply = reply
-
-	if noReply == false {
-		callSeq := client.generateSeq()
-		pCall := MakeCall()
-		pCall.Seq = callSeq
-		pCall.rpcHandler = callerRpcHandler
-		pCall.callback = &callback
-		pCall.Reply = reply
-		pCall.ServiceMethod = serviceMethod
-		client.AddPending(pCall)
-		req.requestHandle = func(Returns interface{}, Err RpcError) {
-			v := client.RemovePending(callSeq)
-			if v == nil {
-				log.SError("rpcClient cannot find seq ", pCall.Seq, " in pending")
-				//ReleaseCall(pCall)
-				ReleaseRpcRequest(req)
-				return
-			}
-			if len(Err) == 0 {
-				pCall.Err = nil
-			} else {
-				pCall.Err = Err
-			}
-
 			if Returns != nil {
-				pCall.Reply = Returns
+				v.Reply = Returns
 			}
-			pCall.rpcHandler.PushRpcResponse(pCall)
+			v.rpcHandler.PushRpcResponse(v)
 			ReleaseRpcRequest(req)
 		}
 	}
 
-	err := rpcHandler.PushRpcRequest(req)
+	err = rpcHandler.PushRpcRequest(req)
 	if err != nil {
 		ReleaseRpcRequest(req)
-		return err
+		if callSeq > 0 {
+			client.RemovePending(callSeq)
+		}
+		return emptyCancelRpc,err
 	}
 
-	return nil
+	return cancelRpc,nil
 }
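Both RpcAgent.WriteResponse above and the client write path compress only when the marshaled payload reaches compressBytesLen, and they only keep the compressed form if it actually turned out smaller. A standalone sketch of that decision, with compress/flate standing in for the engine's block compressor (maybeCompress is illustrative, not an engine API):

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
)

// maybeCompress mirrors the write-path rule: skip payloads below the
// threshold, and fall back to the raw bytes when compression does not help.
func maybeCompress(payload []byte, threshold int) ([]byte, bool) {
	if threshold <= 0 || len(payload) < threshold {
		return payload, false
	}
	var buf bytes.Buffer
	w, _ := flate.NewWriter(&buf, flate.BestSpeed) // level is always valid here
	w.Write(payload)
	w.Close()
	if buf.Len() >= len(payload) {
		return payload, false
	}
	return buf.Bytes(), true
}

func main() {
	payload := bytes.Repeat([]byte("origin rpc "), 200)
	out, compressed := maybeCompress(payload, 1024)
	fmt.Println(len(payload), "->", len(out), "compressed:", compressed)
}
```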
@@ -10,11 +10,13 @@ import (
 	"github.com/duanhf2012/origin/log"
 	rpcHandle "github.com/duanhf2012/origin/rpc"
 	"github.com/duanhf2012/origin/util/timer"
+	"github.com/duanhf2012/origin/concurrent"
 )
 
 const InitModuleId = 1e9
 
 type IModule interface {
+	concurrent.IConcurrent
 	SetModuleId(moduleId uint32) bool
 	GetModuleId() uint32
 	AddModule(module IModule) (uint32, error)
@@ -56,6 +58,7 @@ type Module struct {
 
 	//事件管道
 	eventHandler event.IEventHandler
+	concurrent.IConcurrent
 }
 
 func (m *Module) SetModuleId(moduleId uint32) bool {
@@ -105,6 +108,7 @@ func (m *Module) AddModule(module IModule) (uint32, error) {
 	pAddModule.moduleName = reflect.Indirect(reflect.ValueOf(module)).Type().Name()
 	pAddModule.eventHandler = event.NewEventHandler()
 	pAddModule.eventHandler.Init(m.eventHandler.GetEventProcessor())
+	pAddModule.IConcurrent = m.IConcurrent
 	err := module.OnInit()
 	if err != nil {
 		return 0, err
@@ -273,6 +277,11 @@ func (m *Module) SafeNewTicker(tickerId *uint64, d time.Duration, AdditionData i
 }
 
 func (m *Module) CancelTimerId(timerId *uint64) bool {
+	if timerId==nil || *timerId == 0 {
+		log.SWarning("timerId is invalid")
+		return false
+	}
+
 	if m.mapActiveIdTimer == nil {
 		log.SError("mapActiveIdTimer is nil")
 		return false
@@ -280,7 +289,7 @@ func (m *Module) CancelTimerId(timerId *uint64) bool {
 
 	t, ok := m.mapActiveIdTimer[*timerId]
 	if ok == false {
-		log.SError("cannot find timer id ", timerId)
+		log.SStack("cannot find timer id ", timerId)
 		return false
 	}
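The added guard in CancelTimerId rejects a nil or zero timer id before the active-timer map is touched, so a caller that never armed a timer can no longer trigger a spurious lookup (or a nil-pointer dereference). A stand-alone sketch of the same guard (cancelTimer is a toy stand-in, not the Module method):

```go
package main

import "fmt"

// cancelTimer mirrors the new guard: bail out on a nil or zero id
// before consulting the active-timer map.
func cancelTimer(active map[uint64]string, timerId *uint64) bool {
	if timerId == nil || *timerId == 0 {
		return false // previously this went straight to the map lookup
	}
	if _, ok := active[*timerId]; !ok {
		return false
	}
	delete(active, *timerId)
	*timerId = 0
	return true
}

func main() {
	active := map[uint64]string{7: "save-timer"}
	id := uint64(7)
	fmt.Println(cancelTimer(active, &id)) // true
	fmt.Println(cancelTimer(active, nil)) // false, guarded
}
```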
@@ -7,22 +7,22 @@ import (
     "github.com/duanhf2012/origin/log"
     "github.com/duanhf2012/origin/profiler"
     "github.com/duanhf2012/origin/rpc"
-    originSync "github.com/duanhf2012/origin/util/sync"
     "github.com/duanhf2012/origin/util/timer"
     "reflect"
     "runtime"
     "strconv"
     "sync"
     "sync/atomic"
+    "github.com/duanhf2012/origin/concurrent"
 )

-var closeSig chan bool
 var timerDispatcherLen = 100000
+var maxServiceEventChannelNum = 2000000

 type IService interface {
+    concurrent.IConcurrent
     Init(iService IService, getClientFun rpc.FuncRpcClient, getServerFun rpc.FuncRpcServer, serviceCfg interface{})
-    Wait()
+    Stop()
     Start()

     OnSetup(iService IService)

@@ -42,14 +42,9 @@ type IService interface {
     OpenProfiler()
 }

-// eventPool的内存池,缓存Event
-var maxServiceEventChannel = 2000000
-var eventPool = originSync.NewPoolEx(make(chan originSync.IPoolData, maxServiceEventChannel), func() originSync.IPoolData {
-    return &event.Event{}
-})
-
 type Service struct {
     Module

     rpcHandler rpc.RpcHandler //rpc
     name string //service name
     wg sync.WaitGroup

@@ -61,6 +56,7 @@ type Service struct {
     nodeEventLister rpc.INodeListener
     discoveryServiceLister rpc.IDiscoveryServiceListener
     chanEvent chan event.IEvent
+    closeSig chan struct{}
 }

 // RpcConnEvent Node结点连接事件

@@ -77,10 +73,7 @@ type DiscoveryServiceEvent struct{
 }

 func SetMaxServiceChannel(maxEventChannel int){
-    maxServiceEventChannel = maxEventChannel
-    eventPool = originSync.NewPoolEx(make(chan originSync.IPoolData, maxServiceEventChannel), func() originSync.IPoolData {
-        return &event.Event{}
-    })
+    maxServiceEventChannelNum = maxEventChannel
 }

 func (rpcEventData *DiscoveryServiceEvent) GetEventType() event.EventType{
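Note: with the event pool gone, SetMaxServiceChannel only resizes the per-service event channel, so it has to run before Service.Init creates chanEvent. A minimal sketch of an assumed call site (the timing requirement and the capacity value are illustrative, not taken from this diff):

```go
package main

import "github.com/duanhf2012/origin/service"

func main() {
	// Hypothetical call site: raise the event channel capacity before any
	// service is initialized, so Service.Init creates chanEvent with the
	// new value. 5000000 is only an example figure.
	service.SetMaxServiceChannel(5000000)

	// ... register services and start the node afterwards ...
}
```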
@@ -105,9 +98,10 @@ func (s *Service) OpenProfiler() {
 }

 func (s *Service) Init(iService IService, getClientFun rpc.FuncRpcClient, getServerFun rpc.FuncRpcServer, serviceCfg interface{}) {
+    s.closeSig = make(chan struct{})
     s.dispatcher = timer.NewDispatcher(timerDispatcherLen)
     if s.chanEvent == nil {
-        s.chanEvent = make(chan event.IEvent, maxServiceEventChannel)
+        s.chanEvent = make(chan event.IEvent, maxServiceEventChannelNum)
     }

     s.rpcHandler.InitRpcHandler(iService.(rpc.IRpcHandler), getClientFun, getServerFun, iService.(rpc.IRpcHandlerChannel))

@@ -123,29 +117,42 @@ func (s *Service) Init(iService IService,getClientFun rpc.FuncRpcClient,getServe
     s.eventProcessor.Init(s)
     s.eventHandler = event.NewEventHandler()
     s.eventHandler.Init(s.eventProcessor)
+    s.Module.IConcurrent = &concurrent.Concurrent{}
 }

 func (s *Service) Start() {
     s.startStatus = true
+    var waitRun sync.WaitGroup

     for i := int32(0); i < s.goroutineNum; i++ {
         s.wg.Add(1)
+        waitRun.Add(1)
         go func() {
+            log.SRelease(s.GetName(), " service is running")
+            waitRun.Done()
             s.Run()
         }()
     }

+    waitRun.Wait()
 }

 func (s *Service) Run() {
-    log.SDebug("Start running Service ", s.GetName())
     defer s.wg.Done()
     var bStop = false

+    concurrent := s.IConcurrent.(*concurrent.Concurrent)
+    concurrentCBChannel := concurrent.GetCallBackChannel()

     s.self.(IService).OnStart()
     for {
         var analyzer *profiler.Analyzer
         select {
-        case <-closeSig:
+        case <-s.closeSig:
             bStop = true
+            concurrent.Close()
+        case cb := <-concurrentCBChannel:
+            concurrent.DoCallback(cb)
         case ev := <-s.chanEvent:
             switch ev.GetEventType() {
             case event.ServiceRpcRequestEvent:

@@ -168,7 +175,7 @@ func (s *Service) Run() {
                 analyzer.Pop()
                 analyzer = nil
             }
-            eventPool.Put(cEvent)
+            event.DeleteEvent(cEvent)
         case event.ServiceRpcResponseEvent:
             cEvent, ok := ev.(*event.Event)
             if ok == false {

@@ -188,7 +195,7 @@ func (s *Service) Run() {
                 analyzer.Pop()
                 analyzer = nil
             }
-            eventPool.Put(cEvent)
+            event.DeleteEvent(cEvent)
         default:
             if s.profiler != nil {
                 analyzer = s.profiler.Push("[SEvent]" + strconv.Itoa(int(ev.GetEventType())))

@@ -238,8 +245,8 @@ func (s *Service) Release(){
             log.SError("core dump info[", errString, "]\n", string(buf[:l]))
         }
     }()

     s.self.OnRelease()
-    log.SDebug("Release Service ", s.GetName())
 }

 func (s *Service) OnRelease(){

@@ -249,8 +256,11 @@ func (s *Service) OnInit() error {
     return nil
 }

-func (s *Service) Wait(){
+func (s *Service) Stop(){
+    log.SRelease("stop ", s.GetName(), " service ")
+    close(s.closeSig)
     s.wg.Wait()
+    log.SRelease(s.GetName(), " service has been stopped")
 }

 func (s *Service) GetServiceCfg() interface{} {
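The Wait→Stop change replaces the old global closeSig with a per-service channel that Stop closes before waiting on the WaitGroup. A self-contained sketch of that shutdown pattern in plain Go (independent of origin's types, for illustration only):

```go
package main

import (
	"fmt"
	"sync"
)

// worker mirrors the Service.Run loop: it returns once closeSig is closed.
func worker(id int, closeSig chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	<-closeSig
	fmt.Println("worker", id, "stopped")
}

func main() {
	closeSig := make(chan struct{})
	var wg sync.WaitGroup

	for i := 0; i < 3; i++ {
		wg.Add(1)
		go worker(i, closeSig, &wg)
	}

	close(closeSig) // Stop(): a single close wakes every goroutine
	wg.Wait()       // then wait for all of them to return
}
```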
@@ -320,9 +330,8 @@ func (s *Service) UnRegDiscoverListener(rpcLister rpc.INodeListener) {
     UnRegDiscoveryServiceEventFun(s.GetName())
 }

 func (s *Service) PushRpcRequest(rpcRequest *rpc.RpcRequest) error {
-    ev := eventPool.Get().(*event.Event)
+    ev := event.NewEvent()
     ev.Type = event.ServiceRpcRequestEvent
     ev.Data = rpcRequest

@@ -330,7 +339,7 @@ func (s *Service) PushRpcRequest(rpcRequest *rpc.RpcRequest) error{
 }

 func (s *Service) PushRpcResponse(call *rpc.Call) error {
-    ev := eventPool.Get().(*event.Event)
+    ev := event.NewEvent()
     ev.Type = event.ServiceRpcResponseEvent
     ev.Data = call

@@ -342,7 +351,7 @@ func (s *Service) PushEvent(ev event.IEvent) error{
 }

 func (s *Service) pushEvent(ev event.IEvent) error {
-    if len(s.chanEvent) >= maxServiceEventChannel {
+    if len(s.chanEvent) >= maxServiceEventChannelNum {
         err := errors.New("The event channel in the service is full")
         log.SError(err.Error())
         return err
@@ -19,9 +19,7 @@ func init(){
     setupServiceList = []IService{}
 }

-func Init(chanCloseSig chan bool) {
-    closeSig = chanCloseSig
-
+func Init() {
     for _, s := range setupServiceList {
         err := s.OnInit()
         if err != nil {

@@ -57,8 +55,8 @@ func Start(){
     }
 }

-func WaitStop() {
+func StopAllService() {
     for i := len(setupServiceList) - 1; i >= 0; i-- {
-        setupServiceList[i].Wait()
+        setupServiceList[i].Stop()
     }
 }
@@ -68,34 +68,39 @@ func (s *Session) NextSeq(db string, collection string, id interface{}) (int, er
     after := options.After
     updateOpts := options.FindOneAndUpdateOptions{ReturnDocument: &after}
-    err := s.Client.Database(db).Collection(collection).FindOneAndUpdate(ctxTimeout, bson.M{"_id": id}, bson.M{"$inc": bson.M{"Seq": 1}},&updateOpts).Decode(&res)
+    err := s.Client.Database(db).Collection(collection).FindOneAndUpdate(ctxTimeout, bson.M{"_id": id}, bson.M{"$inc": bson.M{"Seq": 1}}, &updateOpts).Decode(&res)
     return res.Seq, err
 }

-//indexKeys[索引][每个索引key字段]
-func (s *Session) EnsureIndex(db string, collection string, indexKeys [][]string, bBackground bool, sparse bool) error {
-    return s.ensureIndex(db, collection, indexKeys, bBackground, false, sparse)
+// indexKeys[索引][每个索引key字段]
+func (s *Session) EnsureIndex(db string, collection string, indexKeys [][]string, bBackground bool, sparse bool, asc bool) error {
+    return s.ensureIndex(db, collection, indexKeys, bBackground, false, sparse, asc)
 }

-//indexKeys[索引][每个索引key字段]
-func (s *Session) EnsureUniqueIndex(db string, collection string, indexKeys [][]string, bBackground bool, sparse bool) error {
-    return s.ensureIndex(db, collection, indexKeys, bBackground, true, sparse)
+// indexKeys[索引][每个索引key字段]
+func (s *Session) EnsureUniqueIndex(db string, collection string, indexKeys [][]string, bBackground bool, sparse bool, asc bool) error {
+    return s.ensureIndex(db, collection, indexKeys, bBackground, true, sparse, asc)
 }

-//keys[索引][每个索引key字段]
-func (s *Session) ensureIndex(db string, collection string, indexKeys [][]string, bBackground bool, unique bool, sparse bool) error {
+// keys[索引][每个索引key字段]
+func (s *Session) ensureIndex(db string, collection string, indexKeys [][]string, bBackground bool, unique bool, sparse bool, asc bool) error {
     var indexes []mongo.IndexModel
     for _, keys := range indexKeys {
         keysDoc := bsonx.Doc{}
         for _, key := range keys {
-            keysDoc = keysDoc.Append(key, bsonx.Int32(1))
+            if asc {
+                keysDoc = keysDoc.Append(key, bsonx.Int32(1))
+            } else {
+                keysDoc = keysDoc.Append(key, bsonx.Int32(-1))
+            }
         }

-        options:= options.Index().SetUnique(unique).SetBackground(bBackground)
+        options := options.Index().SetUnique(unique).SetBackground(bBackground)
         if sparse == true {
             options.SetSparse(true)
         }
-        indexes = append(indexes, mongo.IndexModel{Keys: keysDoc, Options:options })
+        indexes = append(indexes, mongo.IndexModel{Keys: keysDoc, Options: options})
     }

     ctxTimeout, cancel := context.WithTimeout(context.Background(), s.maxOperatorTimeOut)
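The new trailing asc parameter picks the key direction for every column of the index. A hedged usage sketch (the database name, collection and Session wiring are assumed, not taken from this diff):

```go
package example

import (
	"github.com/duanhf2012/origin/log"
	"github.com/duanhf2012/origin/sysmodule/mongodbmodule"
)

// ensureCustomerIndex is a hypothetical helper; "gamedb" and the key names
// are placeholders. The final argument is the new asc flag: true builds the
// keys ascending (1), false would build them descending (-1).
func ensureCustomerIndex(s *mongodbmodule.Session) error {
	indexKeys := [][]string{{"Customer", "Topic"}}
	if err := s.EnsureUniqueIndex("gamedb", "SysCustomer", indexKeys, true, true, true); err != nil {
		log.SError("EnsureUniqueIndex failed: ", err.Error())
		return err
	}
	return nil
}
```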
@@ -25,6 +25,7 @@ type CustomerSubscriber struct {
     customerId string

     isStop int32 //退出标记
+    topicCache []TopicData // 从消息队列中取出来的消息的缓存
 }

 const DefaultOneBatchQuantity = 1000

@@ -79,6 +80,7 @@ func (cs *CustomerSubscriber) trySetSubscriberBaseInfo(rpcHandler rpc.IRpcHandle
         cs.StartIndex = uint64(zeroTime.Unix() << 32)
     }

+    cs.topicCache = make([]TopicData, oneBatchQuantity)
     return nil
 }

@@ -156,14 +158,14 @@ func (cs *CustomerSubscriber) SubscribeRun() {

 func (cs *CustomerSubscriber) subscribe() bool {
     //先从内存中查找
-    topicData, ret := cs.subscriber.queue.FindData(cs.StartIndex, cs.oneBatchQuantity)
+    topicData, ret := cs.subscriber.queue.FindData(cs.StartIndex+1, cs.oneBatchQuantity, cs.topicCache[:0])
     if ret == true {
         cs.publishToCustomer(topicData)
         return true
     }

     //从持久化数据中来找
-    topicData = cs.subscriber.dataPersist.FindTopicData(cs.topic, cs.StartIndex, int64(cs.oneBatchQuantity))
+    topicData = cs.subscriber.dataPersist.FindTopicData(cs.topic, cs.StartIndex, int64(cs.oneBatchQuantity), cs.topicCache[:0])
     return cs.publishToCustomer(topicData)
 }

@@ -188,7 +190,7 @@ func (cs *CustomerSubscriber) publishToCustomer(topicData []TopicData) bool {
     if len(topicData) == 0 {
         //没有任何数据待一秒吧
-        time.Sleep(time.Millisecond * 100)
+        time.Sleep(time.Second * 1)
         return true
     }

@@ -211,7 +213,7 @@ func (cs *CustomerSubscriber) publishToCustomer(topicData []TopicData) bool {
     }

     //推送数据
-    err := cs.CallNode(cs.fromNodeId, cs.callBackRpcMethod, &dbQueuePublishReq, &dbQueuePushRes)
+    err := cs.CallNodeWithTimeout(4*time.Minute, cs.fromNodeId, cs.callBackRpcMethod, &dbQueuePublishReq, &dbQueuePushRes)
     if err != nil {
         time.Sleep(time.Second * 1)
         continue
@@ -49,13 +49,22 @@ func (mq *MemoryQueue) findData(startPos int32, startIndex uint64, limit int32)
     if findStartPos <= mq.tail {
         findEndPos = mq.tail + 1
     } else {
-        findEndPos = int32(cap(mq.topicQueue))
+        findEndPos = int32(len(mq.topicQueue))
+    }
+
+    if findStartPos >= findEndPos {
+        return nil, false
+    }
+
+    // 要取的Seq 比内存中最小的数据的Seq还小,那么需要返回错误
+    if mq.topicQueue[findStartPos].Seq > startIndex {
+        return nil, false
     }

     //二分查找位置
     pos := int32(algorithms.BiSearch(mq.topicQueue[findStartPos:findEndPos], startIndex, 1))
     if pos == -1 {
-        return nil, true
+        return nil, false
     }

     pos += findStartPos

@@ -69,29 +78,31 @@ func (mq *MemoryQueue) findData(startPos int32, startIndex uint64, limit int32)
 }

 // FindData 返回参数[]TopicData 表示查找到的数据,nil表示无数据。bool表示是否不应该在内存中来查
-func (mq *MemoryQueue) FindData(startIndex uint64, limit int32) ([]TopicData, bool) {
+func (mq *MemoryQueue) FindData(startIndex uint64, limit int32, dataQueue []TopicData) ([]TopicData, bool) {
     mq.locker.RLock()
     defer mq.locker.RUnlock()

     //队列为空时,应该从数据库查找
     if mq.head == mq.tail {
         return nil, false
-    }
-
-    /*
-    //先判断startIndex是否比第一个元素要大
-    headTopic := (mq.head + 1) % int32(len(mq.topicQueue))
-    //此时需要从持久化数据中取
-    if startIndex+1 > mq.topicQueue[headTopic].Seq {
-        return nil, false
-    }
-    */
-
-    retData, ret := mq.findData(mq.head+1, startIndex, limit)
-    if mq.head <= mq.tail || ret == true {
-        return retData, true
-    }
-
-    //如果是正常head在后,尾在前,从数组0下标开始找到tail
-    return mq.findData(0, startIndex, limit)
+    } else if mq.head < mq.tail {
+        // 队列没有折叠
+        datas, ret := mq.findData(mq.head+1, startIndex, limit)
+        if ret {
+            dataQueue = append(dataQueue, datas...)
+        }
+        return dataQueue, ret
+    } else {
+        // 折叠先找后面的部分
+        datas, ret := mq.findData(mq.head+1, startIndex, limit)
+        if ret {
+            dataQueue = append(dataQueue, datas...)
+            return dataQueue, ret
+        }
+
+        // 后面没找到,从前面开始找
+        datas, ret = mq.findData(0, startIndex, limit)
+        dataQueue = append(dataQueue, datas...)
+        return dataQueue, ret
+    }
 }
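FindData now distinguishes a non-wrapped queue (head < tail) from a wrapped one, searching the segment after head first and then the front of the array, and appends the hits into the caller-supplied buffer. A standalone sketch of that two-segment lookup (plain uint64 sequence numbers stand in for TopicData; the helper names are illustrative, not origin's API):

```go
package main

import (
	"fmt"
	"sort"
)

// findFrom does a binary search for the first element >= seq inside one
// contiguous segment, mirroring the role findData plays in the diff.
func findFrom(segment []uint64, seq uint64) ([]uint64, bool) {
	i := sort.Search(len(segment), func(k int) bool { return segment[k] >= seq })
	if i == len(segment) {
		return nil, false
	}
	return segment[i:], true
}

// findWrapped searches the back segment first, then the front one,
// the same order the wrapped branch of FindData uses.
func findWrapped(back, front []uint64, seq uint64) ([]uint64, bool) {
	if data, ok := findFrom(back, seq); ok {
		return data, true
	}
	return findFrom(front, seq)
}

func main() {
	// A wrapped ring: back holds the older half, front the newer half.
	back := []uint64{105, 106, 107}
	front := []uint64{108, 109, 110}
	fmt.Println(findWrapped(back, front, 109)) // [109 110] true
}
```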
@@ -15,8 +15,8 @@ type QueueDataPersist interface {
     OnExit()
     OnReceiveTopicData(topic string, topicData []TopicData) //当收到推送过来的数据时
     OnPushTopicDataToCustomer(topic string, topicData []TopicData) //当推送数据到Customer时回调
-    PersistTopicData(topic string, topicData []TopicData, retryCount int) ([]TopicData, bool) //持久化数据,失败则返回false,上层会重复尝试,直到成功,建议在函数中加入次数,超过次数则返回true
+    PersistTopicData(topic string, topicData []TopicData, retryCount int) ([]TopicData, []TopicData, bool) //持久化数据,失败则返回false,上层会重复尝试,直到成功,建议在函数中加入次数,超过次数则返回true
-    FindTopicData(topic string, startIndex uint64, limit int64) []TopicData //查找数据,参数bool代表数据库查找是否成功
+    FindTopicData(topic string, startIndex uint64, limit int64, topicBuff []TopicData) []TopicData //查找数据,参数bool代表数据库查找是否成功
     LoadCustomerIndex(topic string, customerId string) (uint64, bool) //false时代表获取失败,一般是读取错误,会进行重试。如果不存在时,返回(0,true)
     GetIndex(topicData *TopicData) uint64 //通过topic数据获取进度索引号
     PersistIndex(topic string, customerId string, index uint64) //持久化进度索引号
@@ -1,18 +1,49 @@
 package messagequeueservice

 import (
+    "errors"
     "fmt"
     "github.com/duanhf2012/origin/log"
     "github.com/duanhf2012/origin/service"
     "github.com/duanhf2012/origin/sysmodule/mongodbmodule"
     "go.mongodb.org/mongo-driver/bson"
     "go.mongodb.org/mongo-driver/mongo/options"
-    "sunserver/common/util"
     "time"
 )

 const MaxDays = 180

+type DataType interface {
+    int | uint | int64 | uint64 | float32 | float64 | int32 | uint32 | int16 | uint16
+}
+
+func convertToNumber[DType DataType](val interface{}) (error, DType) {
+    switch val.(type) {
+    case int64:
+        return nil, DType(val.(int64))
+    case int:
+        return nil, DType(val.(int))
+    case uint:
+        return nil, DType(val.(uint))
+    case uint64:
+        return nil, DType(val.(uint64))
+    case float32:
+        return nil, DType(val.(float32))
+    case float64:
+        return nil, DType(val.(float64))
+    case int32:
+        return nil, DType(val.(int32))
+    case uint32:
+        return nil, DType(val.(uint32))
+    case int16:
+        return nil, DType(val.(int16))
+    case uint16:
+        return nil, DType(val.(uint16))
+    }
+
+    return errors.New("unsupported type"), 0
+}
+
 type MongoPersist struct {
     service.Module
     mongo mongodbmodule.MongoModule
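convertToNumber replaces the dependency on sunserver/common/util with a local generic helper (Go 1.18+), returning the error first exactly as GetIndex expects. A small self-contained sketch of the same idea (the constraint here is trimmed down just to keep the example short):

```go
package main

import (
	"errors"
	"fmt"
)

// number is a trimmed copy of the DataType constraint from the diff.
type number interface {
	int | int64 | uint64 | float64
}

// toNumber mirrors convertToNumber: a type switch on the dynamic type,
// converted to the requested numeric type parameter.
func toNumber[T number](val interface{}) (error, T) {
	switch v := val.(type) {
	case int:
		return nil, T(v)
	case int64:
		return nil, T(v)
	case uint64:
		return nil, T(v)
	case float64:
		return nil, T(v)
	}
	return errors.New("unsupported type"), 0
}

func main() {
	// bson frequently decodes numeric _id values as int64; GetIndex in the
	// diff converts them to uint64 the same way.
	err, seq := toNumber[uint64](int64(42))
	fmt.Println(err, seq) // <nil> 42
}
```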
@@ -20,8 +51,6 @@ type MongoPersist struct {
     url string //连接url
     dbName string //数据库名称
     retryCount int //落地数据库重试次数
-
-    topic []TopicData //用于临时缓存
 }

 const CustomerCollectName = "SysCustomer"

@@ -48,7 +77,7 @@ func (mp *MongoPersist) OnInit() error {
     keys = append(keys, "Customer", "Topic")
     IndexKey = append(IndexKey, keys)
     s := mp.mongo.TakeSession()
-    if err := s.EnsureUniqueIndex(mp.dbName, CustomerCollectName, IndexKey, true, true); err != nil {
+    if err := s.EnsureUniqueIndex(mp.dbName, CustomerCollectName, IndexKey, true, true, true); err != nil {
         log.SError("EnsureUniqueIndex is fail ", err.Error())
         return err
     }

@@ -85,14 +114,6 @@ func (mp *MongoPersist) ReadCfg() error {
     return nil
 }

-func (mp *MongoPersist) getTopicBuff(limit int) []TopicData {
-    if cap(mp.topic) < limit {
-        mp.topic = make([]TopicData, limit)
-    }
-
-    return mp.topic[:0]
-}
-
 func (mp *MongoPersist) OnExit() {
 }
@@ -123,7 +144,6 @@ func (mp *MongoPersist) OnReceiveTopicData(topic string, topicData []TopicData)

 // OnPushTopicDataToCustomer 当推送数据到Customer时回调
 func (mp *MongoPersist) OnPushTopicDataToCustomer(topic string, topicData []TopicData) {
-
 }

 // PersistTopicData 持久化数据

@@ -142,20 +162,25 @@ func (mp *MongoPersist) persistTopicData(collectionName string, topicData []Topi
     _, err := s.Collection(mp.dbName, collectionName).InsertMany(ctx, documents)
     if err != nil {
-        log.SError("PersistTopicData InsertMany fail,collect name is ", collectionName)
+        log.SError("PersistTopicData InsertMany fail,collect name is ", collectionName, " error:", err.Error())

         //失败最大重试数量
         return retryCount >= mp.retryCount
     }

-    //log.SRelease("+++++++++====", time.Now().UnixNano())
     return true
 }

+func (mp *MongoPersist) IsSameDay(timestamp1 int64, timestamp2 int64) bool {
+    t1 := time.Unix(timestamp1, 0)
+    t2 := time.Unix(timestamp2, 0)
+    return t1.Year() == t2.Year() && t1.Month() == t2.Month() && t1.Day() == t2.Day()
+}
+
 // PersistTopicData 持久化数据
-func (mp *MongoPersist) PersistTopicData(topic string, topicData []TopicData, retryCount int) ([]TopicData, bool) {
+func (mp *MongoPersist) PersistTopicData(topic string, topicData []TopicData, retryCount int) ([]TopicData, []TopicData, bool) {
     if len(topicData) == 0 {
-        return nil, true
+        return nil, nil, true
     }
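IsSameDay compares calendar fields instead of the raw date value packed into Seq, so the day split now follows the local calendar day. A quick self-contained illustration of that semantics (plain Go, not origin code; time.Unix uses the local time zone):

```go
package main

import (
	"fmt"
	"time"
)

// sameDay is the same year/month/day comparison the diff adds to MongoPersist.
func sameDay(ts1, ts2 int64) bool {
	t1 := time.Unix(ts1, 0)
	t2 := time.Unix(ts2, 0)
	return t1.Year() == t2.Year() && t1.Month() == t2.Month() && t1.Day() == t2.Day()
}

func main() {
	midnight := time.Date(2023, 1, 2, 0, 0, 0, 0, time.Local)
	before := midnight.Add(-time.Minute).Unix()
	after := midnight.Add(time.Minute).Unix()
	fmt.Println(sameDay(before, after))     // false: different calendar days
	fmt.Println(sameDay(after, after+3600)) // true: same day, an hour apart
}
```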
|
||||||
preDate := topicData[0].Seq >> 32
|
preDate := topicData[0].Seq >> 32
|
||||||
@@ -163,7 +188,7 @@ func (mp *MongoPersist) PersistTopicData(topic string, topicData []TopicData, re
|
|||||||
for findPos = 1; findPos < len(topicData); findPos++ {
|
for findPos = 1; findPos < len(topicData); findPos++ {
|
||||||
newDate := topicData[findPos].Seq >> 32
|
newDate := topicData[findPos].Seq >> 32
|
||||||
//说明换天了
|
//说明换天了
|
||||||
if preDate != newDate {
|
if mp.IsSameDay(int64(preDate),int64(newDate)) == false {
|
||||||
break
|
break
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -172,15 +197,15 @@ func (mp *MongoPersist) PersistTopicData(topic string, topicData []TopicData, re
|
|||||||
ret := mp.persistTopicData(collectName, topicData[:findPos], retryCount)
|
ret := mp.persistTopicData(collectName, topicData[:findPos], retryCount)
|
||||||
//如果失败,下次重试
|
//如果失败,下次重试
|
||||||
if ret == false {
|
if ret == false {
|
||||||
return nil, false
|
return nil, nil, false
|
||||||
}
|
}
|
||||||
|
|
||||||
//如果成功
|
//如果成功
|
||||||
return topicData[findPos:len(topicData)], true
|
return topicData[findPos:len(topicData)], topicData[0:findPos], true
|
||||||
}
|
}
|
||||||
|
|
||||||
// FindTopicData 查找数据
|
// FindTopicData 查找数据
|
||||||
func (mp *MongoPersist) findTopicData(topic string, startIndex uint64, limit int64) ([]TopicData, bool) {
|
func (mp *MongoPersist) findTopicData(topic string, startIndex uint64, limit int64,topicBuff []TopicData) ([]TopicData, bool) {
|
||||||
s := mp.mongo.TakeSession()
|
s := mp.mongo.TakeSession()
|
||||||
|
|
||||||
|
|
||||||
@@ -222,7 +247,6 @@ func (mp *MongoPersist) findTopicData(topic string, startIndex uint64, limit int
|
|||||||
}
|
}
|
||||||
|
|
||||||
//序列化返回
|
//序列化返回
|
||||||
topicBuff := mp.getTopicBuff(int(limit))
|
|
||||||
for i := 0; i < len(res); i++ {
|
for i := 0; i < len(res); i++ {
|
||||||
rawData, errM := bson.Marshal(res[i])
|
rawData, errM := bson.Marshal(res[i])
|
||||||
if errM != nil {
|
if errM != nil {
|
||||||
@@ -257,7 +281,7 @@ func (mp *MongoPersist) getCollectCount(topic string,today string) (int64 ,error
|
|||||||
}
|
}
|
||||||
|
|
||||||
// FindTopicData 查找数据
|
// FindTopicData 查找数据
|
||||||
func (mp *MongoPersist) FindTopicData(topic string, startIndex uint64, limit int64) []TopicData {
|
func (mp *MongoPersist) FindTopicData(topic string, startIndex uint64, limit int64,topicBuff []TopicData) []TopicData {
|
||||||
//某表找不到,一直往前找,找到当前置为止
|
//某表找不到,一直往前找,找到当前置为止
|
||||||
for days := 1; days <= MaxDays; days++ {
|
for days := 1; days <= MaxDays; days++ {
|
||||||
//是否可以跳天
|
//是否可以跳天
|
||||||
@@ -281,7 +305,7 @@ func (mp *MongoPersist) FindTopicData(topic string, startIndex uint64, limit int
|
|||||||
}
|
}
|
||||||
|
|
||||||
//从startIndex开始一直往后查
|
//从startIndex开始一直往后查
|
||||||
topicData, isSucc := mp.findTopicData(topic, startIndex, limit)
|
topicData, isSucc := mp.findTopicData(topic, startIndex, limit,topicBuff)
|
||||||
//有数据或者数据库出错时返回,返回后,会进行下一轮的查询遍历
|
//有数据或者数据库出错时返回,返回后,会进行下一轮的查询遍历
|
||||||
if len(topicData) > 0 || isSucc == false {
|
if len(topicData) > 0 || isSucc == false {
|
||||||
return topicData
|
return topicData
|
||||||
@@ -370,7 +394,7 @@ func (mp *MongoPersist) GetIndex(topicData *TopicData) uint64 {
|
|||||||
|
|
||||||
for _, e := range document {
|
for _, e := range document {
|
||||||
if e.Key == "_id" {
|
if e.Key == "_id" {
|
||||||
errC, seq := util.ConvertToNumber[uint64](e.Value)
|
errC, seq := convertToNumber[uint64](e.Value)
|
||||||
if errC != nil {
|
if errC != nil {
|
||||||
log.Error("value is error:%s,%+v, ", errC.Error(), e.Value)
|
log.Error("value is error:%s,%+v, ", errC.Error(), e.Value)
|
||||||
}
|
}
|
||||||
@@ -394,8 +418,7 @@ func (mp *MongoPersist) PersistIndex(topic string, customerId string, index uint
|
|||||||
|
|
||||||
ctx, cancel := s.GetDefaultContext()
|
ctx, cancel := s.GetDefaultContext()
|
||||||
defer cancel()
|
defer cancel()
|
||||||
ret, err := s.Collection(mp.dbName, CustomerCollectName).UpdateOne(ctx, condition, updata, UpdateOptionsOpts...)
|
_, err := s.Collection(mp.dbName, CustomerCollectName).UpdateOne(ctx, condition, updata, UpdateOptionsOpts...)
|
||||||
fmt.Println(ret)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.SError("PersistIndex fail :", err.Error())
|
log.SError("PersistIndex fail :", err.Error())
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -27,7 +27,7 @@ func (ss *Subscriber) PushTopicDataToQueue(topic string, topics []TopicData) {
     }
 }

-func (ss *Subscriber) PersistTopicData(topic string, topics []TopicData, retryCount int) ([]TopicData, bool) {
+func (ss *Subscriber) PersistTopicData(topic string, topics []TopicData, retryCount int) ([]TopicData, []TopicData, bool) {
     return ss.dataPersist.PersistTopicData(topic, topics, retryCount)
 }
@@ -113,25 +113,28 @@ func (tr *TopicRoom) topicRoomRun() {
     }

     //如果落地失败,最大重试maxTryPersistNum次数
-    var ret bool
-    for j := 0; j < maxTryPersistNum; {
+    for retryCount := 0; retryCount < maxTryPersistNum; {
         //持久化处理
-        stagingBuff, ret = tr.PersistTopicData(tr.topic, stagingBuff, j+1)
-        //如果存档成功,并且有后续批次,则继续存档
-        if ret == true && len(stagingBuff) > 0 {
-            //二次存档不计次数
-            continue
-        }
-        //计数增加一次,并且等待100ms,继续重试
-        j += 1
-        if ret == false {
-            time.Sleep(time.Millisecond * 100)
-            continue
-        }
-
-        tr.PushTopicDataToQueue(tr.topic, stagingBuff)
-        break
+        stagingBuff, savedBuff, ret := tr.PersistTopicData(tr.topic, stagingBuff, retryCount+1)
+        if ret == true {
+            // 1. 把成功存储的数据放入内存中
+            if len(savedBuff) > 0 {
+                tr.PushTopicDataToQueue(tr.topic, savedBuff)
+            }
+
+            // 2. 如果存档成功,并且有后续批次,则继续存档
+            if ret == true && len(stagingBuff) > 0 {
+                continue
+            }
+
+            // 3. 成功了,跳出
+            break
+        } else {
+            //计数增加一次,并且等待100ms,继续重试
+            retryCount++
+            time.Sleep(time.Millisecond * 100)
+        }
     }
 }
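The reworked loop retries persistence up to maxTryPersistNum times, pushes each successfully stored batch into the memory queue, and only counts a failed attempt toward the retry budget. A generic sketch of that bounded-retry shape (the persist function below is a stand-in, not origin's API):

```go
package main

import (
	"fmt"
	"time"
)

// persistBatch is a stand-in for PersistTopicData: it returns the items that
// still need saving, the items just saved, and whether the attempt succeeded.
func persistBatch(pending []int, attempt int) (remaining []int, saved []int, ok bool) {
	if attempt == 1 {
		return pending, nil, false // first attempt fails, to exercise the retry path
	}
	return nil, pending, true
}

func main() {
	const maxTry = 3
	pending := []int{1, 2, 3}

	for retry := 0; retry < maxTry; {
		var saved []int
		var ok bool
		pending, saved, ok = persistBatch(pending, retry+1)
		if ok {
			if len(saved) > 0 {
				fmt.Println("stored batch:", saved) // the real code pushes this to the memory queue
			}
			if len(pending) > 0 {
				continue // more batches to store, no retry consumed
			}
			break // everything stored
		}
		retry++ // only failures consume the retry budget
		time.Sleep(10 * time.Millisecond)
	}
}
```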
@@ -6,9 +6,9 @@ import (
     "github.com/duanhf2012/origin/rpc"
     "github.com/duanhf2012/origin/service"
     "github.com/duanhf2012/origin/sysmodule/mongodbmodule"
-    "github.com/duanhf2012/origin/util/coroutine"
     "go.mongodb.org/mongo-driver/bson"
     "go.mongodb.org/mongo-driver/mongo/options"
+    "runtime"
     "sync"
     "sync/atomic"
     "time"

@@ -18,10 +18,11 @@ const batchRemoveNum = 128 //一切删除的最大数量

 // RankDataDB 排行表数据
 type RankDataDB struct {
-    Id uint64 `bson:"_id,omitempty"`
-    RefreshTime int64 `bson:"RefreshTime,omitempty"`
-    SortData []int64 `bson:"SortData,omitempty"`
-    Data []byte `bson:"Data,omitempty"`
+    Id uint64 `bson:"_id"`
+    RefreshTime int64 `bson:"RefreshTime"`
+    SortData []int64 `bson:"SortData"`
+    Data []byte `bson:"Data"`
+    ExData []int64 `bson:"ExData"`
 }

 // MongoPersist持久化Module
@@ -70,7 +71,9 @@ func (mp *MongoPersist) OnInit() error {
|
|||||||
}
|
}
|
||||||
|
|
||||||
//开启协程
|
//开启协程
|
||||||
coroutine.GoRecover(mp.persistCoroutine,-1)
|
mp.waitGroup.Add(1)
|
||||||
|
go mp.persistCoroutine()
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -186,6 +189,9 @@ func (mp *MongoPersist) loadFromDB(rankId uint64,rankCollectName string) error{
|
|||||||
rankData.Data = rankDataDB.Data
|
rankData.Data = rankDataDB.Data
|
||||||
rankData.Key = rankDataDB.Id
|
rankData.Key = rankDataDB.Id
|
||||||
rankData.SortData = rankDataDB.SortData
|
rankData.SortData = rankDataDB.SortData
|
||||||
|
for _,eData := range rankDataDB.ExData{
|
||||||
|
rankData.ExData = append(rankData.ExData,&rpc.ExtendIncData{InitValue:eData})
|
||||||
|
}
|
||||||
|
|
||||||
//更新到排行榜
|
//更新到排行榜
|
||||||
rankSkip.UpsetRank(&rankData,rankDataDB.RefreshTime,true)
|
rankSkip.UpsetRank(&rankData,rankDataDB.RefreshTime,true)
|
||||||
@@ -256,7 +262,6 @@ func (mp *MongoPersist) JugeTimeoutSave() bool{
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (mp *MongoPersist) persistCoroutine(){
|
func (mp *MongoPersist) persistCoroutine(){
|
||||||
mp.waitGroup.Add(1)
|
|
||||||
defer mp.waitGroup.Done()
|
defer mp.waitGroup.Done()
|
||||||
for atomic.LoadInt32(&mp.stop)==0 || mp.hasPersistData(){
|
for atomic.LoadInt32(&mp.stop)==0 || mp.hasPersistData(){
|
||||||
//间隔时间sleep
|
//间隔时间sleep
|
||||||
@@ -287,6 +292,15 @@ func (mp *MongoPersist) hasPersistData() bool{
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (mp *MongoPersist) saveToDB(){
|
func (mp *MongoPersist) saveToDB(){
|
||||||
|
defer func() {
|
||||||
|
if r := recover(); r != nil {
|
||||||
|
buf := make([]byte, 4096)
|
||||||
|
l := runtime.Stack(buf, false)
|
||||||
|
errString := fmt.Sprint(r)
|
||||||
|
log.SError(" Core dump info[", errString, "]\n", string(buf[:l]))
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
|
||||||
//1.copy数据
|
//1.copy数据
|
||||||
mp.Lock()
|
mp.Lock()
|
||||||
mapRemoveRankData := mp.mapRemoveRankData
|
mapRemoveRankData := mp.mapRemoveRankData
|
||||||
@@ -343,7 +357,7 @@ func (mp *MongoPersist) removeRankData(rankId uint64,keys []uint64) bool {
|
|||||||
|
|
||||||
func (mp *MongoPersist) upsertToDB(collectName string,rankData *RankData) error{
|
func (mp *MongoPersist) upsertToDB(collectName string,rankData *RankData) error{
|
||||||
condition := bson.D{{"_id", rankData.Key}}
|
condition := bson.D{{"_id", rankData.Key}}
|
||||||
upsert := bson.M{"_id":rankData.Key,"RefreshTime": rankData.refreshTimestamp, "SortData": rankData.SortData, "Data": rankData.Data}
|
upsert := bson.M{"_id":rankData.Key,"RefreshTime": rankData.refreshTimestamp, "SortData": rankData.SortData, "Data": rankData.Data,"ExData":rankData.ExData}
|
||||||
update := bson.M{"$set": upsert}
|
update := bson.M{"$set": upsert}
|
||||||
|
|
||||||
s := mp.mongo.TakeSession()
|
s := mp.mongo.TakeSession()
|
||||||
|
|||||||
@@ -14,7 +14,11 @@ var RankDataPool = sync.NewPoolEx(make(chan sync.IPoolData, 10240), func() sync.
 })

 type RankData struct {
-    *rpc.RankData
+    Key uint64
+    SortData []int64
+    Data []byte
+    ExData []int64

     refreshTimestamp int64 //刷新时间
     //bRelease bool
     ref bool

@@ -27,7 +31,14 @@ func NewRankData(isDec bool, data *rpc.RankData,refreshTimestamp int64) *RankDat
     if isDec {
         ret.compareFunc = ret.desCompare
     }
-    ret.RankData = data
+    ret.Key = data.Key
+    ret.SortData = data.SortData
+    ret.Data = data.Data
+
+    for _, d := range data.ExData {
+        ret.ExData = append(ret.ExData, d.InitValue+d.IncreaseValue)
+    }
+
     ret.refreshTimestamp = refreshTimestamp

     return ret
||||||
|
|||||||
@@ -2,13 +2,15 @@ package rankservice
|
|||||||
|
|
||||||
import (
|
import (
|
||||||
"fmt"
|
"fmt"
|
||||||
|
"time"
|
||||||
|
|
||||||
"github.com/duanhf2012/origin/log"
|
"github.com/duanhf2012/origin/log"
|
||||||
"github.com/duanhf2012/origin/rpc"
|
"github.com/duanhf2012/origin/rpc"
|
||||||
"github.com/duanhf2012/origin/service"
|
"github.com/duanhf2012/origin/service"
|
||||||
"time"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
const PreMapRankSkipLen = 10
|
const PreMapRankSkipLen = 10
|
||||||
|
|
||||||
type RankService struct {
|
type RankService struct {
|
||||||
service.Service
|
service.Service
|
||||||
|
|
||||||
@@ -61,11 +63,11 @@ func (rs *RankService) RPC_ManualAddRankSkip(addInfo *rpc.AddRankList, addResult
|
|||||||
continue
|
continue
|
||||||
}
|
}
|
||||||
|
|
||||||
newSkip := NewRankSkip(addRankListData.RankId,addRankListData.RankName,addRankListData.IsDec, transformLevel(addRankListData.SkipListLevel), addRankListData.MaxRank,time.Duration(addRankListData.ExpireMs)*time.Millisecond)
|
newSkip := NewRankSkip(addRankListData.RankId, addRankListData.RankName, addRankListData.IsDec, transformLevel(addRankListData.SkipListLevel), addRankListData.MaxRank, time.Duration(addRankListData.ExpireMs)*time.Millisecond)
|
||||||
newSkip.SetupRankModule(rs.rankModule)
|
newSkip.SetupRankModule(rs.rankModule)
|
||||||
|
|
||||||
rs.mapRankSkip[addRankListData.RankId] = newSkip
|
rs.mapRankSkip[addRankListData.RankId] = newSkip
|
||||||
rs.rankModule.OnSetupRank(true,newSkip)
|
rs.rankModule.OnSetupRank(true, newSkip)
|
||||||
}
|
}
|
||||||
|
|
||||||
addResult.AddCount = 1
|
addResult.AddCount = 1
|
||||||
@@ -82,6 +84,52 @@ func (rs *RankService) RPC_UpsetRank(upsetInfo *rpc.UpsetRankData, upsetResult *
|
|||||||
addCount, updateCount := rankSkip.UpsetRankList(upsetInfo.RankDataList)
|
addCount, updateCount := rankSkip.UpsetRankList(upsetInfo.RankDataList)
|
||||||
upsetResult.AddCount = addCount
|
upsetResult.AddCount = addCount
|
||||||
upsetResult.ModifyCount = updateCount
|
upsetResult.ModifyCount = updateCount
|
||||||
|
|
||||||
|
if upsetInfo.FindNewRank == true {
|
||||||
|
for _, rdata := range upsetInfo.RankDataList {
|
||||||
|
_, rank := rankSkip.GetRankNodeData(rdata.Key)
|
||||||
|
upsetResult.NewRank = append(upsetResult.NewRank, &rpc.RankInfo{Key: rdata.Key, Rank: rank})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// RPC_IncreaseRankData 增量更新排行扩展数据
|
||||||
|
func (rs *RankService) RPC_IncreaseRankData(changeRankData *rpc.IncreaseRankData, changeRankDataRet *rpc.IncreaseRankDataRet) error {
|
||||||
|
rankSkip, ok := rs.mapRankSkip[changeRankData.RankId]
|
||||||
|
if ok == false || rankSkip == nil {
|
||||||
|
return fmt.Errorf("RPC_ChangeRankData[", changeRankData.RankId, "] no this rank id")
|
||||||
|
}
|
||||||
|
|
||||||
|
ret := rankSkip.ChangeExtendData(changeRankData)
|
||||||
|
if ret == false {
|
||||||
|
return fmt.Errorf("RPC_ChangeRankData[", changeRankData.RankId, "] no this key ", changeRankData.Key)
|
||||||
|
}
|
||||||
|
|
||||||
|
if changeRankData.ReturnRankData == true {
|
||||||
|
rankData, rank := rankSkip.GetRankNodeData(changeRankData.Key)
|
||||||
|
changeRankDataRet.PosData = &rpc.RankPosData{}
|
||||||
|
changeRankDataRet.PosData.Rank = rank
|
||||||
|
|
||||||
|
changeRankDataRet.PosData.Key = rankData.Key
|
||||||
|
changeRankDataRet.PosData.Data = rankData.Data
|
||||||
|
changeRankDataRet.PosData.SortData = rankData.SortData
|
||||||
|
changeRankDataRet.PosData.ExtendData = rankData.ExData
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// RPC_UpsetRank 更新排行榜
|
||||||
|
func (rs *RankService) RPC_UpdateRankData(updateRankData *rpc.UpdateRankData, updateRankDataRet *rpc.UpdateRankDataRet) error {
|
||||||
|
rankSkip, ok := rs.mapRankSkip[updateRankData.RankId]
|
||||||
|
if ok == false || rankSkip == nil {
|
||||||
|
updateRankDataRet.Ret = false
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
updateRankDataRet.Ret = rankSkip.UpdateRankData(updateRankData)
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -114,6 +162,7 @@ func (rs *RankService) RPC_FindRankDataByKey(findInfo *rpc.FindRankDataByKey, fi
|
|||||||
findResult.Key = findRankData.Key
|
findResult.Key = findRankData.Key
|
||||||
findResult.SortData = findRankData.SortData
|
findResult.SortData = findRankData.SortData
|
||||||
findResult.Rank = rank
|
findResult.Rank = rank
|
||||||
|
findResult.ExtendData = findRankData.ExData
|
||||||
}
|
}
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
@@ -131,6 +180,7 @@ func (rs *RankService) RPC_FindRankDataByRank(findInfo *rpc.FindRankDataByRank,
|
|||||||
findResult.Key = findRankData.Key
|
findResult.Key = findRankData.Key
|
||||||
findResult.SortData = findRankData.SortData
|
findResult.SortData = findRankData.SortData
|
||||||
findResult.Rank = rankPos
|
findResult.Rank = rankPos
|
||||||
|
findResult.ExtendData = findRankData.ExData
|
||||||
}
|
}
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
@@ -139,7 +189,7 @@ func (rs *RankService) RPC_FindRankDataByRank(findInfo *rpc.FindRankDataByRank,
|
|||||||
func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, findResult *rpc.RankDataList) error {
|
func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, findResult *rpc.RankDataList) error {
|
||||||
rankObj, ok := rs.mapRankSkip[findInfo.RankId]
|
rankObj, ok := rs.mapRankSkip[findInfo.RankId]
|
||||||
if ok == false || rankObj == nil {
|
if ok == false || rankObj == nil {
|
||||||
err := fmt.Errorf("not config rank %d",findInfo.RankId)
|
err := fmt.Errorf("not config rank %d", findInfo.RankId)
|
||||||
log.SError(err.Error())
|
log.SError(err.Error())
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -151,7 +201,7 @@ func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, find
|
|||||||
}
|
}
|
||||||
|
|
||||||
//查询附带的key
|
//查询附带的key
|
||||||
if findInfo.Key!= 0 {
|
if findInfo.Key != 0 {
|
||||||
findRankData, rank := rankObj.GetRankNodeData(findInfo.Key)
|
findRankData, rank := rankObj.GetRankNodeData(findInfo.Key)
|
||||||
if findRankData != nil {
|
if findRankData != nil {
|
||||||
findResult.KeyRank = &rpc.RankPosData{}
|
findResult.KeyRank = &rpc.RankPosData{}
|
||||||
@@ -159,6 +209,7 @@ func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, find
|
|||||||
findResult.KeyRank.Key = findRankData.Key
|
findResult.KeyRank.Key = findRankData.Key
|
||||||
findResult.KeyRank.SortData = findRankData.SortData
|
findResult.KeyRank.SortData = findRankData.SortData
|
||||||
findResult.KeyRank.Rank = rank
|
findResult.KeyRank.Rank = rank
|
||||||
|
findResult.KeyRank.ExtendData = findRankData.ExData
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -193,12 +244,12 @@ func (rs *RankService) dealCfg() error {
|
|||||||
}
|
}
|
||||||
|
|
||||||
rankId, okId := mapCfg["RankID"].(float64)
|
rankId, okId := mapCfg["RankID"].(float64)
|
||||||
if okId == false || uint64(rankId)==0 {
|
if okId == false || uint64(rankId) == 0 {
|
||||||
return fmt.Errorf("RankService SortCfg data must has RankID[number]")
|
return fmt.Errorf("RankService SortCfg data must has RankID[number]")
|
||||||
}
|
}
|
||||||
|
|
||||||
rankName, okId := mapCfg["RankName"].(string)
|
rankName, okId := mapCfg["RankName"].(string)
|
||||||
if okId == false || len(rankName)==0 {
|
if okId == false || len(rankName) == 0 {
|
||||||
return fmt.Errorf("RankService SortCfg data must has RankName[string]")
|
return fmt.Errorf("RankService SortCfg data must has RankName[string]")
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -207,11 +258,10 @@ func (rs *RankService) dealCfg() error {
|
|||||||
maxRank, _ := mapCfg["MaxRank"].(float64)
|
maxRank, _ := mapCfg["MaxRank"].(float64)
|
||||||
expireMs, _ := mapCfg["ExpireMs"].(float64)
|
expireMs, _ := mapCfg["ExpireMs"].(float64)
|
||||||
|
|
||||||
|
newSkip := NewRankSkip(uint64(rankId), rankName, isDec, transformLevel(int32(level)), uint64(maxRank), time.Duration(expireMs)*time.Millisecond)
|
||||||
newSkip := NewRankSkip(uint64(rankId),rankName,isDec, transformLevel(int32(level)), uint64(maxRank),time.Duration(expireMs)*time.Millisecond)
|
|
||||||
newSkip.SetupRankModule(rs.rankModule)
|
newSkip.SetupRankModule(rs.rankModule)
|
||||||
rs.mapRankSkip[uint64(rankId)] = newSkip
|
rs.mapRankSkip[uint64(rankId)] = newSkip
|
||||||
err := rs.rankModule.OnSetupRank(false,newSkip)
|
err := rs.rankModule.OnSetupRank(false, newSkip)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -219,5 +269,3 @@ func (rs *RankService) dealCfg() error {
|
|||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@@ -2,20 +2,21 @@ package rankservice
|
|||||||
|
|
||||||
import (
|
import (
|
||||||
"fmt"
|
"fmt"
|
||||||
|
"time"
|
||||||
|
|
||||||
"github.com/duanhf2012/origin/rpc"
|
"github.com/duanhf2012/origin/rpc"
|
||||||
"github.com/duanhf2012/origin/util/algorithms/skip"
|
"github.com/duanhf2012/origin/util/algorithms/skip"
|
||||||
"time"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
type RankSkip struct {
|
type RankSkip struct {
|
||||||
rankId uint64 //排行榜ID
|
rankId uint64 //排行榜ID
|
||||||
rankName string //排行榜名称
|
rankName string //排行榜名称
|
||||||
isDes bool //是否为降序 true:降序 false:升序
|
isDes bool //是否为降序 true:降序 false:升序
|
||||||
skipList *skip.SkipList //跳表
|
skipList *skip.SkipList //跳表
|
||||||
mapRankData map[uint64]*RankData //排行数据map
|
mapRankData map[uint64]*RankData //排行数据map
|
||||||
maxLen uint64 //排行数据长度
|
maxLen uint64 //排行数据长度
|
||||||
expireMs time.Duration //有效时间
|
expireMs time.Duration //有效时间
|
||||||
rankModule IRankModule
|
rankModule IRankModule
|
||||||
rankDataExpire rankDataHeap
|
rankDataExpire rankDataHeap
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -28,7 +29,7 @@ const (
|
|||||||
)
|
)
|
||||||
|
|
||||||
// NewRankSkip 创建排行榜
|
// NewRankSkip 创建排行榜
|
||||||
func NewRankSkip(rankId uint64,rankName string,isDes bool, level interface{}, maxLen uint64,expireMs time.Duration) *RankSkip {
|
func NewRankSkip(rankId uint64, rankName string, isDes bool, level interface{}, maxLen uint64, expireMs time.Duration) *RankSkip {
|
||||||
rs := &RankSkip{}
|
rs := &RankSkip{}
|
||||||
|
|
||||||
rs.rankId = rankId
|
rs.rankId = rankId
|
||||||
@@ -38,17 +39,17 @@ func NewRankSkip(rankId uint64,rankName string,isDes bool, level interface{}, ma
|
|||||||
rs.mapRankData = make(map[uint64]*RankData, 10240)
|
rs.mapRankData = make(map[uint64]*RankData, 10240)
|
||||||
rs.maxLen = maxLen
|
rs.maxLen = maxLen
|
||||||
rs.expireMs = expireMs
|
rs.expireMs = expireMs
|
||||||
rs.rankDataExpire.Init(int32(maxLen),expireMs)
|
rs.rankDataExpire.Init(int32(maxLen), expireMs)
|
||||||
|
|
||||||
return rs
|
return rs
|
||||||
}
|
}
|
||||||
|
|
||||||
func (rs *RankSkip) pickExpireKey(){
|
func (rs *RankSkip) pickExpireKey() {
|
||||||
if rs.expireMs == 0 {
|
if rs.expireMs == 0 {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
for i:=1;i<=MaxPickExpireNum;i++{
|
for i := 1; i <= MaxPickExpireNum; i++ {
|
||||||
key := rs.rankDataExpire.PopExpireKey()
|
key := rs.rankDataExpire.PopExpireKey()
|
||||||
if key == 0 {
|
if key == 0 {
|
||||||
return
|
return
|
||||||
@@ -79,46 +80,211 @@ func (rs *RankSkip) GetRankLen() uint64 {
|
|||||||
|
|
||||||
func (rs *RankSkip) UpsetRankList(upsetRankData []*rpc.RankData) (addCount int32, modifyCount int32) {
|
func (rs *RankSkip) UpsetRankList(upsetRankData []*rpc.RankData) (addCount int32, modifyCount int32) {
|
||||||
for _, upsetData := range upsetRankData {
|
for _, upsetData := range upsetRankData {
|
||||||
changeType := rs.UpsetRank(upsetData,time.Now().UnixNano(),false)
|
changeType := rs.UpsetRank(upsetData, time.Now().UnixNano(), false)
|
||||||
if changeType == RankDataAdd{
|
if changeType == RankDataAdd {
|
||||||
addCount+=1
|
addCount += 1
|
||||||
} else if changeType == RankDataUpdate{
|
} else if changeType == RankDataUpdate {
|
||||||
modifyCount+=1
|
modifyCount += 1
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
rs.pickExpireKey()
|
rs.pickExpireKey()
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (rs *RankSkip) InsertDataOnNonExistent(changeRankData *rpc.IncreaseRankData) bool {
|
||||||
|
if changeRankData.InsertDataOnNonExistent == false {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
var upsetData rpc.RankData
|
||||||
|
upsetData.Key = changeRankData.Key
|
||||||
|
upsetData.Data = changeRankData.InitData
|
||||||
|
upsetData.SortData = changeRankData.InitSortData
|
||||||
|
|
||||||
|
for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(upsetData.SortData); i++ {
|
||||||
|
upsetData.SortData[i] += changeRankData.IncreaseSortData[i]
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, val := range changeRankData.Extend {
|
||||||
|
upsetData.ExData = append(upsetData.ExData, &rpc.ExtendIncData{InitValue: val.InitValue, IncreaseValue: val.IncreaseValue})
|
||||||
|
}
|
||||||
|
|
||||||
|
//强制设计指定值
|
||||||
|
for _, setData := range changeRankData.SetSortAndExtendData {
|
||||||
|
if setData.IsSortData == true {
|
||||||
|
if int(setData.Pos) >= len(upsetData.SortData) {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
upsetData.SortData[setData.Pos] = setData.Data
|
||||||
|
} else {
|
||||||
|
if int(setData.Pos) < len(upsetData.ExData) {
|
||||||
|
upsetData.ExData[setData.Pos].IncreaseValue = 0
|
||||||
|
upsetData.ExData[setData.Pos].InitValue = setData.Data
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
refreshTimestamp := time.Now().UnixNano()
|
||||||
|
newRankData := NewRankData(rs.isDes, &upsetData, refreshTimestamp)
|
||||||
|
rs.skipList.Insert(newRankData)
|
||||||
|
rs.mapRankData[upsetData.Key] = newRankData
|
||||||
|
|
||||||
|
//刷新有效期和存档数据
|
||||||
|
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
|
||||||
|
rs.rankModule.OnChangeRankData(rs, newRankData)
|
||||||
|
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
|
||||||
|
func (rs *RankSkip) UpdateRankData(updateRankData *rpc.UpdateRankData) bool {
|
||||||
|
rankNode, ok := rs.mapRankData[updateRankData.Key]
|
||||||
|
if ok == false {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
rankNode.Data = updateRankData.Data
|
||||||
|
rs.rankDataExpire.PushOrRefreshExpireKey(updateRankData.Key, time.Now().UnixNano())
|
||||||
|
rs.rankModule.OnChangeRankData(rs, rankNode)
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
|
||||||
|
func (rs *RankSkip) ChangeExtendData(changeRankData *rpc.IncreaseRankData) bool {
|
||||||
|
rankNode, ok := rs.mapRankData[changeRankData.Key]
|
||||||
|
if ok == false {
|
||||||
|
return rs.InsertDataOnNonExistent(changeRankData)
|
||||||
|
}
|
||||||
|
|
||||||
|
//先判断是不是有修改
|
||||||
|
bChange := false
|
||||||
|
for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(rankNode.SortData); i++ {
|
||||||
|
if changeRankData.IncreaseSortData[i] != 0 {
|
||||||
|
bChange = true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if bChange == false {
|
||||||
|
for _, setSortAndExtendData := range changeRankData.SetSortAndExtendData {
|
||||||
|
if setSortAndExtendData.IsSortData == true {
|
||||||
|
bChange = true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
//如果有改变,删除原有的数据,重新刷新到跳表
|
||||||
|
rankData := rankNode
|
||||||
|
refreshTimestamp := time.Now().UnixNano()
|
||||||
|
if bChange == true {
|
||||||
|
//copy数据
|
||||||
|
var upsetData rpc.RankData
|
||||||
|
upsetData.Key = rankNode.Key
|
||||||
|
upsetData.Data = rankNode.Data
|
||||||
|
upsetData.SortData = append(upsetData.SortData, rankNode.SortData...)
|
||||||
|
|
||||||
|
for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(upsetData.SortData); i++ {
|
||||||
|
if changeRankData.IncreaseSortData[i] != 0 {
|
||||||
|
upsetData.SortData[i] += changeRankData.IncreaseSortData[i]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, setData := range changeRankData.SetSortAndExtendData {
|
||||||
|
if setData.IsSortData == true {
|
||||||
|
if int(setData.Pos) < len(upsetData.SortData) {
|
||||||
|
upsetData.SortData[setData.Pos] = setData.Data
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
rankData = NewRankData(rs.isDes, &upsetData, refreshTimestamp)
|
||||||
|
rankData.ExData = append(rankData.ExData, rankNode.ExData...)
|
||||||
|
|
||||||
|
//从排行榜中删除
|
||||||
|
rs.skipList.Delete(rankNode)
|
||||||
|
ReleaseRankData(rankNode)
|
||||||
|
|
||||||
|
rs.skipList.Insert(rankData)
|
||||||
|
rs.mapRankData[upsetData.Key] = rankData
|
||||||
|
}
|
||||||
|
|
||||||
|
//增长扩展参数
|
||||||
|
for i := 0; i < len(changeRankData.Extend); i++ {
|
||||||
|
if i < len(rankData.ExData) {
|
||||||
|
//直接增长
|
||||||
|
rankData.ExData[i] += changeRankData.Extend[i].IncreaseValue
|
||||||
|
} else {
|
||||||
|
//如果不存在的扩展位置,append补充,并按IncreaseValue增长
|
||||||
|
rankData.ExData = append(rankData.ExData, changeRankData.Extend[i].InitValue+changeRankData.Extend[i].IncreaseValue)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
//设置固定值
|
||||||
|
for _, setData := range changeRankData.SetSortAndExtendData {
|
||||||
|
if setData.IsSortData == false {
|
||||||
|
if int(setData.Pos) < len(rankData.ExData) {
|
||||||
|
rankData.ExData[setData.Pos] = setData.Data
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
rs.rankDataExpire.PushOrRefreshExpireKey(rankData.Key, refreshTimestamp)
|
||||||
|
rs.rankModule.OnChangeRankData(rs, rankData)
|
||||||
|
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
|
||||||
// UpsetRank updates a player's rank data and returns the resulting change type
func (rs *RankSkip) UpsetRank(upsetData *rpc.RankData, refreshTimestamp int64, fromLoad bool) RankDataChangeType {
	rankNode, ok := rs.mapRankData[upsetData.Key]
	if ok == true {
		//increase the extend data
		for i := 0; i < len(upsetData.ExData); i++ {
			if i < len(rankNode.ExData) {
				//the slot exists, increase it in place
				rankNode.ExData[i] += upsetData.ExData[i].IncreaseValue
			} else {
				//the extend slot does not exist yet: append it with InitValue and apply IncreaseValue
				rankNode.ExData = append(rankNode.ExData, upsetData.ExData[i].InitValue+upsetData.ExData[i].IncreaseValue)
			}
		}

		//key found: compare whether the sort data changed; if not, only refresh the data,
		//otherwise delete the node and re-insert it
		if compareIsEqual(rankNode.SortData, upsetData.SortData) {
			rankNode.Data = upsetData.GetData()
			rankNode.refreshTimestamp = refreshTimestamp

			if fromLoad == false {
				rs.rankModule.OnChangeRankData(rs, rankNode)
			}
			rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
			return RankDataUpdate
		}

		if upsetData.Data == nil {
			upsetData.Data = rankNode.Data
		}

		//set the extra data
		for idx, exValue := range rankNode.ExData {
			currentIncreaseValue := int64(0)
			if idx < len(upsetData.ExData) {
				currentIncreaseValue = upsetData.ExData[idx].IncreaseValue
			}

			upsetData.ExData = append(upsetData.ExData, &rpc.ExtendIncData{
				InitValue:     exValue,
				IncreaseValue: currentIncreaseValue,
			})
		}

		rs.skipList.Delete(rankNode)
		ReleaseRankData(rankNode)

		newRankData := NewRankData(rs.isDes, upsetData, refreshTimestamp)
		rs.skipList.Insert(newRankData)
		rs.mapRankData[upsetData.Key] = newRankData

		//refresh the expiration time
		rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)

		if fromLoad == false {
			rs.rankModule.OnChangeRankData(rs, newRankData)
@@ -127,10 +293,11 @@ func (rs *RankSkip) UpsetRank(upsetData *rpc.RankData, refreshTimestamp int64, fromLoad bool) RankDataChangeType {
	}

	if rs.checkInsertAndReplace(upsetData) {
		newRankData := NewRankData(rs.isDes, upsetData, refreshTimestamp)

		rs.skipList.Insert(newRankData)
		rs.mapRankData[upsetData.Key] = newRankData
		rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)

		if fromLoad == false {
			rs.rankModule.OnEnterRank(rs, newRankData)
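For context, a minimal caller-side sketch of the UpsetRank API shown in the hunks above. This is not part of the commit; it assumes `rs` is an already-constructed `*RankSkip`, that `time` is imported, that `RankData.Key` is the player's uint64 key, and that `SortData` is a slice of int64 scores (only the fields used by UpsetRank are shown):

```go
// Illustrative sketch, not from the diff. Field types other than those visible in
// UpsetRank above (Key, SortData, Data, ExData) are assumptions.
func upsetPlayerScore(rs *RankSkip, playerId uint64, score int64) RankDataChangeType {
	upset := &rpc.RankData{
		Key:      playerId,
		SortData: []int64{score},
		ExData: []*rpc.ExtendIncData{
			{InitValue: 0, IncreaseValue: 1}, // e.g. bump a win counter kept as extend data
		},
	}
	// fromLoad=false so the OnChangeRankData / OnEnterRank callbacks fire;
	// refreshTimestamp feeds rankDataExpire.PushOrRefreshExpireKey.
	return rs.UpsetRank(upset, time.Now().Unix(), false)
}
```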
@@ -152,7 +319,7 @@ func (rs *RankSkip) DeleteRankData(delKeys []uint64) int32 {
			continue
		}

		removeRankData += 1
		rs.skipList.Delete(rankData)
		delete(rs.mapRankData, rankData.Key)
		rs.rankDataExpire.RemoveExpireKey(rankData.Key)
@@ -172,13 +339,13 @@ func (rs *RankSkip) GetRankNodeData(findKey uint64) (*RankData, uint64) {

	rs.pickExpireKey()
	_, index := rs.skipList.GetWithPosition(rankNode)
	return rankNode, index + 1
}

// GetRankNodeDataByRank returns the rank node and its position for the given rank
func (rs *RankSkip) GetRankNodeDataByRank(rank uint64) (*RankData, uint64) {
	rs.pickExpireKey()
	rankNode := rs.skipList.ByPosition(rank - 1)
	if rankNode == nil {
		return nil, 0
	}
@@ -189,12 +356,12 @@ func (rs *RankSkip) GetRankNodeDataByRank(rank uint64) (*RankData, uint64) {

// GetRankKeyPrevToLimit returns the count entries ranked directly before the given key
func (rs *RankSkip) GetRankKeyPrevToLimit(findKey, count uint64, result *rpc.RankDataList) error {
	if rs.GetRankLen() <= 0 {
		return fmt.Errorf("rank[%d] no data", rs.rankId)
	}

	findData, ok := rs.mapRankData[findKey]
	if ok == false {
		return fmt.Errorf("rank[%d] no data", rs.rankId)
	}

	_, rankPos := rs.skipList.GetWithPosition(findData)
@@ -203,10 +370,11 @@ func (rs *RankSkip) GetRankKeyPrevToLimit(findKey, count uint64, result *rpc.RankDataList) error {
	for iter.Prev() && iterCount < count {
		rankData := iter.Value().(*RankData)
		result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
			Key:        rankData.Key,
			Rank:       rankPos - iterCount + 1,
			SortData:   rankData.SortData,
			Data:       rankData.Data,
			ExtendData: rankData.ExData,
		})
		iterCount++
	}
@@ -217,12 +385,12 @@ func (rs *RankSkip) GetRankKeyPrevToLimit(findKey, count uint64, result *rpc.RankDataList) error {

// GetRankKeyNextToLimit returns the count entries ranked directly after the given key
func (rs *RankSkip) GetRankKeyNextToLimit(findKey, count uint64, result *rpc.RankDataList) error {
	if rs.GetRankLen() <= 0 {
		return fmt.Errorf("rank[%d] no data", rs.rankId)
	}

	findData, ok := rs.mapRankData[findKey]
	if ok == false {
		return fmt.Errorf("rank[%d] no data", rs.rankId)
	}

	_, rankPos := rs.skipList.GetWithPosition(findData)
@@ -231,10 +399,11 @@ func (rs *RankSkip) GetRankKeyNextToLimit(findKey, count uint64, result *rpc.RankDataList) error {
	for iter.Next() && iterCount < count {
		rankData := iter.Value().(*RankData)
		result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
			Key:        rankData.Key,
			Rank:       rankPos + iterCount + 1,
			SortData:   rankData.SortData,
			Data:       rankData.Data,
			ExtendData: rankData.ExData,
		})
		iterCount++
	}
@@ -259,10 +428,11 @@ func (rs *RankSkip) GetRankDataFromToLimit(startPos, count uint64, result *rpc.RankDataList) error {
	for iter.Next() && iterCount < count {
		rankData := iter.Value().(*RankData)
		result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
			Key:        rankData.Key,
			Rank:       iterCount + startPos + 1,
			SortData:   rankData.SortData,
			Data:       rankData.Data,
			ExtendData: rankData.ExData,
		})
		iterCount++
	}
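The three Get...ToLimit hunks all fill an rpc.RankDataList the same way. A short usage sketch of the paging variant follows; it is not part of the commit and assumes `rs` is an existing `*RankSkip`:

```go
// Illustrative sketch, not from the diff. startPos is treated as 0-based here because
// the hunk above computes Rank as iterCount + startPos + 1, i.e. ranks are 1-based.
func fetchRankPage(rs *RankSkip, page, pageSize uint64) (*rpc.RankDataList, error) {
	var result rpc.RankDataList
	if err := rs.GetRankDataFromToLimit(page*pageSize, pageSize, &result); err != nil {
		return nil, err
	}
	for _, pos := range result.RankPosDataList {
		_ = pos.Rank       // 1-based rank
		_ = pos.ExtendData // newly returned by this commit alongside Key, SortData and Data
	}
	return &result, nil
}
```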
@@ -301,4 +471,3 @@ func (rs *RankSkip) checkInsertAndReplace(upsetData *rpc.RankData) bool {
	ReleaseRankData(lastRankData)
	return true
}

@@ -90,6 +90,10 @@ func (tcpService *TcpService) OnInit() error{
	if ok == true {
		tcpService.tcpServer.LittleEndian = LittleEndian.(bool)
	}

	LenMsgLen, ok := tcpCfg["LenMsgLen"]
	if ok == true {
		tcpService.tcpServer.LenMsgLen = int(LenMsgLen.(float64))
	}

	MinMsgLen, ok := tcpCfg["MinMsgLen"]
	if ok == true {
		tcpService.tcpServer.MinMsgLen = uint32(MinMsgLen.(float64))
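The `.(float64)` assertions in this hunk follow from how encoding/json decodes into `map[string]interface{}`: every JSON number arrives as a float64 and has to be converted to the target integer type. A small self-contained illustration of that behavior (the key names mirror the tcpCfg lookups above; the real originserver config file layout is not shown in this diff and is an assumption):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical fragment of a service config; only the keys used above are shown.
	raw := []byte(`{"LittleEndian": false, "LenMsgLen": 2, "MinMsgLen": 4}`)

	var tcpCfg map[string]interface{}
	if err := json.Unmarshal(raw, &tcpCfg); err != nil {
		panic(err)
	}

	// encoding/json stores every JSON number as float64 when decoding into interface{},
	// which is why OnInit casts through float64 before converting to int / uint32.
	lenMsgLen := int(tcpCfg["LenMsgLen"].(float64))
	minMsgLen := uint32(tcpCfg["MinMsgLen"].(float64))
	fmt.Println(lenMsgLen, minMsgLen, tcpCfg["LittleEndian"].(bool))
}
```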
util/queue/deque.go (new file, 413 lines)
@@ -0,0 +1,413 @@
package queue

// minCapacity is the smallest capacity that deque may have. Must be power of 2
// for bitwise modulus: x % n == x & (n - 1).
const minCapacity = 16

// Deque represents a single instance of the deque data structure. A Deque
// instance contains items of the type specified by the type argument.
type Deque[T any] struct {
	buf    []T
	head   int
	tail   int
	count  int
	minCap int
}

// New creates a new Deque, optionally setting the current and minimum capacity
// when non-zero values are given for these. The Deque instance returned
// operates on items of the type specified by the type argument. For example,
// to create a Deque that contains strings,
//
//	stringDeque := deque.New[string]()
//
// To create a Deque with capacity to store 2048 ints without resizing, and
// that will not resize below space for 32 items when removing items:
//
//	d := deque.New[int](2048, 32)
//
// To create a Deque that has not yet allocated memory, but after it does will
// never resize to have space for less than 64 items:
//
//	d := deque.New[int](0, 64)
//
// Any size values supplied here are rounded up to the nearest power of 2.
func New[T any](size ...int) *Deque[T] {
	var capacity, minimum int
	if len(size) >= 1 {
		capacity = size[0]
		if len(size) >= 2 {
			minimum = size[1]
		}
	}

	minCap := minCapacity
	for minCap < minimum {
		minCap <<= 1
	}

	var buf []T
	if capacity != 0 {
		bufSize := minCap
		for bufSize < capacity {
			bufSize <<= 1
		}
		buf = make([]T, bufSize)
	}

	return &Deque[T]{
		buf:    buf,
		minCap: minCap,
	}
}

// Cap returns the current capacity of the Deque. If q is nil, q.Cap() is zero.
func (q *Deque[T]) Cap() int {
	if q == nil {
		return 0
	}
	return len(q.buf)
}

// Len returns the number of elements currently stored in the queue. If q is
// nil, q.Len() is zero.
func (q *Deque[T]) Len() int {
	if q == nil {
		return 0
	}
	return q.count
}

// PushBack appends an element to the back of the queue. Implements FIFO when
// elements are removed with PopFront(), and LIFO when elements are removed
// with PopBack().
func (q *Deque[T]) PushBack(elem T) {
	q.growIfFull()

	q.buf[q.tail] = elem
	// Calculate new tail position.
	q.tail = q.next(q.tail)
	q.count++
}

// PushFront prepends an element to the front of the queue.
func (q *Deque[T]) PushFront(elem T) {
	q.growIfFull()

	// Calculate new head position.
	q.head = q.prev(q.head)
	q.buf[q.head] = elem
	q.count++
}

// PopFront removes and returns the element from the front of the queue.
// Implements FIFO when used with PushBack(). If the queue is empty, the call
// panics.
func (q *Deque[T]) PopFront() T {
	if q.count <= 0 {
		panic("deque: PopFront() called on empty queue")
	}
	ret := q.buf[q.head]
	var zero T
	q.buf[q.head] = zero
	// Calculate new head position.
	q.head = q.next(q.head)
	q.count--

	q.shrinkIfExcess()
	return ret
}

// PopBack removes and returns the element from the back of the queue.
// Implements LIFO when used with PushBack(). If the queue is empty, the call
// panics.
func (q *Deque[T]) PopBack() T {
	if q.count <= 0 {
		panic("deque: PopBack() called on empty queue")
	}

	// Calculate new tail position
	q.tail = q.prev(q.tail)

	// Remove value at tail.
	ret := q.buf[q.tail]
	var zero T
	q.buf[q.tail] = zero
	q.count--

	q.shrinkIfExcess()
	return ret
}

// Front returns the element at the front of the queue. This is the element
// that would be returned by PopFront(). This call panics if the queue is
// empty.
func (q *Deque[T]) Front() T {
	if q.count <= 0 {
		panic("deque: Front() called when empty")
	}
	return q.buf[q.head]
}

// Back returns the element at the back of the queue. This is the element that
// would be returned by PopBack(). This call panics if the queue is empty.
func (q *Deque[T]) Back() T {
	if q.count <= 0 {
		panic("deque: Back() called when empty")
	}
	return q.buf[q.prev(q.tail)]
}

// At returns the element at index i in the queue without removing the element
// from the queue. This method accepts only non-negative index values. At(0)
// refers to the first element and is the same as Front(). At(Len()-1) refers
// to the last element and is the same as Back(). If the index is invalid, the
// call panics.
//
// The purpose of At is to allow Deque to serve as a more general purpose
// circular buffer, where items are only added to and removed from the ends of
// the deque, but may be read from any place within the deque. Consider the
// case of a fixed-size circular log buffer: A new entry is pushed onto one end
// and when full the oldest is popped from the other end. All the log entries
// in the buffer must be readable without altering the buffer contents.
func (q *Deque[T]) At(i int) T {
	if i < 0 || i >= q.count {
		panic("deque: At() called with index out of range")
	}
	// bitwise modulus
	return q.buf[(q.head+i)&(len(q.buf)-1)]
}

// Set puts the element at index i in the queue. Set shares the same purpose
// as At() but performs the opposite operation. The index i is the same index
// defined by At(). If the index is invalid, the call panics.
func (q *Deque[T]) Set(i int, elem T) {
	if i < 0 || i >= q.count {
		panic("deque: Set() called with index out of range")
	}
	// bitwise modulus
	q.buf[(q.head+i)&(len(q.buf)-1)] = elem
}

// Clear removes all elements from the queue, but retains the current capacity.
// This is useful when repeatedly reusing the queue at high frequency to avoid
// GC during reuse. The queue will not be resized smaller as long as items are
// only added. Only when items are removed is the queue subject to getting
// resized smaller.
func (q *Deque[T]) Clear() {
	// bitwise modulus
	modBits := len(q.buf) - 1
	var zero T
	for h := q.head; h != q.tail; h = (h + 1) & modBits {
		q.buf[h] = zero
	}
	q.head = 0
	q.tail = 0
	q.count = 0
}

// Rotate rotates the deque n steps front-to-back. If n is negative, rotates
// back-to-front. Having Deque provide Rotate() avoids resizing that could
// happen if implementing rotation using only Pop and Push methods. If q.Len()
// is one or less, or q is nil, then Rotate does nothing.
func (q *Deque[T]) Rotate(n int) {
	if q.Len() <= 1 {
		return
	}
	// Rotating a multiple of q.count is same as no rotation.
	n %= q.count
	if n == 0 {
		return
	}

	modBits := len(q.buf) - 1
	// If no empty space in buffer, only move head and tail indexes.
	if q.head == q.tail {
		// Calculate new head and tail using bitwise modulus.
		q.head = (q.head + n) & modBits
		q.tail = q.head
		return
	}

	var zero T

	if n < 0 {
		// Rotate back to front.
		for ; n < 0; n++ {
			// Calculate new head and tail using bitwise modulus.
			q.head = (q.head - 1) & modBits
			q.tail = (q.tail - 1) & modBits
			// Put tail value at head and remove value at tail.
			q.buf[q.head] = q.buf[q.tail]
			q.buf[q.tail] = zero
		}
		return
	}

	// Rotate front to back.
	for ; n > 0; n-- {
		// Put head value at tail and remove value at head.
		q.buf[q.tail] = q.buf[q.head]
		q.buf[q.head] = zero
		// Calculate new head and tail using bitwise modulus.
		q.head = (q.head + 1) & modBits
		q.tail = (q.tail + 1) & modBits
	}
}

// Index returns the index into the Deque of the first item satisfying f(item),
// or -1 if none do. If q is nil, then -1 is always returned. Search is linear
// starting with index 0.
func (q *Deque[T]) Index(f func(T) bool) int {
	if q.Len() > 0 {
		modBits := len(q.buf) - 1
		for i := 0; i < q.count; i++ {
			if f(q.buf[(q.head+i)&modBits]) {
				return i
			}
		}
	}
	return -1
}

// RIndex is the same as Index, but searches from Back to Front. The index
// returned is from Front to Back, where index 0 is the index of the item
// returned by Front().
func (q *Deque[T]) RIndex(f func(T) bool) int {
	if q.Len() > 0 {
		modBits := len(q.buf) - 1
		for i := q.count - 1; i >= 0; i-- {
			if f(q.buf[(q.head+i)&modBits]) {
				return i
			}
		}
	}
	return -1
}

// Insert is used to insert an element into the middle of the queue, before the
// element at the specified index. Insert(0,e) is the same as PushFront(e) and
// Insert(Len(),e) is the same as PushBack(e). Accepts only non-negative index
// values, and panics if index is out of range.
//
// Important: Deque is optimized for O(1) operations at the ends of the queue,
// not for operations in the middle. Complexity of this function is
// constant plus linear in the lesser of the distances between the index and
// either of the ends of the queue.
func (q *Deque[T]) Insert(at int, item T) {
	if at < 0 || at > q.count {
		panic("deque: Insert() called with index out of range")
	}
	if at*2 < q.count {
		q.PushFront(item)
		front := q.head
		for i := 0; i < at; i++ {
			next := q.next(front)
			q.buf[front], q.buf[next] = q.buf[next], q.buf[front]
			front = next
		}
		return
	}
	swaps := q.count - at
	q.PushBack(item)
	back := q.prev(q.tail)
	for i := 0; i < swaps; i++ {
		prev := q.prev(back)
		q.buf[back], q.buf[prev] = q.buf[prev], q.buf[back]
		back = prev
	}
}

// Remove removes and returns an element from the middle of the queue, at the
// specified index. Remove(0) is the same as PopFront() and Remove(Len()-1) is
// the same as PopBack(). Accepts only non-negative index values, and panics if
// index is out of range.
//
// Important: Deque is optimized for O(1) operations at the ends of the queue,
// not for operations in the middle. Complexity of this function is
// constant plus linear in the lesser of the distances between the index and
// either of the ends of the queue.
func (q *Deque[T]) Remove(at int) T {
	if at < 0 || at >= q.Len() {
		panic("deque: Remove() called with index out of range")
	}

	rm := (q.head + at) & (len(q.buf) - 1)
	if at*2 < q.count {
		for i := 0; i < at; i++ {
			prev := q.prev(rm)
			q.buf[prev], q.buf[rm] = q.buf[rm], q.buf[prev]
			rm = prev
		}
		return q.PopFront()
	}
	swaps := q.count - at - 1
	for i := 0; i < swaps; i++ {
		next := q.next(rm)
		q.buf[rm], q.buf[next] = q.buf[next], q.buf[rm]
		rm = next
	}
	return q.PopBack()
}

// SetMinCapacity sets a minimum capacity of 2^minCapacityExp. If the value of
// the minimum capacity is less than or equal to the minimum allowed, then
// capacity is set to the minimum allowed. This may be called at any time to set
// a new minimum capacity.
//
// Setting a larger minimum capacity may be used to prevent resizing when the
// number of stored items changes frequently across a wide range.
func (q *Deque[T]) SetMinCapacity(minCapacityExp uint) {
	if 1<<minCapacityExp > minCapacity {
		q.minCap = 1 << minCapacityExp
	} else {
		q.minCap = minCapacity
	}
}

// prev returns the previous buffer position wrapping around buffer.
func (q *Deque[T]) prev(i int) int {
	return (i - 1) & (len(q.buf) - 1) // bitwise modulus
}

// next returns the next buffer position wrapping around buffer.
func (q *Deque[T]) next(i int) int {
	return (i + 1) & (len(q.buf) - 1) // bitwise modulus
}

// growIfFull resizes up if the buffer is full.
func (q *Deque[T]) growIfFull() {
	if q.count != len(q.buf) {
		return
	}
	if len(q.buf) == 0 {
		if q.minCap == 0 {
			q.minCap = minCapacity
		}
		q.buf = make([]T, q.minCap)
		return
	}
	q.resize()
}

// shrinkIfExcess resizes down if the buffer is only 1/4 full.
func (q *Deque[T]) shrinkIfExcess() {
	if len(q.buf) > q.minCap && (q.count<<2) == len(q.buf) {
		q.resize()
	}
}

// resize resizes the deque to fit exactly twice its current contents. This is
// used to grow the queue when it is full, and also to shrink it when it is
// only a quarter full.
func (q *Deque[T]) resize() {
	newBuf := make([]T, q.count<<1)
	if q.tail > q.head {
		copy(newBuf, q.buf[q.head:q.tail])
	} else {
		n := copy(newBuf, q.buf[q.head:])
		copy(newBuf[n:], q.buf[:q.tail])
	}

	q.head = 0
	q.tail = q.count
	q.buf = newBuf
}
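util/queue is a vendored generic deque (a growable ring buffer) added by this commit. A quick usage sketch, not part of the commit; the import path is inferred from the file location and may differ:

```go
package main

import (
	"fmt"

	// Assumed import path, derived from the file location util/queue within this repository.
	"github.com/duanhf2012/origin/util/queue"
)

func main() {
	q := queue.New[string](8) // capacity hint, rounded up to a power of 2 internally

	// FIFO usage: PushBack to enqueue, PopFront to dequeue.
	q.PushBack("a")
	q.PushBack("b")
	q.PushFront("z") // a deque also allows pushing at the front

	fmt.Println(q.Len(), q.Front(), q.Back()) // 3 z b
	fmt.Println(q.At(1))                      // "a", random read without removing
	fmt.Println(q.PopFront(), q.PopFront())   // z a
}
```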
util/queue/deque_test.go (new file, 836 lines)
@@ -0,0 +1,836 @@
package queue

import (
	"fmt"
	"testing"
	"unicode"
)

func TestEmpty(t *testing.T) {
	q := New[string]()
	if q.Len() != 0 {
		t.Error("q.Len() =", q.Len(), "expect 0")
	}
	if q.Cap() != 0 {
		t.Error("expected q.Cap() == 0")
	}
	idx := q.Index(func(item string) bool {
		return true
	})
	if idx != -1 {
		t.Error("should return -1 index for nil deque")
	}
	idx = q.RIndex(func(item string) bool {
		return true
	})
	if idx != -1 {
		t.Error("should return -1 index for nil deque")
	}
}

func TestNil(t *testing.T) {
	var q *Deque[int]
	if q.Len() != 0 {
		t.Error("expected q.Len() == 0")
	}
	if q.Cap() != 0 {
		t.Error("expected q.Cap() == 0")
	}
	q.Rotate(5)
	idx := q.Index(func(item int) bool {
		return true
	})
	if idx != -1 {
		t.Error("should return -1 index for nil deque")
	}
	idx = q.RIndex(func(item int) bool {
		return true
	})
	if idx != -1 {
		t.Error("should return -1 index for nil deque")
	}
}

func TestFrontBack(t *testing.T) {
	var q Deque[string]
	q.PushBack("foo")
	q.PushBack("bar")
	q.PushBack("baz")
	if q.Front() != "foo" {
		t.Error("wrong value at front of queue")
	}
	if q.Back() != "baz" {
		t.Error("wrong value at back of queue")
	}

	if q.PopFront() != "foo" {
		t.Error("wrong value removed from front of queue")
	}
	if q.Front() != "bar" {
		t.Error("wrong value remaining at front of queue")
	}
	if q.Back() != "baz" {
		t.Error("wrong value remaining at back of queue")
	}

	if q.PopBack() != "baz" {
		t.Error("wrong value removed from back of queue")
	}
	if q.Front() != "bar" {
		t.Error("wrong value remaining at front of queue")
	}
	if q.Back() != "bar" {
		t.Error("wrong value remaining at back of queue")
	}
}

func TestGrowShrinkBack(t *testing.T) {
	var q Deque[int]
	size := minCapacity * 2

	for i := 0; i < size; i++ {
		if q.Len() != i {
			t.Error("q.Len() =", q.Len(), "expected", i)
		}
		q.PushBack(i)
	}
	bufLen := len(q.buf)

	// Remove from back.
	for i := size; i > 0; i-- {
		if q.Len() != i {
			t.Error("q.Len() =", q.Len(), "expected", i)
		}
		x := q.PopBack()
		if x != i-1 {
			t.Error("q.PopBack() =", x, "expected", i-1)
		}
	}
	if q.Len() != 0 {
		t.Error("q.Len() =", q.Len(), "expected 0")
	}
	if len(q.buf) == bufLen {
		t.Error("queue buffer did not shrink")
	}
}

func TestGrowShrinkFront(t *testing.T) {
	var q Deque[int]
	size := minCapacity * 2

	for i := 0; i < size; i++ {
		if q.Len() != i {
			t.Error("q.Len() =", q.Len(), "expected", i)
		}
		q.PushBack(i)
	}
	bufLen := len(q.buf)

	// Remove from Front
	for i := 0; i < size; i++ {
		if q.Len() != size-i {
			t.Error("q.Len() =", q.Len(), "expected", minCapacity*2-i)
		}
		x := q.PopFront()
		if x != i {
			t.Error("q.PopFront() =", x, "expected", i)
		}
	}
	if q.Len() != 0 {
		t.Error("q.Len() =", q.Len(), "expected 0")
	}
	if len(q.buf) == bufLen {
		t.Error("queue buffer did not shrink")
	}
}

func TestSimple(t *testing.T) {
	var q Deque[int]

	for i := 0; i < minCapacity; i++ {
		q.PushBack(i)
	}
	if q.Front() != 0 {
		t.Fatalf("expected 0 at front, got %d", q.Front())
	}
	if q.Back() != minCapacity-1 {
		t.Fatalf("expected %d at back, got %d", minCapacity-1, q.Back())
	}

	for i := 0; i < minCapacity; i++ {
		if q.Front() != i {
			t.Error("peek", i, "had value", q.Front())
		}
		x := q.PopFront()
		if x != i {
			t.Error("remove", i, "had value", x)
		}
	}

	q.Clear()
	for i := 0; i < minCapacity; i++ {
		q.PushFront(i)
	}
	for i := minCapacity - 1; i >= 0; i-- {
		x := q.PopFront()
		if x != i {
			t.Error("remove", i, "had value", x)
		}
	}
}

func TestBufferWrap(t *testing.T) {
	var q Deque[int]

	for i := 0; i < minCapacity; i++ {
		q.PushBack(i)
	}

	for i := 0; i < 3; i++ {
		q.PopFront()
		q.PushBack(minCapacity + i)
	}

	for i := 0; i < minCapacity; i++ {
		if q.Front() != i+3 {
			t.Error("peek", i, "had value", q.Front())
		}
		q.PopFront()
	}
}

func TestBufferWrapReverse(t *testing.T) {
	var q Deque[int]

	for i := 0; i < minCapacity; i++ {
		q.PushFront(i)
	}
	for i := 0; i < 3; i++ {
		q.PopBack()
		q.PushFront(minCapacity + i)
	}

	for i := 0; i < minCapacity; i++ {
		if q.Back() != i+3 {
			t.Error("peek", i, "had value", q.Front())
		}
		q.PopBack()
	}
}

func TestLen(t *testing.T) {
	var q Deque[int]

	if q.Len() != 0 {
		t.Error("empty queue length not 0")
	}

	for i := 0; i < 1000; i++ {
		q.PushBack(i)
		if q.Len() != i+1 {
			t.Error("adding: queue with", i, "elements has length", q.Len())
		}
	}
	for i := 0; i < 1000; i++ {
		q.PopFront()
		if q.Len() != 1000-i-1 {
			t.Error("removing: queue with", 1000-i-1, "elements has length", q.Len())
		}
	}
}

func TestBack(t *testing.T) {
	var q Deque[int]

	for i := 0; i < minCapacity+5; i++ {
		q.PushBack(i)
		if q.Back() != i {
			t.Errorf("Back returned wrong value")
		}
	}
}

func TestNew(t *testing.T) {
	minCap := 64
	q := New[string](0, minCap)
	if q.Cap() != 0 {
		t.Fatal("should not have allocated mem yet")
	}
	q.PushBack("foo")
	q.PopFront()
	if q.Len() != 0 {
		t.Fatal("Len() should return 0")
	}
	if q.Cap() != minCap {
		t.Fatalf("wrong capacity expected %d, got %d", minCap, q.Cap())
	}

	curCap := 128
	q = New[string](curCap, minCap)
	if q.Cap() != curCap {
		t.Fatalf("Cap() should return %d, got %d", curCap, q.Cap())
	}
	if q.Len() != 0 {
		t.Fatalf("Len() should return 0")
	}
	q.PushBack("foo")
	if q.Cap() != curCap {
		t.Fatalf("Cap() should return %d, got %d", curCap, q.Cap())
	}
}

func checkRotate(t *testing.T, size int) {
	var q Deque[int]
	for i := 0; i < size; i++ {
		q.PushBack(i)
	}

	for i := 0; i < q.Len(); i++ {
		x := i
		for n := 0; n < q.Len(); n++ {
			if q.At(n) != x {
				t.Fatalf("a[%d] != %d after rotate and copy", n, x)
			}
			x++
			if x == q.Len() {
				x = 0
			}
		}
		q.Rotate(1)
		if q.Back() != i {
			t.Fatal("wrong value during rotation")
		}
	}
	for i := q.Len() - 1; i >= 0; i-- {
		q.Rotate(-1)
		if q.Front() != i {
			t.Fatal("wrong value during reverse rotation")
		}
	}
}

func TestRotate(t *testing.T) {
	checkRotate(t, 10)
	checkRotate(t, minCapacity)
	checkRotate(t, minCapacity+minCapacity/2)

	var q Deque[int]
	for i := 0; i < 10; i++ {
		q.PushBack(i)
	}
	q.Rotate(11)
	if q.Front() != 1 {
		t.Error("rotating 11 places should have been same as one")
	}
	q.Rotate(-21)
	if q.Front() != 0 {
		t.Error("rotating -21 places should have been same as one -1")
	}
	q.Rotate(q.Len())
	if q.Front() != 0 {
		t.Error("should not have rotated")
	}
	q.Clear()
	q.PushBack(0)
	q.Rotate(13)
	if q.Front() != 0 {
		t.Error("should not have rotated")
	}
}

func TestAt(t *testing.T) {
	var q Deque[int]

	for i := 0; i < 1000; i++ {
		q.PushBack(i)
	}

	// Front to back.
	for j := 0; j < q.Len(); j++ {
		if q.At(j) != j {
			t.Errorf("index %d doesn't contain %d", j, j)
		}
	}

	// Back to front
	for j := 1; j <= q.Len(); j++ {
		if q.At(q.Len()-j) != q.Len()-j {
			t.Errorf("index %d doesn't contain %d", q.Len()-j, q.Len()-j)
		}
	}
}

func TestSet(t *testing.T) {
	var q Deque[int]

	for i := 0; i < 1000; i++ {
		q.PushBack(i)
		q.Set(i, i+50)
	}

	// Front to back.
	for j := 0; j < q.Len(); j++ {
		if q.At(j) != j+50 {
			t.Errorf("index %d doesn't contain %d", j, j+50)
		}
	}
}

func TestClear(t *testing.T) {
	var q Deque[int]

	for i := 0; i < 100; i++ {
		q.PushBack(i)
	}
	if q.Len() != 100 {
		t.Error("push: queue with 100 elements has length", q.Len())
	}
	cap := len(q.buf)
	q.Clear()
	if q.Len() != 0 {
		t.Error("empty queue length not 0 after clear")
	}
	if len(q.buf) != cap {
		t.Error("queue capacity changed after clear")
	}

	// Check that there are no remaining references after Clear()
	for i := 0; i < len(q.buf); i++ {
		if q.buf[i] != 0 {
			t.Error("queue has non-nil deleted elements after Clear()")
			break
		}
	}
}

func TestIndex(t *testing.T) {
	var q Deque[rune]
	for _, x := range "Hello, 世界" {
		q.PushBack(x)
	}
	idx := q.Index(func(item rune) bool {
		c := item
		return unicode.Is(unicode.Han, c)
	})
	if idx != 7 {
		t.Fatal("Expected index 7, got", idx)
	}
	idx = q.Index(func(item rune) bool {
		c := item
		return c == 'H'
	})
	if idx != 0 {
		t.Fatal("Expected index 0, got", idx)
	}
	idx = q.Index(func(item rune) bool {
		return false
	})
	if idx != -1 {
		t.Fatal("Expected index -1, got", idx)
	}
}

func TestRIndex(t *testing.T) {
	var q Deque[rune]
	for _, x := range "Hello, 世界" {
		q.PushBack(x)
	}
	idx := q.RIndex(func(item rune) bool {
		c := item
		return unicode.Is(unicode.Han, c)
	})
	if idx != 8 {
		t.Fatal("Expected index 8, got", idx)
	}
	idx = q.RIndex(func(item rune) bool {
		c := item
		return c == 'H'
	})
	if idx != 0 {
		t.Fatal("Expected index 0, got", idx)
	}
	idx = q.RIndex(func(item rune) bool {
		return false
	})
	if idx != -1 {
		t.Fatal("Expected index -1, got", idx)
	}
}

func TestInsert(t *testing.T) {
	q := new(Deque[rune])
	for _, x := range "ABCDEFG" {
		q.PushBack(x)
	}
	q.Insert(4, 'x') // ABCDxEFG
	if q.At(4) != 'x' {
		t.Error("expected x at position 4, got", q.At(4))
	}

	q.Insert(2, 'y') // AByCDxEFG
	if q.At(2) != 'y' {
		t.Error("expected y at position 2")
	}
	if q.At(5) != 'x' {
		t.Error("expected x at position 5")
	}

	q.Insert(0, 'b') // bAByCDxEFG
	if q.Front() != 'b' {
		t.Error("expected b inserted at front, got", q.Front())
	}

	q.Insert(q.Len(), 'e') // bAByCDxEFGe

	for i, x := range "bAByCDxEFGe" {
		if q.PopFront() != x {
			t.Error("expected", x, "at position", i)
		}
	}

	qs := New[string](16)

	for i := 0; i < qs.Cap(); i++ {
		qs.PushBack(fmt.Sprint(i))
	}
	// deque: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
	// buffer: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
	for i := 0; i < qs.Cap()/2; i++ {
		qs.PopFront()
	}
	// deque: 8 9 10 11 12 13 14 15
	// buffer: [_,_,_,_,_,_,_,_,8,9,10,11,12,13,14,15]
	for i := 0; i < qs.Cap()/4; i++ {
		qs.PushBack(fmt.Sprint(qs.Cap() + i))
	}
	// deque: 8 9 10 11 12 13 14 15 16 17 18 19
	// buffer: [16,17,18,19,_,_,_,_,8,9,10,11,12,13,14,15]

	at := qs.Len() - 2
	qs.Insert(at, "x")
	// deque: 8 9 10 11 12 13 14 15 16 17 x 18 19
	// buffer: [16,17,x,18,19,_,_,_,8,9,10,11,12,13,14,15]
	if qs.At(at) != "x" {
		t.Error("expected x at position", at)
	}
	if qs.At(at) != "x" {
		t.Error("expected x at position", at)
	}

	qs.Insert(2, "y")
	// deque: 8 9 y 10 11 12 13 14 15 16 17 x 18 19
	// buffer: [16,17,x,18,19,_,_,8,9,y,10,11,12,13,14,15]
	if qs.At(2) != "y" {
		t.Error("expected y at position 2")
	}
	if qs.At(at+1) != "x" {
		t.Error("expected x at position 5")
	}

	qs.Insert(0, "b")
	// deque: b 8 9 y 10 11 12 13 14 15 16 17 x 18 19
	// buffer: [16,17,x,18,19,_,b,8,9,y,10,11,12,13,14,15]
	if qs.Front() != "b" {
		t.Error("expected b inserted at front, got", qs.Front())
	}

	qs.Insert(qs.Len(), "e")
	if qs.Cap() != qs.Len() {
		t.Fatal("Expected full buffer")
	}
	// deque: b 8 9 y 10 11 12 13 14 15 16 17 x 18 19 e
	// buffer: [16,17,x,18,19,e,b,8,9,y,10,11,12,13,14,15]
	for i, x := range []string{"16", "17", "x", "18", "19", "e", "b", "8", "9", "y", "10", "11", "12", "13", "14", "15"} {
		if qs.buf[i] != x {
			t.Error("expected", x, "at buffer position", i)
		}
	}
	for i, x := range []string{"b", "8", "9", "y", "10", "11", "12", "13", "14", "15", "16", "17", "x", "18", "19", "e"} {
		if qs.Front() != x {
			t.Error("expected", x, "at position", i, "got", qs.Front())
		}
		qs.PopFront()
	}
}

func TestRemove(t *testing.T) {
	q := new(Deque[rune])
	for _, x := range "ABCDEFG" {
		q.PushBack(x)
	}

	if q.Remove(4) != 'E' { // ABCDFG
		t.Error("expected E from position 4")
	}

	if q.Remove(2) != 'C' { // ABDFG
		t.Error("expected C at position 2")
	}
	if q.Back() != 'G' {
		t.Error("expected G at back")
	}

	if q.Remove(0) != 'A' { // BDFG
		t.Error("expected to remove A from front")
	}
	if q.Front() != 'B' {
		t.Error("expected B at front")
	}

	if q.Remove(q.Len()-1) != 'G' { // BDF
		t.Error("expected to remove G from back")
	}
	if q.Back() != 'F' {
		t.Error("expected F at back")
	}

	if q.Len() != 3 {
		t.Error("wrong length")
	}
}

func TestFrontBackOutOfRangePanics(t *testing.T) {
	const msg = "should panic when peeking empty queue"
	var q Deque[int]
	assertPanics(t, msg, func() {
		q.Front()
	})
	assertPanics(t, msg, func() {
		q.Back()
	})

	q.PushBack(1)
	q.PopFront()

	assertPanics(t, msg, func() {
		q.Front()
	})
	assertPanics(t, msg, func() {
		q.Back()
	})
}

func TestPopFrontOutOfRangePanics(t *testing.T) {
	var q Deque[int]

	assertPanics(t, "should panic when removing empty queue", func() {
		q.PopFront()
	})

	q.PushBack(1)
	q.PopFront()

	assertPanics(t, "should panic when removing emptied queue", func() {
		q.PopFront()
	})
}

func TestPopBackOutOfRangePanics(t *testing.T) {
	var q Deque[int]

	assertPanics(t, "should panic when removing empty queue", func() {
		q.PopBack()
	})

	q.PushBack(1)
	q.PopBack()

	assertPanics(t, "should panic when removing emptied queue", func() {
		q.PopBack()
	})
}

func TestAtOutOfRangePanics(t *testing.T) {
	var q Deque[int]

	q.PushBack(1)
	q.PushBack(2)
	q.PushBack(3)

	assertPanics(t, "should panic when negative index", func() {
		q.At(-4)
	})

	assertPanics(t, "should panic when index greater than length", func() {
		q.At(4)
	})
}

func TestSetOutOfRangePanics(t *testing.T) {
	var q Deque[int]

	q.PushBack(1)
	q.PushBack(2)
	q.PushBack(3)

	assertPanics(t, "should panic when negative index", func() {
		q.Set(-4, 1)
	})

	assertPanics(t, "should panic when index greater than length", func() {
		q.Set(4, 1)
	})
}

func TestInsertOutOfRangePanics(t *testing.T) {
	q := new(Deque[string])

	assertPanics(t, "should panic when inserting out of range", func() {
		q.Insert(1, "X")
	})

	q.PushBack("A")

	assertPanics(t, "should panic when inserting at negative index", func() {
		q.Insert(-1, "Y")
	})

	assertPanics(t, "should panic when inserting out of range", func() {
		q.Insert(2, "B")
	})
}

func TestRemoveOutOfRangePanics(t *testing.T) {
	q := new(Deque[string])

	assertPanics(t, "should panic when removing from empty queue", func() {
		q.Remove(0)
	})

	q.PushBack("A")

	assertPanics(t, "should panic when removing at negative index", func() {
		q.Remove(-1)
	})

	assertPanics(t, "should panic when removing out of range", func() {
		q.Remove(1)
	})
}

func TestSetMinCapacity(t *testing.T) {
	var q Deque[string]
	exp := uint(8)
	q.SetMinCapacity(exp)
	q.PushBack("A")
	if q.minCap != 1<<exp {
		t.Fatal("wrong minimum capacity")
	}
	if len(q.buf) != 1<<exp {
		t.Fatal("wrong buffer size")
	}
	q.PopBack()
	if q.minCap != 1<<exp {
		t.Fatal("wrong minimum capacity")
	}
	if len(q.buf) != 1<<exp {
		t.Fatal("wrong buffer size")
	}
	q.SetMinCapacity(0)
	if q.minCap != minCapacity {
		t.Fatal("wrong minimum capacity")
	}
}

func assertPanics(t *testing.T, name string, f func()) {
	defer func() {
		if r := recover(); r == nil {
			t.Errorf("%s: didn't panic as expected", name)
		}
	}()

	f()
}

func BenchmarkPushFront(b *testing.B) {
	var q Deque[int]
	for i := 0; i < b.N; i++ {
		q.PushFront(i)
	}
}

func BenchmarkPushBack(b *testing.B) {
	var q Deque[int]
	for i := 0; i < b.N; i++ {
		q.PushBack(i)
	}
}

func BenchmarkSerial(b *testing.B) {
	var q Deque[int]
	for i := 0; i < b.N; i++ {
		q.PushBack(i)
	}
	for i := 0; i < b.N; i++ {
		q.PopFront()
	}
}

func BenchmarkSerialReverse(b *testing.B) {
	var q Deque[int]
	for i := 0; i < b.N; i++ {
		q.PushFront(i)
	}
	for i := 0; i < b.N; i++ {
		q.PopBack()
	}
}

func BenchmarkRotate(b *testing.B) {
	q := new(Deque[int])
	for i := 0; i < b.N; i++ {
		q.PushBack(i)
	}
	b.ResetTimer()
	// N complete rotations on length N - 1.
	for i := 0; i < b.N; i++ {
		q.Rotate(b.N - 1)
	}
}

func BenchmarkInsert(b *testing.B) {
	q := new(Deque[int])
	for i := 0; i < b.N; i++ {
		q.PushBack(i)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		q.Insert(q.Len()/2, -i)
	}
}

func BenchmarkRemove(b *testing.B) {
	q := new(Deque[int])
	for i := 0; i < b.N; i++ {
		q.PushBack(i)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		q.Remove(q.Len() / 2)
	}
}

func BenchmarkYoyo(b *testing.B) {
	var q Deque[int]
	for i := 0; i < b.N; i++ {
		for j := 0; j < 65536; j++ {
			q.PushBack(j)
		}
		for j := 0; j < 65536; j++ {
			q.PopFront()
		}
	}
}

func BenchmarkYoyoFixed(b *testing.B) {
	var q Deque[int]
	q.SetMinCapacity(16)
	for i := 0; i < b.N; i++ {
		for j := 0; j < 65536; j++ {
			q.PushBack(j)
		}
		for j := 0; j < 65536; j++ {
			q.PopFront()
		}
	}
}
@@ -69,6 +69,13 @@ func (pq *PriorityQueue) Pop() *Item {
	return heap.Pop(&pq.priorityQueueSlice).(*Item)
}

func (pq *PriorityQueue) GetHighest() *Item {
	if len(pq.priorityQueueSlice) > 0 {
		return pq.priorityQueueSlice[0]
	}

	return nil
}

func (pq *PriorityQueue) Len() int {
	return len(pq.priorityQueueSlice)
}
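GetHighest added here is a non-destructive peek at the top of the heap, so a caller can inspect the highest-priority Item and only Pop it once it actually needs to be handled. A hedged sketch of that pattern follows; isDue and process are hypothetical helpers, and any Item fields they would read are assumptions:

```go
// Illustrative fragment, not from the diff; pq is an existing *PriorityQueue.
for {
	top := pq.GetHighest()
	if top == nil {
		break // queue drained
	}
	if !isDue(top) { // hypothetical check on the highest-priority item
		break // nothing ready yet, and everything below it is lower priority
	}
	process(pq.Pop()) // Pop returns the same item GetHighest just reported
}
```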
@@ -7,6 +7,7 @@ import (
	"reflect"
	"runtime"
	"time"
	"sync/atomic"
)

// ITimer
@@ -29,7 +30,7 @@ type OnAddTimer func(timer ITimer)
// Timer
type Timer struct {
	Id        uint64
	cancelled int32         // cancellation flag (1 = cancelled)
	C         chan ITimer   // timer channel
	interval  time.Duration // interval between fires (for repeating timers)
	fireTime  time.Time     // time at which the timer fires
@@ -171,12 +172,12 @@ func (t *Timer) GetInterval() time.Duration {
}

func (t *Timer) Cancel() {
	atomic.StoreInt32(&t.cancelled, 1)
}

// IsActive reports whether the timer has not been cancelled
func (t *Timer) IsActive() bool {
	return atomic.LoadInt32(&t.cancelled) == 0
}

func (t *Timer) GetName() string {
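The switch from a plain bool to an atomic int32 matters because Cancel can be called from a different goroutine than the one that checks IsActive before dispatching the timer; unsynchronized reads and writes of a bool are a data race. A self-contained illustration of the pattern (not origin code):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Minimal demo of the pattern the commit switches to: a cancel flag written by one
// goroutine and read by another must use atomic operations, otherwise `go test -race`
// reports a data race on a plain bool field.
type flag struct{ cancelled int32 }

func (f *flag) Cancel()        { atomic.StoreInt32(&f.cancelled, 1) }
func (f *flag) IsActive() bool { return atomic.LoadInt32(&f.cancelled) == 0 }

func main() {
	var f flag
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // stands in for the goroutine that owns the timer wheel
		defer wg.Done()
		for f.IsActive() {
			// would dispatch the timer here
		}
	}()
	f.Cancel() // stands in for the service goroutine cancelling the timer
	wg.Wait()
	fmt.Println("timer cancelled:", !f.IsActive())
}
```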