Mirror of https://github.com/duanhf2012/origin.git (synced 2026-02-04 23:14:48 +08:00)
Compare commits (86 commits):

9ea51ccfd8, 5275db92bd, 19fd24d9db, 258a6821da, 7780947a96, 4c169cf0bb, 96d02c8f71, 3a56282a0b, c9f30305ce, 9689b7b5fe, eebbef52c9, 2ddc54f5ac, 75ef7302de, ecea9d1706, 4898116698, bcbee6dd11,
43122190a3, 39b862e3d9, 8c9b796fce, c0971a46a7, ba019ac466, c803b9b9ad, 3f52ea8331, 2d1bee4dea, fa8cbfb40e, 388b946401, 582a0faa6f, fa6039e2cb, 25a672ca53, 75f881be28, ef8182eec7, 4ad8204fde,
8f15546fb1, 0f3a965d73, dfb6959843, dd4aaf9c57, 6ef98a2104, 1890b300ee, 6fea2226e1, ec1c2b4517, 4b84d9a1d5, 85a8ec58e5, 962016d476, a61979e985, 6de25d1c6d, b392617d6e, 92fdb7860c, f78d0d58be,
5675681ab1, ddeaaf7d77, 1174b47475, 18fff3b567, 7ab6c88f9c, 6b64de06a2, 95b153f8cf, f3ff09b90f, f9738fb9d0, 91e773aa8c, c9b96404f4, aaae63a674, 47dc21aee1, 4d09532801, d3ad7fc898, ba2b0568b2,
5a3600bd62, 4783d05e75, 8cc1b1afcb, 53d9392901, 8111b12da5, 0ebbe0e31d, e326e342f2, a7c6b45764, 541abd93b4, 8c8d681093, b8150cfc51, 3833884777, 60064cbba6, 66770f07a5, 76c8541b34, b1fee9bc57,
284d43dc71, fd43863b73, 1fcd870f1d, 11b78f84c4, 8c6ee24b16, ca23925796
README.md (262 changed lines)
@@ -1,10 +1,10 @@
Introduction to the origin game server engine
=============================================

origin is a distributed, open-source game server engine written in Go (golang). It is suitable for developing all kinds of game servers, including H5 (HTML5) game servers.

Problems origin solves:

* Like Go itself, origin is designed to offer simple, easy-to-use patterns for rapid development.
* Server architectures can be tailored quickly and flexibly to business needs.
* It takes advantage of multiple cores by assigning different services to different nodes that cooperate efficiently.
@@ -12,12 +12,16 @@ origin 解决的问题:
* It ships with a rich and robust library of utilities.

Hello world!
------------

Let's build an origin server step by step. First download the [origin engine](https://github.com/duanhf2012/origin "origin engine"), or use the following command:

```go
go get -v -u github.com/duanhf2012/origin
```

The engine is downloaded into your GOPATH; add a main.go under src with the following content:

```go
package main
@@ -29,16 +33,20 @@ func main() {
	node.Start()
}
```

The code above is only the skeleton; see Chapter 1 for the actual run parameters and configuration.

An origin process creates one node object and calls Start to run it. You can also download the origin engine example directly:

```
go get -v -u github.com/duanhf2012/originserver
```

All of the explanations in this document are based on that example.
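The main.go above is cut off by the diff. A minimal sketch of a complete entry point might look like this; it assumes the originserver pattern in which each service package registers itself in its init function, so main.go only needs blank imports plus node.Start:

```go
package main

import (
	"github.com/duanhf2012/origin/node"

	// Blank import: the service package registers its services in init(),
	// e.g. via node.Setup(&TestService1{}) (assumed pattern from originserver).
	_ "orginserver/simple_service"
)

func main() {
	// Parses the command line (e.g. -start nodeid=1) and runs the node.
	node.Start()
}
```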
Relationship between origin's three core objects
-------------------------------------------------

* Node: each Node can be thought of as one origin process.
* Service: an independent service representing a large functional module. It is a child of the Node and is installed into the Node object after creation. A service can expose functionality such as RPC to the outside.
* Module: the smallest object unit in origin. It is strongly recommended to split all business logic into small Modules; the engine monitors the running state of every service and Module, for example detecting slow handlers and infinite-loop functions. Modules can be organized into a tree, and a Service is itself a kind of Module.
@@ -46,7 +54,8 @@ origin引擎三大对象关系
The core cluster configuration files live under config/cluster; for example, github.com/duanhf2012/originserver has cluster.json and service.json in its config/cluster directory:

cluster.json:
------------------

```
{
	"NodeList":[
@@ -55,36 +64,44 @@ cluster.json如下:
		"Private": false,
		"ListenAddr":"127.0.0.1:8001",
		"MaxRpcParamLen": 409600,
		"CompressBytesLen": 20480,
		"NodeName": "Node_Test1",
		"remark":"//Services whose names start with _ run only in this process and are not exposed to the subnet",
		"ServiceList": ["TestService1","TestService2","TestServiceCall","GateService","_TcpService","HttpService","WSService"]
	},
	{
		"NodeId": 2,
		"Private": false,
		"ListenAddr":"127.0.0.1:8002",
		"MaxRpcParamLen": 409600,
		"CompressBytesLen": 20480,
		"NodeName": "Node_Test1",
		"remark":"//Services whose names start with _ run only in this process and are not exposed to the subnet",
		"ServiceList": ["TestService1","TestService2","TestServiceCall","GateService","TcpService","HttpService","WSService"]
	}
	]
```

---
The configuration above defines two node server programs:

* NodeId: the node Id of the origin process; it must be unique.
* Private: whether the node is private. If true, other nodes will not discover it, but it can still run on its own.
* ListenAddr: listen address of the RPC communication service.
* MaxRpcParamLen: maximum length of an RPC parameter packet. It may be omitted; by default a single RPC call supports up to 4294967295 bytes of data.
* CompressBytesLen: RPC network data compression; payloads of 20480 bytes or more are compressed. If omitted or set to 0, no compression is applied.
* NodeName: node name.
* remark: an optional comment.
* ServiceList: the list of services owned by this Node. Note that origin installs and initializes them in the configured order, and stops them in the reverse order.

---

In the start command originserver -start nodeid=1, the nodeid determines which services are loaded from this configuration.
For more options, run originserver -help.

service.json:
------------------

```
{
	"Global": {
@@ -103,7 +120,7 @@ service.json如下:
			"Keyfile":""
		}
	]

	},
	"TcpService":{
		"ListenAddr":"0.0.0.0:9030",
@@ -160,10 +177,12 @@ service.json如下:
}
```

---

The configuration above is split into a Global section and a Service/NodeService section. Global holds global settings that any service can read via cluster.GetCluster().GetGlobalCfg(). NodeService holds per-node service configuration: at startup the program looks up a service's configuration under the NodeService entry matching its nodeid, and falls back to the shared Service section when it is not found there.
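As a rough illustration of reading the Global section mentioned above (a sketch only: the "Area" key and the generic map shape are hypothetical; GetGlobalCfg returns whatever your own service.json defines):

```go
package simple_service

import (
	"fmt"

	"github.com/duanhf2012/origin/cluster"
)

// Sketch: read a value from the Global section of service.json.
func ReadGlobalCfg() {
	cfg := cluster.GetCluster().GetGlobalCfg()
	// Assumption: the Global section decodes into a generic JSON object.
	if m, ok := cfg.(map[string]interface{}); ok {
		fmt.Println("Global Area =", m["Area"]) // "Area" is a hypothetical key
	}
}
```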
**HttpService configuration**

* ListenAddr: HTTP listen address
* ReadTimeout: network read timeout in milliseconds
* WriteTimeout: network write timeout in milliseconds
@@ -172,6 +191,7 @@ service.json如下:
* CAFile: certificate file; if HTTPS is terminated by a web server proxy in front of this server, this setting can be ignored

**TcpService configuration**

* ListenAddr: listen address
* MaxConnNum: maximum number of allowed connections
* PendingWriteNum: maximum size of the outgoing network queue
@@ -180,20 +200,21 @@ service.json如下:
* MaxMsgLen: maximum packet length

**WSService configuration**

* ListenAddr: listen address
* MaxConnNum: maximum number of allowed connections
* PendingWriteNum: maximum size of the outgoing network queue
* MaxMsgLen: maximum packet length

---
Chapter 1: origin basics
-------------------

In github.com/duanhf2012/originserver, two services are created under simple_service: TestService1.go and TestService2.go.

simple_service/TestService1.go:

```
package simple_service
@@ -223,7 +244,9 @@ func (slf *TestService1) OnInit() error {
```

simple_service/TestService2.go:

```
import (
	"github.com/duanhf2012/origin/node"
@@ -263,6 +286,7 @@ func main(){
```

* config/cluster/cluster.json:

```
{
	"NodeList":[
@@ -279,6 +303,7 @@ func main(){
```

After compiling, running it produces:

```
#originserver -start nodeid=1
TestService1 OnInit.
@@ -286,13 +311,15 @@ TestService2 OnInit.
```

Chapter 2: Commonly used Service features
-----------------------------------------

Timers
-------

Scheduled tasks are among the most commonly used features in development. origin provides two kinds of timers:

The first is AfterFunc, which fires a callback after a given interval; see simple_service/TestService2.go:

```
func (slf *TestService2) OnInit() error {
	fmt.Printf("TestService2 OnInit.\n")
@@ -305,10 +332,11 @@ func (slf *TestService2) OnSecondTick(){
	slf.AfterFunc(time.Second*1,slf.OnSecondTick)
}
```

The log then prints "tick." once per second. If the callback should fire again, the timer must be re-armed inside it.

The other mechanism works like the Linux crontab command:

```

func (slf *TestService2) OnInit() error {
@@ -327,27 +355,29 @@ func (slf *TestService2) OnCron(cron *timer.Cron){
	fmt.Printf(":A minute passed!\n")
}
```

With this, "A minute passed!" is printed whenever the minute changes.
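The cron registration itself is trimmed by the diff above. A sketch of how OnCron is typically wired up follows; timer.NewCronExpr and CronFunc are assumed names based on the origin examples, so check the timer package for the exact API:

```go
func (slf *TestService2) OnInit() error {
	fmt.Printf("TestService2 OnInit.\n")

	// Assumed API: fire OnCron at second 0 of every minute.
	cronExpr, err := timer.NewCronExpr("0 * * * * *")
	if err != nil {
		return err
	}
	slf.CronFunc(cronExpr, slf.OnCron)
	return nil
}

func (slf *TestService2) OnCron(cron *timer.Cron) {
	fmt.Printf(":A minute passed!\n")
}
```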
Enabling multi-goroutine mode
---------------

By design, every origin service runs in a single goroutine, so business logic can be written without worrying about thread safety, which greatly reduces development effort. In some scenarios, however, that guarantee is unnecessary and concurrent execution is needed. For example, a service that only performs database operations may block while waiting on the database; with a single goroutine those operations are processed one at a time in a queue, which is too slow. In that case you can enable this mode and specify the number of worker goroutines:

```
func (slf *TestService1) OnInit() error {
	fmt.Printf("TestService1 OnInit.\n")

	//Enable multi-goroutine processing: 10 goroutines handle work concurrently
	slf.SetGoRoutineNum(10)
	return nil
}
```
Performance monitoring
-------------

When building a large system, slow handlers or infinite loops often creep in because of code-quality issues; this feature lets you detect them. Usage:

```
@@ -382,6 +412,7 @@ func main(){
}

```

Above, GetProfiler().SetOverTime and slf.GetProfiler().SetMaxOverTimer set the monitoring thresholds.
In main.go the performance reporter is enabled to report every 10 seconds; because the timer in the example contains an infinite loop, you get a report like the following:
@@ -390,10 +421,11 @@ process count 0,take time 0 Milliseconds,average 0 Milliseconds/per.
too slow process:Timer_orginserver/simple_service.(*TestService1).Loop-fm is take 38003 Milliseconds

This points you directly at the Loop function in the TestService1 service.
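The profiler calls referenced above are not visible in this diff. A hedged sketch of the setup might look like this; SetOverTime/SetMaxOverTimer follow the names mentioned above, and OpenProfilerReport is an assumed name for the reporter enabled in main.go:

```go
// In the service: flag handlers slower than 2s, and track the worst case above 10s.
func (slf *TestService1) OnInit() error {
	slf.GetProfiler().SetOverTime(time.Second * 2)
	slf.GetProfiler().SetMaxOverTimer(time.Second * 10)
	return nil
}

// In main.go: report profiling results every 10 seconds (assumed helper name).
func main() {
	node.OpenProfilerReport(time.Second * 10)
	node.Start()
}
```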
Listening for node connect and disconnect events
-----------------------

Some business logic needs to know when a node connects or disconnects; you can register a callback like this:

```
func (ts *TestService) OnInit() error{
	ts.RegRpcListener(ts)
@@ -408,13 +440,14 @@ func (ts *TestService) OnNodeDisconnect(nodeId int){
}
```
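For completeness, a sketch of the full listener; OnNodeConnected is the counterpart callback (the name also appears in the engine's discovery client later in this diff):

```go
// Sketch: both callbacks are invoked by the engine once RegRpcListener(ts)
// has been called in OnInit, as in the fragment above.
func (ts *TestService) OnNodeConnected(nodeId int) {
	fmt.Println("node connected:", nodeId)
}

func (ts *TestService) OnNodeDisconnect(nodeId int) {
	fmt.Println("node disconnected:", nodeId)
}
```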
Chapter 3: Using Modules
-------------------

Creating and destroying Modules
-----------------

A Service can be regarded as a kind of Module and has all of a Module's capabilities. See originserver/simple_module/TestService3.go in the example code.

```
package simple_module
@@ -476,7 +509,9 @@ func (slf *TestService3) OnInit() error {
}

```

OnInit builds a linear module chain TestService3->module1->module2. AddModule returns the Module's Id; automatically generated Ids start at 1e17, and you can also assign internal Ids yourself. When ReleaseModule is called to release module1, module2 is released as well, and OnRelease is invoked automatically. The log order is:

```
Module1 OnInit.
Module2 OnInit.
@@ -484,14 +519,16 @@ module1 id is 100000000000000001, module2 id is 100000000000000002
Module2 Release.
Module1 Release.
```

Timers can be used inside a Module as well; see the timer section of Chapter 2.

Chapter 4: Using events
----------------

Events are an important part of origin: within the same node they allow notifications between services, or between a service and a module. Several built-in services, such as TcpService and HttpService, are implemented on top of the event mechanism; it is a classic observer pattern. The event package exposes two interfaces: event.IEventProcessor, which provides registration and unregistration, and event.IEventHandler, which provides broadcasting.

In simple_event/TestService4.go:

```
package simple_event
@@ -535,6 +572,7 @@ func (slf *TestService4) TriggerEvent(){
```

In simple_event/TestService5.go:

```
package simple_event
@@ -590,19 +628,24 @@ func (slf *TestService5) OnServiceEvent(ev event.IEvent){
```

Ten seconds after the program starts, slf.TriggerEvent is called to broadcast the event, and TestService5 receives:

```
OnServiceEvent type :1001 data:event data.
OnModuleEvent type :1001 data:event data.
```

For listeners registered in the TestModule above, the listener is unregistered automatically when the Module is released.

Chapter 5: Using RPC
---------------

RPC is the main way services talk to each other. It allows nodes in different processes to call each other, and a call can also target a specific nodeid. Example:

simple_rpc/TestService6.go:

```go
package simple_rpc

import (
@@ -627,6 +670,7 @@ type InputData struct {
	B int
}

// Note: RPC method names must have the form RPC_FunctionName or RPCFunctionName; RPC_Sum below could also be written as RPCSum
func (slf *TestService6) RPC_Sum(input *InputData,output *int) error{
	*output = input.A+input.B
	return nil
@@ -635,6 +679,7 @@ func (slf *TestService6) RPC_Sum(input *InputData,output *int) error{
```
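The start of the calling code in TestService7 is trimmed by the diff below. A sketch of a plain synchronous call to the RPC method above (Call blocks until the result or an error comes back; the default timeout is 15s, as noted in the code that follows):

```go
// Sketch: call RPC_Sum from another service in the same origin network.
func (slf *TestService7) sumOnce() {
	input := InputData{A: 1, B: 2}
	var output int
	if err := slf.Call("TestService6.RPC_Sum", &input, &output); err != nil {
		fmt.Printf("Call error :%+v\n", err)
	} else {
		fmt.Printf("Call output %d\n", output) // 3
	}
}
```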
simple_rpc/TestService7.go:

```
package simple_rpc
@@ -673,6 +718,15 @@ func (slf *TestService7) CallTest(){
	}else{
		fmt.Printf("Call output %d\n",output)
	}

	//Custom timeout; the default rpc timeout is 15s
	err = slf.CallWithTimeout(time.Second*1, "TestService6.RPC_Sum", &input, &output)
	if err != nil {
		fmt.Printf("Call error :%+v\n", err)
	} else {
		fmt.Printf("Call output %d\n", output)
	}
}
@@ -684,13 +738,27 @@ func (slf *TestService7) AsyncCallTest(){
	})*/
	//Asynchronous call: the callback runs when the result comes back
	//Note: the callback's first parameter must match RPC_Sum's second parameter, and err error carries RPC_Sum's return value
	err := slf.AsyncCall("TestService6.RPC_Sum", &input, func(output *int, err error) {
		if err != nil {
			fmt.Printf("AsyncCall error :%+v\n", err)
		} else {
			fmt.Printf("AsyncCall output %d\n", *output)
		}
	})
	fmt.Println(err)

	//Custom timeout; returns a cancel function so the rpc call can be cancelled when the business needs it
	rpcCancel, err := slf.AsyncCallWithTimeout(time.Second*1, "TestService6.RPC_Sum", &input, func(output *int, err error) {
		//If the commented-out rpcCancel() call below is made, this callback may never run
		if err != nil {
			fmt.Printf("AsyncCall error :%+v\n", err)
		} else {
			fmt.Printf("AsyncCall output %d\n", *output)
		}
	})
	//rpcCancel()
	fmt.Println(err, rpcCancel)

}

func (slf *TestService7) GoTest(){
@@ -709,26 +777,96 @@ func (slf *TestService7) GoTest(){
}

```

You can place TestService6 on another Node, for example the Node with NodeId 2. As long as the nodes are in the same subnet, origin calls it transparently; developers only need to think about the relationships between Services, which is also the core question when designing your server architecture.

Chapter 6: Concurrent function calls
---------------

It is common to push tasks to other goroutines to run concurrently and have the service's worker goroutine handle the callback once they are done. Usage is simple: first enable the feature as follows:

```
//Derive the number of worker goroutines from the CPU count. Suggestion: (1) use 1.0 for CPU-bound work (2) use 2.0 or higher for I/O-bound work
slf.OpenConcurrentByNumCPU(1.0)

//Alternatively open the pool with explicit numbers: at least 5 and at most 10 goroutines, with a task channel capacity of 1000000
//origin grows and shrinks the pool between the minimum and maximum based on the number of queued tasks
//slf.OpenConcurrent(5, 10, 1000000)
```
Example usage:

```
func (slf *TestService13) testAsyncDo() {
	var context struct {
		data int64
	}

	//1. Basic usage
	//The first function runs in the goroutine pool; when it finishes, a completion event is queued back to the service's worker goroutine,
	//and the second function then runs in the service goroutine, so it is goroutine-safe.
	slf.AsyncDo(func() bool {
		//This callback runs in the goroutine pool
		context.data = 100
		return true
	}, func(err error) {
		//This function runs in the service goroutine
		fmt.Print(context.data) //prints 100
	})

	//2. Ordered execution per queue
	//The first argument is a queue Id; tasks with the same queue Id are executed one after another in the pool.
	//The two calls below both pass queueId 1, so they are queued and executed in order on queue 1.
	queueId := int64(1)
	for i := 0; i < 2; i++ {
		slf.AsyncDoByQueue(queueId, func() bool {
			//This function is called twice, but the calls are serialized
			return true
		}, func(err error) {
			//This function runs in the service goroutine
		})
	}

	//3. Either of the two function arguments may be nil
	//The second function is simply deferred
	slf.AsyncDo(nil, func(err error) {
		//runs later in the service goroutine
	})

	//The first function runs in the goroutine pool, with no callback into the service goroutine
	slf.AsyncDo(func() bool {
		return true
	}, nil)

	//4. The return value controls whether the callback runs
	slf.AsyncDo(func() bool {
		//If false is returned, the second function is not executed; if true, it is
		return false
	}, func(err error) {
		//This function will not be executed
	})
}
```
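A common way to use AsyncDoByQueue, building only on the calls shown above (a sketch: writeToDB is a hypothetical blocking helper): use a stable id such as a player id as the queue id, so writes for the same player are serialized while different players proceed in parallel.

```go
// Hypothetical example: serialize persistence per player.
func (slf *TestService13) savePlayer(playerId int64, data []byte) {
	slf.AsyncDoByQueue(playerId, func() bool {
		// Runs in the goroutine pool; writeToDB is a hypothetical blocking call.
		return writeToDB(playerId, data) == nil
	}, func(err error) {
		// Runs back in the service goroutine.
		fmt.Println("player saved:", playerId)
	})
}
```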
Chapter 7: Configuring service discovery
--------------------

By default the origin engine determines which Services each node has by reading every node's configuration. It also supports dynamic service discovery: the built-in DiscoveryMaster service acts as the central service, and DiscoveryClient fetches information about all nodes and services in the origin network from the DiscoveryMaster. See the implementation of those two services for the details. To use it, add the following to the cluster configuration:

```
{
	"MasterDiscoveryNode": [{
		"NodeId": 2,
		"ListenAddr": "127.0.0.1:10001",
		"MaxRpcParamLen": 409600,
		"NeighborService":["HttpGateService"]
	},
	{
		"NodeId": 1,
		"ListenAddr": "127.0.0.1:8801",
		"MaxRpcParamLen": 409600
	}],

	"NodeList": [{
		"NodeId": 1,
		"ListenAddr": "127.0.0.1:8801",
@@ -737,25 +875,26 @@ origin引擎默认使用读取所有结点配置的进行确认结点有哪些Se
		"Private": false,
		"remark": "//Services whose names start with _ run only in this process and are not exposed to the subnet",
		"ServiceList": ["_TestService1", "TestService9", "TestService10"],
		"MasterDiscoveryService": [
			{
				"MasterNodeId": 2,
				"DiscoveryService": ["TestService8"]
			}
		]
	}]
}
```

Two new fields appear here: MasterDiscoveryNode and MasterDiscoveryService.

MasterDiscoveryNode configures the node with Id 1 as a discovery Master listening on 127.0.0.1:8801. The node with Id 2 is also a discovery Master; the difference is its "NeighborService":["HttpGateService"] setting. When "NeighborService" lists concrete services, that node is a neighbor Master node: the currently running Node only picks up the HttpGateService service from it, and does not sync its own public services up to it, so the relationship with a neighbor node is one-way.

MasterDiscoveryService means node 1 will only discover the TestService8 service from the Master whose NodeId is 2; note that if MasterDiscoveryService is not configured, no filtering takes place. MasterNodeId may also be 0, meaning node 1 discovers only TestService8 across the whole network.

NeighborService is useful when there are several networks, each with its own central Master node, and services need to be discovered across networks.

Chapter 8: Using HttpService
-----------------------

HttpService is the HTTP service built into the origin engine; it covers the common GET and POST handling and URL routing.

simple_http/TestHttpService.go:

```
package simple_http
@@ -825,15 +964,16 @@ func (slf *TestHttpService) HttpPost(session *sysservice.HttpSession){
}

```

Note: add import _ "orginserver/simple_service" to main.go, and add the service to the ServiceList in config/cluster/cluster.json.

Chapter 9: Using TcpService
--------------------------

TcpService is the TCP service built into the origin engine. It supports custom message-format processors: just implement the network.Processor interface. A processor for protobuf, the most common format, is already built in.

simple_tcp/TestTcpService.go:

```
package simple_tcp
@@ -901,9 +1041,9 @@ func (slf *TestTcpService) OnRequest (clientid uint64,msg proto.Message){
}
```

Chapter 10: Other built-in modules
------------------------

* sysservice/wsservice.go: WebSocket support, used in much the same way as TcpService
* sysmodule/DBModule.go: MySQL database access
* sysmodule/RedisModule.go: Redis access
@@ -912,9 +1052,9 @@ func (slf *TestTcpService) OnRequest (clientid uint64,msg proto.Message){
* util: common utilities such as uuid, hash, md5, and goroutine wrappers
* https://github.com/duanhf2012/originservice: additional services are maintained in this project; it currently includes a wrapper for Firebase push notifications.

Notes:
-----

**If you like the project, please star it. Thanks!**

**You are welcome to join the origin server development QQ group: 168306674; I will answer any questions promptly.**
@@ -922,12 +1062,14 @@ func (slf *TestTcpService) OnRequest (clientid uint64,msg proto.Message){
Report bugs and request features: https://github.com/duanhf2012/origin/issues

[The server is maintained personally; if this project helps you, you can click here to donate. Thank you!](http://www.cppblog.com/images/cppblog_com/API/21416/r_pay.jpg "Thanks!")

Special thanks to the following sponsors:

```
咕咕兽
_
死磕代码
bp-li
阿正
大头
```
@@ -20,17 +20,23 @@ const (
Discard NodeStatus = 1 //discarded
)

type MasterDiscoveryService struct {
MasterNodeId int32 //Id of the master node to filter against; 0 or unset means all master nodes
DiscoveryService []string //only these services are discovered
}

type NodeInfo struct {
NodeId int
NodeName string
Private bool
ListenAddr string
MaxRpcParamLen uint32 //maximum RPC parameter length
CompressBytesLen int //payloads longer than this many bytes are compressed
ServiceList []string //all services, in order
PublicServiceList []string //services exposed to other nodes
NeighborService []string
MasterDiscoveryService []MasterDiscoveryService //services to discover, filtered; no filtering if unset
status NodeStatus
Retire bool
}

type NodeRpcInfo struct {
|
||||
@@ -50,8 +56,8 @@ type Cluster struct {
locker sync.RWMutex //lock protecting node/service relationships
mapRpc map[int]*NodeRpcInfo //nodeId
//mapIdNode map[int]NodeInfo //map[NodeId]NodeInfo
mapServiceNode map[string]map[int]struct{} //map[serviceName]map[NodeId]

rpcServer rpc.Server
|
||||
@@ -73,7 +79,7 @@ func SetServiceDiscovery(serviceDiscovery IServiceDiscovery) {
|
||||
}
|
||||
|
||||
func (cls *Cluster) Start() {
|
||||
cls.rpcServer.Start(cls.localNodeInfo.ListenAddr, cls.localNodeInfo.MaxRpcParamLen)
|
||||
cls.rpcServer.Start(cls.localNodeInfo.ListenAddr, cls.localNodeInfo.MaxRpcParamLen,cls.localNodeInfo.CompressBytesLen)
|
||||
}
|
||||
|
||||
func (cls *Cluster) Stop() {
|
||||
@@ -82,10 +88,11 @@ func (cls *Cluster) Stop() {
|
||||
|
||||
func (cls *Cluster) DiscardNode(nodeId int) {
|
||||
cls.locker.Lock()
|
||||
nodeInfo, ok := cls.mapIdNode[nodeId]
|
||||
nodeInfo, ok := cls.mapRpc[nodeId]
|
||||
bDel := (ok == true) && nodeInfo.nodeInfo.status == Discard
|
||||
cls.locker.Unlock()
|
||||
|
||||
if ok == true && nodeInfo.status == Discard {
|
||||
if bDel {
|
||||
cls.DelNode(nodeId, true)
|
||||
}
|
||||
}
|
||||
@@ -98,41 +105,30 @@ func (cls *Cluster) DelNode(nodeId int, immediately bool) {
|
||||
cls.locker.Lock()
|
||||
defer cls.locker.Unlock()
|
||||
|
||||
nodeInfo, ok := cls.mapIdNode[nodeId]
|
||||
rpc, ok := cls.mapRpc[nodeId]
|
||||
if ok == false {
|
||||
return
|
||||
}
|
||||
|
||||
rpc, ok := cls.mapRpc[nodeId]
|
||||
for {
|
||||
//delete immediately
|
||||
if immediately || ok == false {
|
||||
break
|
||||
}
|
||||
|
||||
rpc.client.Lock()
|
||||
if immediately ==false {
|
||||
//do not actively drop a node that is still connected; only disconnect ones that are not connected
|
||||
if rpc.client.IsConnected() {
|
||||
nodeInfo.status = Discard
|
||||
rpc.client.Unlock()
|
||||
log.SRelease("Discard node ", nodeInfo.NodeId, " ", nodeInfo.ListenAddr)
|
||||
rpc.nodeInfo.status = Discard
|
||||
log.Info("Discard node",log.Int("nodeId",rpc.nodeInfo.NodeId),log.String("ListenAddr", rpc.nodeInfo.ListenAddr))
|
||||
return
|
||||
}
|
||||
rpc.client.Unlock()
|
||||
break
|
||||
}
|
||||
|
||||
for _, serviceName := range nodeInfo.ServiceList {
|
||||
for _, serviceName := range rpc.nodeInfo.ServiceList {
|
||||
cls.delServiceNode(serviceName, nodeId)
|
||||
}
|
||||
|
||||
delete(cls.mapIdNode, nodeId)
|
||||
delete(cls.mapRpc, nodeId)
|
||||
if ok == true {
|
||||
rpc.client.Close(false)
|
||||
}
|
||||
|
||||
log.SRelease("remove node ", nodeInfo.NodeId, " ", nodeInfo.ListenAddr)
|
||||
log.Info("remove node ",log.Int("NodeId", rpc.nodeInfo.NodeId),log.String("ListenAddr", rpc.nodeInfo.ListenAddr))
|
||||
}
|
||||
|
||||
func (cls *Cluster) serviceDiscoveryDelNode(nodeId int, immediately bool) {
|
||||
@@ -165,9 +161,9 @@ func (cls *Cluster) serviceDiscoverySetNodeInfo(nodeInfo *NodeInfo) {
|
||||
defer cls.locker.Unlock()
|
||||
|
||||
//first clear all services previously recorded for this NodeId
|
||||
lastNodeInfo, ok := cls.mapIdNode[nodeInfo.NodeId]
|
||||
lastNodeInfo, ok := cls.mapRpc[nodeInfo.NodeId]
|
||||
if ok == true {
|
||||
for _, serviceName := range lastNodeInfo.ServiceList {
|
||||
for _, serviceName := range lastNodeInfo.nodeInfo.ServiceList {
|
||||
cls.delServiceNode(serviceName, nodeInfo.NodeId)
|
||||
}
|
||||
}
|
||||
@@ -177,7 +173,7 @@ func (cls *Cluster) serviceDiscoverySetNodeInfo(nodeInfo *NodeInfo) {
|
||||
for _, serviceName := range nodeInfo.PublicServiceList {
|
||||
if _, ok := mapDuplicate[serviceName]; ok == true {
|
||||
//duplicate found
|
||||
log.SError("Bad duplicate Service Cfg.")
|
||||
log.Error("Bad duplicate Service Cfg.")
|
||||
continue
|
||||
}
|
||||
mapDuplicate[serviceName] = nil
|
||||
@@ -186,31 +182,22 @@ func (cls *Cluster) serviceDiscoverySetNodeInfo(nodeInfo *NodeInfo) {
|
||||
}
|
||||
cls.mapServiceNode[serviceName][nodeInfo.NodeId] = struct{}{}
|
||||
}
|
||||
cls.mapIdNode[nodeInfo.NodeId] = *nodeInfo
|
||||
|
||||
log.SRelease("Discovery nodeId: ", nodeInfo.NodeId, " services:", nodeInfo.PublicServiceList)
|
||||
|
||||
//a connection already exists, so no new setup is needed
|
||||
if _, rpcInfoOK := cls.mapRpc[nodeInfo.NodeId]; rpcInfoOK == true {
|
||||
if lastNodeInfo != nil {
|
||||
log.Info("Discovery nodeId",log.Int("NodeId", nodeInfo.NodeId),log.Any("services:", nodeInfo.PublicServiceList),log.Bool("Retire",nodeInfo.Retire))
|
||||
lastNodeInfo.nodeInfo = *nodeInfo
|
||||
return
|
||||
}
|
||||
|
||||
//otherwise create a new connection
|
||||
rpcInfo := NodeRpcInfo{}
|
||||
rpcInfo.nodeInfo = *nodeInfo
|
||||
rpcInfo.client = &rpc.Client{}
|
||||
rpcInfo.client.TriggerRpcEvent = cls.triggerRpcEvent
|
||||
rpcInfo.client.Connect(nodeInfo.NodeId, nodeInfo.ListenAddr, nodeInfo.MaxRpcParamLen)
|
||||
cls.mapRpc[nodeInfo.NodeId] = rpcInfo
|
||||
|
||||
rpcInfo.client =rpc.NewRClient(nodeInfo.NodeId, nodeInfo.ListenAddr, nodeInfo.MaxRpcParamLen,cls.localNodeInfo.CompressBytesLen,cls.triggerRpcEvent)
|
||||
cls.mapRpc[nodeInfo.NodeId] = &rpcInfo
|
||||
log.Info("Discovery nodeId and new rpc client",log.Int("NodeId", nodeInfo.NodeId),log.Any("services:", nodeInfo.PublicServiceList),log.Bool("Retire",nodeInfo.Retire),log.String("nodeListenAddr",nodeInfo.ListenAddr))
|
||||
}
|
||||
|
||||
func (cls *Cluster) buildLocalRpc() {
|
||||
rpcInfo := NodeRpcInfo{}
|
||||
rpcInfo.nodeInfo = cls.localNodeInfo
|
||||
rpcInfo.client = &rpc.Client{}
|
||||
rpcInfo.client.Connect(rpcInfo.nodeInfo.NodeId, "", 0)
|
||||
|
||||
cls.mapRpc[cls.localNodeInfo.NodeId] = rpcInfo
|
||||
}
|
||||
|
||||
func (cls *Cluster) Init(localNodeId int, setupServiceFun SetupServiceFun) error {
|
||||
//1. Initialize configuration
|
||||
@@ -220,7 +207,6 @@ func (cls *Cluster) Init(localNodeId int, setupServiceFun SetupServiceFun) error
|
||||
}
|
||||
|
||||
cls.rpcServer.Init(cls)
|
||||
cls.buildLocalRpc()
|
||||
|
||||
//2. Set up the service discovery node
|
||||
cls.SetupServiceDiscovery(localNodeId, setupServiceFun)
|
||||
@@ -253,8 +239,9 @@ func (cls *Cluster) checkDynamicDiscovery(localNodeId int) (bool, bool) {
|
||||
return localMaster, hasMaster
|
||||
}
|
||||
|
||||
func (cls *Cluster) appendService(serviceName string, bPublicService bool) {
|
||||
cls.localNodeInfo.ServiceList = append(cls.localNodeInfo.ServiceList, serviceName)
|
||||
func (cls *Cluster) AddDynamicDiscoveryService(serviceName string, bPublicService bool) {
|
||||
addServiceList := append([]string{},serviceName)
|
||||
cls.localNodeInfo.ServiceList = append(addServiceList,cls.localNodeInfo.ServiceList...)
|
||||
if bPublicService {
|
||||
cls.localNodeInfo.PublicServiceList = append(cls.localNodeInfo.PublicServiceList, serviceName)
|
||||
}
|
||||
@@ -298,11 +285,10 @@ func (cls *Cluster) SetupServiceDiscovery(localNodeId int, setupServiceFun Setup
|
||||
|
||||
//2. For dynamic service discovery, install the local discovery services
|
||||
cls.serviceDiscovery = getDynamicDiscovery()
|
||||
cls.AddDynamicDiscoveryService(DynamicDiscoveryClientName, true)
|
||||
if localMaster == true {
|
||||
cls.appendService(DynamicDiscoveryMasterName, false)
|
||||
cls.AddDynamicDiscoveryService(DynamicDiscoveryMasterName, false)
|
||||
}
|
||||
cls.appendService(DynamicDiscoveryClientName, true)
|
||||
|
||||
}
|
||||
|
||||
func (cls *Cluster) FindRpcHandler(serviceName string) rpc.IRpcHandler {
|
||||
@@ -314,27 +300,33 @@ func (cls *Cluster) FindRpcHandler(serviceName string) rpc.IRpcHandler {
|
||||
return pService.GetRpcHandler()
|
||||
}
|
||||
|
||||
func (cls *Cluster) getRpcClient(nodeId int) *rpc.Client {
|
||||
func (cls *Cluster) getRpcClient(nodeId int) (*rpc.Client,bool) {
|
||||
c, ok := cls.mapRpc[nodeId]
|
||||
if ok == false {
|
||||
return nil
|
||||
return nil,false
|
||||
}
|
||||
|
||||
return c.client
|
||||
return c.client,c.nodeInfo.Retire
|
||||
}
|
||||
|
||||
func (cls *Cluster) GetRpcClient(nodeId int) *rpc.Client {
|
||||
func (cls *Cluster) GetRpcClient(nodeId int) (*rpc.Client,bool) {
|
||||
cls.locker.RLock()
|
||||
defer cls.locker.RUnlock()
|
||||
return cls.getRpcClient(nodeId)
|
||||
}
|
||||
|
||||
func GetRpcClient(nodeId int, serviceMethod string, clientList []*rpc.Client) (error, int) {
|
||||
func GetRpcClient(nodeId int, serviceMethod string,filterRetire bool, clientList []*rpc.Client) (error, int) {
|
||||
if nodeId > 0 {
|
||||
pClient := GetCluster().GetRpcClient(nodeId)
|
||||
pClient,retire := GetCluster().GetRpcClient(nodeId)
|
||||
if pClient == nil {
|
||||
return fmt.Errorf("cannot find nodeid %d!", nodeId), 0
|
||||
}
|
||||
|
||||
//filter out retired nodes when requested
|
||||
if filterRetire == true && retire == true {
|
||||
return fmt.Errorf("cannot find nodeid %d!", nodeId), 0
|
||||
}
|
||||
|
||||
clientList[0] = pClient
|
||||
return nil, 1
|
||||
}
|
||||
@@ -346,7 +338,7 @@ func GetRpcClient(nodeId int, serviceMethod string, clientList []*rpc.Client) (e
|
||||
serviceName := serviceMethod[:findIndex]
|
||||
|
||||
//1. Find the matching rpc node id
|
||||
return GetCluster().GetNodeIdByService(serviceName, clientList, true)
|
||||
return GetCluster().GetNodeIdByService(serviceName, clientList, filterRetire)
|
||||
}
|
||||
|
||||
func GetRpcServer() *rpc.Server {
|
||||
@@ -354,14 +346,23 @@ func GetRpcServer() *rpc.Server {
|
||||
}
|
||||
|
||||
func (cls *Cluster) IsNodeConnected(nodeId int) bool {
|
||||
pClient := cls.GetRpcClient(nodeId)
|
||||
pClient,_ := cls.GetRpcClient(nodeId)
|
||||
return pClient != nil && pClient.IsConnected()
|
||||
}
|
||||
|
||||
func (cls *Cluster) triggerRpcEvent(bConnect bool, clientSeq uint32, nodeId int) {
|
||||
func (cls *Cluster) IsNodeRetire(nodeId int) bool {
|
||||
cls.locker.RLock()
|
||||
defer cls.locker.RUnlock()
|
||||
|
||||
_,retire :=cls.getRpcClient(nodeId)
|
||||
return retire
|
||||
}
|
||||
|
||||
|
||||
func (cls *Cluster) triggerRpcEvent(bConnect bool, clientId uint32, nodeId int) {
|
||||
cls.locker.Lock()
|
||||
nodeInfo, ok := cls.mapRpc[nodeId]
|
||||
if ok == false || nodeInfo.client == nil || nodeInfo.client.GetClientSeq() != clientSeq {
|
||||
if ok == false || nodeInfo.client == nil || nodeInfo.client.GetClientId() != clientId {
|
||||
cls.locker.Unlock()
|
||||
return
|
||||
}
|
||||
@@ -372,7 +373,7 @@ func (cls *Cluster) triggerRpcEvent(bConnect bool, clientSeq uint32, nodeId int)
|
||||
for serviceName, _ := range cls.mapServiceListenRpcEvent {
|
||||
ser := service.GetService(serviceName)
|
||||
if ser == nil {
|
||||
log.SError("cannot find service name ", serviceName)
|
||||
log.Error("cannot find service name "+serviceName)
|
||||
continue
|
||||
}
|
||||
|
||||
@@ -383,7 +384,6 @@ func (cls *Cluster) triggerRpcEvent(bConnect bool, clientSeq uint32, nodeId int)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
func (cls *Cluster) TriggerDiscoveryEvent(bDiscovery bool, nodeId int, serviceName []string) {
|
||||
cls.rpcEventLocker.Lock()
|
||||
defer cls.rpcEventLocker.Unlock()
|
||||
@@ -391,7 +391,7 @@ func (cls *Cluster) TriggerDiscoveryEvent(bDiscovery bool, nodeId int, serviceNa
|
||||
for sName, _ := range cls.mapServiceListenDiscoveryEvent {
|
||||
ser := service.GetService(sName)
|
||||
if ser == nil {
|
||||
log.SError("cannot find service name ", serviceName)
|
||||
log.Error("cannot find service",log.Any("services",serviceName))
|
||||
continue
|
||||
}
|
||||
|
||||
@@ -477,11 +477,14 @@ func (cls *Cluster) GetGlobalCfg() interface{} {
|
||||
return cls.globalCfg
|
||||
}
|
||||
|
||||
|
||||
func (cls *Cluster) GetNodeInfo(nodeId int) (NodeInfo,bool) {
|
||||
cls.locker.RLock()
|
||||
defer cls.locker.RUnlock()
|
||||
|
||||
nodeInfo,ok:= cls.mapIdNode[nodeId]
|
||||
return nodeInfo,ok
|
||||
nodeInfo,ok:= cls.mapRpc[nodeId]
|
||||
if ok == false || nodeInfo == nil {
|
||||
return NodeInfo{},false
|
||||
}
|
||||
|
||||
return nodeInfo.nodeInfo,true
|
||||
}
|
||||
|
||||
@@ -5,6 +5,9 @@ import (
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/rpc"
|
||||
"github.com/duanhf2012/origin/service"
|
||||
"time"
|
||||
"github.com/duanhf2012/origin/util/timer"
|
||||
"google.golang.org/protobuf/proto"
|
||||
)
|
||||
|
||||
const DynamicDiscoveryMasterName = "DiscoveryMaster"
|
||||
@@ -12,6 +15,7 @@ const DynamicDiscoveryClientName = "DiscoveryClient"
|
||||
const RegServiceDiscover = DynamicDiscoveryMasterName + ".RPC_RegServiceDiscover"
|
||||
const SubServiceDiscover = DynamicDiscoveryClientName + ".RPC_SubServiceDiscover"
|
||||
const AddSubServiceDiscover = DynamicDiscoveryMasterName + ".RPC_AddSubServiceDiscover"
|
||||
const NodeRetireRpcMethod = DynamicDiscoveryMasterName+".RPC_NodeRetire"
|
||||
|
||||
type DynamicDiscoveryMaster struct {
|
||||
service.Service
|
||||
@@ -28,6 +32,7 @@ type DynamicDiscoveryClient struct {
|
||||
localNodeId int
|
||||
|
||||
mapDiscovery map[int32]map[int32]struct{} //map[masterNodeId]map[nodeId]struct{}
|
||||
bRetire bool
|
||||
}
|
||||
|
||||
var masterService DynamicDiscoveryMaster
|
||||
@@ -47,19 +52,50 @@ func (ds *DynamicDiscoveryMaster) isRegNode(nodeId int32) bool {
|
||||
return ok
|
||||
}
|
||||
|
||||
func (ds *DynamicDiscoveryMaster) addNodeInfo(nodeInfo *rpc.NodeInfo) {
|
||||
if len(nodeInfo.PublicServiceList) == 0 {
|
||||
func (ds *DynamicDiscoveryMaster) updateNodeInfo(nInfo *rpc.NodeInfo) {
|
||||
if _,ok:= ds.mapNodeInfo[nInfo.NodeId];ok == false {
|
||||
return
|
||||
}
|
||||
|
||||
_, ok := ds.mapNodeInfo[nodeInfo.NodeId]
|
||||
nodeInfo := proto.Clone(nInfo).(*rpc.NodeInfo)
|
||||
for i:=0;i<len(ds.nodeInfo);i++ {
|
||||
if ds.nodeInfo[i].NodeId == nodeInfo.NodeId {
|
||||
ds.nodeInfo[i] = nodeInfo
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (ds *DynamicDiscoveryMaster) addNodeInfo(nInfo *rpc.NodeInfo) {
|
||||
if len(nInfo.PublicServiceList) == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
_, ok := ds.mapNodeInfo[nInfo.NodeId]
|
||||
if ok == true {
|
||||
return
|
||||
}
|
||||
ds.mapNodeInfo[nodeInfo.NodeId] = struct{}{}
|
||||
ds.mapNodeInfo[nInfo.NodeId] = struct{}{}
|
||||
|
||||
nodeInfo := proto.Clone(nInfo).(*rpc.NodeInfo)
|
||||
ds.nodeInfo = append(ds.nodeInfo, nodeInfo)
|
||||
}
|
||||
|
||||
func (ds *DynamicDiscoveryMaster) removeNodeInfo(nodeId int32) {
|
||||
if _,ok:= ds.mapNodeInfo[nodeId];ok == false {
|
||||
return
|
||||
}
|
||||
|
||||
for i:=0;i<len(ds.nodeInfo);i++ {
|
||||
if ds.nodeInfo[i].NodeId == nodeId {
|
||||
ds.nodeInfo = append(ds.nodeInfo[:i],ds.nodeInfo[i+1:]...)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
delete(ds.mapNodeInfo,nodeId)
|
||||
}
|
||||
|
||||
func (ds *DynamicDiscoveryMaster) OnInit() error {
|
||||
ds.mapNodeInfo = make(map[int32]struct{}, 20)
|
||||
ds.RegRpcListener(ds)
|
||||
@@ -70,16 +106,14 @@ func (ds *DynamicDiscoveryMaster) OnInit() error {
|
||||
func (ds *DynamicDiscoveryMaster) OnStart() {
|
||||
var nodeInfo rpc.NodeInfo
|
||||
localNodeInfo := cluster.GetLocalNodeInfo()
|
||||
if localNodeInfo.Private == true {
|
||||
return
|
||||
}
|
||||
|
||||
nodeInfo.NodeId = int32(localNodeInfo.NodeId)
|
||||
nodeInfo.NodeName = localNodeInfo.NodeName
|
||||
nodeInfo.ListenAddr = localNodeInfo.ListenAddr
|
||||
nodeInfo.PublicServiceList = localNodeInfo.PublicServiceList
|
||||
nodeInfo.MaxRpcParamLen = localNodeInfo.MaxRpcParamLen
|
||||
|
||||
nodeInfo.Private = localNodeInfo.Private
|
||||
nodeInfo.Retire = localNodeInfo.Retire
|
||||
|
||||
ds.addNodeInfo(&nodeInfo)
|
||||
}
|
||||
|
||||
@@ -103,6 +137,8 @@ func (ds *DynamicDiscoveryMaster) OnNodeDisconnect(nodeId int) {
|
||||
return
|
||||
}
|
||||
|
||||
ds.removeNodeInfo(int32(nodeId))
|
||||
|
||||
var notifyDiscover rpc.SubscribeDiscoverNotify
|
||||
notifyDiscover.MasterNodeId = int32(cluster.GetLocalNodeInfo().NodeId)
|
||||
notifyDiscover.DelNodeId = int32(nodeId)
|
||||
@@ -119,11 +155,24 @@ func (ds *DynamicDiscoveryMaster) RpcCastGo(serviceMethod string, args interface
|
||||
}
|
||||
}
|
||||
|
||||
func (ds *DynamicDiscoveryMaster) RPC_NodeRetire(req *rpc.NodeRetireReq, res *rpc.Empty) error {
|
||||
log.Info("node is retire",log.Int32("nodeId",req.NodeInfo.NodeId),log.Bool("retire",req.NodeInfo.Retire))
|
||||
|
||||
ds.updateNodeInfo(req.NodeInfo)
|
||||
|
||||
var notifyDiscover rpc.SubscribeDiscoverNotify
|
||||
notifyDiscover.MasterNodeId = int32(cluster.GetLocalNodeInfo().NodeId)
|
||||
notifyDiscover.NodeInfo = append(notifyDiscover.NodeInfo, req.NodeInfo)
|
||||
ds.RpcCastGo(SubServiceDiscover, ¬ifyDiscover)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// A node has registered with us
|
||||
func (ds *DynamicDiscoveryMaster) RPC_RegServiceDiscover(req *rpc.ServiceDiscoverReq, res *rpc.Empty) error {
|
||||
if req.NodeInfo == nil {
|
||||
err := errors.New("RPC_RegServiceDiscover req is error.")
|
||||
log.SError(err.Error())
|
||||
log.Error(err.Error())
|
||||
|
||||
return err
|
||||
}
|
||||
@@ -146,6 +195,8 @@ func (ds *DynamicDiscoveryMaster) RPC_RegServiceDiscover(req *rpc.ServiceDiscove
|
||||
nodeInfo.PublicServiceList = req.NodeInfo.PublicServiceList
|
||||
nodeInfo.ListenAddr = req.NodeInfo.ListenAddr
|
||||
nodeInfo.MaxRpcParamLen = req.NodeInfo.MaxRpcParamLen
|
||||
nodeInfo.Retire = req.NodeInfo.Retire
|
||||
|
||||
//proactively remove an existing node, ensuring it is disconnected before reconnecting
|
||||
cluster.serviceDiscoveryDelNode(nodeInfo.NodeId, true)
|
||||
|
||||
@@ -229,15 +280,6 @@ func (dc *DynamicDiscoveryClient) fullCompareDiffNode(masterNodeId int32, mapNod
|
||||
|
||||
//notification for subscribed service discovery
|
||||
func (dc *DynamicDiscoveryClient) RPC_SubServiceDiscover(req *rpc.SubscribeDiscoverNotify) error {
|
||||
//collect the NeighborService filters for the current master node
|
||||
masterDiscoveryNodeInfo := cluster.GetMasterDiscoveryNodeInfo(int(req.MasterNodeId))
|
||||
mapMasterDiscoveryService := map[string]struct{}{}
|
||||
if masterDiscoveryNodeInfo != nil {
|
||||
for i := 0; i < len(masterDiscoveryNodeInfo.NeighborService); i++ {
|
||||
mapMasterDiscoveryService[masterDiscoveryNodeInfo.NeighborService[i]] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
mapNodeInfo := map[int32]*rpc.NodeInfo{}
|
||||
for _, nodeInfo := range req.NodeInfo {
|
||||
//skip the local node and any node that exposes no public services
|
||||
@@ -252,13 +294,6 @@ func (dc *DynamicDiscoveryClient) RPC_SubServiceDiscover(req *rpc.SubscribeDisco
|
||||
|
||||
//iterate over all public services and filter them
|
||||
for _, serviceName := range nodeInfo.PublicServiceList {
|
||||
//filter only when a configuration exists
|
||||
if len(mapMasterDiscoveryService) > 0 {
|
||||
if _, ok := mapMasterDiscoveryService[serviceName]; ok == false {
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
nInfo := mapNodeInfo[nodeInfo.NodeId]
|
||||
if nInfo == nil {
|
||||
nInfo = &rpc.NodeInfo{}
|
||||
@@ -266,6 +301,9 @@ func (dc *DynamicDiscoveryClient) RPC_SubServiceDiscover(req *rpc.SubscribeDisco
|
||||
nInfo.NodeName = nodeInfo.NodeName
|
||||
nInfo.ListenAddr = nodeInfo.ListenAddr
|
||||
nInfo.MaxRpcParamLen = nodeInfo.MaxRpcParamLen
|
||||
nInfo.Retire = nodeInfo.Retire
|
||||
nInfo.Private = nodeInfo.Private
|
||||
|
||||
mapNodeInfo[nodeInfo.NodeId] = nInfo
|
||||
}
|
||||
|
||||
@@ -275,7 +313,6 @@ func (dc *DynamicDiscoveryClient) RPC_SubServiceDiscover(req *rpc.SubscribeDisco
|
||||
|
||||
//for a full sync, work out which nodes differ
|
||||
var willDelNodeId []int32
|
||||
//if this is not a neighbor node, apply filtering
|
||||
if req.IsFull == true {
|
||||
diffNode := dc.fullCompareDiffNode(req.MasterNodeId, mapNodeInfo)
|
||||
if len(diffNode) > 0 {
|
||||
@@ -290,8 +327,7 @@ func (dc *DynamicDiscoveryClient) RPC_SubServiceDiscover(req *rpc.SubscribeDisco
|
||||
|
||||
//remove nodes that are no longer needed
|
||||
for _, nodeId := range willDelNodeId {
|
||||
nodeInfo,_ := cluster.GetNodeInfo(int(nodeId))
|
||||
cluster.TriggerDiscoveryEvent(false,int(nodeId),nodeInfo.PublicServiceList)
|
||||
cluster.TriggerDiscoveryEvent(false,int(nodeId),nil)
|
||||
dc.removeMasterNode(req.MasterNodeId, int32(nodeId))
|
||||
if dc.findNodeId(nodeId) == false {
|
||||
dc.funDelService(int(nodeId), false)
|
||||
@@ -300,10 +336,8 @@ func (dc *DynamicDiscoveryClient) RPC_SubServiceDiscover(req *rpc.SubscribeDisco
|
||||
|
||||
//set the new nodes
|
||||
for _, nodeInfo := range mapNodeInfo {
|
||||
dc.addMasterNode(req.MasterNodeId, nodeInfo.NodeId)
|
||||
dc.setNodeInfo(nodeInfo)
|
||||
|
||||
if len(nodeInfo.PublicServiceList) == 0 {
|
||||
bSet := dc.setNodeInfo(req.MasterNodeId,nodeInfo)
|
||||
if bSet == false {
|
||||
continue
|
||||
}
|
||||
|
||||
@@ -324,6 +358,33 @@ func (dc *DynamicDiscoveryClient) isDiscoverNode(nodeId int) bool {
|
||||
}
|
||||
|
||||
func (dc *DynamicDiscoveryClient) OnNodeConnected(nodeId int) {
|
||||
dc.regServiceDiscover(nodeId)
|
||||
}
|
||||
|
||||
func (dc *DynamicDiscoveryClient) OnRetire(){
|
||||
dc.bRetire = true
|
||||
|
||||
masterNodeList := cluster.GetDiscoveryNodeList()
|
||||
for i:=0;i<len(masterNodeList);i++{
|
||||
var nodeRetireReq rpc.NodeRetireReq
|
||||
|
||||
nodeRetireReq.NodeInfo = &rpc.NodeInfo{}
|
||||
nodeRetireReq.NodeInfo.NodeId = int32(cluster.localNodeInfo.NodeId)
|
||||
nodeRetireReq.NodeInfo.NodeName = cluster.localNodeInfo.NodeName
|
||||
nodeRetireReq.NodeInfo.ListenAddr = cluster.localNodeInfo.ListenAddr
|
||||
nodeRetireReq.NodeInfo.MaxRpcParamLen = cluster.localNodeInfo.MaxRpcParamLen
|
||||
nodeRetireReq.NodeInfo.PublicServiceList = cluster.localNodeInfo.PublicServiceList
|
||||
nodeRetireReq.NodeInfo.Retire = dc.bRetire
|
||||
nodeRetireReq.NodeInfo.Private = cluster.localNodeInfo.Private
|
||||
|
||||
err := dc.GoNode(int(masterNodeList[i].NodeId),NodeRetireRpcMethod,&nodeRetireReq)
|
||||
if err!= nil {
|
||||
log.Error("call "+NodeRetireRpcMethod+" is fail",log.ErrorAttr("err",err))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (dc *DynamicDiscoveryClient) regServiceDiscover(nodeId int){
|
||||
nodeInfo := cluster.GetMasterDiscoveryNodeInfo(nodeId)
|
||||
if nodeInfo == nil {
|
||||
return
|
||||
@@ -335,57 +396,76 @@ func (dc *DynamicDiscoveryClient) OnNodeConnected(nodeId int) {
|
||||
req.NodeInfo.NodeName = cluster.localNodeInfo.NodeName
|
||||
req.NodeInfo.ListenAddr = cluster.localNodeInfo.ListenAddr
|
||||
req.NodeInfo.MaxRpcParamLen = cluster.localNodeInfo.MaxRpcParamLen
|
||||
|
||||
//if NeighborService is not set in the MasterDiscoveryNode config, sync all of this node's services
|
||||
if len(nodeInfo.NeighborService) == 0 {
|
||||
req.NodeInfo.PublicServiceList = cluster.localNodeInfo.PublicServiceList
|
||||
} else {
|
||||
req.NodeInfo.PublicServiceList = append(req.NodeInfo.PublicServiceList, DynamicDiscoveryClientName)
|
||||
}
|
||||
req.NodeInfo.PublicServiceList = cluster.localNodeInfo.PublicServiceList
|
||||
req.NodeInfo.Retire = dc.bRetire
|
||||
req.NodeInfo.Private = cluster.localNodeInfo.Private
|
||||
|
||||
//sync this Node's service information to the Master service
|
||||
err := dc.AsyncCallNode(nodeId, RegServiceDiscover, &req, func(res *rpc.Empty, err error) {
|
||||
if err != nil {
|
||||
log.SError("call ", RegServiceDiscover, " is fail :", err.Error())
|
||||
log.Error("call "+RegServiceDiscover+" is fail :"+ err.Error())
|
||||
dc.AfterFunc(time.Second*3, func(timer *timer.Timer) {
|
||||
dc.regServiceDiscover(nodeId)
|
||||
})
|
||||
|
||||
return
|
||||
}
|
||||
})
|
||||
if err != nil {
|
||||
log.SError("call ", RegServiceDiscover, " is fail :", err.Error())
|
||||
log.Error("call "+ RegServiceDiscover+" is fail :"+ err.Error())
|
||||
}
|
||||
}
|
||||
|
||||
func (dc *DynamicDiscoveryClient) setNodeInfo(nodeInfo *rpc.NodeInfo) {
|
||||
if nodeInfo == nil || nodeInfo.Private == true || int(nodeInfo.NodeId) == dc.localNodeId {
|
||||
return
|
||||
}
|
||||
func (dc *DynamicDiscoveryClient) canDiscoveryService(fromMasterNodeId int32,serviceName string) bool{
|
||||
canDiscovery := true
|
||||
|
||||
//filter the services we care about
|
||||
localNodeInfo := cluster.GetLocalNodeInfo()
|
||||
if len(localNodeInfo.DiscoveryService) > 0 {
|
||||
var discoverServiceSlice = make([]string, 0, 24)
|
||||
for _, pubService := range nodeInfo.PublicServiceList {
|
||||
for _, discoverService := range localNodeInfo.DiscoveryService {
|
||||
if pubService == discoverService {
|
||||
discoverServiceSlice = append(discoverServiceSlice, pubService)
|
||||
for i:=0;i<len(cluster.GetLocalNodeInfo().MasterDiscoveryService);i++{
|
||||
masterNodeId := cluster.GetLocalNodeInfo().MasterDiscoveryService[i].MasterNodeId
|
||||
|
||||
if masterNodeId == fromMasterNodeId || masterNodeId == 0 {
|
||||
canDiscovery = false
|
||||
|
||||
for _,discoveryService := range cluster.GetLocalNodeInfo().MasterDiscoveryService[i].DiscoveryService {
|
||||
if discoveryService == serviceName {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
nodeInfo.PublicServiceList = discoverServiceSlice
|
||||
}
|
||||
|
||||
if len(nodeInfo.PublicServiceList) == 0 {
|
||||
return
|
||||
return canDiscovery
|
||||
}
|
||||
|
||||
func (dc *DynamicDiscoveryClient) setNodeInfo(masterNodeId int32,nodeInfo *rpc.NodeInfo) bool{
|
||||
if nodeInfo == nil || nodeInfo.Private == true || int(nodeInfo.NodeId) == dc.localNodeId {
|
||||
return false
|
||||
}
|
||||
|
||||
//filter the services we care about
|
||||
var discoverServiceSlice = make([]string, 0, 24)
|
||||
for _, pubService := range nodeInfo.PublicServiceList {
|
||||
if dc.canDiscoveryService(masterNodeId,pubService) == true {
|
||||
discoverServiceSlice = append(discoverServiceSlice,pubService)
|
||||
}
|
||||
}
|
||||
|
||||
if len(discoverServiceSlice) == 0 {
|
||||
return false
|
||||
}
|
||||
|
||||
var nInfo NodeInfo
|
||||
nInfo.ServiceList = nodeInfo.PublicServiceList
|
||||
nInfo.PublicServiceList = nodeInfo.PublicServiceList
|
||||
nInfo.ServiceList = discoverServiceSlice
|
||||
nInfo.PublicServiceList = discoverServiceSlice
|
||||
nInfo.NodeId = int(nodeInfo.NodeId)
|
||||
nInfo.NodeName = nodeInfo.NodeName
|
||||
nInfo.ListenAddr = nodeInfo.ListenAddr
|
||||
nInfo.MaxRpcParamLen = nodeInfo.MaxRpcParamLen
|
||||
nInfo.Retire = nodeInfo.Retire
|
||||
nInfo.Private = nodeInfo.Private
|
||||
|
||||
dc.funSetService(&nInfo)
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
func (dc *DynamicDiscoveryClient) OnNodeDisconnect(nodeId int) {
|
||||
|
||||
@@ -58,7 +58,7 @@ func (cls *Cluster) readServiceConfig(filepath string) (interface{}, map[string]
|
||||
serviceCfg := v.(map[string]interface{})
|
||||
nodeId, ok := serviceCfg["NodeId"]
|
||||
if ok == false {
|
||||
log.SFatal("NodeService list not find nodeId field")
|
||||
log.Fatal("NodeService list not find nodeId field")
|
||||
}
|
||||
mapNodeService[int(nodeId.(float64))] = serviceCfg
|
||||
}
|
||||
@@ -199,7 +199,11 @@ func (cls *Cluster) readLocalService(localNodeId int) error {
|
||||
}
|
||||
|
||||
func (cls *Cluster) parseLocalCfg() {
|
||||
cls.mapIdNode[cls.localNodeInfo.NodeId] = cls.localNodeInfo
|
||||
rpcInfo := NodeRpcInfo{}
|
||||
rpcInfo.nodeInfo = cls.localNodeInfo
|
||||
rpcInfo.client = rpc.NewLClient(rpcInfo.nodeInfo.NodeId)
|
||||
|
||||
cls.mapRpc[cls.localNodeInfo.NodeId] = &rpcInfo
|
||||
|
||||
for _, sName := range cls.localNodeInfo.ServiceList {
|
||||
if _, ok := cls.mapServiceNode[sName]; ok == false {
|
||||
@@ -225,8 +229,7 @@ func (cls *Cluster) checkDiscoveryNodeList(discoverMasterNode []NodeInfo) bool {
|
||||
|
||||
func (cls *Cluster) InitCfg(localNodeId int) error {
|
||||
cls.localServiceCfg = map[string]interface{}{}
|
||||
cls.mapRpc = map[int]NodeRpcInfo{}
|
||||
cls.mapIdNode = map[int]NodeInfo{}
|
||||
cls.mapRpc = map[int]*NodeRpcInfo{}
|
||||
cls.mapServiceNode = map[string]map[int]struct{}{}
|
||||
|
||||
//load the NodeList configuration for the local node
|
||||
@@ -263,17 +266,24 @@ func (cls *Cluster) IsConfigService(serviceName string) bool {
|
||||
return ok
|
||||
}
|
||||
|
||||
func (cls *Cluster) GetNodeIdByService(serviceName string, rpcClientList []*rpc.Client, bAll bool) (error, int) {
|
||||
|
||||
func (cls *Cluster) GetNodeIdByService(serviceName string, rpcClientList []*rpc.Client, filterRetire bool) (error, int) {
|
||||
cls.locker.RLock()
|
||||
defer cls.locker.RUnlock()
|
||||
mapNodeId, ok := cls.mapServiceNode[serviceName]
|
||||
count := 0
|
||||
if ok == true {
|
||||
for nodeId, _ := range mapNodeId {
|
||||
pClient := GetCluster().getRpcClient(nodeId)
|
||||
if pClient == nil || (bAll == false && pClient.IsConnected() == false) {
|
||||
pClient,retire := GetCluster().getRpcClient(nodeId)
|
||||
if pClient == nil || pClient.IsConnected() == false {
|
||||
continue
|
||||
}
|
||||
|
||||
//when retired nodes should be filtered out, skip nodes in the retire state
|
||||
if filterRetire == true && retire == true {
|
||||
continue
|
||||
}
|
||||
|
||||
rpcClientList[count] = pClient
|
||||
count++
|
||||
if count >= cap(rpcClientList) {
|
||||
|
||||
concurrent/concurrent.go (new file, 99 lines)
@@ -0,0 +1,99 @@
|
||||
package concurrent
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"runtime"
|
||||
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"sync/atomic"
|
||||
)
|
||||
|
||||
const defaultMaxTaskChannelNum = 1000000
|
||||
|
||||
type IConcurrent interface {
|
||||
OpenConcurrentByNumCPU(cpuMul float32)
|
||||
OpenConcurrent(minGoroutineNum int32, maxGoroutineNum int32, maxTaskChannelNum int)
|
||||
AsyncDoByQueue(queueId int64, fn func() bool, cb func(err error))
|
||||
AsyncDo(f func() bool, cb func(err error))
|
||||
}
|
||||
|
||||
type Concurrent struct {
|
||||
dispatch
|
||||
|
||||
tasks chan task
|
||||
cbChannel chan func(error)
|
||||
open int32
|
||||
}
|
||||
|
||||
/*
|
||||
cpuMul is a multiplier applied to the CPU count.
Suggestion: (1) use 1 for CPU-bound work (2) use 2 or higher for I/O-bound work
|
||||
*/
|
||||
func (c *Concurrent) OpenConcurrentByNumCPU(cpuNumMul float32) {
|
||||
goroutineNum := int32(float32(runtime.NumCPU())*cpuNumMul + 1)
|
||||
c.OpenConcurrent(goroutineNum, goroutineNum, defaultMaxTaskChannelNum)
|
||||
}
|
||||
|
||||
func (c *Concurrent) OpenConcurrent(minGoroutineNum int32, maxGoroutineNum int32, maxTaskChannelNum int) {
|
||||
if atomic.AddInt32(&c.open,1) > 1 {
|
||||
panic("repeated calls to OpenConcurrent are not allowed!")
|
||||
}
|
||||
|
||||
c.tasks = make(chan task, maxTaskChannelNum)
|
||||
c.cbChannel = make(chan func(error), maxTaskChannelNum)
|
||||
|
||||
//start the dispatcher
|
||||
c.dispatch.open(minGoroutineNum, maxGoroutineNum, c.tasks, c.cbChannel)
|
||||
}
|
||||
|
||||
func (c *Concurrent) AsyncDo(f func() bool, cb func(err error)) {
|
||||
c.AsyncDoByQueue(0, f, cb)
|
||||
}
|
||||
|
||||
func (c *Concurrent) AsyncDoByQueue(queueId int64, fn func() bool, cb func(err error)) {
|
||||
if cap(c.tasks) == 0 {
|
||||
panic("not open concurrent")
|
||||
}
|
||||
|
||||
if fn == nil && cb == nil {
|
||||
log.Stack("fn and cb is nil")
|
||||
return
|
||||
}
|
||||
|
||||
if fn == nil {
|
||||
c.pushAsyncDoCallbackEvent(cb)
|
||||
return
|
||||
}
|
||||
|
||||
if queueId != 0 {
|
||||
queueId = queueId % maxTaskQueueSessionId+1
|
||||
}
|
||||
|
||||
select {
|
||||
case c.tasks <- task{queueId, fn, cb}:
|
||||
default:
|
||||
log.Error("tasks channel is full")
|
||||
if cb != nil {
|
||||
c.pushAsyncDoCallbackEvent(func(err error) {
|
||||
cb(errors.New("tasks channel is full"))
|
||||
})
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Concurrent) Close() {
|
||||
if cap(c.tasks) == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
log.Info("wait close concurrent")
|
||||
|
||||
c.dispatch.close()
|
||||
|
||||
log.Info("concurrent has successfully exited")
|
||||
}
|
||||
|
||||
func (c *Concurrent) GetCallBackChannel() chan func(error) {
|
||||
return c.cbChannel
|
||||
}
|
||||
concurrent/dispatch.go (new file, 195 lines)
@@ -0,0 +1,195 @@
|
||||
package concurrent
|
||||
|
||||
import (
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
|
||||
"fmt"
|
||||
"runtime"
|
||||
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/util/queue"
|
||||
)
|
||||
|
||||
var idleTimeout = int64(2 * time.Second)
|
||||
const maxTaskQueueSessionId = 10000
|
||||
|
||||
type dispatch struct {
|
||||
minConcurrentNum int32
|
||||
maxConcurrentNum int32
|
||||
|
||||
queueIdChannel chan int64
|
||||
workerQueue chan task
|
||||
tasks chan task
|
||||
idle bool
|
||||
workerNum int32
|
||||
cbChannel chan func(error)
|
||||
|
||||
mapTaskQueueSession map[int64]*queue.Deque[task]
|
||||
|
||||
waitWorker sync.WaitGroup
|
||||
waitDispatch sync.WaitGroup
|
||||
}
|
||||
|
||||
func (d *dispatch) open(minGoroutineNum int32, maxGoroutineNum int32, tasks chan task, cbChannel chan func(error)) {
|
||||
d.minConcurrentNum = minGoroutineNum
|
||||
d.maxConcurrentNum = maxGoroutineNum
|
||||
d.tasks = tasks
|
||||
d.mapTaskQueueSession = make(map[int64]*queue.Deque[task], maxTaskQueueSessionId)
|
||||
d.workerQueue = make(chan task)
|
||||
d.cbChannel = cbChannel
|
||||
d.queueIdChannel = make(chan int64, cap(tasks))
|
||||
|
||||
d.waitDispatch.Add(1)
|
||||
go d.run()
|
||||
}
|
||||
|
||||
func (d *dispatch) run() {
|
||||
defer d.waitDispatch.Done()
|
||||
timeout := time.NewTimer(time.Duration(atomic.LoadInt64(&idleTimeout)))
|
||||
|
||||
for {
|
||||
select {
|
||||
case queueId := <-d.queueIdChannel:
|
||||
d.processqueueEvent(queueId)
|
||||
default:
|
||||
select {
|
||||
case t, ok := <-d.tasks:
|
||||
if ok == false {
|
||||
return
|
||||
}
|
||||
d.processTask(&t)
|
||||
case queueId := <-d.queueIdChannel:
|
||||
d.processqueueEvent(queueId)
|
||||
case <-timeout.C:
|
||||
d.processTimer()
|
||||
if atomic.LoadInt32(&d.minConcurrentNum) == -1 && len(d.tasks) == 0 {
|
||||
atomic.StoreInt64(&idleTimeout,int64(time.Millisecond * 10))
|
||||
}
|
||||
timeout.Reset(time.Duration(atomic.LoadInt64(&idleTimeout)))
|
||||
}
|
||||
}
|
||||
|
||||
if atomic.LoadInt32(&d.minConcurrentNum) == -1 && d.workerNum == 0 {
|
||||
d.waitWorker.Wait()
|
||||
d.cbChannel <- nil
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (d *dispatch) processTimer() {
|
||||
if d.idle == true && d.workerNum > atomic.LoadInt32(&d.minConcurrentNum) {
|
||||
d.processIdle()
|
||||
}
|
||||
|
||||
d.idle = true
|
||||
}
|
||||
|
||||
func (d *dispatch) processqueueEvent(queueId int64) {
|
||||
d.idle = false
|
||||
|
||||
queueSession := d.mapTaskQueueSession[queueId]
|
||||
if queueSession == nil {
|
||||
return
|
||||
}
|
||||
|
||||
queueSession.PopFront()
|
||||
if queueSession.Len() == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
t := queueSession.Front()
|
||||
d.executeTask(&t)
|
||||
}
|
||||
|
||||
func (d *dispatch) executeTask(t *task) {
|
||||
select {
|
||||
case d.workerQueue <- *t:
|
||||
return
|
||||
default:
|
||||
if d.workerNum < d.maxConcurrentNum {
|
||||
var work worker
|
||||
work.start(&d.waitWorker, t, d)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
d.workerQueue <- *t
|
||||
}
|
||||
|
||||
func (d *dispatch) processTask(t *task) {
|
||||
d.idle = false
|
||||
|
||||
//handle tasks that belong to a queue
|
||||
if t.queueId != 0 {
|
||||
queueSession := d.mapTaskQueueSession[t.queueId]
|
||||
if queueSession == nil {
|
||||
queueSession = &queue.Deque[task]{}
|
||||
d.mapTaskQueueSession[t.queueId] = queueSession
|
||||
}
|
||||
|
||||
//if no task is currently executing for this queue, run it immediately
|
||||
if queueSession.Len() == 0 {
|
||||
d.executeTask(t)
|
||||
}
|
||||
|
||||
queueSession.PushBack(*t)
|
||||
return
|
||||
}
|
||||
|
||||
//ordinary task
|
||||
d.executeTask(t)
|
||||
}
|
||||
|
||||
func (d *dispatch) processIdle() {
|
||||
select {
|
||||
case d.workerQueue <- task{}:
|
||||
d.workerNum--
|
||||
default:
|
||||
}
|
||||
}
|
||||
|
||||
func (d *dispatch) pushQueueTaskFinishEvent(queueId int64) {
|
||||
d.queueIdChannel <- queueId
|
||||
}
|
||||
|
||||
func (c *dispatch) pushAsyncDoCallbackEvent(cb func(err error)) {
|
||||
if cb == nil {
|
||||
//no callback required
|
||||
return
|
||||
}
|
||||
|
||||
c.cbChannel <- cb
|
||||
}
|
||||
|
||||
func (d *dispatch) close() {
|
||||
atomic.StoreInt32(&d.minConcurrentNum, -1)
|
||||
|
||||
breakFor:
|
||||
for {
|
||||
select {
|
||||
case cb := <-d.cbChannel:
|
||||
if cb == nil {
|
||||
break breakFor
|
||||
}
|
||||
cb(nil)
|
||||
}
|
||||
}
|
||||
|
||||
d.waitDispatch.Wait()
|
||||
}
|
||||
|
||||
func (d *dispatch) DoCallback(cb func(err error)) {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
buf := make([]byte, 4096)
|
||||
l := runtime.Stack(buf, false)
|
||||
errString := fmt.Sprint(r)
|
||||
log.Dump(string(buf[:l]),log.String("error",errString))
|
||||
}
|
||||
}()
|
||||
|
||||
cb(nil)
|
||||
}
|
||||
concurrent/worker.go (new file, 78 lines)
@@ -0,0 +1,78 @@
package concurrent

import (
    "sync"

    "errors"
    "fmt"
    "runtime"

    "github.com/duanhf2012/origin/log"
)

type task struct {
    queueId int64
    fn      func() bool
    cb      func(err error)
}

type worker struct {
    *dispatch
}

func (t *task) isExistTask() bool {
    return t.fn == nil
}

func (w *worker) start(waitGroup *sync.WaitGroup, t *task, d *dispatch) {
    w.dispatch = d
    d.workerNum += 1
    waitGroup.Add(1)
    go w.run(waitGroup, *t)
}

func (w *worker) run(waitGroup *sync.WaitGroup, t task) {
    defer waitGroup.Done()

    w.exec(&t)
    for {
        select {
        case tw := <-w.workerQueue:
            if tw.isExistTask() {
                //exit goroutine
                log.Info("worker goroutine exit")
                return
            }
            w.exec(&tw)
        }
    }
}

func (w *worker) exec(t *task) {
    defer func() {
        if r := recover(); r != nil {
            buf := make([]byte, 4096)
            l := runtime.Stack(buf, false)
            errString := fmt.Sprint(r)

            cb := t.cb
            t.cb = func(err error) {
                cb(errors.New(errString))
            }
            log.Dump(string(buf[:l]), log.String("error", errString))
            w.endCallFun(true, t)
        }
    }()

    w.endCallFun(t.fn(), t)
}

func (w *worker) endCallFun(isDocallBack bool, t *task) {
    if isDocallBack {
        w.pushAsyncDoCallbackEvent(t.cb)
    }

    if t.queueId != 0 {
        w.pushQueueTaskFinishEvent(t.queueId)
    }
}
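worker.run above pulls tasks from the shared workerQueue and returns when it receives a zero-value task (isExistTask reports fn == nil), which is how processIdle shrinks the pool one goroutine at a time. Below is a minimal standalone sketch of that shutdown-by-sentinel pattern; job, worker and the channel capacity are illustrative names, not the engine's types.

```go
package main

import (
	"fmt"
	"sync"
)

// job is the unit of work; a zero-value job (fn == nil) is the exit
// sentinel, mirroring task.isExistTask() in worker.go.
type job struct {
	fn func()
}

func worker(id int, jobs <-chan job, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		if j.fn == nil {
			fmt.Println("worker", id, "exit") // sentinel received
			return
		}
		j.fn()
	}
}

func main() {
	jobs := make(chan job, 16)
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go worker(i, jobs, &wg)
	}

	for i := 0; i < 5; i++ {
		i := i
		jobs <- job{fn: func() { fmt.Println("ran job", i) }}
	}

	// Shrink the pool to zero by sending one sentinel per worker,
	// the same way processIdle() pushes an empty task{}.
	for i := 0; i < 3; i++ {
		jobs <- job{}
	}
	wg.Wait()
}
```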
@@ -11,8 +11,9 @@ type CommandFunctionCB func(args interface{}) error
|
||||
var commandList []*command
|
||||
var programName string
|
||||
const(
|
||||
boolType valueType = iota
|
||||
stringType valueType = iota
|
||||
boolType valueType = 0
|
||||
stringType valueType = 1
|
||||
intType valueType = 2
|
||||
)
|
||||
|
||||
type command struct{
|
||||
@@ -20,6 +21,7 @@ type command struct{
|
||||
name string
|
||||
bValue bool
|
||||
strValue string
|
||||
intValue int
|
||||
usage string
|
||||
fn CommandFunctionCB
|
||||
}
|
||||
@@ -29,6 +31,8 @@ func (cmd *command) execute() error{
|
||||
return cmd.fn(cmd.bValue)
|
||||
}else if cmd.valType == stringType {
|
||||
return cmd.fn(cmd.strValue)
|
||||
}else if cmd.valType == intType {
|
||||
return cmd.fn(cmd.intValue)
|
||||
}else{
|
||||
return fmt.Errorf("Unknow command type.")
|
||||
}
|
||||
@@ -72,6 +76,16 @@ func RegisterCommandBool(cmdName string, defaultValue bool, usage string,fn Comm
|
||||
commandList = append(commandList,&cmd)
|
||||
}
|
||||
|
||||
func RegisterCommandInt(cmdName string, defaultValue int, usage string,fn CommandFunctionCB){
|
||||
var cmd command
|
||||
cmd.valType = intType
|
||||
cmd.name = cmdName
|
||||
cmd.fn = fn
|
||||
cmd.usage = usage
|
||||
flag.IntVar(&cmd.intValue, cmdName, defaultValue, usage)
|
||||
commandList = append(commandList,&cmd)
|
||||
}
|
||||
|
||||
func RegisterCommandString(cmdName string, defaultValue string, usage string,fn CommandFunctionCB){
|
||||
var cmd command
|
||||
cmd.valType = stringType
|
||||
|
||||
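RegisterCommandInt in the hunk above follows the same recipe as the existing bool and string variants: bind a flag with flag.IntVar, remember the parsed value in a command record, and invoke the callback after parsing. A minimal sketch of that recipe using only the standard flag package follows; registerInt and the command struct here are simplified stand-ins, not the console package itself.

```go
package main

import (
	"flag"
	"fmt"
)

// command mirrors the console package's idea: each registered option
// remembers its parsed value and a callback to run after flag.Parse.
type command struct {
	name     string
	intValue int
	fn       func(v int) error
}

var commands []*command

// registerInt is a minimal stand-in for console.RegisterCommandInt:
// it binds an int flag and records the callback.
func registerInt(name string, def int, usage string, fn func(v int) error) {
	cmd := &command{name: name, fn: fn}
	flag.IntVar(&cmd.intValue, name, def, usage)
	commands = append(commands, cmd)
}

func main() {
	registerInt("logsize", 0, "Set log size(MB).", func(v int) error {
		fmt.Println("log size =", v)
		return nil
	})

	flag.Parse()
	for _, c := range commands {
		if err := c.fn(c.intValue); err != nil {
			fmt.Println(c.name, "error:", err)
		}
	}
}
```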
@@ -7,7 +7,6 @@ import (
|
||||
"sync"
|
||||
)
|
||||
|
||||
|
||||
//事件接受器
|
||||
type EventCallBack func(event IEvent)
|
||||
|
||||
@@ -216,7 +215,7 @@ func (processor *EventProcessor) EventHandler(ev IEvent) {
|
||||
buf := make([]byte, 4096)
|
||||
l := runtime.Stack(buf, false)
|
||||
errString := fmt.Sprint(r)
|
||||
log.SError("core dump info[",errString,"]\n",string(buf[:l]))
|
||||
log.Dump(string(buf[:l]),log.String("error",errString))
|
||||
}
|
||||
}()
|
||||
|
||||
@@ -229,16 +228,15 @@ func (processor *EventProcessor) EventHandler(ev IEvent) {
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
func (processor *EventProcessor) castEvent(event IEvent){
|
||||
if processor.mapListenerEvent == nil {
|
||||
log.SError("mapListenerEvent not init!")
|
||||
log.Error("mapListenerEvent not init!")
|
||||
return
|
||||
}
|
||||
|
||||
eventProcessor,ok := processor.mapListenerEvent[event.GetEventType()]
|
||||
if ok == false || processor == nil{
|
||||
log.SDebug("event type ",event.GetEventType()," not listen.")
|
||||
log.Debug("event is not listen",log.Int("event type",int(event.GetEventType())))
|
||||
return
|
||||
}
|
||||
|
||||
@@ -246,3 +244,4 @@ func (processor *EventProcessor) castEvent(event IEvent){
|
||||
proc.PushEvent(event)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
event/eventpool.go (new file, 24 lines)
@@ -0,0 +1,24 @@
package event

import "github.com/duanhf2012/origin/util/sync"

// eventPool is an object pool that caches Event instances
const defaultMaxEventChannelNum = 2000000

var eventPool = sync.NewPoolEx(make(chan sync.IPoolData, defaultMaxEventChannelNum), func() sync.IPoolData {
    return &Event{}
})

func NewEvent() *Event {
    return eventPool.Get().(*Event)
}

func DeleteEvent(event IEvent) {
    eventPool.Put(event.(sync.IPoolData))
}

func SetEventPoolSize(eventPoolSize int) {
    eventPool = sync.NewPoolEx(make(chan sync.IPoolData, eventPoolSize), func() sync.IPoolData {
        return &Event{}
    })
}
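eventpool.go above appears to build its pool on a buffered channel of sync.IPoolData with a constructor fallback; util/sync is origin's own package, so the exact semantics of NewPoolEx are an assumption here. The sketch below shows the general channel-backed pool pattern in plain Go: Get drains the free list or allocates, Put resets the object and drops it if the pool is full. chanPool and its methods are invented for illustration.

```go
package main

import "fmt"

// Event is a placeholder for the pooled object type.
type Event struct {
	Type int
}

// chanPool is a bounded free list: Get falls back to the constructor
// when the channel is empty, Put drops the object when it is full.
type chanPool struct {
	free chan *Event
	newF func() *Event
}

func newChanPool(size int, newF func() *Event) *chanPool {
	return &chanPool{free: make(chan *Event, size), newF: newF}
}

func (p *chanPool) Get() *Event {
	select {
	case e := <-p.free:
		return e
	default:
		return p.newF()
	}
}

func (p *chanPool) Put(e *Event) {
	*e = Event{} // reset before reuse
	select {
	case p.free <- e:
	default: // pool full: let the GC reclaim it
	}
}

func main() {
	pool := newChanPool(1024, func() *Event { return &Event{} })
	ev := pool.Get()
	ev.Type = 7
	fmt.Println("using event of type", ev.Type)
	pool.Put(ev)
}
```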
@@ -7,12 +7,15 @@ const (
|
||||
ServiceRpcRequestEvent EventType = -1
|
||||
ServiceRpcResponseEvent EventType = -2
|
||||
|
||||
Sys_Event_Tcp EventType = -3
|
||||
Sys_Event_Http_Event EventType = -4
|
||||
Sys_Event_WebSocket EventType = -5
|
||||
Sys_Event_Node_Event EventType = -6
|
||||
Sys_Event_DiscoverService EventType = -7
|
||||
Sys_Event_Tcp EventType = -3
|
||||
Sys_Event_Http_Event EventType = -4
|
||||
Sys_Event_WebSocket EventType = -5
|
||||
Sys_Event_Node_Event EventType = -6
|
||||
Sys_Event_DiscoverService EventType = -7
|
||||
Sys_Event_DiscardGoroutine EventType = -8
|
||||
Sys_Event_QueueTaskFinish EventType = -9
|
||||
Sys_Event_Retire EventType = -10
|
||||
|
||||
Sys_Event_User_Define EventType = 1
|
||||
Sys_Event_User_Define EventType = 1
|
||||
)
|
||||
|
||||
|
||||
go.mod (15 changed lines)
@@ -1,30 +1,37 @@
|
||||
module github.com/duanhf2012/origin
|
||||
|
||||
go 1.19
|
||||
go 1.21
|
||||
|
||||
require (
|
||||
github.com/go-sql-driver/mysql v1.6.0
|
||||
github.com/gogo/protobuf v1.3.2
|
||||
github.com/gomodule/redigo v1.8.8
|
||||
github.com/gorilla/websocket v1.5.0
|
||||
github.com/json-iterator/go v1.1.12
|
||||
github.com/pierrec/lz4/v4 v4.1.18
|
||||
github.com/shirou/gopsutil v3.21.11+incompatible
|
||||
go.mongodb.org/mongo-driver v1.9.1
|
||||
google.golang.org/protobuf v1.31.0
|
||||
gopkg.in/mgo.v2 v2.0.0-20190816093944-a6b53ec6cb22
|
||||
)
|
||||
|
||||
require (
|
||||
github.com/go-ole/go-ole v1.2.6 // indirect
|
||||
github.com/go-stack/stack v1.8.0 // indirect
|
||||
github.com/golang/snappy v0.0.1 // indirect
|
||||
github.com/klauspost/compress v1.13.6 // indirect
|
||||
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 // indirect
|
||||
github.com/modern-go/reflect2 v1.0.2 // indirect
|
||||
github.com/pkg/errors v0.9.1 // indirect
|
||||
github.com/tklauser/go-sysconf v0.3.13 // indirect
|
||||
github.com/tklauser/numcpus v0.7.0 // indirect
|
||||
github.com/xdg-go/pbkdf2 v1.0.0 // indirect
|
||||
github.com/xdg-go/scram v1.0.2 // indirect
|
||||
github.com/xdg-go/stringprep v1.0.2 // indirect
|
||||
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d // indirect
|
||||
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f // indirect
|
||||
github.com/yusufpapurcu/wmi v1.2.4 // indirect
|
||||
golang.org/x/crypto v0.1.0 // indirect
|
||||
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 // indirect
|
||||
golang.org/x/text v0.3.6 // indirect
|
||||
golang.org/x/sys v0.15.0 // indirect
|
||||
golang.org/x/text v0.4.0 // indirect
|
||||
gopkg.in/yaml.v2 v2.4.0 // indirect
|
||||
)
|
||||
|
||||
go.sum (53 changed lines)
@@ -1,25 +1,25 @@
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
|
||||
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
|
||||
github.com/go-sql-driver/mysql v1.6.0 h1:BCTh4TKNUYmOmMUcQ3IipzF5prigylS7XXjEkfCHuOE=
|
||||
github.com/go-sql-driver/mysql v1.6.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
|
||||
github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk=
|
||||
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
||||
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
|
||||
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
|
||||
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
|
||||
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
|
||||
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
||||
github.com/gomodule/redigo v1.8.8 h1:f6cXq6RRfiyrOJEV7p3JhLDlmawGBVBBP1MggY8Mo4E=
|
||||
github.com/gomodule/redigo v1.8.8/go.mod h1:7ArFNvsTjH8GMMzB4uy1snslv2BwmginuMs06a1uzZE=
|
||||
github.com/google/go-cmp v0.5.2 h1:X2ev0eStA3AbceY54o37/0PQ/UWqKEiiO2dKL5OPaFM=
|
||||
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
|
||||
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
|
||||
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
|
||||
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
|
||||
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
|
||||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/klauspost/compress v1.13.6 h1:P76CopJELS0TiO2mebmnzgWaajssP/EszplttgQxcgc=
|
||||
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
|
||||
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
|
||||
@@ -32,10 +32,14 @@ github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJ
|
||||
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
|
||||
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
|
||||
github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
|
||||
github.com/pierrec/lz4/v4 v4.1.18 h1:xaKrnTkyoqfh1YItXl56+6KJNVYWlEEPuAQW9xsplYQ=
|
||||
github.com/pierrec/lz4/v4 v4.1.18/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
|
||||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=
|
||||
github.com/shirou/gopsutil v3.21.11+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
@@ -43,6 +47,10 @@ github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5Cc
|
||||
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4=
|
||||
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
|
||||
github.com/tklauser/go-sysconf v0.3.13 h1:GBUpcahXSpR2xN01jhkNAbTLRk2Yzgggk8IM08lq3r4=
|
||||
github.com/tklauser/go-sysconf v0.3.13/go.mod h1:zwleP4Q4OehZHGn4CYZDipCgg9usW5IJePewFCGVEa0=
|
||||
github.com/tklauser/numcpus v0.7.0 h1:yjuerZP127QG9m5Zh/mSO4wqurYil27tHrqwRoRjpr4=
|
||||
github.com/tklauser/numcpus v0.7.0/go.mod h1:bb6dMVcj8A42tSE7i32fsIUCbQNllK5iDguyOZRUzAY=
|
||||
github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
|
||||
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
|
||||
github.com/xdg-go/scram v1.0.2 h1:akYIkZ28e6A96dkWNJQu3nmCzH3YfwMPQExUYDaRv7w=
|
||||
@@ -51,46 +59,37 @@ github.com/xdg-go/stringprep v1.0.2 h1:6iq84/ryjjeRmMJwxutI51F2GIPlP5BfTvXHeYjyh
|
||||
github.com/xdg-go/stringprep v1.0.2/go.mod h1:8F9zXuvzgwmyT5DUm4GUfZGDdT3W+LCvS6+da4O5kxM=
|
||||
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d h1:splanxYIlg+5LfHAM6xpdFEAYOk8iySO56hMFq6uLyA=
|
||||
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
|
||||
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
|
||||
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
|
||||
go.mongodb.org/mongo-driver v1.9.1 h1:m078y9v7sBItkt1aaoe2YlvWEXcD263e1a4E1fBrJ1c=
|
||||
go.mongodb.org/mongo-driver v1.9.1/go.mod h1:0sQWfOeY63QTntERDJJ/0SuKK0T1uVSgKCuAROlKEPY=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f h1:aZp0e2vLN4MToVqnjNEYEtrEA8RH8U8FN1CU7JgqsPU=
|
||||
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
|
||||
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
|
||||
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
|
||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 h1:SQFwaSi55rU7vdNs9Yr0Z324VNlrF+0wMqRXT4St8ck=
|
||||
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
|
||||
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
|
||||
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
|
||||
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190531172133-b3315ee88b7d/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
|
||||
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
|
||||
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
|
||||
@@ -2,27 +2,20 @@ package log // import "go.uber.org/zap/buffer"
|
||||
|
||||
import (
|
||||
"strconv"
|
||||
"sync"
|
||||
)
|
||||
|
||||
const _size = 9216
|
||||
|
||||
type Buffer struct {
|
||||
bs []byte
|
||||
mu sync.Mutex // ensures atomic writes; protects the following fields
|
||||
//mu sync.Mutex // ensures atomic writes; protects the following fields
|
||||
}
|
||||
|
||||
func (buff *Buffer) Init(){
|
||||
buff.bs = make([]byte,_size)
|
||||
}
|
||||
|
||||
func (buff *Buffer) Locker() {
|
||||
buff.mu.Lock()
|
||||
}
|
||||
|
||||
func (buff *Buffer) UnLocker() {
|
||||
buff.mu.Unlock()
|
||||
}
|
||||
|
||||
// AppendByte writes a single byte to the Buffer.
|
||||
func (b *Buffer) AppendByte(v byte) {
|
||||
|
||||
log/handler.go (new file, 147 lines)
@@ -0,0 +1,147 @@
|
||||
package log
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io"
|
||||
"log/slog"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"runtime/debug"
|
||||
"sync"
|
||||
)
|
||||
|
||||
type IOriginHandler interface {
|
||||
slog.Handler
|
||||
Lock()
|
||||
UnLock()
|
||||
}
|
||||
|
||||
type BaseHandler struct {
|
||||
addSource bool
|
||||
w io.Writer
|
||||
locker sync.Mutex
|
||||
}
|
||||
|
||||
type OriginTextHandler struct {
|
||||
BaseHandler
|
||||
*slog.TextHandler
|
||||
}
|
||||
|
||||
type OriginJsonHandler struct {
|
||||
BaseHandler
|
||||
*slog.JSONHandler
|
||||
}
|
||||
|
||||
func getStrLevel(level slog.Level) string{
|
||||
switch level {
|
||||
case LevelTrace:
|
||||
return "Trace"
|
||||
case LevelDebug:
|
||||
return "Debug"
|
||||
case LevelInfo:
|
||||
return "Info"
|
||||
case LevelWarning:
|
||||
return "Warning"
|
||||
case LevelError:
|
||||
return "Error"
|
||||
case LevelStack:
|
||||
return "Stack"
|
||||
case LevelDump:
|
||||
return "Dump"
|
||||
case LevelFatal:
|
||||
return "Fatal"
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
func defaultReplaceAttr(groups []string, a slog.Attr) slog.Attr {
|
||||
if a.Key == slog.LevelKey {
|
||||
level := a.Value.Any().(slog.Level)
|
||||
a.Value = slog.StringValue(getStrLevel(level))
|
||||
}else if a.Key == slog.TimeKey && len(groups) == 0 {
|
||||
a.Value = slog.StringValue(a.Value.Time().Format("2006/01/02 15:04:05"))
|
||||
}else if a.Key == slog.SourceKey {
|
||||
source := a.Value.Any().(*slog.Source)
|
||||
source.File = filepath.Base(source.File)
|
||||
}
|
||||
return a
|
||||
}
|
||||
|
||||
func NewOriginTextHandler(level slog.Level,w io.Writer,addSource bool,replaceAttr func([]string,slog.Attr) slog.Attr) slog.Handler{
|
||||
var textHandler OriginTextHandler
|
||||
textHandler.addSource = addSource
|
||||
textHandler.w = w
|
||||
textHandler.TextHandler = slog.NewTextHandler(w,&slog.HandlerOptions{
|
||||
AddSource: addSource,
|
||||
Level: level,
|
||||
ReplaceAttr: replaceAttr,
|
||||
})
|
||||
|
||||
return &textHandler
|
||||
}
|
||||
|
||||
func (oh *OriginTextHandler) Handle(context context.Context, record slog.Record) error{
|
||||
oh.Fill(context,&record)
|
||||
oh.locker.Lock()
|
||||
defer oh.locker.Unlock()
|
||||
|
||||
if record.Level == LevelStack || record.Level == LevelFatal{
|
||||
err := oh.TextHandler.Handle(context, record)
|
||||
oh.logStack(&record)
|
||||
return err
|
||||
}else if record.Level == LevelDump {
|
||||
strDump := record.Message
|
||||
record.Message = "dump info"
|
||||
err := oh.TextHandler.Handle(context, record)
|
||||
oh.w.Write([]byte(strDump))
|
||||
return err
|
||||
}
|
||||
|
||||
|
||||
return oh.TextHandler.Handle(context, record)
|
||||
}
|
||||
|
||||
func (b *BaseHandler) logStack(record *slog.Record){
|
||||
b.w.Write(debug.Stack())
|
||||
}
|
||||
|
||||
func (b *BaseHandler) Lock(){
|
||||
b.locker.Lock()
|
||||
}
|
||||
|
||||
func (b *BaseHandler) UnLock(){
|
||||
b.locker.Unlock()
|
||||
}
|
||||
|
||||
func NewOriginJsonHandler(level slog.Level,w io.Writer,addSource bool,replaceAttr func([]string,slog.Attr) slog.Attr) slog.Handler{
|
||||
var jsonHandler OriginJsonHandler
|
||||
jsonHandler.addSource = addSource
|
||||
jsonHandler.w = w
|
||||
jsonHandler.JSONHandler = slog.NewJSONHandler(w,&slog.HandlerOptions{
|
||||
AddSource: addSource,
|
||||
Level: level,
|
||||
ReplaceAttr: replaceAttr,
|
||||
})
|
||||
|
||||
return &jsonHandler
|
||||
}
|
||||
|
||||
func (oh *OriginJsonHandler) Handle(context context.Context, record slog.Record) error{
|
||||
oh.Fill(context,&record)
|
||||
if record.Level == LevelStack || record.Level == LevelFatal || record.Level == LevelDump{
|
||||
record.Add("stack",debug.Stack())
|
||||
}
|
||||
|
||||
oh.locker.Lock()
|
||||
defer oh.locker.Unlock()
|
||||
return oh.JSONHandler.Handle(context, record)
|
||||
}
|
||||
|
||||
func (b *BaseHandler) Fill(context context.Context, record *slog.Record) {
|
||||
if b.addSource {
|
||||
var pcs [1]uintptr
|
||||
runtime.Callers(7, pcs[:])
|
||||
record.PC = pcs[0]
|
||||
}
|
||||
}
|
||||
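log/handler.go above wraps the standard library's log/slog package (new in Go 1.21, which lines up with the go.mod bump to go 1.21) and uses a ReplaceAttr hook to rewrite the level and time attributes. The minimal sketch below shows the same two rewrites with a plain slog.TextHandler; it is not the engine's logger, and the "origin-" level prefix is only an example.

```go
package main

import (
	"log/slog"
	"os"
)

// replaceAttr mirrors defaultReplaceAttr above: it rewrites the level
// to a custom label and the time to "2006/01/02 15:04:05".
func replaceAttr(groups []string, a slog.Attr) slog.Attr {
	switch a.Key {
	case slog.LevelKey:
		level := a.Value.Any().(slog.Level)
		a.Value = slog.StringValue("origin-" + level.String())
	case slog.TimeKey:
		if len(groups) == 0 {
			a.Value = slog.StringValue(a.Value.Time().Format("2006/01/02 15:04:05"))
		}
	}
	return a
}

func main() {
	h := slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
		AddSource:   true,
		Level:       slog.LevelDebug,
		ReplaceAttr: replaceAttr,
	})
	logger := slog.New(h)
	logger.Info("listen ok", slog.String("addr", ":8080"))
}
```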
log/log.go (1007 changed lines): file diff suppressed because it is too large
@@ -47,7 +47,7 @@ func (slf *HttpServer) startListen() error {
|
||||
for _, caFile := range slf.caFileList {
|
||||
cer, err := tls.LoadX509KeyPair(caFile.CertFile, caFile.Keyfile)
|
||||
if err != nil {
|
||||
log.SFatal("Load CA [",caFile.CertFile,"]-[",caFile.Keyfile,"] file is fail:",err.Error())
|
||||
log.Fatal("Load CA file is fail",log.String("error",err.Error()),log.String("certFile",caFile.CertFile),log.String("keyFile",caFile.Keyfile))
|
||||
return err
|
||||
}
|
||||
tlsCaList = append(tlsCaList, cer)
|
||||
@@ -74,7 +74,7 @@ func (slf *HttpServer) startListen() error {
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
log.SFatal("Listen for address ",slf.listenAddr," failure:",err.Error())
|
||||
log.Fatal("Listen failure",log.String("error",err.Error()),log.String("addr:",slf.listenAddr))
|
||||
return err
|
||||
}
|
||||
|
||||
|
||||
@@ -3,9 +3,9 @@ package processor
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"github.com/duanhf2012/origin/network"
|
||||
"reflect"
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/util/bytespool"
|
||||
"reflect"
|
||||
)
|
||||
|
||||
type MessageJsonInfo struct {
|
||||
@@ -24,7 +24,7 @@ type JsonProcessor struct {
|
||||
unknownMessageHandler UnknownMessageJsonHandler
|
||||
connectHandler ConnectJsonHandler
|
||||
disconnectHandler ConnectJsonHandler
|
||||
network.INetMempool
|
||||
bytespool.IBytesMempool
|
||||
}
|
||||
|
||||
type JsonPackInfo struct {
|
||||
@@ -35,7 +35,7 @@ type JsonPackInfo struct {
|
||||
|
||||
func NewJsonProcessor() *JsonProcessor {
|
||||
processor := &JsonProcessor{mapMsg:map[uint16]MessageJsonInfo{}}
|
||||
processor.INetMempool = network.NewMemAreaPool()
|
||||
processor.IBytesMempool = bytespool.NewMemAreaPool()
|
||||
|
||||
return processor
|
||||
}
|
||||
@@ -58,7 +58,7 @@ func (jsonProcessor *JsonProcessor ) MsgRoute(clientId uint64,msg interface{}) e
|
||||
|
||||
func (jsonProcessor *JsonProcessor) Unmarshal(clientId uint64,data []byte) (interface{}, error) {
|
||||
typeStruct := struct {Type int `json:"typ"`}{}
|
||||
defer jsonProcessor.ReleaseByteSlice(data)
|
||||
defer jsonProcessor.ReleaseBytes(data)
|
||||
err := json.Unmarshal(data, &typeStruct)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -106,7 +106,7 @@ func (jsonProcessor *JsonProcessor) MakeRawMsg(msgType uint16,msg []byte) *JsonP
|
||||
|
||||
func (jsonProcessor *JsonProcessor) UnknownMsgRoute(clientId uint64,msg interface{}){
|
||||
if jsonProcessor.unknownMessageHandler==nil {
|
||||
log.SDebug("Unknown message received from ",clientId)
|
||||
log.Debug("Unknown message",log.Uint64("clientId",clientId))
|
||||
return
|
||||
}
|
||||
|
||||
|
||||
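JsonProcessor.Unmarshal above decodes the payload twice: first into an anonymous struct that only reads the numeric typ field, then into the concrete message type registered for that id. A standalone sketch of that two-pass routing follows; LoginReq, registry and decode are invented names, not the processor's real types.

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

type LoginReq struct {
	Typ  int    `json:"typ"`
	Name string `json:"name"`
}

// registry maps the numeric "typ" field to a concrete Go type,
// much like JsonProcessor.mapMsg.
var registry = map[int]reflect.Type{
	1: reflect.TypeOf(LoginReq{}),
}

func decode(data []byte) (interface{}, error) {
	// First pass: read only the type id.
	head := struct {
		Typ int `json:"typ"`
	}{}
	if err := json.Unmarshal(data, &head); err != nil {
		return nil, err
	}

	t, ok := registry[head.Typ]
	if !ok {
		return nil, fmt.Errorf("unknown message type %d", head.Typ)
	}

	// Second pass: decode the full payload into the registered type.
	msg := reflect.New(t).Interface()
	if err := json.Unmarshal(data, msg); err != nil {
		return nil, err
	}
	return msg, nil
}

func main() {
	msg, err := decode([]byte(`{"typ":1,"name":"player1"}`))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", msg.(*LoginReq))
}
```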
@@ -3,8 +3,8 @@ package processor
|
||||
import (
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"github.com/duanhf2012/origin/network"
|
||||
"github.com/gogo/protobuf/proto"
|
||||
"github.com/duanhf2012/origin/util/bytespool"
|
||||
"google.golang.org/protobuf/proto"
|
||||
"reflect"
|
||||
)
|
||||
|
||||
@@ -26,7 +26,7 @@ type PBProcessor struct {
|
||||
unknownMessageHandler UnknownMessageHandler
|
||||
connectHandler ConnectHandler
|
||||
disconnectHandler ConnectHandler
|
||||
network.INetMempool
|
||||
bytespool.IBytesMempool
|
||||
}
|
||||
|
||||
type PBPackInfo struct {
|
||||
@@ -37,7 +37,7 @@ type PBPackInfo struct {
|
||||
|
||||
func NewPBProcessor() *PBProcessor {
|
||||
processor := &PBProcessor{mapMsg: map[uint16]MessageInfo{}}
|
||||
processor.INetMempool = network.NewMemAreaPool()
|
||||
processor.IBytesMempool = bytespool.NewMemAreaPool()
|
||||
return processor
|
||||
}
|
||||
|
||||
@@ -67,7 +67,12 @@ func (pbProcessor *PBProcessor) MsgRoute(clientId uint64, msg interface{}) error
|
||||
|
||||
// must goroutine safe
|
||||
func (pbProcessor *PBProcessor) Unmarshal(clientId uint64, data []byte) (interface{}, error) {
|
||||
defer pbProcessor.ReleaseByteSlice(data)
|
||||
defer pbProcessor.ReleaseBytes(data)
|
||||
return pbProcessor.UnmarshalWithOutRelease(clientId, data)
|
||||
}
|
||||
|
||||
// unmarshal but not release data
|
||||
func (pbProcessor *PBProcessor) UnmarshalWithOutRelease(clientId uint64, data []byte) (interface{}, error) {
|
||||
var msgType uint16
|
||||
if pbProcessor.LittleEndian == true {
|
||||
msgType = binary.LittleEndian.Uint16(data[:2])
|
||||
|
||||
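The hunk above swaps the processor's import from github.com/gogo/protobuf/proto to google.golang.org/protobuf/proto; both expose top-level Marshal and Unmarshal helpers, so the call sites can stay the same. The sketch below is a quick round trip with the new module, using wrapperspb.StringValue only as a stand-in for a generated game message type.

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
	// Any generated message works here; StringValue is a stand-in for
	// a real game message type.
	in := wrapperspb.String("hello origin")

	data, err := proto.Marshal(in)
	if err != nil {
		panic(err)
	}

	var out wrapperspb.StringValue
	if err := proto.Unmarshal(data, &out); err != nil {
		panic(err)
	}
	fmt.Println(out.GetValue()) // "hello origin"
}
```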
@@ -78,7 +78,6 @@ func (pbRawProcessor *PBRawProcessor) SetRawMsgHandler(handle RawMessageHandler)
|
||||
func (pbRawProcessor *PBRawProcessor) MakeRawMsg(msgType uint16,msg []byte,pbRawPackInfo *PBRawPackInfo) {
|
||||
pbRawPackInfo.typ = msgType
|
||||
pbRawPackInfo.rawMsg = msg
|
||||
//return &PBRawPackInfo{typ:msgType,rawMsg:msg}
|
||||
}
|
||||
|
||||
func (pbRawProcessor *PBRawProcessor) UnknownMsgRoute(clientId uint64,msg interface{}){
|
||||
|
||||
@@ -17,17 +17,11 @@ type IProcessor interface {
|
||||
}
|
||||
|
||||
type IRawProcessor interface {
|
||||
SetByteOrder(littleEndian bool)
|
||||
MsgRoute(clientId uint64,msg interface{}) error
|
||||
Unmarshal(clientId uint64,data []byte) (interface{}, error)
|
||||
Marshal(clientId uint64,msg interface{}) ([]byte, error)
|
||||
IProcessor
|
||||
|
||||
SetByteOrder(littleEndian bool)
|
||||
SetRawMsgHandler(handle RawMessageHandler)
|
||||
MakeRawMsg(msgType uint16,msg []byte,pbRawPackInfo *PBRawPackInfo)
|
||||
UnknownMsgRoute(clientId uint64,msg interface{})
|
||||
ConnectedRoute(clientId uint64)
|
||||
DisConnectedRoute(clientId uint64)
|
||||
|
||||
SetUnknownMsgHandler(unknownMessageHandler UnknownRawMessageHandler)
|
||||
SetConnectedHandler(connectHandler RawConnectHandler)
|
||||
SetDisConnectedHandler(disconnectHandler RawConnectHandler)
|
||||
|
||||
@@ -22,11 +22,7 @@ type TCPClient struct {
|
||||
closeFlag bool
|
||||
|
||||
// msg parser
|
||||
LenMsgLen int
|
||||
MinMsgLen uint32
|
||||
MaxMsgLen uint32
|
||||
LittleEndian bool
|
||||
msgParser *MsgParser
|
||||
MsgParser
|
||||
}
|
||||
|
||||
func (client *TCPClient) Start() {
|
||||
@@ -44,39 +40,49 @@ func (client *TCPClient) init() {
|
||||
|
||||
if client.ConnNum <= 0 {
|
||||
client.ConnNum = 1
|
||||
log.SRelease("invalid ConnNum, reset to ", client.ConnNum)
|
||||
log.Info("invalid ConnNum",log.Int("reset", client.ConnNum))
|
||||
}
|
||||
if client.ConnectInterval <= 0 {
|
||||
client.ConnectInterval = 3 * time.Second
|
||||
log.SRelease("invalid ConnectInterval, reset to ", client.ConnectInterval)
|
||||
log.Info("invalid ConnectInterval",log.Duration("reset", client.ConnectInterval))
|
||||
}
|
||||
if client.PendingWriteNum <= 0 {
|
||||
client.PendingWriteNum = 1000
|
||||
log.SRelease("invalid PendingWriteNum, reset to ", client.PendingWriteNum)
|
||||
log.Info("invalid PendingWriteNum",log.Int("reset",client.PendingWriteNum))
|
||||
}
|
||||
if client.ReadDeadline == 0 {
|
||||
client.ReadDeadline = 15*time.Second
|
||||
log.SRelease("invalid ReadDeadline, reset to ", int64(client.ReadDeadline.Seconds()),"s")
|
||||
log.Info("invalid ReadDeadline",log.Int64("reset", int64(client.ReadDeadline.Seconds())))
|
||||
}
|
||||
if client.WriteDeadline == 0 {
|
||||
client.WriteDeadline = 15*time.Second
|
||||
log.SRelease("invalid WriteDeadline, reset to ", int64(client.WriteDeadline.Seconds()),"s")
|
||||
log.Info("invalid WriteDeadline",log.Int64("reset", int64(client.WriteDeadline.Seconds())))
|
||||
}
|
||||
if client.NewAgent == nil {
|
||||
log.SFatal("NewAgent must not be nil")
|
||||
log.Fatal("NewAgent must not be nil")
|
||||
}
|
||||
if client.cons != nil {
|
||||
log.SFatal("client is running")
|
||||
log.Fatal("client is running")
|
||||
}
|
||||
|
||||
if client.MinMsgLen == 0 {
|
||||
client.MinMsgLen = Default_MinMsgLen
|
||||
}
|
||||
if client.MaxMsgLen == 0 {
|
||||
client.MaxMsgLen = Default_MaxMsgLen
|
||||
}
|
||||
if client.LenMsgLen ==0 {
|
||||
client.LenMsgLen = Default_LenMsgLen
|
||||
}
|
||||
maxMsgLen := client.MsgParser.getMaxMsgLen(client.LenMsgLen)
|
||||
if client.MaxMsgLen > maxMsgLen {
|
||||
client.MaxMsgLen = maxMsgLen
|
||||
log.Info("invalid MaxMsgLen",log.Uint32("reset", maxMsgLen))
|
||||
}
|
||||
|
||||
client.cons = make(ConnSet)
|
||||
client.closeFlag = false
|
||||
|
||||
// msg parser
|
||||
msgParser := NewMsgParser()
|
||||
msgParser.SetMsgLen(client.LenMsgLen, client.MinMsgLen, client.MaxMsgLen)
|
||||
msgParser.SetByteOrder(client.LittleEndian)
|
||||
client.msgParser = msgParser
|
||||
client.MsgParser.init()
|
||||
}
|
||||
|
||||
func (client *TCPClient) GetCloseFlag() bool{
|
||||
@@ -96,7 +102,7 @@ func (client *TCPClient) dial() net.Conn {
|
||||
return conn
|
||||
}
|
||||
|
||||
log.SWarning("connect to ",client.Addr," error:", err.Error())
|
||||
log.Warning("connect error ",log.String("error",err.Error()), log.String("Addr",client.Addr))
|
||||
time.Sleep(client.ConnectInterval)
|
||||
continue
|
||||
}
|
||||
@@ -120,7 +126,7 @@ reconnect:
|
||||
client.cons[conn] = struct{}{}
|
||||
client.Unlock()
|
||||
|
||||
tcpConn := newTCPConn(conn, client.PendingWriteNum, client.msgParser,client.WriteDeadline)
|
||||
tcpConn := newTCPConn(conn, client.PendingWriteNum, &client.MsgParser,client.WriteDeadline)
|
||||
agent := client.NewAgent(tcpConn)
|
||||
agent.Run()
|
||||
|
||||
|
||||
@@ -1,11 +1,12 @@
|
||||
package network
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"net"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
"errors"
|
||||
)
|
||||
|
||||
type ConnSet map[net.Conn]struct{}
|
||||
@@ -14,7 +15,7 @@ type TCPConn struct {
|
||||
sync.Mutex
|
||||
conn net.Conn
|
||||
writeChan chan []byte
|
||||
closeFlag bool
|
||||
closeFlag int32
|
||||
msgParser *MsgParser
|
||||
}
|
||||
|
||||
@@ -40,7 +41,7 @@ func newTCPConn(conn net.Conn, pendingWriteNum int, msgParser *MsgParser,writeDe
|
||||
|
||||
conn.SetWriteDeadline(time.Now().Add(writeDeadline))
|
||||
_, err := conn.Write(b)
|
||||
tcpConn.msgParser.ReleaseByteSlice(b)
|
||||
tcpConn.msgParser.ReleaseBytes(b)
|
||||
|
||||
if err != nil {
|
||||
break
|
||||
@@ -49,7 +50,7 @@ func newTCPConn(conn net.Conn, pendingWriteNum int, msgParser *MsgParser,writeDe
|
||||
conn.Close()
|
||||
tcpConn.Lock()
|
||||
freeChannel(tcpConn)
|
||||
tcpConn.closeFlag = true
|
||||
atomic.StoreInt32(&tcpConn.closeFlag,1)
|
||||
tcpConn.Unlock()
|
||||
}()
|
||||
|
||||
@@ -60,9 +61,9 @@ func (tcpConn *TCPConn) doDestroy() {
|
||||
tcpConn.conn.(*net.TCPConn).SetLinger(0)
|
||||
tcpConn.conn.Close()
|
||||
|
||||
if !tcpConn.closeFlag {
|
||||
if atomic.LoadInt32(&tcpConn.closeFlag)==0 {
|
||||
close(tcpConn.writeChan)
|
||||
tcpConn.closeFlag = true
|
||||
atomic.StoreInt32(&tcpConn.closeFlag,1)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -76,12 +77,12 @@ func (tcpConn *TCPConn) Destroy() {
|
||||
func (tcpConn *TCPConn) Close() {
|
||||
tcpConn.Lock()
|
||||
defer tcpConn.Unlock()
|
||||
if tcpConn.closeFlag {
|
||||
if atomic.LoadInt32(&tcpConn.closeFlag)==1 {
|
||||
return
|
||||
}
|
||||
|
||||
tcpConn.doWrite(nil)
|
||||
tcpConn.closeFlag = true
|
||||
atomic.StoreInt32(&tcpConn.closeFlag,1)
|
||||
}
|
||||
|
||||
func (tcpConn *TCPConn) GetRemoteIp() string {
|
||||
@@ -91,7 +92,7 @@ func (tcpConn *TCPConn) GetRemoteIp() string {
|
||||
func (tcpConn *TCPConn) doWrite(b []byte) error{
|
||||
if len(tcpConn.writeChan) == cap(tcpConn.writeChan) {
|
||||
tcpConn.ReleaseReadMsg(b)
|
||||
log.SError("close conn: channel full")
|
||||
log.Error("close conn: channel full")
|
||||
tcpConn.doDestroy()
|
||||
return errors.New("close conn: channel full")
|
||||
}
|
||||
@@ -104,7 +105,7 @@ func (tcpConn *TCPConn) doWrite(b []byte) error{
|
||||
func (tcpConn *TCPConn) Write(b []byte) error{
|
||||
tcpConn.Lock()
|
||||
defer tcpConn.Unlock()
|
||||
if tcpConn.closeFlag || b == nil {
|
||||
if atomic.LoadInt32(&tcpConn.closeFlag)==1 || b == nil {
|
||||
tcpConn.ReleaseReadMsg(b)
|
||||
return errors.New("conn is close")
|
||||
}
|
||||
@@ -129,18 +130,18 @@ func (tcpConn *TCPConn) ReadMsg() ([]byte, error) {
|
||||
}
|
||||
|
||||
func (tcpConn *TCPConn) ReleaseReadMsg(byteBuff []byte){
|
||||
tcpConn.msgParser.ReleaseByteSlice(byteBuff)
|
||||
tcpConn.msgParser.ReleaseBytes(byteBuff)
|
||||
}
|
||||
|
||||
func (tcpConn *TCPConn) WriteMsg(args ...[]byte) error {
|
||||
if tcpConn.closeFlag == true {
|
||||
if atomic.LoadInt32(&tcpConn.closeFlag) == 1 {
|
||||
return errors.New("conn is close")
|
||||
}
|
||||
return tcpConn.msgParser.Write(tcpConn, args...)
|
||||
}
|
||||
|
||||
func (tcpConn *TCPConn) WriteRawMsg(args []byte) error {
|
||||
if tcpConn.closeFlag == true {
|
||||
if atomic.LoadInt32(&tcpConn.closeFlag) == 1 {
|
||||
return errors.New("conn is close")
|
||||
}
|
||||
|
||||
@@ -149,7 +150,7 @@ func (tcpConn *TCPConn) WriteRawMsg(args []byte) error {
|
||||
|
||||
|
||||
func (tcpConn *TCPConn) IsConnected() bool {
|
||||
return tcpConn.closeFlag == false
|
||||
return atomic.LoadInt32(&tcpConn.closeFlag) == 0
|
||||
}
|
||||
|
||||
func (tcpConn *TCPConn) SetReadDeadline(d time.Duration) {
|
||||
|
||||
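The TCPConn changes above replace the bool closeFlag with an int32 driven by sync/atomic, so IsConnected and the write paths can observe the flag without holding the mutex. The sketch below shows the same idea on an invented conn type, using CompareAndSwapInt32 to make Close idempotent; the origin code reaches the same effect with Load/Store under its existing lock.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type conn struct {
	closeFlag int32 // 0 = open, 1 = closed; readable without a lock
	writeChan chan []byte
}

// IsConnected can be called from any goroutine without locking,
// which is what the atomic flag buys over the old bool field.
func (c *conn) IsConnected() bool {
	return atomic.LoadInt32(&c.closeFlag) == 0
}

// Close is safe to call more than once: only the goroutine that wins
// the CompareAndSwap actually closes the write channel.
func (c *conn) Close() {
	if atomic.CompareAndSwapInt32(&c.closeFlag, 0, 1) {
		close(c.writeChan)
	}
}

func main() {
	c := &conn{writeChan: make(chan []byte, 1)}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.Close() }() // concurrent closes are fine
	}
	wg.Wait()
	fmt.Println("connected:", c.IsConnected()) // false
}
```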
@@ -3,6 +3,7 @@ package network
|
||||
import (
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"github.com/duanhf2012/origin/util/bytespool"
|
||||
"io"
|
||||
"math"
|
||||
)
|
||||
@@ -11,62 +12,36 @@ import (
|
||||
// | len | data |
|
||||
// --------------
|
||||
type MsgParser struct {
|
||||
lenMsgLen int
|
||||
minMsgLen uint32
|
||||
maxMsgLen uint32
|
||||
littleEndian bool
|
||||
LenMsgLen int
|
||||
MinMsgLen uint32
|
||||
MaxMsgLen uint32
|
||||
LittleEndian bool
|
||||
|
||||
INetMempool
|
||||
bytespool.IBytesMempool
|
||||
}
|
||||
|
||||
func NewMsgParser() *MsgParser {
|
||||
p := new(MsgParser)
|
||||
p.lenMsgLen = 2
|
||||
p.minMsgLen = 1
|
||||
p.maxMsgLen = 4096
|
||||
p.littleEndian = false
|
||||
p.INetMempool = NewMemAreaPool()
|
||||
return p
|
||||
}
|
||||
|
||||
// It's dangerous to call the method on reading or writing
|
||||
func (p *MsgParser) SetMsgLen(lenMsgLen int, minMsgLen uint32, maxMsgLen uint32) {
|
||||
if lenMsgLen == 1 || lenMsgLen == 2 || lenMsgLen == 4 {
|
||||
p.lenMsgLen = lenMsgLen
|
||||
}
|
||||
if minMsgLen != 0 {
|
||||
p.minMsgLen = minMsgLen
|
||||
}
|
||||
if maxMsgLen != 0 {
|
||||
p.maxMsgLen = maxMsgLen
|
||||
}
|
||||
|
||||
var max uint32
|
||||
switch p.lenMsgLen {
|
||||
func (p *MsgParser) getMaxMsgLen(lenMsgLen int) uint32 {
|
||||
switch p.LenMsgLen {
|
||||
case 1:
|
||||
max = math.MaxUint8
|
||||
return math.MaxUint8
|
||||
case 2:
|
||||
max = math.MaxUint16
|
||||
return math.MaxUint16
|
||||
case 4:
|
||||
max = math.MaxUint32
|
||||
}
|
||||
if p.minMsgLen > max {
|
||||
p.minMsgLen = max
|
||||
}
|
||||
if p.maxMsgLen > max {
|
||||
p.maxMsgLen = max
|
||||
return math.MaxUint32
|
||||
default:
|
||||
panic("LenMsgLen value must be 1 or 2 or 4")
|
||||
}
|
||||
}
|
||||
|
||||
// It's dangerous to call the method on reading or writing
|
||||
func (p *MsgParser) SetByteOrder(littleEndian bool) {
|
||||
p.littleEndian = littleEndian
|
||||
func (p *MsgParser) init(){
|
||||
p.IBytesMempool = bytespool.NewMemAreaPool()
|
||||
}
|
||||
|
||||
// goroutine safe
|
||||
func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
|
||||
var b [4]byte
|
||||
bufMsgLen := b[:p.lenMsgLen]
|
||||
bufMsgLen := b[:p.LenMsgLen]
|
||||
|
||||
// read len
|
||||
if _, err := io.ReadFull(conn, bufMsgLen); err != nil {
|
||||
@@ -75,17 +50,17 @@ func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
|
||||
|
||||
// parse len
|
||||
var msgLen uint32
|
||||
switch p.lenMsgLen {
|
||||
switch p.LenMsgLen {
|
||||
case 1:
|
||||
msgLen = uint32(bufMsgLen[0])
|
||||
case 2:
|
||||
if p.littleEndian {
|
||||
if p.LittleEndian {
|
||||
msgLen = uint32(binary.LittleEndian.Uint16(bufMsgLen))
|
||||
} else {
|
||||
msgLen = uint32(binary.BigEndian.Uint16(bufMsgLen))
|
||||
}
|
||||
case 4:
|
||||
if p.littleEndian {
|
||||
if p.LittleEndian {
|
||||
msgLen = binary.LittleEndian.Uint32(bufMsgLen)
|
||||
} else {
|
||||
msgLen = binary.BigEndian.Uint32(bufMsgLen)
|
||||
@@ -93,16 +68,16 @@ func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
|
||||
}
|
||||
|
||||
// check len
|
||||
if msgLen > p.maxMsgLen {
|
||||
if msgLen > p.MaxMsgLen {
|
||||
return nil, errors.New("message too long")
|
||||
} else if msgLen < p.minMsgLen {
|
||||
} else if msgLen < p.MinMsgLen {
|
||||
return nil, errors.New("message too short")
|
||||
}
|
||||
|
||||
// data
|
||||
msgData := p.MakeByteSlice(int(msgLen))
|
||||
msgData := p.MakeBytes(int(msgLen))
|
||||
if _, err := io.ReadFull(conn, msgData[:msgLen]); err != nil {
|
||||
p.ReleaseByteSlice(msgData)
|
||||
p.ReleaseBytes(msgData)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -118,26 +93,26 @@ func (p *MsgParser) Write(conn *TCPConn, args ...[]byte) error {
|
||||
}
|
||||
|
||||
// check len
|
||||
if msgLen > p.maxMsgLen {
|
||||
if msgLen > p.MaxMsgLen {
|
||||
return errors.New("message too long")
|
||||
} else if msgLen < p.minMsgLen {
|
||||
} else if msgLen < p.MinMsgLen {
|
||||
return errors.New("message too short")
|
||||
}
|
||||
|
||||
//msg := make([]byte, uint32(p.lenMsgLen)+msgLen)
|
||||
msg := p.MakeByteSlice(p.lenMsgLen+int(msgLen))
|
||||
msg := p.MakeBytes(p.LenMsgLen+int(msgLen))
|
||||
// write len
|
||||
switch p.lenMsgLen {
|
||||
switch p.LenMsgLen {
|
||||
case 1:
|
||||
msg[0] = byte(msgLen)
|
||||
case 2:
|
||||
if p.littleEndian {
|
||||
if p.LittleEndian {
|
||||
binary.LittleEndian.PutUint16(msg, uint16(msgLen))
|
||||
} else {
|
||||
binary.BigEndian.PutUint16(msg, uint16(msgLen))
|
||||
}
|
||||
case 4:
|
||||
if p.littleEndian {
|
||||
if p.LittleEndian {
|
||||
binary.LittleEndian.PutUint32(msg, msgLen)
|
||||
} else {
|
||||
binary.BigEndian.PutUint32(msg, msgLen)
|
||||
@@ -145,7 +120,7 @@ func (p *MsgParser) Write(conn *TCPConn, args ...[]byte) error {
|
||||
}
|
||||
|
||||
// write data
|
||||
l := p.lenMsgLen
|
||||
l := p.LenMsgLen
|
||||
for i := 0; i < len(args); i++ {
|
||||
copy(msg[l:], args[i])
|
||||
l += len(args[i])
|
||||
|
||||
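MsgParser frames every message as a LenMsgLen-byte length header (1, 2 or 4 bytes) followed by the payload, and getMaxMsgLen clamps MaxMsgLen to what that header width can express. Below is a standalone sketch of the 2-byte big-endian case over any io.Reader/io.Writer; writeFrame and readFrame are illustrative helpers, not the parser's API.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"io"
	"math"
)

// writeFrame prepends a 2-byte big-endian length header, the same
// framing MsgParser uses with LenMsgLen == 2.
func writeFrame(w io.Writer, payload []byte) error {
	if len(payload) > math.MaxUint16 {
		return errors.New("message too long")
	}
	head := make([]byte, 2)
	binary.BigEndian.PutUint16(head, uint16(len(payload)))
	if _, err := w.Write(head); err != nil {
		return err
	}
	_, err := w.Write(payload)
	return err
}

// readFrame reads one length header and then exactly that many bytes.
func readFrame(r io.Reader) ([]byte, error) {
	head := make([]byte, 2)
	if _, err := io.ReadFull(r, head); err != nil {
		return nil, err
	}
	msgLen := binary.BigEndian.Uint16(head)
	payload := make([]byte, msgLen)
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, err
	}
	return payload, nil
}

func main() {
	var buf bytes.Buffer // stands in for a network connection
	if err := writeFrame(&buf, []byte("ping")); err != nil {
		panic(err)
	}
	msg, err := readFrame(&buf)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(msg)) // "ping"
}
```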
@@ -2,19 +2,22 @@ package network
|
||||
|
||||
import (
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/util/bytespool"
|
||||
"net"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
const Default_ReadDeadline = time.Second*30 //30s
|
||||
const Default_WriteDeadline = time.Second*30 //30s
|
||||
const Default_MaxConnNum = 3000
|
||||
const Default_PendingWriteNum = 10000
|
||||
const Default_LittleEndian = false
|
||||
const Default_MinMsgLen = 2
|
||||
const Default_MaxMsgLen = 65535
|
||||
|
||||
const(
|
||||
Default_ReadDeadline = time.Second*30 //默认读超时30s
|
||||
Default_WriteDeadline = time.Second*30 //默认写超时30s
|
||||
Default_MaxConnNum = 1000000 //默认最大连接数
|
||||
Default_PendingWriteNum = 100000 //单连接写消息Channel容量
|
||||
Default_LittleEndian = false //默认大小端
|
||||
Default_MinMsgLen = 2 //最小消息长度2byte
|
||||
Default_LenMsgLen = 2 //包头字段长度占用2byte
|
||||
Default_MaxMsgLen = 65535 //最大消息长度
|
||||
)
|
||||
|
||||
type TCPServer struct {
|
||||
Addr string
|
||||
@@ -22,6 +25,7 @@ type TCPServer struct {
|
||||
PendingWriteNum int
|
||||
ReadDeadline time.Duration
|
||||
WriteDeadline time.Duration
|
||||
|
||||
NewAgent func(*TCPConn) Agent
|
||||
ln net.Listener
|
||||
conns ConnSet
|
||||
@@ -29,14 +33,7 @@ type TCPServer struct {
|
||||
wgLn sync.WaitGroup
|
||||
wgConns sync.WaitGroup
|
||||
|
||||
|
||||
// msg parser
|
||||
LenMsgLen int
|
||||
MinMsgLen uint32
|
||||
MaxMsgLen uint32
|
||||
LittleEndian bool
|
||||
msgParser *MsgParser
|
||||
netMemPool INetMempool
|
||||
MsgParser
|
||||
}
|
||||
|
||||
func (server *TCPServer) Start() {
|
||||
@@ -47,61 +44,65 @@ func (server *TCPServer) Start() {
|
||||
func (server *TCPServer) init() {
|
||||
ln, err := net.Listen("tcp", server.Addr)
|
||||
if err != nil {
|
||||
log.SFatal("Listen tcp error:", err.Error())
|
||||
log.Fatal("Listen tcp fail",log.String("error", err.Error()))
|
||||
}
|
||||
|
||||
if server.MaxConnNum <= 0 {
|
||||
server.MaxConnNum = Default_MaxConnNum
|
||||
log.SRelease("invalid MaxConnNum, reset to ", server.MaxConnNum)
|
||||
}
|
||||
if server.PendingWriteNum <= 0 {
|
||||
server.PendingWriteNum = Default_PendingWriteNum
|
||||
log.SRelease("invalid PendingWriteNum, reset to ", server.PendingWriteNum)
|
||||
log.Info("invalid MaxConnNum",log.Int("reset", server.MaxConnNum))
|
||||
}
|
||||
|
||||
if server.MinMsgLen <= 0 {
|
||||
server.MinMsgLen = Default_MinMsgLen
|
||||
log.SRelease("invalid MinMsgLen, reset to ", server.MinMsgLen)
|
||||
if server.PendingWriteNum <= 0 {
|
||||
server.PendingWriteNum = Default_PendingWriteNum
|
||||
log.Info("invalid PendingWriteNum",log.Int("reset", server.PendingWriteNum))
|
||||
}
|
||||
|
||||
if server.LenMsgLen <= 0 {
|
||||
server.LenMsgLen = Default_LenMsgLen
|
||||
log.Info("invalid LenMsgLen", log.Int("reset", server.LenMsgLen))
|
||||
}
|
||||
|
||||
if server.MaxMsgLen <= 0 {
|
||||
server.MaxMsgLen = Default_MaxMsgLen
|
||||
log.SRelease("invalid MaxMsgLen, reset to ", server.MaxMsgLen)
|
||||
log.Info("invalid MaxMsgLen", log.Uint32("reset to", server.MaxMsgLen))
|
||||
}
|
||||
|
||||
maxMsgLen := server.MsgParser.getMaxMsgLen(server.LenMsgLen)
|
||||
if server.MaxMsgLen > maxMsgLen {
|
||||
server.MaxMsgLen = maxMsgLen
|
||||
log.Info("invalid MaxMsgLen",log.Uint32("reset", maxMsgLen))
|
||||
}
|
||||
|
||||
if server.MinMsgLen <= 0 {
|
||||
server.MinMsgLen = Default_MinMsgLen
|
||||
log.Info("invalid MinMsgLen",log.Uint32("reset", server.MinMsgLen))
|
||||
}
|
||||
|
||||
if server.WriteDeadline == 0 {
|
||||
server.WriteDeadline = Default_WriteDeadline
|
||||
log.SRelease("invalid WriteDeadline, reset to ", server.WriteDeadline.Seconds(),"s")
|
||||
log.Info("invalid WriteDeadline",log.Int64("reset",int64(server.WriteDeadline.Seconds())))
|
||||
}
|
||||
|
||||
if server.ReadDeadline == 0 {
|
||||
server.ReadDeadline = Default_ReadDeadline
|
||||
log.SRelease("invalid ReadDeadline, reset to ", server.ReadDeadline.Seconds(),"s")
|
||||
log.Info("invalid ReadDeadline",log.Int64("reset", int64(server.ReadDeadline.Seconds())))
|
||||
}
|
||||
|
||||
if server.NewAgent == nil {
|
||||
log.SFatal("NewAgent must not be nil")
|
||||
log.Fatal("NewAgent must not be nil")
|
||||
}
|
||||
|
||||
server.ln = ln
|
||||
server.conns = make(ConnSet)
|
||||
|
||||
// msg parser
|
||||
msgParser := NewMsgParser()
|
||||
if msgParser.INetMempool == nil {
|
||||
msgParser.INetMempool = NewMemAreaPool()
|
||||
}
|
||||
|
||||
msgParser.SetMsgLen(server.LenMsgLen, server.MinMsgLen, server.MaxMsgLen)
|
||||
msgParser.SetByteOrder(server.LittleEndian)
|
||||
server.msgParser = msgParser
|
||||
server.MsgParser.init()
|
||||
}
|
||||
|
||||
func (server *TCPServer) SetNetMempool(mempool INetMempool){
|
||||
server.msgParser.INetMempool = mempool
|
||||
func (server *TCPServer) SetNetMempool(mempool bytespool.IBytesMempool){
|
||||
server.IBytesMempool = mempool
|
||||
}
|
||||
|
||||
func (server *TCPServer) GetNetMempool() INetMempool{
|
||||
return server.msgParser.INetMempool
|
||||
func (server *TCPServer) GetNetMempool() bytespool.IBytesMempool {
|
||||
return server.IBytesMempool
|
||||
}
|
||||
|
||||
func (server *TCPServer) run() {
|
||||
@@ -121,12 +122,13 @@ func (server *TCPServer) run() {
|
||||
if max := 1 * time.Second; tempDelay > max {
|
||||
tempDelay = max
|
||||
}
|
||||
log.SRelease("accept error:",err.Error(),"; retrying in ", tempDelay)
|
||||
log.Info("accept fail",log.String("error",err.Error()),log.Duration("sleep time", tempDelay))
|
||||
time.Sleep(tempDelay)
|
||||
continue
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
conn.(*net.TCPConn).SetNoDelay(true)
|
||||
tempDelay = 0
|
||||
|
||||
@@ -134,19 +136,19 @@ func (server *TCPServer) run() {
|
||||
if len(server.conns) >= server.MaxConnNum {
|
||||
server.mutexConns.Unlock()
|
||||
conn.Close()
|
||||
log.SWarning("too many connections")
|
||||
log.Warning("too many connections")
|
||||
continue
|
||||
}
|
||||
|
||||
server.conns[conn] = struct{}{}
|
||||
server.mutexConns.Unlock()
|
||||
|
||||
server.wgConns.Add(1)
|
||||
|
||||
tcpConn := newTCPConn(conn, server.PendingWriteNum, server.msgParser,server.WriteDeadline)
|
||||
tcpConn := newTCPConn(conn, server.PendingWriteNum, &server.MsgParser,server.WriteDeadline)
|
||||
agent := server.NewAgent(tcpConn)
|
||||
|
||||
go func() {
|
||||
agent.Run()
|
||||
|
||||
// cleanup
|
||||
tcpConn.Close()
|
||||
server.mutexConns.Lock()
|
||||
|
||||
@@ -40,29 +40,29 @@ func (client *WSClient) init() {
|
||||
|
||||
if client.ConnNum <= 0 {
|
||||
client.ConnNum = 1
|
||||
log.SRelease("invalid ConnNum, reset to ", client.ConnNum)
|
||||
log.Info("invalid ConnNum",log.Int("reset", client.ConnNum))
|
||||
}
|
||||
if client.ConnectInterval <= 0 {
|
||||
client.ConnectInterval = 3 * time.Second
|
||||
log.SRelease("invalid ConnectInterval, reset to ", client.ConnectInterval)
|
||||
log.Info("invalid ConnectInterval",log.Duration("reset", client.ConnectInterval))
|
||||
}
|
||||
if client.PendingWriteNum <= 0 {
|
||||
client.PendingWriteNum = 100
|
||||
log.SRelease("invalid PendingWriteNum, reset to ", client.PendingWriteNum)
|
||||
log.Info("invalid PendingWriteNum",log.Int("reset", client.PendingWriteNum))
|
||||
}
|
||||
if client.MaxMsgLen <= 0 {
|
||||
client.MaxMsgLen = 4096
|
||||
log.SRelease("invalid MaxMsgLen, reset to ", client.MaxMsgLen)
|
||||
log.Info("invalid MaxMsgLen",log.Uint32("reset", client.MaxMsgLen))
|
||||
}
|
||||
if client.HandshakeTimeout <= 0 {
|
||||
client.HandshakeTimeout = 10 * time.Second
|
||||
log.SRelease("invalid HandshakeTimeout, reset to ", client.HandshakeTimeout)
|
||||
log.Info("invalid HandshakeTimeout",log.Duration("reset", client.HandshakeTimeout))
|
||||
}
|
||||
if client.NewAgent == nil {
|
||||
log.SFatal("NewAgent must not be nil")
|
||||
log.Fatal("NewAgent must not be nil")
|
||||
}
|
||||
if client.cons != nil {
|
||||
log.SFatal("client is running")
|
||||
log.Fatal("client is running")
|
||||
}
|
||||
|
||||
if client.MessageType == 0 {
|
||||
@@ -83,7 +83,7 @@ func (client *WSClient) dial() *websocket.Conn {
|
||||
return conn
|
||||
}
|
||||
|
||||
log.SRelease("connect to ", client.Addr," error: ", err.Error())
|
||||
log.Info("connect fail", log.String("error",err.Error()),log.String("addr",client.Addr))
|
||||
time.Sleep(client.ConnectInterval)
|
||||
continue
|
||||
}
|
||||
|
||||
@@ -75,7 +75,7 @@ func (wsConn *WSConn) Close() {
|
||||
|
||||
func (wsConn *WSConn) doWrite(b []byte) {
|
||||
if len(wsConn.writeChan) == cap(wsConn.writeChan) {
|
||||
log.SDebug("close conn: channel full")
|
||||
log.Debug("close conn: channel full")
|
||||
wsConn.doDestroy()
|
||||
return
|
||||
}
|
||||
|
||||
@@ -47,7 +47,7 @@ func (handler *WSHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
conn, err := handler.upgrader.Upgrade(w, r, nil)
|
||||
if err != nil {
|
||||
log.SError("upgrade error: ", err.Error())
|
||||
log.Error("upgrade fail",log.String("error",err.Error()))
|
||||
return
|
||||
}
|
||||
conn.SetReadLimit(int64(handler.maxMsgLen))
|
||||
@@ -67,7 +67,7 @@ func (handler *WSHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
|
||||
if len(handler.conns) >= handler.maxConnNum {
|
||||
handler.mutexConns.Unlock()
|
||||
conn.Close()
|
||||
log.SWarning("too many connections")
|
||||
log.Warning("too many connections")
|
||||
return
|
||||
}
|
||||
handler.conns[conn] = struct{}{}
|
||||
@@ -95,27 +95,27 @@ func (server *WSServer) SetMessageType(messageType int) {
|
||||
func (server *WSServer) Start() {
|
||||
ln, err := net.Listen("tcp", server.Addr)
|
||||
if err != nil {
|
||||
log.SFatal("WSServer Listen fail:", err.Error())
|
||||
log.Fatal("WSServer Listen fail",log.String("error", err.Error()))
|
||||
}
|
||||
|
||||
if server.MaxConnNum <= 0 {
|
||||
server.MaxConnNum = 100
|
||||
log.SRelease("invalid MaxConnNum, reset to ", server.MaxConnNum)
|
||||
log.Info("invalid MaxConnNum", log.Int("reset", server.MaxConnNum))
|
||||
}
|
||||
if server.PendingWriteNum <= 0 {
|
||||
server.PendingWriteNum = 100
|
||||
log.SRelease("invalid PendingWriteNum, reset to ", server.PendingWriteNum)
|
||||
log.Info("invalid PendingWriteNum", log.Int("reset", server.PendingWriteNum))
|
||||
}
|
||||
if server.MaxMsgLen <= 0 {
|
||||
server.MaxMsgLen = 4096
|
||||
log.SRelease("invalid MaxMsgLen, reset to ", server.MaxMsgLen)
|
||||
log.Info("invalid MaxMsgLen", log.Uint32("reset", server.MaxMsgLen))
|
||||
}
|
||||
if server.HTTPTimeout <= 0 {
|
||||
server.HTTPTimeout = 10 * time.Second
|
||||
log.SRelease("invalid HTTPTimeout, reset to ", server.HTTPTimeout)
|
||||
log.Info("invalid HTTPTimeout", log.Duration("reset", server.HTTPTimeout))
|
||||
}
|
||||
if server.NewAgent == nil {
|
||||
log.SFatal("NewAgent must not be nil")
|
||||
log.Fatal("NewAgent must not be nil")
|
||||
}
|
||||
|
||||
if server.CertFile != "" || server.KeyFile != "" {
|
||||
@@ -126,7 +126,7 @@ func (server *WSServer) Start() {
|
||||
config.Certificates = make([]tls.Certificate, 1)
|
||||
config.Certificates[0], err = tls.LoadX509KeyPair(server.CertFile, server.KeyFile)
|
||||
if err != nil {
|
||||
log.SFatal("LoadX509KeyPair fail:", err.Error())
|
||||
log.Fatal("LoadX509KeyPair fail",log.String("error", err.Error()))
|
||||
}
|
||||
|
||||
ln = tls.NewListener(ln, config)
|
||||
|
||||
node/node.go (222 changed lines)
@@ -11,7 +11,6 @@ import (
|
||||
"github.com/duanhf2012/origin/util/buildtime"
|
||||
"github.com/duanhf2012/origin/util/timer"
|
||||
"io"
|
||||
slog "log"
|
||||
"net/http"
|
||||
_ "net/http/pprof"
|
||||
"os"
|
||||
@@ -20,17 +19,21 @@ import (
|
||||
"strings"
|
||||
"syscall"
|
||||
"time"
|
||||
"github.com/duanhf2012/origin/util/sysprocess"
|
||||
)
|
||||
|
||||
var closeSig chan bool
|
||||
var sig chan os.Signal
|
||||
var nodeId int
|
||||
var preSetupService []service.IService //预安装
|
||||
var profilerInterval time.Duration
|
||||
var bValid bool
|
||||
var configDir = "./config/"
|
||||
var logLevel string = "debug"
|
||||
var logPath string
|
||||
|
||||
const(
|
||||
SingleStop syscall.Signal = 10
|
||||
SignalRetire syscall.Signal = 12
|
||||
)
|
||||
|
||||
type BuildOSType = int8
|
||||
|
||||
const(
|
||||
@@ -40,22 +43,28 @@ const(
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
closeSig = make(chan bool, 1)
|
||||
sig = make(chan os.Signal, 3)
|
||||
signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM, syscall.Signal(10))
|
||||
sig = make(chan os.Signal, 4)
|
||||
signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM, SingleStop,SignalRetire)
|
||||
|
||||
console.RegisterCommandBool("help", false, "<-help> This help.", usage)
|
||||
console.RegisterCommandString("name", "", "<-name nodeName> Node's name.", setName)
|
||||
console.RegisterCommandString("start", "", "<-start nodeid=nodeid> Run originserver.", startNode)
|
||||
console.RegisterCommandString("stop", "", "<-stop nodeid=nodeid> Stop originserver process.", stopNode)
|
||||
console.RegisterCommandString("retire", "", "<-retire nodeid=nodeid> retire originserver process.", retireNode)
|
||||
console.RegisterCommandString("config", "", "<-config path> Configuration file path.", setConfigPath)
|
||||
console.RegisterCommandString("console", "", "<-console true|false> Turn on or off screen log output.", openConsole)
|
||||
console.RegisterCommandString("loglevel", "debug", "<-loglevel debug|release|warning|error|fatal> Set loglevel.", setLevel)
|
||||
console.RegisterCommandString("logpath", "", "<-logpath path> Set log file path.", setLogPath)
|
||||
console.RegisterCommandInt("logsize", 0, "<-logsize size> Set log size(MB).", setLogSize)
|
||||
console.RegisterCommandInt("logchannelcap", 0, "<-logchannelcap num> Set log channel cap.", setLogChannelCapNum)
|
||||
console.RegisterCommandString("pprof", "", "<-pprof ip:port> Open performance analysis.", setPprof)
|
||||
}
|
||||
|
||||
|
||||
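node.go registers two extra signals besides SIGINT and SIGTERM: SingleStop (10) and SignalRetire (12), and the stop/retire console commands deliver them to the running node by pid. The Unix-only sketch below shows that notify-and-kill round trip inside a single process; in a real deployment the signal comes from another process (for example kill -12 <pid>), and syscall.Kill does not exist on Windows.

```go
//go:build unix

package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

const (
	SingleStop   = syscall.Signal(10) // same numeric values node.go uses
	SignalRetire = syscall.Signal(12)
)

func main() {
	sig := make(chan os.Signal, 4)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM, SingleStop, SignalRetire)

	// A running node would receive this from the stop/retire command;
	// here the process simply signals itself for demonstration.
	if err := syscall.Kill(os.Getpid(), SignalRetire); err != nil {
		panic(err)
	}

	switch s := <-sig; s {
	case SignalRetire:
		fmt.Println("retire signal received")
	default:
		fmt.Println("shutting down on", s)
	}
}
```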
func notifyAllServiceRetire(){
|
||||
service.NotifyAllServiceRetire()
|
||||
}
|
||||
|
||||
func usage(val interface{}) error {
|
||||
ret := val.(bool)
|
||||
if ret == false {
|
||||
@@ -147,7 +156,7 @@ func initNode(id int) {
|
||||
nodeId = id
|
||||
err := cluster.GetCluster().Init(GetNodeId(), Setup)
|
||||
if err != nil {
|
||||
log.SFatal("read system config is error ", err.Error())
|
||||
log.Fatal("read system config is error ",log.ErrorAttr("error",err))
|
||||
}
|
||||
|
||||
err = initLog()
|
||||
@@ -155,36 +164,43 @@ func initNode(id int) {
|
||||
return
|
||||
}
|
||||
|
||||
//2.setup service
|
||||
for _, s := range preSetupService {
|
||||
//是否配置的service
|
||||
if cluster.GetCluster().IsConfigService(s.GetName()) == false {
|
||||
continue
|
||||
//2.顺序安装服务
|
||||
serviceOrder := cluster.GetCluster().GetLocalNodeInfo().ServiceList
|
||||
for _,serviceName:= range serviceOrder{
|
||||
bSetup := false
|
||||
for _, s := range preSetupService {
|
||||
if s.GetName() != serviceName {
|
||||
continue
|
||||
}
|
||||
bSetup = true
|
||||
pServiceCfg := cluster.GetCluster().GetServiceCfg(s.GetName())
|
||||
s.Init(s, cluster.GetRpcClient, cluster.GetRpcServer, pServiceCfg)
|
||||
|
||||
service.Setup(s)
|
||||
}
|
||||
|
||||
pServiceCfg := cluster.GetCluster().GetServiceCfg(s.GetName())
|
||||
s.Init(s, cluster.GetRpcClient, cluster.GetRpcServer, pServiceCfg)
|
||||
|
||||
service.Setup(s)
|
||||
if bSetup == false {
|
||||
log.Fatal("Service name "+serviceName+" configuration error")
|
||||
}
|
||||
}
|
||||
|
||||
//3.service初始化
|
||||
service.Init(closeSig)
|
||||
service.Init()
|
||||
}
|
||||
|
||||
func initLog() error {
|
||||
if logPath == "" {
|
||||
if log.LogPath == "" {
|
||||
setLogPath("./log")
|
||||
}
|
||||
|
||||
localnodeinfo := cluster.GetCluster().GetLocalNodeInfo()
|
||||
filepre := fmt.Sprintf("%s_%d_", localnodeinfo.NodeName, localnodeinfo.NodeId)
|
||||
logger, err := log.New(logLevel, logPath, filepre, slog.LstdFlags|slog.Lshortfile, 10)
|
||||
logger, err := log.NewTextLogger(log.LogLevel,log.LogPath,filepre,true,log.LogChannelCap)
|
||||
if err != nil {
|
||||
fmt.Printf("cannot create log file!\n")
|
||||
return err
|
||||
}
|
||||
log.Export(logger)
|
||||
log.SetLogger(logger)
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -196,6 +212,37 @@ func Start() {
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
func retireNode(args interface{}) error {
|
||||
//1.解析参数
|
||||
param := args.(string)
|
||||
if param == "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
sParam := strings.Split(param, "=")
|
||||
if len(sParam) != 2 {
|
||||
return fmt.Errorf("invalid option %s", param)
|
||||
}
|
||||
if sParam[0] != "nodeid" {
|
||||
return fmt.Errorf("invalid option %s", param)
|
||||
}
|
||||
nId, err := strconv.Atoi(sParam[1])
|
||||
if err != nil {
|
||||
return fmt.Errorf("invalid option %s", param)
|
||||
}
|
||||
|
||||
processId, err := getRunProcessPid(nId)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
|
||||
RetireProcess(processId)
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
func stopNode(args interface{}) error {
|
||||
//1.解析参数
|
||||
param := args.(string)
|
||||
@@ -210,12 +257,12 @@ func stopNode(args interface{}) error {
|
||||
if sParam[0] != "nodeid" {
|
||||
return fmt.Errorf("invalid option %s", param)
|
||||
}
|
||||
nodeId, err := strconv.Atoi(sParam[1])
|
||||
nId, err := strconv.Atoi(sParam[1])
|
||||
if err != nil {
|
||||
return fmt.Errorf("invalid option %s", param)
|
||||
}
|
||||
|
||||
processId, err := getRunProcessPid(nodeId)
|
||||
processId, err := getRunProcessPid(nId)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -242,20 +289,43 @@ func startNode(args interface{}) error {
|
||||
if err != nil {
|
||||
return fmt.Errorf("invalid option %s", param)
|
||||
}
|
||||
for{
|
||||
processId, pErr := getRunProcessPid(nodeId)
|
||||
if pErr != nil {
|
||||
break
|
||||
}
|
||||
|
||||
name, cErr := sysprocess.GetProcessNameByPID(int32(processId))
|
||||
myName, mErr := sysprocess.GetMyProcessName()
|
||||
//当前进程名获取失败,不应该发生
|
||||
if mErr != nil {
|
||||
log.SInfo("get my process's name is error,", err.Error())
|
||||
os.Exit(-1)
|
||||
}
|
||||
|
||||
//进程id存在,而且进程名也相同,被认为是当前进程重复运行
|
||||
if cErr == nil && name == myName {
|
||||
log.SInfo(fmt.Sprintf("repeat runs are not allowed,node is %d,processid is %d",nodeId,processId))
|
||||
os.Exit(-1)
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
//2.记录进程id号
|
||||
log.Info("Start running server.")
|
||||
writeProcessPid(nodeId)
|
||||
timer.StartTimer(10*time.Millisecond, 1000000)
|
||||
log.SRelease("Start running server.")
|
||||
//2.初始化node
|
||||
|
||||
//3.初始化node
|
||||
initNode(nodeId)
|
||||
|
||||
//3.运行service
|
||||
//4.运行service
|
||||
service.Start()
|
||||
|
||||
//4.运行集群
|
||||
//5.运行集群
|
||||
cluster.GetCluster().Start()
|
||||
|
||||
//5.记录进程id号
|
||||
writeProcessPid(nodeId)
|
||||
|
||||
|
||||
//6.监听程序退出信号&性能报告
|
||||
bRun := true
|
||||
@@ -263,21 +333,29 @@ func startNode(args interface{}) error {
|
||||
if profilerInterval > 0 {
|
||||
pProfilerTicker = time.NewTicker(profilerInterval)
|
||||
}
|
||||
|
||||
for bRun {
|
||||
select {
|
||||
case <-sig:
|
||||
log.SRelease("receipt stop signal.")
|
||||
bRun = false
|
||||
case s := <-sig:
|
||||
signal := s.(syscall.Signal)
|
||||
if signal == SignalRetire {
|
||||
log.Info("receipt retire signal.")
|
||||
notifyAllServiceRetire()
|
||||
}else {
|
||||
bRun = false
|
||||
log.Info("receipt stop signal.")
|
||||
}
|
||||
case <-pProfilerTicker.C:
|
||||
profiler.Report()
|
||||
}
|
||||
}
|
||||
|
||||
cluster.GetCluster().Stop()
|
||||
//7.退出
|
||||
close(closeSig)
|
||||
service.WaitStop()
|
||||
service.StopAllService()
|
||||
|
||||
log.SRelease("Server is stop.")
|
||||
log.Info("Server is stop.")
|
||||
log.Close()
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -292,20 +370,15 @@ func GetService(serviceName string) service.IService {
|
||||
return service.GetService(serviceName)
|
||||
}
|
||||
|
||||
func SetConfigDir(configDir string) {
|
||||
configDir = configDir
|
||||
cluster.SetConfigDir(configDir)
|
||||
func SetConfigDir(cfgDir string) {
|
||||
configDir = cfgDir
|
||||
cluster.SetConfigDir(cfgDir)
|
||||
}
|
||||
|
||||
func GetConfigDir() string {
|
||||
return configDir
|
||||
}
|
||||
|
||||
func SetSysLog(strLevel string, pathname string, flag int) {
|
||||
logs, _ := log.New(strLevel, pathname, "", flag, 10)
|
||||
log.Export(logs)
|
||||
}
|
||||
|
||||
func OpenProfilerReport(interval time.Duration) {
|
||||
profilerInterval = interval
|
||||
}
|
||||
@@ -330,9 +403,24 @@ func setLevel(args interface{}) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
logLevel = strings.TrimSpace(args.(string))
|
||||
if logLevel != "debug" && logLevel != "release" && logLevel != "warning" && logLevel != "error" && logLevel != "fatal" {
|
||||
return errors.New("unknown level: " + logLevel)
|
||||
strlogLevel := strings.TrimSpace(args.(string))
|
||||
switch strlogLevel {
|
||||
case "trace":
|
||||
log.LogLevel = log.LevelTrace
|
||||
case "debug":
|
||||
log.LogLevel = log.LevelDebug
|
||||
case "info":
|
||||
log.LogLevel = log.LevelInfo
|
||||
case "warning":
|
||||
log.LogLevel = log.LevelWarning
|
||||
case "error":
|
||||
log.LogLevel = log.LevelError
|
||||
case "stack":
|
||||
log.LogLevel = log.LevelStack
|
||||
case "fatal":
|
||||
log.LogLevel = log.LevelFatal
|
||||
default:
|
||||
return errors.New("unknown level: " + strlogLevel)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@@ -341,18 +429,48 @@ func setLogPath(args interface{}) error {
|
||||
if args == "" {
|
||||
return nil
|
||||
}
|
||||
logPath = strings.TrimSpace(args.(string))
|
||||
dir, err := os.Stat(logPath) //这个文件夹不存在
|
||||
|
||||
log.LogPath = strings.TrimSpace(args.(string))
|
||||
dir, err := os.Stat(log.LogPath) //这个文件夹不存在
|
||||
if err == nil && dir.IsDir() == false {
|
||||
return errors.New("Not found dir " + logPath)
|
||||
return errors.New("Not found dir " + log.LogPath)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
err = os.Mkdir(logPath, os.ModePerm)
|
||||
err = os.Mkdir(log.LogPath, os.ModePerm)
|
||||
if err != nil {
|
||||
return errors.New("Cannot create dir " + logPath)
|
||||
return errors.New("Cannot create dir " + log.LogPath)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func setLogSize(args interface{}) error {
|
||||
if args == "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
logSize,ok := args.(int)
|
||||
if ok == false{
|
||||
return errors.New("param logsize is error")
|
||||
}
|
||||
|
||||
log.LogSize = int64(logSize)*1024*1024
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func setLogChannelCapNum(args interface{}) error {
|
||||
if args == "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
logChannelCap,ok := args.(int)
|
||||
if ok == false{
|
||||
return errors.New("param logsize is error")
|
||||
}
|
||||
|
||||
log.LogChannelCap = logChannelCap
|
||||
return nil
|
||||
}
|
||||
|
||||
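The retire flow above can also be triggered without the `-retire nodeid=nodeid` console flag: the node now listens for `SignalRetire` (signal 12 on Linux/macOS), so any process that knows the target PID can request a graceful retire. Below is a minimal sketch, assuming a Linux or macOS build and that the PID was obtained out of band (the pid-file path is not shown in this diff); it is an illustration, not part of origin itself.

```go
// retire.go - ask a running origin node to retire (stop accepting new work)
// without exiting. Signal numbers follow the constants introduced above:
// SingleStop = 10 (stop), SignalRetire = 12 (retire). Linux/macOS only.
package main

import (
	"fmt"
	"os"
	"strconv"
	"syscall"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Println("usage: retire <pid>")
		return
	}
	pid, err := strconv.Atoi(os.Args[1])
	if err != nil {
		fmt.Println("invalid pid:", err)
		return
	}
	// The node's signal loop sees SignalRetire and calls
	// notifyAllServiceRetire() instead of shutting down.
	if err := syscall.Kill(pid, syscall.Signal(12)); err != nil {
		fmt.Println("retire failed:", err)
	}
}
```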
```
@@ -8,7 +8,7 @@ import (
)

func KillProcess(processId int) {
    err := syscall.Kill(processId, syscall.Signal(10))
    err := syscall.Kill(processId, SingleStop)
    if err != nil {
        fmt.Printf("kill processid %d is fail:%+v.\n", processId, err)
    } else {
@@ -19,3 +19,12 @@ func KillProcess(processId int){
func GetBuildOSType() BuildOSType {
    return Linux
}

func RetireProcess(processId int) {
    err := syscall.Kill(processId, SignalRetire)
    if err != nil {
        fmt.Printf("retire processid %d is fail:%+v.\n", processId, err)
    } else {
        fmt.Printf("retire processid %d is successful.\n", processId)
    }
}
```
```
@@ -8,7 +8,7 @@ import (
)

func KillProcess(processId int) {
    err := syscall.Kill(processId, syscall.Signal(10))
    err := syscall.Kill(processId, SingleStop)
    if err != nil {
        fmt.Printf("kill processid %d is fail:%+v.\n", processId, err)
    } else {
@@ -19,3 +19,12 @@ func KillProcess(processId int){
func GetBuildOSType() BuildOSType {
    return Mac
}

func RetireProcess(processId int) {
    err := syscall.Kill(processId, SignalRetire)
    if err != nil {
        fmt.Printf("retire processid %d is fail:%+v.\n", processId, err)
    } else {
        fmt.Printf("retire processid %d is successful.\n", processId)
    }
}
```
```
@@ -2,10 +2,28 @@

package node

func KillProcess(processId int){
import (
    "os"
    "fmt"
)

func KillProcess(processId int) {
    procss, err := os.FindProcess(processId)
    if err != nil {
        fmt.Printf("kill processid %d is fail:%+v.\n", processId, err)
        return
    }

    err = procss.Kill()
    if err != nil {
        fmt.Printf("kill processid %d is fail:%+v.\n", processId, err)
    }
}

func GetBuildOSType() BuildOSType {
    return Windows
}

func RetireProcess(processId int) {
    fmt.Printf("This command does not support Windows")
}
```
```
@@ -167,7 +167,7 @@ func DefaultReportFunction(name string,callNum int,costTime time.Duration,record
        elem = elem.Next()
    }

    log.SRelease(strReport)
    log.SInfo("report", strReport)
}

func Report() {
@@ -193,9 +193,11 @@ func Report() {

        record = prof.record
        prof.record = list.New()
        callNum := prof.callNum
        totalCostTime := prof.totalCostTime
        prof.stackLocker.RUnlock()

        DefaultReportFunction(name,prof.callNum,prof.totalCostTime,record)
        DefaultReportFunction(name, callNum, totalCostTime, record)
    }
}
```
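The second hunk fixes a subtle race: `callNum` and `totalCostTime` are now copied while the read lock is still held instead of being read after `RUnlock()`. A generic sketch of that snapshot-under-lock pattern follows; the type and field names are illustrative, not taken from the profiler package.

```go
package main

import (
	"fmt"
	"sync"
)

type counter struct {
	mu    sync.RWMutex
	calls int
	cost  int64
}

// snapshot copies both fields while the read lock is held, so the reported
// pair is always mutually consistent even if writers run concurrently.
func (c *counter) snapshot() (int, int64) {
	c.mu.RLock()
	calls, cost := c.calls, c.cost
	c.mu.RUnlock()
	return calls, cost
}

func main() {
	c := &counter{calls: 3, cost: 42}
	calls, cost := c.snapshot()
	fmt.Println(calls, cost)
}
```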
rpc/client.go (418 lines changed)
@@ -1,93 +1,64 @@
|
||||
package rpc
|
||||
|
||||
import (
|
||||
"container/list"
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/network"
|
||||
"math"
|
||||
"reflect"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
"github.com/duanhf2012/origin/log"
|
||||
)
|
||||
|
||||
type Client struct {
|
||||
clientSeq uint32
|
||||
id int
|
||||
bSelfNode bool
|
||||
network.TCPClient
|
||||
conn *network.TCPConn
|
||||
const(
|
||||
DefaultRpcConnNum = 1
|
||||
DefaultRpcLenMsgLen = 4
|
||||
DefaultRpcMinMsgLen = 2
|
||||
DefaultMaxCheckCallRpcCount = 1000
|
||||
DefaultMaxPendingWriteNum = 200000
|
||||
|
||||
pendingLock sync.RWMutex
|
||||
startSeq uint64
|
||||
pending map[uint64]*list.Element
|
||||
pendingTimer *list.List
|
||||
callRpcTimeout time.Duration
|
||||
maxCheckCallRpcCount int
|
||||
TriggerRpcEvent
|
||||
}
|
||||
|
||||
const MaxCheckCallRpcCount = 1000
|
||||
const MaxPendingWriteNum = 200000
|
||||
const ConnectInterval = 2*time.Second
|
||||
DefaultConnectInterval = 2*time.Second
|
||||
DefaultCheckRpcCallTimeoutInterval = 1*time.Second
|
||||
DefaultRpcTimeout = 15*time.Second
|
||||
)
|
||||
|
||||
var clientSeq uint32
|
||||
|
||||
type IRealClient interface {
|
||||
SetConn(conn *network.TCPConn)
|
||||
Close(waitDone bool)
|
||||
|
||||
AsyncCall(timeout time.Duration,rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{},cancelable bool) (CancelRpc,error)
|
||||
Go(timeout time.Duration,rpcHandler IRpcHandler, noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call
|
||||
RawGo(timeout time.Duration,rpcHandler IRpcHandler,processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, rawArgs []byte, reply interface{}) *Call
|
||||
IsConnected() bool
|
||||
|
||||
Run()
|
||||
OnClose()
|
||||
}
|
||||
|
||||
type Client struct {
|
||||
clientId uint32
|
||||
nodeId int
|
||||
pendingLock sync.RWMutex
|
||||
startSeq uint64
|
||||
pending map[uint64]*Call
|
||||
callRpcTimeout time.Duration
|
||||
maxCheckCallRpcCount int
|
||||
|
||||
callTimerHeap CallTimerHeap
|
||||
IRealClient
|
||||
}
|
||||
|
||||
func (client *Client) NewClientAgent(conn *network.TCPConn) network.Agent {
|
||||
client.conn = conn
|
||||
client.ResetPending()
|
||||
client.SetConn(conn)
|
||||
|
||||
return client
|
||||
}
|
||||
|
||||
|
||||
func (client *Client) Connect(id int, addr string, maxRpcParamLen uint32) error {
|
||||
client.clientSeq = atomic.AddUint32(&clientSeq, 1)
|
||||
client.id = id
|
||||
client.Addr = addr
|
||||
client.maxCheckCallRpcCount = MaxCheckCallRpcCount
|
||||
client.callRpcTimeout = 15 * time.Second
|
||||
client.ConnectInterval = ConnectInterval
|
||||
client.PendingWriteNum = MaxPendingWriteNum
|
||||
client.AutoReconnect = true
|
||||
|
||||
client.ConnNum = 1
|
||||
client.LenMsgLen = 4
|
||||
client.MinMsgLen = 2
|
||||
client.ReadDeadline = Default_ReadWriteDeadline
|
||||
client.WriteDeadline = Default_ReadWriteDeadline
|
||||
|
||||
if maxRpcParamLen > 0 {
|
||||
client.MaxMsgLen = maxRpcParamLen
|
||||
} else {
|
||||
client.MaxMsgLen = math.MaxUint32
|
||||
}
|
||||
|
||||
client.NewAgent = client.NewClientAgent
|
||||
client.LittleEndian = LittleEndian
|
||||
client.ResetPending()
|
||||
go client.startCheckRpcCallTimer()
|
||||
if addr == "" {
|
||||
client.bSelfNode = true
|
||||
return nil
|
||||
}
|
||||
|
||||
client.Start()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (client *Client) startCheckRpcCallTimer() {
|
||||
for {
|
||||
time.Sleep(5 * time.Second)
|
||||
client.checkRpcCallTimeout()
|
||||
}
|
||||
}
|
||||
|
||||
func (client *Client) makeCallFail(call *Call) {
|
||||
client.removePending(call.Seq)
|
||||
func (bc *Client) makeCallFail(call *Call) {
|
||||
if call.callback != nil && call.callback.IsValid() {
|
||||
call.rpcHandler.PushRpcResponse(call)
|
||||
} else {
|
||||
@@ -95,271 +66,120 @@ func (client *Client) makeCallFail(call *Call) {
|
||||
}
|
||||
}
|
||||
|
||||
func (client *Client) checkRpcCallTimeout() {
|
||||
now := time.Now()
|
||||
func (bc *Client) checkRpcCallTimeout() {
|
||||
for{
|
||||
time.Sleep(DefaultCheckRpcCallTimeoutInterval)
|
||||
for i := 0; i < bc.maxCheckCallRpcCount; i++ {
|
||||
bc.pendingLock.Lock()
|
||||
|
||||
callSeq := bc.callTimerHeap.PopTimeout()
|
||||
if callSeq == 0 {
|
||||
bc.pendingLock.Unlock()
|
||||
break
|
||||
}
|
||||
|
||||
for i := 0; i < client.maxCheckCallRpcCount; i++ {
|
||||
client.pendingLock.Lock()
|
||||
pElem := client.pendingTimer.Front()
|
||||
if pElem == nil {
|
||||
client.pendingLock.Unlock()
|
||||
break
|
||||
}
|
||||
pCall := pElem.Value.(*Call)
|
||||
if now.Sub(pCall.callTime) > client.callRpcTimeout {
|
||||
strTimeout := strconv.FormatInt(int64(client.callRpcTimeout/time.Second), 10)
|
||||
pCall.Err = errors.New("RPC call takes more than " + strTimeout + " seconds")
|
||||
client.makeCallFail(pCall)
|
||||
client.pendingLock.Unlock()
|
||||
pCall := bc.pending[callSeq]
|
||||
if pCall == nil {
|
||||
bc.pendingLock.Unlock()
|
||||
log.Error("call seq is not find",log.Uint64("seq", callSeq))
|
||||
continue
|
||||
}
|
||||
|
||||
delete(bc.pending,callSeq)
|
||||
strTimeout := strconv.FormatInt(int64(pCall.TimeOut.Seconds()), 10)
|
||||
pCall.Err = errors.New("RPC call takes more than " + strTimeout + " seconds,method is "+pCall.ServiceMethod)
|
||||
log.Error("call timeout",log.String("error",pCall.Err.Error()))
|
||||
bc.makeCallFail(pCall)
|
||||
bc.pendingLock.Unlock()
|
||||
continue
|
||||
}
|
||||
client.pendingLock.Unlock()
|
||||
}
|
||||
}
|
||||
|
||||
func (client *Client) ResetPending() {
|
||||
func (client *Client) InitPending() {
|
||||
client.pendingLock.Lock()
|
||||
if client.pending != nil {
|
||||
for _, v := range client.pending {
|
||||
v.Value.(*Call).Err = errors.New("node is disconnect")
|
||||
v.Value.(*Call).done <- v.Value.(*Call)
|
||||
}
|
||||
client.callTimerHeap.Init()
|
||||
client.pending = make(map[uint64]*Call,4096)
|
||||
client.pendingLock.Unlock()
|
||||
}
|
||||
|
||||
func (bc *Client) AddPending(call *Call) {
|
||||
bc.pendingLock.Lock()
|
||||
|
||||
if call.Seq == 0 {
|
||||
bc.pendingLock.Unlock()
|
||||
log.Stack("call is error.")
|
||||
return
|
||||
}
|
||||
|
||||
client.pending = make(map[uint64]*list.Element, 4096)
|
||||
client.pendingTimer = list.New()
|
||||
client.pendingLock.Unlock()
|
||||
bc.pending[call.Seq] = call
|
||||
bc.callTimerHeap.AddTimer(call.Seq,call.TimeOut)
|
||||
|
||||
bc.pendingLock.Unlock()
|
||||
}
|
||||
|
||||
func (client *Client) AddPending(call *Call) {
|
||||
client.pendingLock.Lock()
|
||||
call.callTime = time.Now()
|
||||
elemTimer := client.pendingTimer.PushBack(call)
|
||||
client.pending[call.Seq] = elemTimer //如果下面发送失败,将会一一直存在这里
|
||||
client.pendingLock.Unlock()
|
||||
}
|
||||
|
||||
func (client *Client) RemovePending(seq uint64) *Call {
|
||||
if seq == 0 {
|
||||
func (bc *Client) RemovePending(seq uint64) *Call {
|
||||
if seq == 0 {
|
||||
return nil
|
||||
}
|
||||
client.pendingLock.Lock()
|
||||
call := client.removePending(seq)
|
||||
client.pendingLock.Unlock()
|
||||
bc.pendingLock.Lock()
|
||||
call := bc.removePending(seq)
|
||||
bc.pendingLock.Unlock()
|
||||
return call
|
||||
}
|
||||
|
||||
func (client *Client) removePending(seq uint64) *Call {
|
||||
v, ok := client.pending[seq]
|
||||
func (bc *Client) removePending(seq uint64) *Call {
|
||||
v, ok := bc.pending[seq]
|
||||
if ok == false {
|
||||
return nil
|
||||
}
|
||||
call := v.Value.(*Call)
|
||||
client.pendingTimer.Remove(v)
|
||||
delete(client.pending, seq)
|
||||
return call
|
||||
|
||||
bc.callTimerHeap.Cancel(seq)
|
||||
delete(bc.pending, seq)
|
||||
return v
|
||||
}
|
||||
|
||||
func (client *Client) FindPending(seq uint64) *Call {
|
||||
func (bc *Client) FindPending(seq uint64) (pCall *Call) {
|
||||
if seq == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
client.pendingLock.Lock()
|
||||
v, ok := client.pending[seq]
|
||||
if ok == false {
|
||||
client.pendingLock.Unlock()
|
||||
return nil
|
||||
}
|
||||
|
||||
pCall := v.Value.(*Call)
|
||||
client.pendingLock.Unlock()
|
||||
bc.pendingLock.Lock()
|
||||
pCall = bc.pending[seq]
|
||||
bc.pendingLock.Unlock()
|
||||
|
||||
return pCall
|
||||
}
|
||||
|
||||
func (client *Client) generateSeq() uint64 {
|
||||
return atomic.AddUint64(&client.startSeq, 1)
|
||||
}
|
||||
|
||||
func (client *Client) AsyncCall(rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{}) error {
|
||||
processorType, processor := GetProcessorType(args)
|
||||
InParam, herr := processor.Marshal(args)
|
||||
if herr != nil {
|
||||
return herr
|
||||
}
|
||||
|
||||
seq := client.generateSeq()
|
||||
request := MakeRpcRequest(processor, seq, 0, serviceMethod, false, InParam)
|
||||
bytes, err := processor.Marshal(request.RpcRequestData)
|
||||
ReleaseRpcRequest(request)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if client.conn == nil {
|
||||
return errors.New("Rpc server is disconnect,call " + serviceMethod)
|
||||
}
|
||||
|
||||
call := MakeCall()
|
||||
call.Reply = replyParam
|
||||
call.callback = &callback
|
||||
call.rpcHandler = rpcHandler
|
||||
call.ServiceMethod = serviceMethod
|
||||
call.Seq = seq
|
||||
client.AddPending(call)
|
||||
|
||||
err = client.conn.WriteMsg([]byte{uint8(processorType)}, bytes)
|
||||
if err != nil {
|
||||
client.RemovePending(call.Seq)
|
||||
ReleaseCall(call)
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (client *Client) RawGo(processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, args []byte, reply interface{}) *Call {
|
||||
call := MakeCall()
|
||||
call.ServiceMethod = serviceMethod
|
||||
call.Reply = reply
|
||||
call.Seq = client.generateSeq()
|
||||
|
||||
request := MakeRpcRequest(processor, call.Seq, rpcMethodId, serviceMethod, noReply, args)
|
||||
bytes, err := processor.Marshal(request.RpcRequestData)
|
||||
ReleaseRpcRequest(request)
|
||||
if err != nil {
|
||||
call.Seq = 0
|
||||
call.Err = err
|
||||
return call
|
||||
}
|
||||
|
||||
if client.conn == nil {
|
||||
call.Seq = 0
|
||||
call.Err = errors.New(serviceMethod + " was called failed,rpc client is disconnect")
|
||||
return call
|
||||
}
|
||||
|
||||
if noReply == false {
|
||||
client.AddPending(call)
|
||||
}
|
||||
|
||||
err = client.conn.WriteMsg([]byte{uint8(processor.GetProcessorType())}, bytes)
|
||||
if err != nil {
|
||||
client.RemovePending(call.Seq)
|
||||
call.Seq = 0
|
||||
call.Err = err
|
||||
}
|
||||
|
||||
return call
|
||||
}
|
||||
|
||||
func (client *Client) Go(noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
|
||||
_, processor := GetProcessorType(args)
|
||||
InParam, err := processor.Marshal(args)
|
||||
if err != nil {
|
||||
call := MakeCall()
|
||||
call.Err = err
|
||||
return call
|
||||
}
|
||||
|
||||
return client.RawGo(processor, noReply, 0, serviceMethod, InParam, reply)
|
||||
}
|
||||
|
||||
func (client *Client) Run() {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
buf := make([]byte, 4096)
|
||||
l := runtime.Stack(buf, false)
|
||||
errString := fmt.Sprint(r)
|
||||
log.SError("core dump info[", errString, "]\n", string(buf[:l]))
|
||||
func (bc *Client) cleanPending(){
|
||||
bc.pendingLock.Lock()
|
||||
for {
|
||||
callSeq := bc.callTimerHeap.PopFirst()
|
||||
if callSeq == 0 {
|
||||
break
|
||||
}
|
||||
}()
|
||||
|
||||
client.TriggerRpcEvent(true, client.GetClientSeq(), client.GetId())
|
||||
for {
|
||||
bytes, err := client.conn.ReadMsg()
|
||||
if err != nil {
|
||||
log.SError("rpcClient ", client.Addr, " ReadMsg error:", err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
processor := GetProcessor(bytes[0])
|
||||
if processor == nil {
|
||||
client.conn.ReleaseReadMsg(bytes)
|
||||
log.SError("rpcClient ", client.Addr, " ReadMsg head error:", err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
//1.解析head
|
||||
response := RpcResponse{}
|
||||
response.RpcResponseData = processor.MakeRpcResponse(0, "", nil)
|
||||
|
||||
err = processor.Unmarshal(bytes[1:], response.RpcResponseData)
|
||||
client.conn.ReleaseReadMsg(bytes)
|
||||
if err != nil {
|
||||
processor.ReleaseRpcResponse(response.RpcResponseData)
|
||||
log.SError("rpcClient Unmarshal head error:", err.Error())
|
||||
pCall := bc.pending[callSeq]
|
||||
if pCall == nil {
|
||||
log.Error("call Seq is not find",log.Uint64("seq",callSeq))
|
||||
continue
|
||||
}
|
||||
|
||||
v := client.RemovePending(response.RpcResponseData.GetSeq())
|
||||
if v == nil {
|
||||
log.SError("rpcClient cannot find seq ", response.RpcResponseData.GetSeq(), " in pending")
|
||||
} else {
|
||||
v.Err = nil
|
||||
if len(response.RpcResponseData.GetReply()) > 0 {
|
||||
err = processor.Unmarshal(response.RpcResponseData.GetReply(), v.Reply)
|
||||
if err != nil {
|
||||
log.SError("rpcClient Unmarshal body error:", err.Error())
|
||||
v.Err = err
|
||||
}
|
||||
}
|
||||
|
||||
if response.RpcResponseData.GetErr() != nil {
|
||||
v.Err = response.RpcResponseData.GetErr()
|
||||
}
|
||||
|
||||
if v.callback != nil && v.callback.IsValid() {
|
||||
v.rpcHandler.PushRpcResponse(v)
|
||||
} else {
|
||||
v.done <- v
|
||||
}
|
||||
}
|
||||
|
||||
processor.ReleaseRpcResponse(response.RpcResponseData)
|
||||
}
|
||||
}
|
||||
|
||||
func (client *Client) OnClose() {
|
||||
client.TriggerRpcEvent(false, client.GetClientSeq(), client.GetId())
|
||||
}
|
||||
|
||||
func (client *Client) IsConnected() bool {
|
||||
return client.bSelfNode || (client.conn != nil && client.conn.IsConnected() == true)
|
||||
}
|
||||
|
||||
func (client *Client) GetId() int {
|
||||
return client.id
|
||||
}
|
||||
|
||||
func (client *Client) Close(waitDone bool) {
|
||||
client.TCPClient.Close(waitDone)
|
||||
|
||||
client.pendingLock.Lock()
|
||||
for {
|
||||
pElem := client.pendingTimer.Front()
|
||||
if pElem == nil {
|
||||
break
|
||||
}
|
||||
|
||||
pCall := pElem.Value.(*Call)
|
||||
delete(bc.pending,callSeq)
|
||||
pCall.Err = errors.New("nodeid is disconnect ")
|
||||
client.makeCallFail(pCall)
|
||||
bc.makeCallFail(pCall)
|
||||
}
|
||||
client.pendingLock.Unlock()
|
||||
|
||||
bc.pendingLock.Unlock()
|
||||
}
|
||||
|
||||
func (client *Client) GetClientSeq() uint32 {
|
||||
return client.clientSeq
|
||||
func (bc *Client) generateSeq() uint64 {
|
||||
return atomic.AddUint64(&bc.startSeq, 1)
|
||||
}
|
||||
|
||||
func (client *Client) GetNodeId() int {
|
||||
return client.nodeId
|
||||
}
|
||||
|
||||
func (client *Client) GetClientId() uint32 {
|
||||
return client.clientId
|
||||
}
|
||||
|
||||
rpc/compressor.go (new file, 102 lines)
@@ -0,0 +1,102 @@
```go
package rpc

import (
    "errors"
    "fmt"
    "github.com/duanhf2012/origin/util/bytespool"
    "github.com/pierrec/lz4/v4"
    "runtime"
)

var memPool bytespool.IBytesMempool = bytespool.NewMemAreaPool()

type ICompressor interface {
    CompressBlock(src []byte) ([]byte, error)   //if dst is pre-allocated its memory is reused; pass nil to allocate internally
    UncompressBlock(src []byte) ([]byte, error) //if dst is pre-allocated its memory is reused; pass nil to allocate internally

    CompressBufferCollection(buffer []byte)   //release the buffer returned by CompressBlock
    UnCompressBufferCollection(buffer []byte) //release the buffer returned by UncompressBlock
}

var compressor ICompressor

func init() {
    SetCompressor(&Lz4Compressor{})
}

func SetCompressor(cp ICompressor) {
    compressor = cp
}

type Lz4Compressor struct {
}

func (lc *Lz4Compressor) CompressBlock(src []byte) (dest []byte, err error) {
    defer func() {
        if r := recover(); r != nil {
            buf := make([]byte, 4096)
            l := runtime.Stack(buf, false)
            errString := fmt.Sprint(r)
            err = errors.New("core dump info[" + errString + "]\n" + string(buf[:l]))
        }
    }()

    var c lz4.Compressor
    var cnt int
    dest = memPool.MakeBytes(lz4.CompressBlockBound(len(src)) + 1)
    cnt, err = c.CompressBlock(src, dest[1:])
    if err != nil {
        memPool.ReleaseBytes(dest)
        return nil, err
    }

    ratio := len(src) / cnt
    if len(src)%cnt > 0 {
        ratio += 1
    }

    if ratio > 255 {
        memPool.ReleaseBytes(dest)
        return nil, fmt.Errorf("Impermissible errors")
    }

    dest[0] = uint8(ratio)
    dest = dest[:cnt+1]
    return
}

func (lc *Lz4Compressor) UncompressBlock(src []byte) (dest []byte, err error) {
    defer func() {
        if r := recover(); r != nil {
            buf := make([]byte, 4096)
            l := runtime.Stack(buf, false)
            errString := fmt.Sprint(r)
            err = errors.New("core dump info[" + errString + "]\n" + string(buf[:l]))
        }
    }()

    radio := uint8(src[0])
    if radio == 0 {
        return nil, fmt.Errorf("Impermissible errors")
    }

    dest = memPool.MakeBytes(len(src) * int(radio))
    cnt, err := lz4.UncompressBlock(src[1:], dest)
    if err != nil {
        memPool.ReleaseBytes(dest)
        return nil, err
    }

    return dest[:cnt], nil
}

func (lc *Lz4Compressor) compressBlockBound(n int) int {
    return lz4.CompressBlockBound(n)
}

func (lc *Lz4Compressor) CompressBufferCollection(buffer []byte) {
    memPool.ReleaseBytes(buffer)
}

func (lc *Lz4Compressor) UnCompressBufferCollection(buffer []byte) {
    memPool.ReleaseBytes(buffer)
}
```
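The first byte of the compressed buffer stores the rounded-up compression ratio, and `UncompressBlock` then sizes its output as `len(src) * ratio`: for example, 1000 bytes compressed to 300 gives ratio = ceil(1000/300) = 4, so a 301*4-byte decode buffer is guaranteed to be large enough. The following is a hedged, standalone round-trip sketch of the same scheme using the lz4 package directly; it uses plain `make` instead of origin's `bytespool` and adds a guard for incompressible input, which the sketch does not otherwise handle.

```go
package main

import (
	"fmt"

	"github.com/pierrec/lz4/v4"
)

// compress produces one ratio byte followed by the lz4 block, mirroring
// the layout used by Lz4Compressor above.
func compress(src []byte) ([]byte, error) {
	var c lz4.Compressor
	dst := make([]byte, lz4.CompressBlockBound(len(src))+1)
	n, err := c.CompressBlock(src, dst[1:])
	if err != nil {
		return nil, err
	}
	if n == 0 { // lz4 reports incompressible data as 0 bytes written
		return nil, fmt.Errorf("input is not compressible")
	}
	ratio := len(src) / n
	if len(src)%n > 0 {
		ratio++
	}
	dst[0] = uint8(ratio)
	return dst[:n+1], nil
}

// uncompress reads the ratio byte to size the output buffer, then decodes.
func uncompress(src []byte) ([]byte, error) {
	ratio := int(src[0])
	dst := make([]byte, len(src)*ratio)
	n, err := lz4.UncompressBlock(src[1:], dst)
	if err != nil {
		return nil, err
	}
	return dst[:n], nil
}

func main() {
	in := []byte("origin origin origin origin origin origin origin origin")
	packed, err := compress(in)
	if err != nil {
		panic(err)
	}
	out, err := uncompress(packed)
	fmt.Println(string(out), err)
}
```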
File diff suppressed because it is too large
```
@@ -1,6 +1,6 @@
syntax = "proto3";
package rpc;
option go_package = "./rpc";
option go_package = ".;rpc";

message NodeInfo{
    int32 NodeId = 1;
@@ -8,7 +8,8 @@ message NodeInfo{
    string ListenAddr = 3;
    uint32 MaxRpcParamLen = 4;
    bool Private = 5;
    repeated string PublicServiceList = 6;
    bool Retire = 6;
    repeated string PublicServiceList = 7;
}

//Client->Master
@@ -24,6 +25,12 @@ message SubscribeDiscoverNotify{
    repeated NodeInfo nodeInfo = 4;
}

//Client->Master
message NodeRetireReq{
    NodeInfo nodeInfo = 1;
}

//Master->Client
message Empty{
}
```
@@ -1,95 +0,0 @@
|
||||
package rpc
|
||||
|
||||
import (
|
||||
"github.com/duanhf2012/origin/util/sync"
|
||||
"github.com/gogo/protobuf/proto"
|
||||
)
|
||||
|
||||
type GoGoPBProcessor struct {
|
||||
}
|
||||
|
||||
var rpcGoGoPbResponseDataPool =sync.NewPool(make(chan interface{},10240), func()interface{}{
|
||||
return &GoGoPBRpcResponseData{}
|
||||
})
|
||||
|
||||
var rpcGoGoPbRequestDataPool =sync.NewPool(make(chan interface{},10240), func()interface{}{
|
||||
return &GoGoPBRpcRequestData{}
|
||||
})
|
||||
|
||||
func (slf *GoGoPBRpcRequestData) MakeRequest(seq uint64,rpcMethodId uint32,serviceMethod string,noReply bool,inParam []byte) *GoGoPBRpcRequestData{
|
||||
slf.Seq = seq
|
||||
slf.RpcMethodId = rpcMethodId
|
||||
slf.ServiceMethod = serviceMethod
|
||||
slf.NoReply = noReply
|
||||
slf.InParam = inParam
|
||||
|
||||
return slf
|
||||
}
|
||||
|
||||
|
||||
func (slf *GoGoPBRpcResponseData) MakeRespone(seq uint64,err RpcError,reply []byte) *GoGoPBRpcResponseData{
|
||||
slf.Seq = seq
|
||||
slf.Error = err.Error()
|
||||
slf.Reply = reply
|
||||
|
||||
return slf
|
||||
}
|
||||
|
||||
func (slf *GoGoPBProcessor) Marshal(v interface{}) ([]byte, error){
|
||||
return proto.Marshal(v.(proto.Message))
|
||||
}
|
||||
|
||||
func (slf *GoGoPBProcessor) Unmarshal(data []byte, msg interface{}) error{
|
||||
protoMsg := msg.(proto.Message)
|
||||
return proto.Unmarshal(data, protoMsg)
|
||||
}
|
||||
|
||||
func (slf *GoGoPBProcessor) MakeRpcRequest(seq uint64,rpcMethodId uint32,serviceMethod string,noReply bool,inParam []byte) IRpcRequestData{
|
||||
pGogoPbRpcRequestData := rpcGoGoPbRequestDataPool.Get().(*GoGoPBRpcRequestData)
|
||||
pGogoPbRpcRequestData.MakeRequest(seq,rpcMethodId,serviceMethod,noReply,inParam)
|
||||
return pGogoPbRpcRequestData
|
||||
}
|
||||
|
||||
func (slf *GoGoPBProcessor) MakeRpcResponse(seq uint64,err RpcError,reply []byte) IRpcResponseData {
|
||||
pGoGoPBRpcResponseData := rpcGoGoPbResponseDataPool.Get().(*GoGoPBRpcResponseData)
|
||||
pGoGoPBRpcResponseData.MakeRespone(seq,err,reply)
|
||||
return pGoGoPBRpcResponseData
|
||||
}
|
||||
|
||||
func (slf *GoGoPBProcessor) ReleaseRpcRequest(rpcRequestData IRpcRequestData){
|
||||
rpcGoGoPbRequestDataPool.Put(rpcRequestData)
|
||||
}
|
||||
|
||||
func (slf *GoGoPBProcessor) ReleaseRpcResponse(rpcResponseData IRpcResponseData){
|
||||
rpcGoGoPbResponseDataPool.Put(rpcResponseData)
|
||||
}
|
||||
|
||||
func (slf *GoGoPBProcessor) IsParse(param interface{}) bool {
|
||||
_,ok := param.(proto.Message)
|
||||
return ok
|
||||
}
|
||||
|
||||
func (slf *GoGoPBProcessor) GetProcessorType() RpcProcessorType{
|
||||
return RpcProcessorGoGoPB
|
||||
}
|
||||
|
||||
func (slf *GoGoPBRpcRequestData) IsNoReply() bool{
|
||||
return slf.GetNoReply()
|
||||
}
|
||||
|
||||
func (slf *GoGoPBRpcResponseData) GetErr() *RpcError {
|
||||
if slf.GetError() == "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
err := RpcError(slf.GetError())
|
||||
return &err
|
||||
}
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -1,769 +0,0 @@
|
||||
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: gogorpc.proto
|
||||
|
||||
package rpc
|
||||
|
||||
import (
|
||||
fmt "fmt"
|
||||
proto "github.com/gogo/protobuf/proto"
|
||||
io "io"
|
||||
math "math"
|
||||
math_bits "math/bits"
|
||||
)
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
|
||||
|
||||
type GoGoPBRpcRequestData struct {
|
||||
Seq uint64 `protobuf:"varint,1,opt,name=Seq,proto3" json:"Seq,omitempty"`
|
||||
RpcMethodId uint32 `protobuf:"varint,2,opt,name=RpcMethodId,proto3" json:"RpcMethodId,omitempty"`
|
||||
ServiceMethod string `protobuf:"bytes,3,opt,name=ServiceMethod,proto3" json:"ServiceMethod,omitempty"`
|
||||
NoReply bool `protobuf:"varint,4,opt,name=NoReply,proto3" json:"NoReply,omitempty"`
|
||||
InParam []byte `protobuf:"bytes,5,opt,name=InParam,proto3" json:"InParam,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcRequestData) Reset() { *m = GoGoPBRpcRequestData{} }
|
||||
func (m *GoGoPBRpcRequestData) String() string { return proto.CompactTextString(m) }
|
||||
func (*GoGoPBRpcRequestData) ProtoMessage() {}
|
||||
func (*GoGoPBRpcRequestData) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_d0e25d3af112ec8f, []int{0}
|
||||
}
|
||||
func (m *GoGoPBRpcRequestData) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *GoGoPBRpcRequestData) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
if deterministic {
|
||||
return xxx_messageInfo_GoGoPBRpcRequestData.Marshal(b, m, deterministic)
|
||||
} else {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
}
|
||||
func (m *GoGoPBRpcRequestData) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_GoGoPBRpcRequestData.Merge(m, src)
|
||||
}
|
||||
func (m *GoGoPBRpcRequestData) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *GoGoPBRpcRequestData) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_GoGoPBRpcRequestData.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_GoGoPBRpcRequestData proto.InternalMessageInfo
|
||||
|
||||
func (m *GoGoPBRpcRequestData) GetSeq() uint64 {
|
||||
if m != nil {
|
||||
return m.Seq
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcRequestData) GetRpcMethodId() uint32 {
|
||||
if m != nil {
|
||||
return m.RpcMethodId
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcRequestData) GetServiceMethod() string {
|
||||
if m != nil {
|
||||
return m.ServiceMethod
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcRequestData) GetNoReply() bool {
|
||||
if m != nil {
|
||||
return m.NoReply
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcRequestData) GetInParam() []byte {
|
||||
if m != nil {
|
||||
return m.InParam
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type GoGoPBRpcResponseData struct {
|
||||
Seq uint64 `protobuf:"varint,1,opt,name=Seq,proto3" json:"Seq,omitempty"`
|
||||
Error string `protobuf:"bytes,2,opt,name=Error,proto3" json:"Error,omitempty"`
|
||||
Reply []byte `protobuf:"bytes,3,opt,name=Reply,proto3" json:"Reply,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcResponseData) Reset() { *m = GoGoPBRpcResponseData{} }
|
||||
func (m *GoGoPBRpcResponseData) String() string { return proto.CompactTextString(m) }
|
||||
func (*GoGoPBRpcResponseData) ProtoMessage() {}
|
||||
func (*GoGoPBRpcResponseData) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_d0e25d3af112ec8f, []int{1}
|
||||
}
|
||||
func (m *GoGoPBRpcResponseData) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *GoGoPBRpcResponseData) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
if deterministic {
|
||||
return xxx_messageInfo_GoGoPBRpcResponseData.Marshal(b, m, deterministic)
|
||||
} else {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
}
|
||||
func (m *GoGoPBRpcResponseData) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_GoGoPBRpcResponseData.Merge(m, src)
|
||||
}
|
||||
func (m *GoGoPBRpcResponseData) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *GoGoPBRpcResponseData) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_GoGoPBRpcResponseData.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_GoGoPBRpcResponseData proto.InternalMessageInfo
|
||||
|
||||
func (m *GoGoPBRpcResponseData) GetSeq() uint64 {
|
||||
if m != nil {
|
||||
return m.Seq
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcResponseData) GetError() string {
|
||||
if m != nil {
|
||||
return m.Error
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcResponseData) GetReply() []byte {
|
||||
if m != nil {
|
||||
return m.Reply
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*GoGoPBRpcRequestData)(nil), "rpc.GoGoPBRpcRequestData")
|
||||
proto.RegisterType((*GoGoPBRpcResponseData)(nil), "rpc.GoGoPBRpcResponseData")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("gogorpc.proto", fileDescriptor_d0e25d3af112ec8f) }
|
||||
|
||||
var fileDescriptor_d0e25d3af112ec8f = []byte{
|
||||
// 233 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4d, 0xcf, 0x4f, 0xcf,
|
||||
0x2f, 0x2a, 0x48, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x2e, 0x2a, 0x48, 0x56, 0x5a,
|
||||
0xc2, 0xc8, 0x25, 0xe2, 0x9e, 0xef, 0x9e, 0x1f, 0xe0, 0x14, 0x54, 0x90, 0x1c, 0x94, 0x5a, 0x58,
|
||||
0x9a, 0x5a, 0x5c, 0xe2, 0x92, 0x58, 0x92, 0x28, 0x24, 0xc0, 0xc5, 0x1c, 0x9c, 0x5a, 0x28, 0xc1,
|
||||
0xa8, 0xc0, 0xa8, 0xc1, 0x12, 0x04, 0x62, 0x0a, 0x29, 0x70, 0x71, 0x07, 0x15, 0x24, 0xfb, 0xa6,
|
||||
0x96, 0x64, 0xe4, 0xa7, 0x78, 0xa6, 0x48, 0x30, 0x29, 0x30, 0x6a, 0xf0, 0x06, 0x21, 0x0b, 0x09,
|
||||
0xa9, 0x70, 0xf1, 0x06, 0xa7, 0x16, 0x95, 0x65, 0x26, 0xa7, 0x42, 0x84, 0x24, 0x98, 0x15, 0x18,
|
||||
0x35, 0x38, 0x83, 0x50, 0x05, 0x85, 0x24, 0xb8, 0xd8, 0xfd, 0xf2, 0x83, 0x52, 0x0b, 0x72, 0x2a,
|
||||
0x25, 0x58, 0x14, 0x18, 0x35, 0x38, 0x82, 0x60, 0x5c, 0x90, 0x8c, 0x67, 0x5e, 0x40, 0x62, 0x51,
|
||||
0x62, 0xae, 0x04, 0xab, 0x02, 0xa3, 0x06, 0x4f, 0x10, 0x8c, 0xab, 0x14, 0xca, 0x25, 0x8a, 0xe4,
|
||||
0xca, 0xe2, 0x82, 0xfc, 0xbc, 0xe2, 0x54, 0x1c, 0xce, 0x14, 0xe1, 0x62, 0x75, 0x2d, 0x2a, 0xca,
|
||||
0x2f, 0x02, 0x3b, 0x90, 0x33, 0x08, 0xc2, 0x01, 0x89, 0x42, 0xac, 0x64, 0x06, 0x1b, 0x0c, 0xe1,
|
||||
0x38, 0x09, 0x9f, 0x78, 0x24, 0xc7, 0x78, 0xe1, 0x91, 0x1c, 0xe3, 0x83, 0x47, 0x72, 0x8c, 0x51,
|
||||
0xac, 0x7a, 0xfa, 0x45, 0x05, 0xc9, 0x49, 0x6c, 0xe0, 0xe0, 0x31, 0x06, 0x04, 0x00, 0x00, 0xff,
|
||||
0xff, 0x26, 0xcf, 0x31, 0x39, 0x2f, 0x01, 0x00, 0x00,
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcRequestData) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcRequestData) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcRequestData) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if len(m.InParam) > 0 {
|
||||
i -= len(m.InParam)
|
||||
copy(dAtA[i:], m.InParam)
|
||||
i = encodeVarintGogorpc(dAtA, i, uint64(len(m.InParam)))
|
||||
i--
|
||||
dAtA[i] = 0x2a
|
||||
}
|
||||
if m.NoReply {
|
||||
i--
|
||||
if m.NoReply {
|
||||
dAtA[i] = 1
|
||||
} else {
|
||||
dAtA[i] = 0
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x20
|
||||
}
|
||||
if len(m.ServiceMethod) > 0 {
|
||||
i -= len(m.ServiceMethod)
|
||||
copy(dAtA[i:], m.ServiceMethod)
|
||||
i = encodeVarintGogorpc(dAtA, i, uint64(len(m.ServiceMethod)))
|
||||
i--
|
||||
dAtA[i] = 0x1a
|
||||
}
|
||||
if m.RpcMethodId != 0 {
|
||||
i = encodeVarintGogorpc(dAtA, i, uint64(m.RpcMethodId))
|
||||
i--
|
||||
dAtA[i] = 0x10
|
||||
}
|
||||
if m.Seq != 0 {
|
||||
i = encodeVarintGogorpc(dAtA, i, uint64(m.Seq))
|
||||
i--
|
||||
dAtA[i] = 0x8
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcResponseData) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcResponseData) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcResponseData) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if len(m.Reply) > 0 {
|
||||
i -= len(m.Reply)
|
||||
copy(dAtA[i:], m.Reply)
|
||||
i = encodeVarintGogorpc(dAtA, i, uint64(len(m.Reply)))
|
||||
i--
|
||||
dAtA[i] = 0x1a
|
||||
}
|
||||
if len(m.Error) > 0 {
|
||||
i -= len(m.Error)
|
||||
copy(dAtA[i:], m.Error)
|
||||
i = encodeVarintGogorpc(dAtA, i, uint64(len(m.Error)))
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
}
|
||||
if m.Seq != 0 {
|
||||
i = encodeVarintGogorpc(dAtA, i, uint64(m.Seq))
|
||||
i--
|
||||
dAtA[i] = 0x8
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func encodeVarintGogorpc(dAtA []byte, offset int, v uint64) int {
|
||||
offset -= sovGogorpc(v)
|
||||
base := offset
|
||||
for v >= 1<<7 {
|
||||
dAtA[offset] = uint8(v&0x7f | 0x80)
|
||||
v >>= 7
|
||||
offset++
|
||||
}
|
||||
dAtA[offset] = uint8(v)
|
||||
return base
|
||||
}
|
||||
func (m *GoGoPBRpcRequestData) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Seq != 0 {
|
||||
n += 1 + sovGogorpc(uint64(m.Seq))
|
||||
}
|
||||
if m.RpcMethodId != 0 {
|
||||
n += 1 + sovGogorpc(uint64(m.RpcMethodId))
|
||||
}
|
||||
l = len(m.ServiceMethod)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovGogorpc(uint64(l))
|
||||
}
|
||||
if m.NoReply {
|
||||
n += 2
|
||||
}
|
||||
l = len(m.InParam)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovGogorpc(uint64(l))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (m *GoGoPBRpcResponseData) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Seq != 0 {
|
||||
n += 1 + sovGogorpc(uint64(m.Seq))
|
||||
}
|
||||
l = len(m.Error)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovGogorpc(uint64(l))
|
||||
}
|
||||
l = len(m.Reply)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovGogorpc(uint64(l))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func sovGogorpc(x uint64) (n int) {
|
||||
return (math_bits.Len64(x|1) + 6) / 7
|
||||
}
|
||||
func sozGogorpc(x uint64) (n int) {
|
||||
return sovGogorpc(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func (m *GoGoPBRpcRequestData) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: GoGoPBRpcRequestData: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: GoGoPBRpcRequestData: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Seq", wireType)
|
||||
}
|
||||
m.Seq = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.Seq |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 2:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field RpcMethodId", wireType)
|
||||
}
|
||||
m.RpcMethodId = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.RpcMethodId |= uint32(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 3:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field ServiceMethod", wireType)
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLen |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLen := int(stringLen)
|
||||
if intStringLen < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
postIndex := iNdEx + intStringLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.ServiceMethod = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
case 4:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field NoReply", wireType)
|
||||
}
|
||||
var v int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
v |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
m.NoReply = bool(v != 0)
|
||||
case 5:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field InParam", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.InParam = append(m.InParam[:0], dAtA[iNdEx:postIndex]...)
|
||||
if m.InParam == nil {
|
||||
m.InParam = []byte{}
|
||||
}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipGogorpc(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func (m *GoGoPBRpcResponseData) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: GoGoPBRpcResponseData: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: GoGoPBRpcResponseData: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Seq", wireType)
|
||||
}
|
||||
m.Seq = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.Seq |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType)
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLen |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLen := int(stringLen)
|
||||
if intStringLen < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
postIndex := iNdEx + intStringLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Error = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
case 3:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Reply", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Reply = append(m.Reply[:0], dAtA[iNdEx:postIndex]...)
|
||||
if m.Reply == nil {
|
||||
m.Reply = []byte{}
|
||||
}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipGogorpc(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthGogorpc
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func skipGogorpc(dAtA []byte) (n int, err error) {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
depth := 0
|
||||
for iNdEx < l {
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
wireType := int(wire & 0x7)
|
||||
switch wireType {
|
||||
case 0:
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx++
|
||||
if dAtA[iNdEx-1] < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 1:
|
||||
iNdEx += 8
|
||||
case 2:
|
||||
var length int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowGogorpc
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
length |= (int(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if length < 0 {
|
||||
return 0, ErrInvalidLengthGogorpc
|
||||
}
|
||||
iNdEx += length
|
||||
case 3:
|
||||
depth++
|
||||
case 4:
|
||||
if depth == 0 {
|
||||
return 0, ErrUnexpectedEndOfGroupGogorpc
|
||||
}
|
||||
depth--
|
||||
case 5:
|
||||
iNdEx += 4
|
||||
default:
|
||||
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
|
||||
}
|
||||
if iNdEx < 0 {
|
||||
return 0, ErrInvalidLengthGogorpc
|
||||
}
|
||||
if depth == 0 {
|
||||
return iNdEx, nil
|
||||
}
|
||||
}
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalidLengthGogorpc = fmt.Errorf("proto: negative length found during unmarshaling")
|
||||
ErrIntOverflowGogorpc = fmt.Errorf("proto: integer overflow")
|
||||
ErrUnexpectedEndOfGroupGogorpc = fmt.Errorf("proto: unexpected end of group")
|
||||
)
|
||||
```
@@ -3,6 +3,7 @@ package rpc
import (
    "github.com/duanhf2012/origin/util/sync"
    jsoniter "github.com/json-iterator/go"
    "reflect"
)

var json = jsoniter.ConfigCompatibleWithStandardLibrary
@@ -119,6 +120,22 @@ func (jsonRpcResponseData *JsonRpcResponseData) GetReply() []byte{
}

func (jsonProcessor *JsonProcessor) Clone(src interface{}) (interface{}, error) {
    dstValue := reflect.New(reflect.ValueOf(src).Type().Elem())
    bytes, err := json.Marshal(src)
    if err != nil {
        return nil, err
    }

    dst := dstValue.Interface()
    err = json.Unmarshal(bytes, dst)
    if err != nil {
        return nil, err
    }

    return dst, nil
}
```
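`Clone` deep-copies an RPC argument by marshalling it to JSON and unmarshalling into a freshly allocated value of the same type, so same-node calls do not share pointers between caller and callee. A small standalone sketch of the same round-trip idea is shown below, using jsoniter as in the diff; the `Input` type is made up purely for illustration.

```go
package main

import (
	"fmt"
	"reflect"

	jsoniter "github.com/json-iterator/go"
)

var json = jsoniter.ConfigCompatibleWithStandardLibrary

type Input struct {
	Id   int
	Name string
}

// clone mirrors the Clone method above: allocate a new value of src's
// element type, then copy the contents through a JSON round trip.
// src must be a non-nil pointer.
func clone(src interface{}) (interface{}, error) {
	dstValue := reflect.New(reflect.ValueOf(src).Type().Elem())
	bytes, err := json.Marshal(src)
	if err != nil {
		return nil, err
	}
	dst := dstValue.Interface()
	if err = json.Unmarshal(bytes, dst); err != nil {
		return nil, err
	}
	return dst, nil
}

func main() {
	src := &Input{Id: 1, Name: "origin"}
	dst, err := clone(src)
	// dst is a distinct *Input with the same field values.
	fmt.Println(dst.(*Input), err, dst.(*Input) != src)
}
```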
rpc/lclient.go (new file, 135 lines)
@@ -0,0 +1,135 @@
|
||||
package rpc
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/network"
|
||||
"reflect"
|
||||
"strings"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
)
|
||||
|
||||
//本结点的Client
|
||||
type LClient struct {
|
||||
selfClient *Client
|
||||
}
|
||||
|
||||
func (rc *LClient) Lock(){
|
||||
}
|
||||
|
||||
func (rc *LClient) Unlock(){
|
||||
}
|
||||
|
||||
func (lc *LClient) Run(){
|
||||
}
|
||||
|
||||
func (lc *LClient) OnClose(){
|
||||
}
|
||||
|
||||
func (lc *LClient) IsConnected() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (lc *LClient) SetConn(conn *network.TCPConn){
|
||||
}
|
||||
|
||||
func (lc *LClient) Close(waitDone bool){
|
||||
}
|
||||
|
||||
func (lc *LClient) Go(timeout time.Duration,rpcHandler IRpcHandler,noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
|
||||
pLocalRpcServer := rpcHandler.GetRpcServer()()
|
||||
//判断是否是同一服务
|
||||
findIndex := strings.Index(serviceMethod, ".")
|
||||
if findIndex == -1 {
|
||||
sErr := errors.New("Call serviceMethod " + serviceMethod + " is error!")
|
||||
log.Error("call rpc fail",log.String("error",sErr.Error()))
|
||||
call := MakeCall()
|
||||
call.DoError(sErr)
|
||||
|
||||
return call
|
||||
}
|
||||
|
||||
serviceName := serviceMethod[:findIndex]
|
||||
if serviceName == rpcHandler.GetName() { //自己服务调用
|
||||
//调用自己rpcHandler处理器
|
||||
err := pLocalRpcServer.myselfRpcHandlerGo(lc.selfClient,serviceName, serviceMethod, args, requestHandlerNull,reply)
|
||||
call := MakeCall()
|
||||
|
||||
if err != nil {
|
||||
call.DoError(err)
|
||||
return call
|
||||
}
|
||||
|
||||
call.DoOK()
|
||||
return call
|
||||
}
|
||||
|
||||
//其他的rpcHandler的处理器
|
||||
return pLocalRpcServer.selfNodeRpcHandlerGo(timeout,nil, lc.selfClient, noReply, serviceName, 0, serviceMethod, args, reply, nil)
|
||||
}
|
||||
|
||||
|
||||
func (rc *LClient) RawGo(timeout time.Duration,rpcHandler IRpcHandler,processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceName string, rawArgs []byte, reply interface{}) *Call {
|
||||
pLocalRpcServer := rpcHandler.GetRpcServer()()
|
||||
|
||||
//服务自我调用
|
||||
if serviceName == rpcHandler.GetName() {
|
||||
call := MakeCall()
|
||||
call.ServiceMethod = serviceName
|
||||
call.Reply = reply
|
||||
call.TimeOut = timeout
|
||||
|
||||
err := pLocalRpcServer.myselfRpcHandlerGo(rc.selfClient,serviceName, serviceName, rawArgs, requestHandlerNull,nil)
|
||||
call.Err = err
|
||||
call.done <- call
|
||||
|
||||
return call
|
||||
}
|
||||
|
||||
//其他的rpcHandler的处理器
|
||||
return pLocalRpcServer.selfNodeRpcHandlerGo(timeout,processor,rc.selfClient, true, serviceName, rpcMethodId, serviceName, nil, nil, rawArgs)
|
||||
}
|
||||
|
||||
|
||||
func (lc *LClient) AsyncCall(timeout time.Duration,rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, reply interface{},cancelable bool) (CancelRpc,error) {
|
||||
pLocalRpcServer := rpcHandler.GetRpcServer()()
|
||||
|
||||
//判断是否是同一服务
|
||||
findIndex := strings.Index(serviceMethod, ".")
|
||||
if findIndex == -1 {
|
||||
err := errors.New("Call serviceMethod " + serviceMethod + " is error!")
|
||||
callback.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
|
||||
log.Error("serviceMethod format is error",log.String("error",err.Error()))
|
||||
return emptyCancelRpc,nil
|
||||
}
|
||||
|
||||
serviceName := serviceMethod[:findIndex]
|
||||
//调用自己rpcHandler处理器
|
||||
if serviceName == rpcHandler.GetName() { //自己服务调用
|
||||
return emptyCancelRpc,pLocalRpcServer.myselfRpcHandlerGo(lc.selfClient,serviceName, serviceMethod, args,callback ,reply)
|
||||
}
|
||||
|
||||
//其他的rpcHandler的处理器
|
||||
calcelRpc,err := pLocalRpcServer.selfNodeRpcHandlerAsyncGo(timeout,lc.selfClient, rpcHandler, false, serviceName, serviceMethod, args, reply, callback,cancelable)
|
||||
if err != nil {
|
||||
callback.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
|
||||
}
|
||||
|
||||
return calcelRpc,nil
|
||||
}
|
||||
|
||||
func NewLClient(nodeId int) *Client{
|
||||
client := &Client{}
|
||||
client.clientId = atomic.AddUint32(&clientSeq, 1)
|
||||
client.nodeId = nodeId
|
||||
client.maxCheckCallRpcCount = DefaultMaxCheckCallRpcCount
|
||||
client.callRpcTimeout = DefaultRpcTimeout
|
||||
|
||||
lClient := &LClient{}
|
||||
lClient.selfClient = client
|
||||
client.IRealClient = lClient
|
||||
client.InitPending()
|
||||
go client.checkRpcCallTimeout()
|
||||
return client
|
||||
}
|
||||
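LClient above is the in-process client used when caller and callee sit on the same node: Go and AsyncCall split the "Service.Method" string, dispatch straight into the caller's own handler when the service name matches, and otherwise hand the call to the local rpc server. A small standalone sketch of just that naming convention (not origin API):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// splitServiceMethod mirrors the "Service.Method" convention that
// LClient.Go and AsyncCall rely on: everything before the first '.'
// is the service name.
func splitServiceMethod(serviceMethod string) (string, error) {
	i := strings.Index(serviceMethod, ".")
	if i == -1 {
		return "", errors.New("serviceMethod " + serviceMethod + " must be of the form Service.Method")
	}
	return serviceMethod[:i], nil
}

func main() {
	svc, err := splitServiceMethod("RankService.RPC_UpsetRank")
	fmt.Println(svc, err) // RankService <nil>

	_, err = splitServiceMethod("badname")
	fmt.Println(err) // format error, as in LClient.Go
}
```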
(File diff suppressed because it is too large.)
rpc/pbprocessor.go (new file, 106 lines)
@@ -0,0 +1,106 @@
|
||||
package rpc
|
||||
|
||||
import (
|
||||
"github.com/duanhf2012/origin/util/sync"
|
||||
"google.golang.org/protobuf/proto"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
type PBProcessor struct {
|
||||
}
|
||||
|
||||
var rpcPbResponseDataPool =sync.NewPool(make(chan interface{},10240), func()interface{}{
|
||||
return &PBRpcResponseData{}
|
||||
})
|
||||
|
||||
var rpcPbRequestDataPool =sync.NewPool(make(chan interface{},10240), func()interface{}{
|
||||
return &PBRpcRequestData{}
|
||||
})
|
||||
|
||||
func (slf *PBRpcRequestData) MakeRequest(seq uint64,rpcMethodId uint32,serviceMethod string,noReply bool,inParam []byte) *PBRpcRequestData{
|
||||
slf.Seq = seq
|
||||
slf.RpcMethodId = rpcMethodId
|
||||
slf.ServiceMethod = serviceMethod
|
||||
slf.NoReply = noReply
|
||||
slf.InParam = inParam
|
||||
|
||||
return slf
|
||||
}
|
||||
|
||||
|
||||
func (slf *PBRpcResponseData) MakeRespone(seq uint64,err RpcError,reply []byte) *PBRpcResponseData{
|
||||
slf.Seq = seq
|
||||
slf.Error = err.Error()
|
||||
slf.Reply = reply
|
||||
|
||||
return slf
|
||||
}
|
||||
|
||||
func (slf *PBProcessor) Marshal(v interface{}) ([]byte, error){
|
||||
return proto.Marshal(v.(proto.Message))
|
||||
}
|
||||
|
||||
func (slf *PBProcessor) Unmarshal(data []byte, msg interface{}) error{
|
||||
protoMsg,ok := msg.(proto.Message)
|
||||
if ok == false {
|
||||
return fmt.Errorf("%+v is not of proto.Message type",msg)
|
||||
}
|
||||
return proto.Unmarshal(data, protoMsg)
|
||||
}
|
||||
|
||||
func (slf *PBProcessor) MakeRpcRequest(seq uint64,rpcMethodId uint32,serviceMethod string,noReply bool,inParam []byte) IRpcRequestData{
|
||||
pGogoPbRpcRequestData := rpcPbRequestDataPool.Get().(*PBRpcRequestData)
|
||||
pGogoPbRpcRequestData.MakeRequest(seq,rpcMethodId,serviceMethod,noReply,inParam)
|
||||
return pGogoPbRpcRequestData
|
||||
}
|
||||
|
||||
func (slf *PBProcessor) MakeRpcResponse(seq uint64,err RpcError,reply []byte) IRpcResponseData {
|
||||
pPBRpcResponseData := rpcPbResponseDataPool.Get().(*PBRpcResponseData)
|
||||
pPBRpcResponseData.MakeRespone(seq,err,reply)
|
||||
return pPBRpcResponseData
|
||||
}
|
||||
|
||||
func (slf *PBProcessor) ReleaseRpcRequest(rpcRequestData IRpcRequestData){
|
||||
rpcPbRequestDataPool.Put(rpcRequestData)
|
||||
}
|
||||
|
||||
func (slf *PBProcessor) ReleaseRpcResponse(rpcResponseData IRpcResponseData){
|
||||
rpcPbResponseDataPool.Put(rpcResponseData)
|
||||
}
|
||||
|
||||
func (slf *PBProcessor) IsParse(param interface{}) bool {
|
||||
_,ok := param.(proto.Message)
|
||||
return ok
|
||||
}
|
||||
|
||||
func (slf *PBProcessor) GetProcessorType() RpcProcessorType{
|
||||
return RpcProcessorPB
|
||||
}
|
||||
|
||||
func (slf *PBProcessor) Clone(src interface{}) (interface{},error){
|
||||
srcMsg,ok := src.(proto.Message)
|
||||
if ok == false {
|
||||
return nil,fmt.Errorf("param is not of proto.message type")
|
||||
}
|
||||
|
||||
return proto.Clone(srcMsg),nil
|
||||
}
|
||||
|
||||
func (slf *PBRpcRequestData) IsNoReply() bool{
|
||||
return slf.GetNoReply()
|
||||
}
|
||||
|
||||
func (slf *PBRpcResponseData) GetErr() *RpcError {
|
||||
if slf.GetError() == "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
err := RpcError(slf.GetError())
|
||||
return &err
|
||||
}
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
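PBProcessor above pools PBRpcRequestData/PBRpcResponseData objects and (de)serializes them with google.golang.org/protobuf; Unmarshal and Clone both guard against values that are not proto.Message. A hedged sketch of that guard, using wrapperspb.StringValue only because it is a readily available proto.Message (origin would pass its own generated types):

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

// marshalAny mirrors the type check in PBProcessor: the value must
// satisfy proto.Message, otherwise the processor rejects it.
func marshalAny(v interface{}) ([]byte, error) {
	m, ok := v.(proto.Message)
	if !ok {
		return nil, fmt.Errorf("%+v is not of proto.Message type", v)
	}
	return proto.Marshal(m)
}

func main() {
	data, err := marshalAny(wrapperspb.String("hello"))
	fmt.Println(len(data) > 0, err) // true <nil>

	out := &wrapperspb.StringValue{}
	fmt.Println(proto.Unmarshal(data, out), out.GetValue()) // <nil> hello

	_, err = marshalAny("not a proto message")
	fmt.Println(err != nil) // true
}
```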
@@ -1,6 +1,7 @@
|
||||
package rpc
|
||||
|
||||
type IRpcProcessor interface {
|
||||
Clone(src interface{}) (interface{},error)
|
||||
Marshal(v interface{}) ([]byte, error) //b表示自定义缓冲区,可以填nil,由系统自动分配
|
||||
Unmarshal(data []byte, v interface{}) error
|
||||
MakeRpcRequest(seq uint64,rpcMethodId uint32,serviceMethod string,noReply bool,inParam []byte) IRpcRequestData
|
||||
|
||||
rpc/protorpc.pb.go (new file, 263 lines)
@@ -0,0 +1,263 @@
|
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// versions:
|
||||
// protoc-gen-go v1.31.0
|
||||
// protoc v3.11.4
|
||||
// source: test/rpc/protorpc.proto
|
||||
|
||||
package rpc
|
||||
|
||||
import (
|
||||
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
|
||||
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
|
||||
reflect "reflect"
|
||||
sync "sync"
|
||||
)
|
||||
|
||||
const (
|
||||
// Verify that this generated code is sufficiently up-to-date.
|
||||
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
|
||||
// Verify that runtime/protoimpl is sufficiently up-to-date.
|
||||
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
|
||||
)
|
||||
|
||||
type PBRpcRequestData struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
Seq uint64 `protobuf:"varint,1,opt,name=Seq,proto3" json:"Seq,omitempty"`
|
||||
RpcMethodId uint32 `protobuf:"varint,2,opt,name=RpcMethodId,proto3" json:"RpcMethodId,omitempty"`
|
||||
ServiceMethod string `protobuf:"bytes,3,opt,name=ServiceMethod,proto3" json:"ServiceMethod,omitempty"`
|
||||
NoReply bool `protobuf:"varint,4,opt,name=NoReply,proto3" json:"NoReply,omitempty"`
|
||||
InParam []byte `protobuf:"bytes,5,opt,name=InParam,proto3" json:"InParam,omitempty"`
|
||||
}
|
||||
|
||||
func (x *PBRpcRequestData) Reset() {
|
||||
*x = PBRpcRequestData{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_test_rpc_protorpc_proto_msgTypes[0]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *PBRpcRequestData) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*PBRpcRequestData) ProtoMessage() {}
|
||||
|
||||
func (x *PBRpcRequestData) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_test_rpc_protorpc_proto_msgTypes[0]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use PBRpcRequestData.ProtoReflect.Descriptor instead.
|
||||
func (*PBRpcRequestData) Descriptor() ([]byte, []int) {
|
||||
return file_test_rpc_protorpc_proto_rawDescGZIP(), []int{0}
|
||||
}
|
||||
|
||||
func (x *PBRpcRequestData) GetSeq() uint64 {
|
||||
if x != nil {
|
||||
return x.Seq
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *PBRpcRequestData) GetRpcMethodId() uint32 {
|
||||
if x != nil {
|
||||
return x.RpcMethodId
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *PBRpcRequestData) GetServiceMethod() string {
|
||||
if x != nil {
|
||||
return x.ServiceMethod
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *PBRpcRequestData) GetNoReply() bool {
|
||||
if x != nil {
|
||||
return x.NoReply
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (x *PBRpcRequestData) GetInParam() []byte {
|
||||
if x != nil {
|
||||
return x.InParam
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type PBRpcResponseData struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
Seq uint64 `protobuf:"varint,1,opt,name=Seq,proto3" json:"Seq,omitempty"`
|
||||
Error string `protobuf:"bytes,2,opt,name=Error,proto3" json:"Error,omitempty"`
|
||||
Reply []byte `protobuf:"bytes,3,opt,name=Reply,proto3" json:"Reply,omitempty"`
|
||||
}
|
||||
|
||||
func (x *PBRpcResponseData) Reset() {
|
||||
*x = PBRpcResponseData{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_test_rpc_protorpc_proto_msgTypes[1]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *PBRpcResponseData) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*PBRpcResponseData) ProtoMessage() {}
|
||||
|
||||
func (x *PBRpcResponseData) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_test_rpc_protorpc_proto_msgTypes[1]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use PBRpcResponseData.ProtoReflect.Descriptor instead.
|
||||
func (*PBRpcResponseData) Descriptor() ([]byte, []int) {
|
||||
return file_test_rpc_protorpc_proto_rawDescGZIP(), []int{1}
|
||||
}
|
||||
|
||||
func (x *PBRpcResponseData) GetSeq() uint64 {
|
||||
if x != nil {
|
||||
return x.Seq
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *PBRpcResponseData) GetError() string {
|
||||
if x != nil {
|
||||
return x.Error
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *PBRpcResponseData) GetReply() []byte {
|
||||
if x != nil {
|
||||
return x.Reply
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
var File_test_rpc_protorpc_proto protoreflect.FileDescriptor
|
||||
|
||||
var file_test_rpc_protorpc_proto_rawDesc = []byte{
|
||||
0x0a, 0x17, 0x74, 0x65, 0x73, 0x74, 0x2f, 0x72, 0x70, 0x63, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f,
|
||||
0x72, 0x70, 0x63, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x03, 0x72, 0x70, 0x63, 0x22, 0xa0,
|
||||
0x01, 0x0a, 0x10, 0x50, 0x42, 0x52, 0x70, 0x63, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x44,
|
||||
0x61, 0x74, 0x61, 0x12, 0x10, 0x0a, 0x03, 0x53, 0x65, 0x71, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04,
|
||||
0x52, 0x03, 0x53, 0x65, 0x71, 0x12, 0x20, 0x0a, 0x0b, 0x52, 0x70, 0x63, 0x4d, 0x65, 0x74, 0x68,
|
||||
0x6f, 0x64, 0x49, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0b, 0x52, 0x70, 0x63, 0x4d,
|
||||
0x65, 0x74, 0x68, 0x6f, 0x64, 0x49, 0x64, 0x12, 0x24, 0x0a, 0x0d, 0x53, 0x65, 0x72, 0x76, 0x69,
|
||||
0x63, 0x65, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d,
|
||||
0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x12, 0x18, 0x0a,
|
||||
0x07, 0x4e, 0x6f, 0x52, 0x65, 0x70, 0x6c, 0x79, 0x18, 0x04, 0x20, 0x01, 0x28, 0x08, 0x52, 0x07,
|
||||
0x4e, 0x6f, 0x52, 0x65, 0x70, 0x6c, 0x79, 0x12, 0x18, 0x0a, 0x07, 0x49, 0x6e, 0x50, 0x61, 0x72,
|
||||
0x61, 0x6d, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x49, 0x6e, 0x50, 0x61, 0x72, 0x61,
|
||||
0x6d, 0x22, 0x51, 0x0a, 0x11, 0x50, 0x42, 0x52, 0x70, 0x63, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
|
||||
0x73, 0x65, 0x44, 0x61, 0x74, 0x61, 0x12, 0x10, 0x0a, 0x03, 0x53, 0x65, 0x71, 0x18, 0x01, 0x20,
|
||||
0x01, 0x28, 0x04, 0x52, 0x03, 0x53, 0x65, 0x71, 0x12, 0x14, 0x0a, 0x05, 0x45, 0x72, 0x72, 0x6f,
|
||||
0x72, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x12, 0x14,
|
||||
0x0a, 0x05, 0x52, 0x65, 0x70, 0x6c, 0x79, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x52,
|
||||
0x65, 0x70, 0x6c, 0x79, 0x42, 0x07, 0x5a, 0x05, 0x2e, 0x3b, 0x72, 0x70, 0x63, 0x62, 0x06, 0x70,
|
||||
0x72, 0x6f, 0x74, 0x6f, 0x33,
|
||||
}
|
||||
|
||||
var (
|
||||
file_test_rpc_protorpc_proto_rawDescOnce sync.Once
|
||||
file_test_rpc_protorpc_proto_rawDescData = file_test_rpc_protorpc_proto_rawDesc
|
||||
)
|
||||
|
||||
func file_test_rpc_protorpc_proto_rawDescGZIP() []byte {
|
||||
file_test_rpc_protorpc_proto_rawDescOnce.Do(func() {
|
||||
file_test_rpc_protorpc_proto_rawDescData = protoimpl.X.CompressGZIP(file_test_rpc_protorpc_proto_rawDescData)
|
||||
})
|
||||
return file_test_rpc_protorpc_proto_rawDescData
|
||||
}
|
||||
|
||||
var file_test_rpc_protorpc_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
|
||||
var file_test_rpc_protorpc_proto_goTypes = []interface{}{
|
||||
(*PBRpcRequestData)(nil), // 0: rpc.PBRpcRequestData
|
||||
(*PBRpcResponseData)(nil), // 1: rpc.PBRpcResponseData
|
||||
}
|
||||
var file_test_rpc_protorpc_proto_depIdxs = []int32{
|
||||
0, // [0:0] is the sub-list for method output_type
|
||||
0, // [0:0] is the sub-list for method input_type
|
||||
0, // [0:0] is the sub-list for extension type_name
|
||||
0, // [0:0] is the sub-list for extension extendee
|
||||
0, // [0:0] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_test_rpc_protorpc_proto_init() }
|
||||
func file_test_rpc_protorpc_proto_init() {
|
||||
if File_test_rpc_protorpc_proto != nil {
|
||||
return
|
||||
}
|
||||
if !protoimpl.UnsafeEnabled {
|
||||
file_test_rpc_protorpc_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*PBRpcRequestData); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_test_rpc_protorpc_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*PBRpcResponseData); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
type x struct{}
|
||||
out := protoimpl.TypeBuilder{
|
||||
File: protoimpl.DescBuilder{
|
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
|
||||
RawDescriptor: file_test_rpc_protorpc_proto_rawDesc,
|
||||
NumEnums: 0,
|
||||
NumMessages: 2,
|
||||
NumExtensions: 0,
|
||||
NumServices: 0,
|
||||
},
|
||||
GoTypes: file_test_rpc_protorpc_proto_goTypes,
|
||||
DependencyIndexes: file_test_rpc_protorpc_proto_depIdxs,
|
||||
MessageInfos: file_test_rpc_protorpc_proto_msgTypes,
|
||||
}.Build()
|
||||
File_test_rpc_protorpc_proto = out.File
|
||||
file_test_rpc_protorpc_proto_rawDesc = nil
|
||||
file_test_rpc_protorpc_proto_goTypes = nil
|
||||
file_test_rpc_protorpc_proto_depIdxs = nil
|
||||
}
|
||||
@@ -1,8 +1,8 @@
|
||||
syntax = "proto3";
|
||||
package rpc;
|
||||
option go_package = "./rpc";
|
||||
option go_package = ".;rpc";
|
||||
|
||||
message GoGoPBRpcRequestData{
|
||||
message PBRpcRequestData{
|
||||
uint64 Seq = 1;
|
||||
uint32 RpcMethodId = 2;
|
||||
string ServiceMethod = 3;
|
||||
@@ -10,7 +10,7 @@ message GoGoPBRpcRequestData{
|
||||
bytes InParam = 5;
|
||||
}
|
||||
|
||||
message GoGoPBRpcResponseData{
|
||||
message PBRpcResponseData{
|
||||
uint64 Seq = 1;
|
||||
string Error = 2;
|
||||
bytes Reply = 3;
|
||||
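The hunks above rename the request/response messages from GoGoPBRpcRequestData/GoGoPBRpcResponseData to PBRpcRequestData/PBRpcResponseData and point go_package at the rpc package itself. After regeneration the messages are plain Go structs; a hedged usage sketch (the field values are illustrative, and the import path assumes the module layout shown in this diff):

```go
package main

import (
	"fmt"

	"github.com/duanhf2012/origin/rpc"
	"google.golang.org/protobuf/proto"
)

func main() {
	// Build a request the way PBProcessor.MakeRpcRequest would fill it in.
	req := &rpc.PBRpcRequestData{
		Seq:           1,
		RpcMethodId:   0,
		ServiceMethod: "RankService.RPC_UpsetRank",
		NoReply:       false,
		InParam:       []byte("payload"),
	}

	data, err := proto.Marshal(req)
	fmt.Println(len(data) > 0, err) // true <nil>

	out := &rpc.PBRpcRequestData{}
	fmt.Println(proto.Unmarshal(data, out), out.GetServiceMethod())
}
```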
rpc/rank.pb.go (4281 lines changed; file diff suppressed because it is too large)
@@ -2,19 +2,48 @@ syntax = "proto3";
|
||||
package rpc;
|
||||
option go_package = ".;rpc";
|
||||
|
||||
// RankData 排行数据
|
||||
message RankData {
|
||||
uint64 Key = 1; //数据主建
|
||||
repeated int64 SortData = 2; //参与排行的数据
|
||||
bytes Data = 3; //不参与排行的数据
|
||||
message SetSortAndExtendData{
|
||||
bool IsSortData = 1; //是否为排序字段,为true时,修改Sort字段,否则修改Extend数据
|
||||
int32 Pos = 2; //排序位置
|
||||
int64 Data = 3; //排序值
|
||||
}
|
||||
|
||||
//自增值
|
||||
message IncreaseRankData {
|
||||
uint64 RankId = 1; //排行榜的ID
|
||||
uint64 Key = 2; //数据主建
|
||||
repeated ExtendIncData Extend = 3; //扩展数据
|
||||
repeated int64 IncreaseSortData = 4;//自增排行数值
|
||||
repeated SetSortAndExtendData SetSortAndExtendData = 5;//设置排序数据值
|
||||
bool ReturnRankData = 6; //是否查找最新排名,否则不返回排行Rank字段
|
||||
|
||||
bool InsertDataOnNonExistent = 7; //为true时:存在不进行更新,不存在则插入InitData与InitSortData数据。为false时:忽略InitData与InitSortData数据,不做任何处理
|
||||
bytes InitData = 8; //不参与排行的数据
|
||||
repeated int64 InitSortData = 9; //参与排行的数据
|
||||
}
|
||||
|
||||
message IncreaseRankDataRet{
|
||||
RankPosData PosData = 1;
|
||||
}
|
||||
|
||||
//用于单独刷新排行榜数据
|
||||
message UpdateRankData {
|
||||
uint64 RankId = 1; //排行榜的ID
|
||||
uint64 Key = 2; //数据主建
|
||||
bytes Data = 3; //数据部分
|
||||
}
|
||||
|
||||
message UpdateRankDataRet {
|
||||
bool Ret = 1;
|
||||
}
|
||||
|
||||
// RankPosData 排行数据——查询返回
|
||||
message RankPosData {
|
||||
uint64 Key = 1; //数据主建
|
||||
uint64 Rank = 2; //名次
|
||||
repeated int64 SortData = 3; //参与排行的数据
|
||||
bytes Data = 4; //不参与排行的数据
|
||||
repeated int64 ExtendData = 5; //扩展数据
|
||||
}
|
||||
|
||||
// RankList 排行榜数据
|
||||
@@ -31,6 +60,22 @@ message RankList {
|
||||
message UpsetRankData {
|
||||
uint64 RankId = 1; //排行榜的ID
|
||||
repeated RankData RankDataList = 2; //排行数据
|
||||
bool FindNewRank = 3; //是否查找最新排名
|
||||
}
|
||||
|
||||
message ExtendIncData {
|
||||
int64 InitValue = 1;
|
||||
int64 IncreaseValue = 2;
|
||||
}
|
||||
|
||||
// RankData 排行数据
|
||||
message RankData {
|
||||
uint64 Key = 1; //数据主建
|
||||
repeated int64 SortData = 2; //参与排行的数据
|
||||
|
||||
bytes Data = 3; //不参与排行的数据
|
||||
|
||||
repeated ExtendIncData ExData = 4; //扩展增量数据
|
||||
}
|
||||
|
||||
// DeleteByKey 删除排行榜数据
|
||||
@@ -71,9 +116,15 @@ message RankDataList {
|
||||
RankPosData KeyRank = 3; //附带的Key查询排行结果信息
|
||||
}
|
||||
|
||||
message RankInfo{
|
||||
uint64 Key = 1;
|
||||
uint64 Rank = 2;
|
||||
}
|
||||
|
||||
// RankResult
|
||||
message RankResult {
|
||||
int32 AddCount = 1;//新增数量
|
||||
int32 ModifyCount = 2; //修改数量
|
||||
int32 RemoveCount = 3;//删除数量
|
||||
repeated RankInfo NewRank = 4; //新的排名名次,只有UpsetRankData.FindNewRank为true时才生效
|
||||
}
|
||||
|
||||
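IncreaseRankData above lets a caller bump sort values for a key in one round trip, and, when InsertDataOnNonExistent is set, seed a missing entry from InitData/InitSortData instead. Since rank.pb.go is suppressed in this view, the sketch below assumes the standard protoc-gen-go output for these fields; the concrete values are illustrative:

```go
package main

import (
	"fmt"

	"github.com/duanhf2012/origin/rpc"
)

func main() {
	// Bump the first sort value of key 42 by 10; if the key does not
	// exist yet, insert it with the given initial data instead.
	inc := &rpc.IncreaseRankData{
		RankId:                  1001,
		Key:                     42,
		IncreaseSortData:        []int64{10},
		ReturnRankData:          true,
		InsertDataOnNonExistent: true,
		InitSortData:            []int64{10},
		InitData:                []byte(`{"name":"player42"}`),
	}
	fmt.Println(inc.RankId, inc.Key, inc.IncreaseSortData)
}
```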
rpc/rclient.go (new file, 320 lines)
@@ -0,0 +1,320 @@
|
||||
package rpc
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/network"
|
||||
"math"
|
||||
"reflect"
|
||||
"runtime"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
)
|
||||
|
||||
//跨结点连接的Client
|
||||
type RClient struct {
|
||||
compressBytesLen int
|
||||
selfClient *Client
|
||||
network.TCPClient
|
||||
conn *network.TCPConn
|
||||
TriggerRpcConnEvent
|
||||
}
|
||||
|
||||
func (rc *RClient) IsConnected() bool {
|
||||
rc.Lock()
|
||||
defer rc.Unlock()
|
||||
|
||||
return rc.conn != nil && rc.conn.IsConnected() == true
|
||||
}
|
||||
|
||||
func (rc *RClient) GetConn() *network.TCPConn{
|
||||
rc.Lock()
|
||||
conn := rc.conn
|
||||
rc.Unlock()
|
||||
|
||||
return conn
|
||||
}
|
||||
|
||||
func (rc *RClient) SetConn(conn *network.TCPConn){
|
||||
rc.Lock()
|
||||
rc.conn = conn
|
||||
rc.Unlock()
|
||||
}
|
||||
|
||||
func (rc *RClient) Go(timeout time.Duration,rpcHandler IRpcHandler,noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
|
||||
_, processor := GetProcessorType(args)
|
||||
InParam, err := processor.Marshal(args)
|
||||
if err != nil {
|
||||
log.Error("Marshal is fail",log.ErrorAttr("error",err))
|
||||
call := MakeCall()
|
||||
call.DoError(err)
|
||||
return call
|
||||
}
|
||||
|
||||
return rc.RawGo(timeout,rpcHandler,processor, noReply, 0, serviceMethod, InParam, reply)
|
||||
}
|
||||
|
||||
func (rc *RClient) RawGo(timeout time.Duration,rpcHandler IRpcHandler,processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, rawArgs []byte, reply interface{}) *Call {
|
||||
call := MakeCall()
|
||||
call.ServiceMethod = serviceMethod
|
||||
call.Reply = reply
|
||||
call.Seq = rc.selfClient.generateSeq()
|
||||
call.TimeOut = timeout
|
||||
|
||||
request := MakeRpcRequest(processor, call.Seq, rpcMethodId, serviceMethod, noReply, rawArgs)
|
||||
bytes, err := processor.Marshal(request.RpcRequestData)
|
||||
ReleaseRpcRequest(request)
|
||||
|
||||
if err != nil {
|
||||
call.Seq = 0
|
||||
log.Error("marshal is fail",log.String("error",err.Error()))
|
||||
call.DoError(err)
|
||||
return call
|
||||
}
|
||||
|
||||
conn := rc.GetConn()
|
||||
if conn == nil || conn.IsConnected()==false {
|
||||
call.Seq = 0
|
||||
sErr := errors.New(serviceMethod + " was called failed,rpc client is disconnect")
|
||||
log.Error("conn is disconnect",log.String("error",sErr.Error()))
|
||||
call.DoError(sErr)
|
||||
return call
|
||||
}
|
||||
|
||||
var compressBuff[]byte
|
||||
bCompress := uint8(0)
|
||||
if rc.compressBytesLen > 0 && len(bytes) >= rc.compressBytesLen {
|
||||
var cErr error
|
||||
compressBuff,cErr = compressor.CompressBlock(bytes)
|
||||
if cErr != nil {
|
||||
call.Seq = 0
|
||||
log.Error("compress fail",log.String("error",cErr.Error()))
|
||||
call.DoError(cErr)
|
||||
return call
|
||||
}
|
||||
if len(compressBuff) < len(bytes) {
|
||||
bytes = compressBuff
|
||||
bCompress = 1<<7
|
||||
}
|
||||
}
|
||||
|
||||
if noReply == false {
|
||||
rc.selfClient.AddPending(call)
|
||||
}
|
||||
|
||||
err = conn.WriteMsg([]byte{uint8(processor.GetProcessorType())|bCompress}, bytes)
|
||||
if cap(compressBuff) >0 {
|
||||
compressor.CompressBufferCollection(compressBuff)
|
||||
}
|
||||
if err != nil {
|
||||
rc.selfClient.RemovePending(call.Seq)
|
||||
log.Error("WiteMsg is fail",log.ErrorAttr("error",err))
|
||||
call.Seq = 0
|
||||
call.DoError(err)
|
||||
}
|
||||
|
||||
return call
|
||||
}
|
||||
|
||||
|
||||
func (rc *RClient) AsyncCall(timeout time.Duration,rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{},cancelable bool) (CancelRpc,error) {
|
||||
cancelRpc,err := rc.asyncCall(timeout,rpcHandler, serviceMethod, callback, args, replyParam,cancelable)
|
||||
if err != nil {
|
||||
callback.Call([]reflect.Value{reflect.ValueOf(replyParam), reflect.ValueOf(err)})
|
||||
}
|
||||
|
||||
return cancelRpc,nil
|
||||
}
|
||||
|
||||
func (rc *RClient) asyncCall(timeout time.Duration,rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{},cancelable bool) (CancelRpc,error) {
|
||||
processorType, processor := GetProcessorType(args)
|
||||
InParam, herr := processor.Marshal(args)
|
||||
if herr != nil {
|
||||
return emptyCancelRpc,herr
|
||||
}
|
||||
|
||||
seq := rc.selfClient.generateSeq()
|
||||
request := MakeRpcRequest(processor, seq, 0, serviceMethod, false, InParam)
|
||||
bytes, err := processor.Marshal(request.RpcRequestData)
|
||||
ReleaseRpcRequest(request)
|
||||
if err != nil {
|
||||
return emptyCancelRpc,err
|
||||
}
|
||||
|
||||
conn := rc.GetConn()
|
||||
if conn == nil || conn.IsConnected()==false {
|
||||
return emptyCancelRpc,errors.New("Rpc server is disconnect,call " + serviceMethod)
|
||||
}
|
||||
|
||||
var compressBuff[]byte
|
||||
bCompress := uint8(0)
|
||||
if rc.compressBytesLen>0 &&len(bytes) >= rc.compressBytesLen {
|
||||
var cErr error
|
||||
compressBuff,cErr = compressor.CompressBlock(bytes)
|
||||
if cErr != nil {
|
||||
return emptyCancelRpc,cErr
|
||||
}
|
||||
|
||||
if len(compressBuff) < len(bytes) {
|
||||
bytes = compressBuff
|
||||
bCompress = 1<<7
|
||||
}
|
||||
}
|
||||
|
||||
call := MakeCall()
|
||||
call.Reply = replyParam
|
||||
call.callback = &callback
|
||||
call.rpcHandler = rpcHandler
|
||||
call.ServiceMethod = serviceMethod
|
||||
call.Seq = seq
|
||||
call.TimeOut = timeout
|
||||
rc.selfClient.AddPending(call)
|
||||
|
||||
err = conn.WriteMsg([]byte{uint8(processorType)|bCompress}, bytes)
|
||||
if cap(compressBuff) >0 {
|
||||
compressor.CompressBufferCollection(compressBuff)
|
||||
}
|
||||
if err != nil {
|
||||
rc.selfClient.RemovePending(call.Seq)
|
||||
ReleaseCall(call)
|
||||
return emptyCancelRpc,err
|
||||
}
|
||||
|
||||
if cancelable {
|
||||
rpcCancel := RpcCancel{CallSeq:seq,Cli: rc.selfClient}
|
||||
return rpcCancel.CancelRpc,nil
|
||||
}
|
||||
|
||||
return emptyCancelRpc,nil
|
||||
}
|
||||
|
||||
func (rc *RClient) Run() {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
buf := make([]byte, 4096)
|
||||
l := runtime.Stack(buf, false)
|
||||
errString := fmt.Sprint(r)
|
||||
log.Dump(string(buf[:l]),log.String("error",errString))
|
||||
}
|
||||
}()
|
||||
|
||||
rc.TriggerRpcConnEvent(true, rc.selfClient.GetClientId(), rc.selfClient.GetNodeId())
|
||||
for {
|
||||
bytes, err := rc.conn.ReadMsg()
|
||||
if err != nil {
|
||||
log.Error("rclient read msg is failed",log.ErrorAttr("error",err))
|
||||
return
|
||||
}
|
||||
|
||||
bCompress := (bytes[0]>>7) > 0
|
||||
processor := GetProcessor(bytes[0]&0x7f)
|
||||
if processor == nil {
|
||||
rc.conn.ReleaseReadMsg(bytes)
|
||||
log.Error("cannot find process",log.Uint8("process type",bytes[0]&0x7f))
|
||||
return
|
||||
}
|
||||
|
||||
//1.解析head
|
||||
response := RpcResponse{}
|
||||
response.RpcResponseData = processor.MakeRpcResponse(0, "", nil)
|
||||
|
||||
//解压缩
|
||||
byteData := bytes[1:]
|
||||
var compressBuff []byte
|
||||
|
||||
if bCompress == true {
|
||||
var unCompressErr error
|
||||
compressBuff,unCompressErr = compressor.UncompressBlock(byteData)
|
||||
if unCompressErr!= nil {
|
||||
rc.conn.ReleaseReadMsg(bytes)
|
||||
log.Error("uncompressBlock failed",log.ErrorAttr("error",unCompressErr))
|
||||
return
|
||||
}
|
||||
byteData = compressBuff
|
||||
}
|
||||
|
||||
err = processor.Unmarshal(byteData, response.RpcResponseData)
|
||||
if cap(compressBuff) > 0 {
|
||||
compressor.UnCompressBufferCollection(compressBuff)
|
||||
}
|
||||
|
||||
rc.conn.ReleaseReadMsg(bytes)
|
||||
if err != nil {
|
||||
processor.ReleaseRpcResponse(response.RpcResponseData)
|
||||
log.Error("rpcClient Unmarshal head error",log.ErrorAttr("error",err))
|
||||
continue
|
||||
}
|
||||
|
||||
v := rc.selfClient.RemovePending(response.RpcResponseData.GetSeq())
|
||||
if v == nil {
|
||||
log.Error("rpcClient cannot find seq",log.Uint64("seq",response.RpcResponseData.GetSeq()))
|
||||
} else {
|
||||
v.Err = nil
|
||||
if len(response.RpcResponseData.GetReply()) > 0 {
|
||||
err = processor.Unmarshal(response.RpcResponseData.GetReply(), v.Reply)
|
||||
if err != nil {
|
||||
log.Error("rpcClient Unmarshal body failed",log.ErrorAttr("error",err))
|
||||
v.Err = err
|
||||
}
|
||||
}
|
||||
|
||||
if response.RpcResponseData.GetErr() != nil {
|
||||
v.Err = response.RpcResponseData.GetErr()
|
||||
}
|
||||
|
||||
if v.callback != nil && v.callback.IsValid() {
|
||||
v.rpcHandler.PushRpcResponse(v)
|
||||
} else {
|
||||
v.done <- v
|
||||
}
|
||||
}
|
||||
|
||||
processor.ReleaseRpcResponse(response.RpcResponseData)
|
||||
}
|
||||
}
|
||||
|
||||
func (rc *RClient) OnClose() {
|
||||
rc.TriggerRpcConnEvent(false, rc.selfClient.GetClientId(), rc.selfClient.GetNodeId())
|
||||
}
|
||||
|
||||
func NewRClient(nodeId int, addr string, maxRpcParamLen uint32,compressBytesLen int,triggerRpcConnEvent TriggerRpcConnEvent) *Client{
|
||||
client := &Client{}
|
||||
client.clientId = atomic.AddUint32(&clientSeq, 1)
|
||||
client.nodeId = nodeId
|
||||
client.maxCheckCallRpcCount = DefaultMaxCheckCallRpcCount
|
||||
client.callRpcTimeout = DefaultRpcTimeout
|
||||
c:= &RClient{}
|
||||
c.compressBytesLen = compressBytesLen
|
||||
c.selfClient = client
|
||||
c.Addr = addr
|
||||
c.ConnectInterval = DefaultConnectInterval
|
||||
c.PendingWriteNum = DefaultMaxPendingWriteNum
|
||||
c.AutoReconnect = true
|
||||
c.TriggerRpcConnEvent = triggerRpcConnEvent
|
||||
c.ConnNum = DefaultRpcConnNum
|
||||
c.LenMsgLen = DefaultRpcLenMsgLen
|
||||
c.MinMsgLen = DefaultRpcMinMsgLen
|
||||
c.ReadDeadline = Default_ReadWriteDeadline
|
||||
c.WriteDeadline = Default_ReadWriteDeadline
|
||||
c.LittleEndian = LittleEndian
|
||||
c.NewAgent = client.NewClientAgent
|
||||
|
||||
if maxRpcParamLen > 0 {
|
||||
c.MaxMsgLen = maxRpcParamLen
|
||||
} else {
|
||||
c.MaxMsgLen = math.MaxUint32
|
||||
}
|
||||
client.IRealClient = c
|
||||
client.InitPending()
|
||||
go client.checkRpcCallTimeout()
|
||||
c.Start()
|
||||
return client
|
||||
}
|
||||
|
||||
|
||||
func (rc *RClient) Close(waitDone bool) {
|
||||
rc.TCPClient.Close(waitDone)
|
||||
rc.selfClient.cleanPending()
|
||||
}
|
||||
|
||||
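RClient above packs the processor type into the low 7 bits of the first byte of every message and sets the high bit when the payload was block-compressed (compare bCompress = 1<<7 on the write path with bytes[0]>>7 and bytes[0]&0x7f in Run). A standalone sketch of that header-byte layout:

```go
package main

import "fmt"

// Header byte layout used by RClient: bit 7 marks a compressed payload,
// bits 0-6 carry the processor type.
const compressFlag = uint8(1) << 7

func packHeader(processorType uint8, compressed bool) uint8 {
	h := processorType & 0x7f
	if compressed {
		h |= compressFlag
	}
	return h
}

func unpackHeader(h uint8) (processorType uint8, compressed bool) {
	return h & 0x7f, h>>7 > 0
}

func main() {
	h := packHeader(1, true) // 1 == RpcProcessorPB in the diff above
	pt, c := unpackHeader(h)
	fmt.Println(pt, c) // 1 true
}
```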
rpc/rpc.go (28 lines changed)
@@ -51,12 +51,6 @@ type IRpcResponseData interface {
|
||||
GetReply() []byte
|
||||
}
|
||||
|
||||
type IRawInputArgs interface {
|
||||
GetRawData() []byte //获取原始数据
|
||||
DoFree() //处理完成,回收内存
|
||||
DoEscape() //逃逸,GC自动回收
|
||||
}
|
||||
|
||||
type RpcHandleFinder interface {
|
||||
FindRpcHandler(serviceMethod string) IRpcHandler
|
||||
}
|
||||
@@ -74,7 +68,16 @@ type Call struct {
|
||||
connId int
|
||||
callback *reflect.Value
|
||||
rpcHandler IRpcHandler
|
||||
callTime time.Time
|
||||
TimeOut time.Duration
|
||||
}
|
||||
|
||||
type RpcCancel struct {
|
||||
Cli *Client
|
||||
CallSeq uint64
|
||||
}
|
||||
|
||||
func (rc *RpcCancel) CancelRpc(){
|
||||
rc.Cli.RemovePending(rc.CallSeq)
|
||||
}
|
||||
|
||||
func (slf *RpcRequest) Clear() *RpcRequest{
|
||||
@@ -108,6 +111,15 @@ func (rpcResponse *RpcResponse) Clear() *RpcResponse{
|
||||
return rpcResponse
|
||||
}
|
||||
|
||||
func (call *Call) DoError(err error){
|
||||
call.Err = err
|
||||
call.done <- call
|
||||
}
|
||||
|
||||
func (call *Call) DoOK(){
|
||||
call.done <- call
|
||||
}
|
||||
|
||||
func (call *Call) Clear() *Call{
|
||||
call.Seq = 0
|
||||
call.ServiceMethod = ""
|
||||
@@ -121,6 +133,8 @@ func (call *Call) Clear() *Call{
|
||||
call.connId = 0
|
||||
call.callback = nil
|
||||
call.rpcHandler = nil
|
||||
call.TimeOut = 0
|
||||
|
||||
return call
|
||||
}
|
||||
|
||||
|
||||
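The rpc.go changes above give every Call a TimeOut and add the DoError/DoOK helpers, which both complete the call by pushing it onto its done channel; RpcCancel simply removes the pending entry so the callback never fires. A self-contained mini version of that completion flow (the miniCall type is illustrative, not origin's Call):

```go
package main

import (
	"errors"
	"fmt"
)

// miniCall mirrors the completion flow of rpc.Call: DoError and DoOK both
// push the call onto its done channel; Done blocks until one of them fires.
type miniCall struct {
	Err  error
	done chan *miniCall
}

func newMiniCall() *miniCall { return &miniCall{done: make(chan *miniCall, 1)} }

func (c *miniCall) DoError(err error) { c.Err = err; c.done <- c }
func (c *miniCall) DoOK()             { c.done <- c }
func (c *miniCall) Done() *miniCall   { return <-c.done }

func main() {
	ok := newMiniCall()
	go ok.DoOK()
	fmt.Println(ok.Done().Err) // <nil>

	bad := newMiniCall()
	go bad.DoError(errors.New("rpc client is disconnect"))
	fmt.Println(bad.Done().Err) // rpc client is disconnect
}
```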
@@ -6,17 +6,18 @@ import (
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"reflect"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"strings"
|
||||
"unicode"
|
||||
"unicode/utf8"
|
||||
"time"
|
||||
)
|
||||
|
||||
const maxClusterNode int = 128
|
||||
|
||||
type FuncRpcClient func(nodeId int, serviceMethod string, client []*Client) (error, int)
|
||||
type FuncRpcClient func(nodeId int, serviceMethod string,filterRetire bool, client []*Client) (error, int)
|
||||
type FuncRpcServer func() *Server
|
||||
|
||||
|
||||
var nilError = reflect.Zero(reflect.TypeOf((*error)(nil)).Elem())
|
||||
|
||||
type RpcError string
|
||||
@@ -45,10 +46,7 @@ type RpcMethodInfo struct {
|
||||
rpcProcessorType RpcProcessorType
|
||||
}
|
||||
|
||||
type RawRpcCallBack interface {
|
||||
Unmarshal(data []byte) (interface{}, error)
|
||||
CB(data interface{})
|
||||
}
|
||||
type RawRpcCallBack func(rawData []byte)
|
||||
|
||||
type IRpcHandlerChannel interface {
|
||||
PushRpcResponse(call *Call) error
|
||||
@@ -67,7 +65,7 @@ type RpcHandler struct {
|
||||
pClientList []*Client
|
||||
}
|
||||
|
||||
type TriggerRpcEvent func(bConnect bool, clientSeq uint32, nodeId int)
|
||||
type TriggerRpcConnEvent func(bConnect bool, clientSeq uint32, nodeId int)
|
||||
type INodeListener interface {
|
||||
OnNodeConnected(nodeId int)
|
||||
OnNodeDisconnect(nodeId int)
|
||||
@@ -75,9 +73,12 @@ type INodeListener interface {
|
||||
|
||||
type IDiscoveryServiceListener interface {
|
||||
OnDiscoveryService(nodeId int, serviceName []string)
|
||||
OnUnDiscoveryService(nodeId int, serviceName []string)
|
||||
OnUnDiscoveryService(nodeId int)
|
||||
}
|
||||
|
||||
type CancelRpc func()
|
||||
func emptyCancelRpc(){}
|
||||
|
||||
type IRpcHandler interface {
|
||||
IRpcHandlerChannel
|
||||
GetName() string
|
||||
@@ -86,16 +87,24 @@ type IRpcHandler interface {
|
||||
HandlerRpcRequest(request *RpcRequest)
|
||||
HandlerRpcResponseCB(call *Call)
|
||||
CallMethod(client *Client,ServiceMethod string, param interface{},callBack reflect.Value, reply interface{}) error
|
||||
AsyncCall(serviceMethod string, args interface{}, callback interface{}) error
|
||||
|
||||
Call(serviceMethod string, args interface{}, reply interface{}) error
|
||||
Go(serviceMethod string, args interface{}) error
|
||||
AsyncCallNode(nodeId int, serviceMethod string, args interface{}, callback interface{}) error
|
||||
CallNode(nodeId int, serviceMethod string, args interface{}, reply interface{}) error
|
||||
AsyncCall(serviceMethod string, args interface{}, callback interface{}) error
|
||||
AsyncCallNode(nodeId int, serviceMethod string, args interface{}, callback interface{}) error
|
||||
|
||||
CallWithTimeout(timeout time.Duration,serviceMethod string, args interface{}, reply interface{}) error
|
||||
CallNodeWithTimeout(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, reply interface{}) error
|
||||
AsyncCallWithTimeout(timeout time.Duration,serviceMethod string, args interface{}, callback interface{}) (CancelRpc,error)
|
||||
AsyncCallNodeWithTimeout(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, callback interface{}) (CancelRpc,error)
|
||||
|
||||
Go(serviceMethod string, args interface{}) error
|
||||
GoNode(nodeId int, serviceMethod string, args interface{}) error
|
||||
RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs IRawInputArgs) error
|
||||
RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs []byte) error
|
||||
CastGo(serviceMethod string, args interface{}) error
|
||||
IsSingleCoroutine() bool
|
||||
UnmarshalInParam(rpcProcessor IRpcProcessor, serviceMethod string, rawRpcMethodId uint32, inParam []byte) (interface{}, error)
|
||||
GetRpcServer() FuncRpcServer
|
||||
}
|
||||
|
||||
func reqHandlerNull(Returns interface{}, Err RpcError) {
|
||||
@@ -140,7 +149,7 @@ func (handler *RpcHandler) isExportedOrBuiltinType(t reflect.Type) bool {
|
||||
|
||||
func (handler *RpcHandler) suitableMethods(method reflect.Method) error {
|
||||
//只有RPC_开头的才能被调用
|
||||
if strings.Index(method.Name, "RPC_") != 0 {
|
||||
if strings.Index(method.Name, "RPC_") != 0 && strings.Index(method.Name, "RPC") != 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -206,7 +215,7 @@ func (handler *RpcHandler) HandlerRpcResponseCB(call *Call) {
|
||||
buf := make([]byte, 4096)
|
||||
l := runtime.Stack(buf, false)
|
||||
errString := fmt.Sprint(r)
|
||||
log.SError("core dump info[", errString, "]\n", string(buf[:l]))
|
||||
log.Dump(string(buf[:l]),log.String("error",errString))
|
||||
}
|
||||
}()
|
||||
|
||||
@@ -228,7 +237,7 @@ func (handler *RpcHandler) HandlerRpcRequest(request *RpcRequest) {
|
||||
buf := make([]byte, 4096)
|
||||
l := runtime.Stack(buf, false)
|
||||
errString := fmt.Sprint(r)
|
||||
log.SError("Handler Rpc ", request.RpcRequestData.GetServiceMethod(), " Core dump info[", errString, "]\n", string(buf[:l]))
|
||||
log.Dump(string(buf[:l]),log.String("error",errString))
|
||||
rpcErr := RpcError("call error : core dumps")
|
||||
if request.requestHandle != nil {
|
||||
request.requestHandle(nil, rpcErr)
|
||||
@@ -241,11 +250,16 @@ func (handler *RpcHandler) HandlerRpcRequest(request *RpcRequest) {
|
||||
if rawRpcId > 0 {
|
||||
v, ok := handler.mapRawFunctions[rawRpcId]
|
||||
if ok == false {
|
||||
log.SError("RpcHandler cannot find request rpc id", rawRpcId)
|
||||
log.Error("RpcHandler cannot find request rpc id",log.Uint32("rawRpcId",rawRpcId))
|
||||
return
|
||||
}
|
||||
rawData,ok := request.inParam.([]byte)
|
||||
if ok == false {
|
||||
log.Error("RpcHandler cannot convert",log.String("RpcHandlerName",handler.rpcHandler.GetName()),log.Uint32("rawRpcId",rawRpcId))
|
||||
return
|
||||
}
|
||||
|
||||
v.CB(request.inParam)
|
||||
v(rawData)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -253,7 +267,7 @@ func (handler *RpcHandler) HandlerRpcRequest(request *RpcRequest) {
|
||||
v, ok := handler.mapFunctions[request.RpcRequestData.GetServiceMethod()]
|
||||
if ok == false {
|
||||
err := "RpcHandler " + handler.rpcHandler.GetName() + "cannot find " + request.RpcRequestData.GetServiceMethod()
|
||||
log.SError(err)
|
||||
log.Error("HandlerRpcRequest cannot find serviceMethod",log.String("RpcHandlerName",handler.rpcHandler.GetName()),log.String("serviceMethod",request.RpcRequestData.GetServiceMethod()))
|
||||
if request.requestHandle != nil {
|
||||
request.requestHandle(nil, RpcError(err))
|
||||
}
|
||||
@@ -284,18 +298,20 @@ func (handler *RpcHandler) HandlerRpcRequest(request *RpcRequest) {
|
||||
paramList = append(paramList, oParam) //输出参数
|
||||
} else if request.requestHandle != nil && v.hasResponder == false { //调用方有返回值,但被调用函数没有返回参数
|
||||
rErr := "Call Rpc " + request.RpcRequestData.GetServiceMethod() + " without return parameter!"
|
||||
log.SError(rErr)
|
||||
log.Error("call serviceMethod without return parameter",log.String("serviceMethod",request.RpcRequestData.GetServiceMethod()))
|
||||
request.requestHandle(nil, RpcError(rErr))
|
||||
return
|
||||
}
|
||||
|
||||
requestHanle := request.requestHandle
|
||||
returnValues := v.method.Func.Call(paramList)
|
||||
errInter := returnValues[0].Interface()
|
||||
if errInter != nil {
|
||||
err = errInter.(error)
|
||||
}
|
||||
|
||||
if request.requestHandle != nil && v.hasResponder == false {
|
||||
request.requestHandle(oParam.Interface(), ConvertError(err))
|
||||
if v.hasResponder == false && requestHanle != nil {
|
||||
requestHanle(oParam.Interface(), ConvertError(err))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -304,7 +320,7 @@ func (handler *RpcHandler) CallMethod(client *Client,ServiceMethod string, param
|
||||
v, ok := handler.mapFunctions[ServiceMethod]
|
||||
if ok == false {
|
||||
err = errors.New("RpcHandler " + handler.rpcHandler.GetName() + " cannot find" + ServiceMethod)
|
||||
log.SError(err.Error())
|
||||
log.Error("CallMethod cannot find serviceMethod",log.String("rpcHandlerName",handler.rpcHandler.GetName()),log.String("serviceMethod",ServiceMethod))
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -318,7 +334,8 @@ func (handler *RpcHandler) CallMethod(client *Client,ServiceMethod string, param
|
||||
pCall.callback = &callBack
|
||||
pCall.Seq = client.generateSeq()
|
||||
callSeq = pCall.Seq
|
||||
|
||||
pCall.TimeOut = DefaultRpcTimeout
|
||||
pCall.ServiceMethod = ServiceMethod
|
||||
client.AddPending(pCall)
|
||||
|
||||
//有返回值时
|
||||
@@ -327,7 +344,7 @@ func (handler *RpcHandler) CallMethod(client *Client,ServiceMethod string, param
|
||||
hander :=func(Returns interface{}, Err RpcError) {
|
||||
rpcCall := client.RemovePending(callSeq)
|
||||
if rpcCall == nil {
|
||||
log.SError("cannot find call seq ",callSeq)
|
||||
log.Error("cannot find call seq",log.Uint64("seq",callSeq))
|
||||
return
|
||||
}
|
||||
|
||||
@@ -411,52 +428,24 @@ func (handler *RpcHandler) CallMethod(client *Client,ServiceMethod string, param
|
||||
|
||||
func (handler *RpcHandler) goRpc(processor IRpcProcessor, bCast bool, nodeId int, serviceMethod string, args interface{}) error {
|
||||
var pClientList [maxClusterNode]*Client
|
||||
err, count := handler.funcRpcClient(nodeId, serviceMethod, pClientList[:])
|
||||
err, count := handler.funcRpcClient(nodeId, serviceMethod,false, pClientList[:])
|
||||
if count == 0 {
|
||||
if err != nil {
|
||||
log.SError("Call ", serviceMethod, " is error:", err.Error())
|
||||
log.Error("call serviceMethod is failed",log.String("serviceMethod",serviceMethod),log.ErrorAttr("error",err))
|
||||
} else {
|
||||
log.SError("Can not find ", serviceMethod)
|
||||
log.Error("cannot find serviceMethod",log.String("serviceMethod",serviceMethod))
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
if count > 1 && bCast == false {
|
||||
log.SError("Cannot call %s more then 1 node!", serviceMethod)
|
||||
log.Error("cannot call serviceMethod more then 1 node",log.String("serviceMethod",serviceMethod))
|
||||
return errors.New("cannot call more then 1 node")
|
||||
}
|
||||
|
||||
//2.rpcClient调用
|
||||
//如果调用本结点服务
|
||||
for i := 0; i < count; i++ {
|
||||
if pClientList[i].bSelfNode == true {
|
||||
pLocalRpcServer := handler.funcRpcServer()
|
||||
//判断是否是同一服务
|
||||
findIndex := strings.Index(serviceMethod, ".")
|
||||
if findIndex == -1 {
|
||||
sErr := errors.New("Call serviceMethod " + serviceMethod + " is error!")
|
||||
log.SError(sErr.Error())
|
||||
err = sErr
|
||||
|
||||
continue
|
||||
}
|
||||
serviceName := serviceMethod[:findIndex]
|
||||
if serviceName == handler.rpcHandler.GetName() { //自己服务调用
|
||||
//调用自己rpcHandler处理器
|
||||
return pLocalRpcServer.myselfRpcHandlerGo(pClientList[i],serviceName, serviceMethod, args, requestHandlerNull,nil)
|
||||
}
|
||||
//其他的rpcHandler的处理器
|
||||
pCall := pLocalRpcServer.selfNodeRpcHandlerGo(processor, pClientList[i], true, serviceName, 0, serviceMethod, args, nil, nil)
|
||||
if pCall.Err != nil {
|
||||
err = pCall.Err
|
||||
}
|
||||
pClientList[i].RemovePending(pCall.Seq)
|
||||
ReleaseCall(pCall)
|
||||
continue
|
||||
}
|
||||
|
||||
//跨node调用
|
||||
pCall := pClientList[i].Go(true, serviceMethod, args, nil)
|
||||
pCall := pClientList[i].Go(DefaultRpcTimeout,handler.rpcHandler,true, serviceMethod, args, nil)
|
||||
if pCall.Err != nil {
|
||||
err = pCall.Err
|
||||
}
|
||||
@@ -467,132 +456,76 @@ func (handler *RpcHandler) goRpc(processor IRpcProcessor, bCast bool, nodeId int
|
||||
return err
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) callRpc(nodeId int, serviceMethod string, args interface{}, reply interface{}) error {
|
||||
func (handler *RpcHandler) callRpc(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, reply interface{}) error {
|
||||
var pClientList [maxClusterNode]*Client
|
||||
err, count := handler.funcRpcClient(nodeId, serviceMethod, pClientList[:])
|
||||
err, count := handler.funcRpcClient(nodeId, serviceMethod,false, pClientList[:])
|
||||
if err != nil {
|
||||
log.SError("Call serviceMethod is error:", err.Error())
|
||||
log.Error("Call serviceMethod is failed",log.ErrorAttr("error",err))
|
||||
return err
|
||||
} else if count <= 0 {
|
||||
err = errors.New("Call serviceMethod is error:cannot find " + serviceMethod)
|
||||
log.SError(err.Error())
|
||||
log.Error("cannot find serviceMethod",log.String("serviceMethod",serviceMethod))
|
||||
return err
|
||||
} else if count > 1 {
|
||||
log.SError("Cannot call more then 1 node!")
|
||||
log.Error("Cannot call more then 1 node!",log.String("serviceMethod",serviceMethod))
|
||||
return errors.New("cannot call more then 1 node")
|
||||
}
|
||||
|
||||
//2.rpcClient调用
|
||||
//如果调用本结点服务
|
||||
pClient := pClientList[0]
|
||||
if pClient.bSelfNode == true {
|
||||
pLocalRpcServer := handler.funcRpcServer()
|
||||
//判断是否是同一服务
|
||||
findIndex := strings.Index(serviceMethod, ".")
|
||||
if findIndex == -1 {
|
||||
err := errors.New("Call serviceMethod " + serviceMethod + "is error!")
|
||||
log.SError(err.Error())
|
||||
return err
|
||||
}
|
||||
serviceName := serviceMethod[:findIndex]
|
||||
if serviceName == handler.rpcHandler.GetName() { //自己服务调用
|
||||
//调用自己rpcHandler处理器
|
||||
return pLocalRpcServer.myselfRpcHandlerGo(pClient,serviceName, serviceMethod, args,requestHandlerNull, reply)
|
||||
}
|
||||
//其他的rpcHandler的处理器
|
||||
pCall := pLocalRpcServer.selfNodeRpcHandlerGo(nil, pClient, false, serviceName, 0, serviceMethod, args, reply, nil)
|
||||
err = pCall.Done().Err
|
||||
pClient.RemovePending(pCall.Seq)
|
||||
ReleaseCall(pCall)
|
||||
return err
|
||||
}
|
||||
pCall := pClient.Go(timeout,handler.rpcHandler,false, serviceMethod, args, reply)
|
||||
|
||||
//跨node调用
|
||||
pCall := pClient.Go(false, serviceMethod, args, reply)
|
||||
if pCall.Err != nil {
|
||||
err = pCall.Err
|
||||
ReleaseCall(pCall)
|
||||
return err
|
||||
}
|
||||
err = pCall.Done().Err
|
||||
pClient.RemovePending(pCall.Seq)
|
||||
ReleaseCall(pCall)
|
||||
return err
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) asyncCallRpc(nodeId int, serviceMethod string, args interface{}, callback interface{}) error {
|
||||
func (handler *RpcHandler) asyncCallRpc(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, callback interface{}) (CancelRpc,error) {
|
||||
fVal := reflect.ValueOf(callback)
|
||||
if fVal.Kind() != reflect.Func {
|
||||
err := errors.New("call " + serviceMethod + " input callback param is error!")
|
||||
log.SError(err.Error())
|
||||
return err
|
||||
log.Error("input callback param is error",log.String("serviceMethod",serviceMethod))
|
||||
return emptyCancelRpc,err
|
||||
}
|
||||
|
||||
if fVal.Type().NumIn() != 2 {
|
||||
err := errors.New("call " + serviceMethod + " callback param function is error!")
|
||||
log.SError(err.Error())
|
||||
return err
|
||||
log.Error("callback param function is error",log.String("serviceMethod",serviceMethod))
|
||||
return emptyCancelRpc,err
|
||||
}
|
||||
|
||||
if fVal.Type().In(0).Kind() != reflect.Ptr || fVal.Type().In(1).String() != "error" {
|
||||
err := errors.New("call " + serviceMethod + " callback param function is error!")
|
||||
log.SError(err.Error())
|
||||
return err
|
||||
log.Error("callback param function is error",log.String("serviceMethod",serviceMethod))
|
||||
return emptyCancelRpc,err
|
||||
}
|
||||
|
||||
reply := reflect.New(fVal.Type().In(0).Elem()).Interface()
|
||||
var pClientList [maxClusterNode]*Client
|
||||
err, count := handler.funcRpcClient(nodeId, serviceMethod, pClientList[:])
|
||||
var pClientList [2]*Client
|
||||
err, count := handler.funcRpcClient(nodeId, serviceMethod,false, pClientList[:])
|
||||
if count == 0 || err != nil {
|
||||
strNodeId := strconv.Itoa(nodeId)
|
||||
if err == nil {
|
||||
err = errors.New("cannot find rpcClient from nodeId " + strNodeId + " " + serviceMethod)
|
||||
if nodeId > 0 {
|
||||
err = fmt.Errorf("cannot find %s from nodeId %d",serviceMethod,nodeId)
|
||||
}else {
|
||||
err = fmt.Errorf("No %s service found in the origin network",serviceMethod)
|
||||
}
|
||||
}
|
||||
fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
|
||||
log.SError("Call serviceMethod is error:", err.Error())
|
||||
return nil
|
||||
log.Error("cannot find serviceMethod from node",log.String("serviceMethod",serviceMethod),log.Int("nodeId",nodeId))
|
||||
return emptyCancelRpc,nil
|
||||
}
|
||||
|
||||
if count > 1 {
|
||||
err := errors.New("cannot call more then 1 node")
|
||||
fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
|
||||
log.SError(err.Error())
|
||||
return nil
|
||||
log.Error("cannot call more then 1 node",log.String("serviceMethod",serviceMethod))
|
||||
return emptyCancelRpc,nil
|
||||
}
|
||||
|
||||
//2.rpcClient调用
|
||||
//如果调用本结点服务
|
||||
pClient := pClientList[0]
|
||||
if pClient.bSelfNode == true {
|
||||
pLocalRpcServer := handler.funcRpcServer()
|
||||
//判断是否是同一服务
|
||||
findIndex := strings.Index(serviceMethod, ".")
|
||||
if findIndex == -1 {
|
||||
err := errors.New("Call serviceMethod " + serviceMethod + " is error!")
|
||||
fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
|
||||
log.SError(err.Error())
|
||||
return nil
|
||||
}
|
||||
serviceName := serviceMethod[:findIndex]
|
||||
//调用自己rpcHandler处理器
|
||||
if serviceName == handler.rpcHandler.GetName() { //自己服务调用
|
||||
return pLocalRpcServer.myselfRpcHandlerGo(pClient,serviceName, serviceMethod, args,fVal ,reply)
|
||||
}
|
||||
|
||||
//其他的rpcHandler的处理器
|
||||
err = pLocalRpcServer.selfNodeRpcHandlerAsyncGo(pClient, handler, false, serviceName, serviceMethod, args, reply, fVal)
|
||||
if err != nil {
|
||||
fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
//跨node调用
|
||||
err = pClient.AsyncCall(handler, serviceMethod, fVal, args, reply)
|
||||
if err != nil {
|
||||
fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
|
||||
}
|
||||
return nil
|
||||
return pClientList[0].AsyncCall(timeout,handler.rpcHandler, serviceMethod, fVal, args, reply,false)
|
||||
}
|
||||
|
||||
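asyncCallRpc above validates the callback with reflection before dispatching: it must be a func taking exactly a pointer reply and an error. A standalone sketch of that check (QueryRes is a hypothetical reply type):

```go
package main

import (
	"fmt"
	"reflect"
)

// validateCallback mirrors the checks in asyncCallRpc: the callback must be
// a function of the form func(*Reply, error).
func validateCallback(cb interface{}) error {
	v := reflect.ValueOf(cb)
	if v.Kind() != reflect.Func {
		return fmt.Errorf("callback is not a function")
	}
	t := v.Type()
	if t.NumIn() != 2 {
		return fmt.Errorf("callback must take exactly two parameters")
	}
	if t.In(0).Kind() != reflect.Ptr || t.In(1).String() != "error" {
		return fmt.Errorf("callback must be func(*Reply, error)")
	}
	return nil
}

type QueryRes struct{ Rank uint64 }

func main() {
	fmt.Println(validateCallback(func(res *QueryRes, err error) {})) // <nil>
	fmt.Println(validateCallback(func(res QueryRes) {}))             // error
}
```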
func (handler *RpcHandler) GetName() string {
|
||||
@@ -603,12 +536,29 @@ func (handler *RpcHandler) IsSingleCoroutine() bool {
|
||||
return handler.rpcHandler.IsSingleCoroutine()
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) CallWithTimeout(timeout time.Duration,serviceMethod string, args interface{}, reply interface{}) error {
|
||||
return handler.callRpc(timeout,0, serviceMethod, args, reply)
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) CallNodeWithTimeout(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, reply interface{}) error{
|
||||
return handler.callRpc(timeout,nodeId, serviceMethod, args, reply)
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) AsyncCallWithTimeout(timeout time.Duration,serviceMethod string, args interface{}, callback interface{}) (CancelRpc,error){
|
||||
return handler.asyncCallRpc(timeout,0, serviceMethod, args, callback)
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) AsyncCallNodeWithTimeout(timeout time.Duration,nodeId int, serviceMethod string, args interface{}, callback interface{}) (CancelRpc,error){
|
||||
return handler.asyncCallRpc(timeout,nodeId, serviceMethod, args, callback)
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) AsyncCall(serviceMethod string, args interface{}, callback interface{}) error {
|
||||
return handler.asyncCallRpc(0, serviceMethod, args, callback)
|
||||
_,err := handler.asyncCallRpc(DefaultRpcTimeout,0, serviceMethod, args, callback)
|
||||
return err
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) Call(serviceMethod string, args interface{}, reply interface{}) error {
|
||||
return handler.callRpc(0, serviceMethod, args, reply)
|
||||
return handler.callRpc(DefaultRpcTimeout,0, serviceMethod, args, reply)
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) Go(serviceMethod string, args interface{}) error {
|
||||
@@ -616,11 +566,13 @@ func (handler *RpcHandler) Go(serviceMethod string, args interface{}) error {
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) AsyncCallNode(nodeId int, serviceMethod string, args interface{}, callback interface{}) error {
|
||||
return handler.asyncCallRpc(nodeId, serviceMethod, args, callback)
|
||||
_,err:= handler.asyncCallRpc(DefaultRpcTimeout,nodeId, serviceMethod, args, callback)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) CallNode(nodeId int, serviceMethod string, args interface{}, reply interface{}) error {
|
||||
return handler.callRpc(nodeId, serviceMethod, args, reply)
|
||||
return handler.callRpc(DefaultRpcTimeout,nodeId, serviceMethod, args, reply)
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) GoNode(nodeId int, serviceMethod string, args interface{}) error {
|
||||
@@ -631,50 +583,28 @@ func (handler *RpcHandler) CastGo(serviceMethod string, args interface{}) error
|
||||
return handler.goRpc(nil, true, 0, serviceMethod, args)
|
||||
}
|
||||
|
||||
func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs IRawInputArgs) error {
|
||||
func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs []byte) error {
|
||||
processor := GetProcessor(uint8(rpcProcessorType))
|
||||
err, count := handler.funcRpcClient(nodeId, serviceName, handler.pClientList)
|
||||
err, count := handler.funcRpcClient(nodeId, serviceName,false, handler.pClientList)
|
||||
if count == 0 || err != nil {
|
||||
//args.DoGc()
|
||||
log.SError("Call serviceMethod is error:", err.Error())
|
||||
log.Error("call serviceMethod is failed",log.ErrorAttr("error",err))
|
||||
return err
|
||||
}
|
||||
if count > 1 {
|
||||
//args.DoGc()
|
||||
err := errors.New("cannot call more then 1 node")
|
||||
log.SError(err.Error())
|
||||
log.Error("cannot call more then 1 node",log.String("serviceName",serviceName))
|
||||
return err
|
||||
}
|
||||
|
||||
//2.rpcClient调用
|
||||
//如果调用本结点服务
|
||||
for i := 0; i < count; i++ {
|
||||
if handler.pClientList[i].bSelfNode == true {
|
||||
pLocalRpcServer := handler.funcRpcServer()
|
||||
//调用自己rpcHandler处理器
|
||||
if serviceName == handler.rpcHandler.GetName() { //自己服务调用
|
||||
err := pLocalRpcServer.myselfRpcHandlerGo(handler.pClientList[i],serviceName, serviceName, rawArgs.GetRawData(), requestHandlerNull,nil)
|
||||
//args.DoGc()
|
||||
return err
|
||||
}
|
||||
|
||||
//其他的rpcHandler的处理器
|
||||
pCall := pLocalRpcServer.selfNodeRpcHandlerGo(processor, handler.pClientList[i], true, serviceName, rpcMethodId, serviceName, nil, nil, rawArgs.GetRawData())
|
||||
rawArgs.DoEscape()
|
||||
if pCall.Err != nil {
|
||||
err = pCall.Err
|
||||
}
|
||||
handler.pClientList[i].RemovePending(pCall.Seq)
|
||||
ReleaseCall(pCall)
|
||||
continue
|
||||
}
|
||||
|
||||
//跨node调用
|
||||
pCall := handler.pClientList[i].RawGo(processor, true, rpcMethodId, serviceName, rawArgs.GetRawData(), nil)
|
||||
rawArgs.DoFree()
|
||||
pCall := handler.pClientList[i].RawGo(DefaultRpcTimeout,handler.rpcHandler,processor, true, rpcMethodId, serviceName, rawArgs, nil)
|
||||
if pCall.Err != nil {
|
||||
err = pCall.Err
|
||||
}
|
||||
|
||||
handler.pClientList[i].RemovePending(pCall.Seq)
|
||||
ReleaseCall(pCall)
|
||||
}
|
||||
@@ -688,23 +618,7 @@ func (handler *RpcHandler) RegRawRpc(rpcMethodId uint32, rawRpcCB RawRpcCallBack
|
||||
|
||||
func (handler *RpcHandler) UnmarshalInParam(rpcProcessor IRpcProcessor, serviceMethod string, rawRpcMethodId uint32, inParam []byte) (interface{}, error) {
|
||||
if rawRpcMethodId > 0 {
|
||||
v, ok := handler.mapRawFunctions[rawRpcMethodId]
|
||||
if ok == false {
|
||||
strRawRpcMethodId := strconv.FormatUint(uint64(rawRpcMethodId), 10)
|
||||
err := errors.New("RpcHandler cannot find request rpc id " + strRawRpcMethodId)
|
||||
log.SError(err.Error())
|
||||
return nil, err
|
||||
}
|
||||
|
||||
msg, err := v.Unmarshal(inParam)
|
||||
if err != nil {
|
||||
strRawRpcMethodId := strconv.FormatUint(uint64(rawRpcMethodId), 10)
|
||||
err := errors.New("RpcHandler cannot Unmarshal rpc id " + strRawRpcMethodId)
|
||||
log.SError(err.Error())
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return msg, err
|
||||
return inParam,nil
|
||||
}
|
||||
|
||||
v, ok := handler.mapFunctions[serviceMethod]
|
||||
@@ -717,3 +631,8 @@ func (handler *RpcHandler) UnmarshalInParam(rpcProcessor IRpcProcessor, serviceM
|
||||
err = rpcProcessor.Unmarshal(inParam, param)
|
||||
return param, err
|
||||
}
|
||||
|
||||
|
||||
func (handler *RpcHandler) GetRpcServer() FuncRpcServer{
|
||||
return handler.funcRpcServer
|
||||
}
|
||||
|
||||
89
rpc/rpctimer.go
Normal file
@@ -0,0 +1,89 @@
package rpc

import (
	"container/heap"
	"time"
)

type CallTimer struct {
	SeqId    uint64
	FireTime int64
}

type CallTimerHeap struct {
	callTimer   []CallTimer
	mapSeqIndex map[uint64]int
}

func (h *CallTimerHeap) Init() {
	h.mapSeqIndex = make(map[uint64]int, 4096)
	h.callTimer = make([]CallTimer, 0, 4096)
}

func (h *CallTimerHeap) Len() int {
	return len(h.callTimer)
}

func (h *CallTimerHeap) Less(i, j int) bool {
	return h.callTimer[i].FireTime < h.callTimer[j].FireTime
}

func (h *CallTimerHeap) Swap(i, j int) {
	h.callTimer[i], h.callTimer[j] = h.callTimer[j], h.callTimer[i]
	h.mapSeqIndex[h.callTimer[i].SeqId] = i
	h.mapSeqIndex[h.callTimer[j].SeqId] = j
}

func (h *CallTimerHeap) Push(t any) {
	callTimer := t.(CallTimer)
	h.mapSeqIndex[callTimer.SeqId] = len(h.callTimer)
	h.callTimer = append(h.callTimer, callTimer)
}

func (h *CallTimerHeap) Pop() any {
	l := len(h.callTimer)
	seqId := h.callTimer[l-1].SeqId

	h.callTimer = h.callTimer[:l-1]
	delete(h.mapSeqIndex, seqId)
	return seqId
}

func (h *CallTimerHeap) Cancel(seq uint64) bool {
	index, ok := h.mapSeqIndex[seq]
	if ok == false {
		return false
	}

	heap.Remove(h, index)
	return true
}

func (h *CallTimerHeap) AddTimer(seqId uint64, d time.Duration) {
	heap.Push(h, CallTimer{
		SeqId:    seqId,
		FireTime: time.Now().Add(d).UnixNano(),
	})
}

func (h *CallTimerHeap) PopTimeout() uint64 {
	if h.Len() == 0 {
		return 0
	}

	nextFireTime := h.callTimer[0].FireTime
	if nextFireTime > time.Now().UnixNano() {
		return 0
	}

	return heap.Pop(h).(uint64)
}

func (h *CallTimerHeap) PopFirst() uint64 {
	if h.Len() == 0 {
		return 0
	}

	return heap.Pop(h).(uint64)
}
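Below is a minimal usage sketch of this timer heap. It is not part of origin's client code; it assumes it sits in the same `rpc` package, and the seq numbers and sleeps are purely illustrative.

```go
package rpc

import (
	"fmt"
	"time"
)

// exampleTimeoutLoop sketches how the heap can track per-call deadlines
// keyed by the RPC seq number.
func exampleTimeoutLoop() {
	var h CallTimerHeap
	h.Init()

	h.AddTimer(1, 50*time.Millisecond) // pending call seq=1, short timeout
	h.AddTimer(2, 2*time.Second)       // pending call seq=2, long timeout
	h.Cancel(2)                        // seq=2 got its reply in time, drop its timer

	time.Sleep(100 * time.Millisecond)

	// PopTimeout returns 0 while the earliest deadline has not fired yet,
	// otherwise it pops and returns the expired call's seq.
	for seq := h.PopTimeout(); seq != 0; seq = h.PopTimeout() {
		fmt.Println("call timed out, seq =", seq) // prints: seq = 1
	}
}
```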
268
rpc/server.go
@@ -10,17 +10,17 @@ import (
|
||||
"reflect"
|
||||
"strings"
|
||||
"time"
|
||||
"runtime"
|
||||
)
|
||||
|
||||
type RpcProcessorType uint8
|
||||
|
||||
const (
|
||||
RpcProcessorJson RpcProcessorType = 0
|
||||
RpcProcessorGoGoPB RpcProcessorType = 1
|
||||
RpcProcessorPB RpcProcessorType = 1
|
||||
)
|
||||
|
||||
//var processor IRpcProcessor = &JsonProcessor{}
|
||||
var arrayProcessor = []IRpcProcessor{&JsonProcessor{}, &GoGoPBProcessor{}}
|
||||
var arrayProcessor = []IRpcProcessor{&JsonProcessor{}, &PBProcessor{}}
|
||||
var arrayProcessorLen uint8 = 2
|
||||
var LittleEndian bool
|
||||
|
||||
@@ -28,6 +28,8 @@ type Server struct {
|
||||
functions map[interface{}]interface{}
|
||||
rpcHandleFinder RpcHandleFinder
|
||||
rpcServer *network.TCPServer
|
||||
|
||||
compressBytesLen int
|
||||
}
|
||||
|
||||
type RpcAgent struct {
|
||||
@@ -65,15 +67,15 @@ func (server *Server) Init(rpcHandleFinder RpcHandleFinder) {
|
||||
|
||||
const Default_ReadWriteDeadline = 15*time.Second
|
||||
|
||||
func (server *Server) Start(listenAddr string, maxRpcParamLen uint32) {
|
||||
func (server *Server) Start(listenAddr string, maxRpcParamLen uint32,compressBytesLen int) {
|
||||
splitAddr := strings.Split(listenAddr, ":")
|
||||
if len(splitAddr) != 2 {
|
||||
log.SFatal("listen addr is error :", listenAddr)
|
||||
log.Fatal("listen addr is failed", log.String("listenAddr",listenAddr))
|
||||
}
|
||||
|
||||
server.rpcServer.Addr = ":" + splitAddr[1]
|
||||
server.rpcServer.LenMsgLen = 4 //uint16
|
||||
server.rpcServer.MinMsgLen = 2
|
||||
server.compressBytesLen = compressBytesLen
|
||||
if maxRpcParamLen > 0 {
|
||||
server.rpcServer.MaxMsgLen = maxRpcParamLen
|
||||
} else {
|
||||
@@ -86,6 +88,8 @@ func (server *Server) Start(listenAddr string, maxRpcParamLen uint32) {
|
||||
server.rpcServer.LittleEndian = LittleEndian
|
||||
server.rpcServer.WriteDeadline = Default_ReadWriteDeadline
|
||||
server.rpcServer.ReadDeadline = Default_ReadWriteDeadline
|
||||
server.rpcServer.LenMsgLen = DefaultRpcLenMsgLen
|
||||
|
||||
server.rpcServer.Start()
|
||||
}
|
||||
|
||||
@@ -108,38 +112,84 @@ func (agent *RpcAgent) WriteResponse(processor IRpcProcessor, serviceMethod stri
|
||||
defer processor.ReleaseRpcResponse(rpcResponse.RpcResponseData)
|
||||
|
||||
if errM != nil {
|
||||
log.SError("service method ", serviceMethod, " Marshal error:", errM.Error())
|
||||
log.Error("mashal RpcResponseData failed",log.String("serviceMethod",serviceMethod),log.ErrorAttr("error",errM))
|
||||
return
|
||||
}
|
||||
|
||||
errM = agent.conn.WriteMsg([]byte{uint8(processor.GetProcessorType())}, bytes)
|
||||
var compressBuff[]byte
|
||||
bCompress := uint8(0)
|
||||
if agent.rpcServer.compressBytesLen >0 && len(bytes) >= agent.rpcServer.compressBytesLen {
|
||||
var cErr error
|
||||
|
||||
compressBuff,cErr = compressor.CompressBlock(bytes)
|
||||
if cErr != nil {
|
||||
log.Error("CompressBlock failed",log.String("serviceMethod",serviceMethod),log.ErrorAttr("error",cErr))
|
||||
return
|
||||
}
|
||||
if len(compressBuff) < len(bytes) {
|
||||
bytes = compressBuff
|
||||
bCompress = 1<<7
|
||||
}
|
||||
}
|
||||
|
||||
errM = agent.conn.WriteMsg([]byte{uint8(processor.GetProcessorType())|bCompress}, bytes)
|
||||
if cap(compressBuff) >0 {
|
||||
compressor.CompressBufferCollection(compressBuff)
|
||||
}
|
||||
if errM != nil {
|
||||
log.SError("Rpc ", serviceMethod, " return is error:", errM.Error())
|
||||
log.Error("WriteMsg error,Rpc return is fail",log.String("serviceMethod",serviceMethod),log.ErrorAttr("error",errM))
|
||||
}
|
||||
}
|
||||
|
||||
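The single header byte written above carries two things at once: the low 7 bits hold the processor type and the top bit marks whether the payload was compressed. A self-contained sketch of that packing (the values are examples only):

```go
package main

import "fmt"

const compressFlag = uint8(1 << 7) // top bit: payload is compressed

// packHeader mirrors the scheme above: processor type in the low 7 bits,
// compression marker in the high bit.
func packHeader(processorType uint8, compressed bool) uint8 {
	b := processorType & 0x7f
	if compressed {
		b |= compressFlag
	}
	return b
}

// unpackHeader is the receive-side counterpart used in RpcAgent.Run.
func unpackHeader(b uint8) (processorType uint8, compressed bool) {
	return b & 0x7f, b>>7 > 0
}

func main() {
	h := packHeader(1, true) // e.g. PB processor, compressed payload
	pt, c := unpackHeader(h)
	fmt.Println(pt, c) // 1 true
}
```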
func (agent *RpcAgent) Run() {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
buf := make([]byte, 4096)
|
||||
l := runtime.Stack(buf, false)
|
||||
errString := fmt.Sprint(r)
|
||||
log.Dump(string(buf[:l]),log.String("error",errString))
|
||||
}
|
||||
}()
|
||||
|
||||
for {
|
||||
data, err := agent.conn.ReadMsg()
|
||||
if err != nil {
|
||||
log.SError("remoteAddress:", agent.conn.RemoteAddr().String(), ",read message: ", err.Error())
|
||||
log.Error("read message is error",log.String("remoteAddress",agent.conn.RemoteAddr().String()),log.ErrorAttr("error",err))
|
||||
//will close tcpconn
|
||||
break
|
||||
}
|
||||
|
||||
processor := GetProcessor(data[0])
|
||||
bCompress := (data[0]>>7) > 0
|
||||
processor := GetProcessor(data[0]&0x7f)
|
||||
if processor == nil {
|
||||
agent.conn.ReleaseReadMsg(data)
|
||||
log.SError("remote rpc ", agent.conn.RemoteAddr(), " cannot find processor:", data[0])
|
||||
log.Warning("cannot find processor",log.String("RemoteAddr",agent.conn.RemoteAddr().String()))
|
||||
return
|
||||
}
|
||||
|
||||
//parse the header
|
||||
var compressBuff []byte
|
||||
byteData := data[1:]
|
||||
if bCompress == true {
|
||||
var unCompressErr error
|
||||
|
||||
compressBuff,unCompressErr = compressor.UncompressBlock(byteData)
|
||||
if unCompressErr!= nil {
|
||||
agent.conn.ReleaseReadMsg(data)
|
||||
log.Error("UncompressBlock failed",log.String("RemoteAddr",agent.conn.RemoteAddr().String()),log.ErrorAttr("error",unCompressErr))
|
||||
return
|
||||
}
|
||||
byteData = compressBuff
|
||||
}
|
||||
|
||||
req := MakeRpcRequest(processor, 0, 0, "", false, nil)
|
||||
err = processor.Unmarshal(data[1:], req.RpcRequestData)
|
||||
err = processor.Unmarshal(byteData, req.RpcRequestData)
|
||||
if cap(compressBuff) > 0 {
|
||||
compressor.UnCompressBufferCollection(compressBuff)
|
||||
}
|
||||
agent.conn.ReleaseReadMsg(data)
|
||||
if err != nil {
|
||||
log.SError("rpc Unmarshal request is error:", err.Error())
|
||||
log.Error("Unmarshal failed",log.String("RemoteAddr",agent.conn.RemoteAddr().String()),log.ErrorAttr("error",err))
|
||||
if req.RpcRequestData.GetSeq() > 0 {
|
||||
rpcError := RpcError(err.Error())
|
||||
if req.RpcRequestData.IsNoReply() == false {
|
||||
@@ -148,7 +198,6 @@ func (agent *RpcAgent) Run() {
|
||||
ReleaseRpcRequest(req)
|
||||
continue
|
||||
} else {
|
||||
//will close tcpconn
|
||||
ReleaseRpcRequest(req)
|
||||
break
|
||||
}
|
||||
@@ -162,7 +211,7 @@ func (agent *RpcAgent) Run() {
|
||||
agent.WriteResponse(processor, req.RpcRequestData.GetServiceMethod(), req.RpcRequestData.GetSeq(), nil, rpcError)
|
||||
}
|
||||
ReleaseRpcRequest(req)
|
||||
log.SError("rpc request req.ServiceMethod is error")
|
||||
log.Error("rpc request req.ServiceMethod is error")
|
||||
continue
|
||||
}
|
||||
|
||||
@@ -172,8 +221,7 @@ func (agent *RpcAgent) Run() {
|
||||
if req.RpcRequestData.IsNoReply() == false {
|
||||
agent.WriteResponse(processor, req.RpcRequestData.GetServiceMethod(), req.RpcRequestData.GetSeq(), nil, rpcError)
|
||||
}
|
||||
|
||||
log.SError("service method ", req.RpcRequestData.GetServiceMethod(), " not config!")
|
||||
log.Error("serviceMethod not config",log.String("serviceMethod",req.RpcRequestData.GetServiceMethod()))
|
||||
ReleaseRpcRequest(req)
|
||||
continue
|
||||
}
|
||||
@@ -188,12 +236,13 @@ func (agent *RpcAgent) Run() {
|
||||
req.inParam, err = rpcHandler.UnmarshalInParam(req.rpcProcessor, req.RpcRequestData.GetServiceMethod(), req.RpcRequestData.GetRpcMethodId(), req.RpcRequestData.GetInParam())
|
||||
if err != nil {
|
||||
rErr := "Call Rpc " + req.RpcRequestData.GetServiceMethod() + " Param error " + err.Error()
|
||||
log.Error("call rpc param error",log.String("serviceMethod",req.RpcRequestData.GetServiceMethod()),log.ErrorAttr("error",err))
|
||||
if req.requestHandle != nil {
|
||||
req.requestHandle(nil, RpcError(rErr))
|
||||
} else {
|
||||
ReleaseRpcRequest(req)
|
||||
}
|
||||
log.SError(rErr)
|
||||
|
||||
continue
|
||||
}
|
||||
|
||||
@@ -242,42 +291,58 @@ func (server *Server) myselfRpcHandlerGo(client *Client,handlerName string, serv
|
||||
rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
|
||||
if rpcHandler == nil {
|
||||
err := errors.New("service method " + serviceMethod + " not config!")
|
||||
log.SError(err.Error())
|
||||
log.Error("service method not config",log.String("serviceMethod",serviceMethod))
|
||||
return err
|
||||
}
|
||||
|
||||
|
||||
|
||||
|
||||
return rpcHandler.CallMethod(client,serviceMethod, args,callBack, reply)
|
||||
}
|
||||
|
||||
func (server *Server) selfNodeRpcHandlerGo(processor IRpcProcessor, client *Client, noReply bool, handlerName string, rpcMethodId uint32, serviceMethod string, args interface{}, reply interface{}, rawArgs []byte) *Call {
|
||||
func (server *Server) selfNodeRpcHandlerGo(timeout time.Duration,processor IRpcProcessor, client *Client, noReply bool, handlerName string, rpcMethodId uint32, serviceMethod string, args interface{}, reply interface{}, rawArgs []byte) *Call {
|
||||
pCall := MakeCall()
|
||||
pCall.Seq = client.generateSeq()
|
||||
pCall.TimeOut = timeout
|
||||
pCall.ServiceMethod = serviceMethod
|
||||
|
||||
rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
|
||||
if rpcHandler == nil {
|
||||
err := errors.New("service method " + serviceMethod + " not config!")
|
||||
log.Error("service method not config",log.String("serviceMethod",serviceMethod),log.ErrorAttr("error",err))
|
||||
pCall.Seq = 0
|
||||
pCall.Err = errors.New("service method " + serviceMethod + " not config!")
|
||||
pCall.done <- pCall
|
||||
log.SError(pCall.Err.Error())
|
||||
pCall.DoError(err)
|
||||
|
||||
return pCall
|
||||
}
|
||||
|
||||
var iParam interface{}
|
||||
if processor == nil {
|
||||
_, processor = GetProcessorType(args)
|
||||
}
|
||||
|
||||
if args != nil {
|
||||
var err error
|
||||
iParam,err = processor.Clone(args)
|
||||
if err != nil {
|
||||
sErr := errors.New("RpcHandler " + handlerName + "."+serviceMethod+" deep copy inParam is error:" + err.Error())
|
||||
log.Error("deep copy inParam is failed",log.String("handlerName",handlerName),log.String("serviceMethod",serviceMethod))
|
||||
pCall.Seq = 0
|
||||
pCall.DoError(sErr)
|
||||
|
||||
return pCall
|
||||
}
|
||||
}
|
||||
|
||||
req := MakeRpcRequest(processor, 0, rpcMethodId, serviceMethod, noReply, nil)
|
||||
req.inParam = args
|
||||
req.inParam = iParam
|
||||
req.localReply = reply
|
||||
if rawArgs != nil {
|
||||
var err error
|
||||
req.inParam, err = rpcHandler.UnmarshalInParam(processor, serviceMethod, rpcMethodId, rawArgs)
|
||||
if err != nil {
|
||||
log.Error("unmarshalInParam is failed",log.String("serviceMethod",serviceMethod),log.Uint32("rpcMethodId",rpcMethodId),log.ErrorAttr("error",err))
|
||||
pCall.Seq = 0
|
||||
pCall.DoError(err)
|
||||
ReleaseRpcRequest(req)
|
||||
pCall.Err = err
|
||||
pCall.done <- pCall
|
||||
return pCall
|
||||
}
|
||||
}
|
||||
@@ -289,20 +354,82 @@ func (server *Server) selfNodeRpcHandlerGo(processor IRpcProcessor, client *Clie
|
||||
if reply != nil && Returns != reply && Returns != nil {
|
||||
byteReturns, err := req.rpcProcessor.Marshal(Returns)
|
||||
if err != nil {
|
||||
log.SError("returns data cannot be marshal ", callSeq)
|
||||
ReleaseRpcRequest(req)
|
||||
}
|
||||
|
||||
err = req.rpcProcessor.Unmarshal(byteReturns, reply)
|
||||
if err != nil {
|
||||
log.SError("returns data cannot be Unmarshal ", callSeq)
|
||||
ReleaseRpcRequest(req)
|
||||
Err = ConvertError(err)
|
||||
log.Error("returns data cannot be marshal",log.Uint64("seq",callSeq),log.ErrorAttr("error",err))
|
||||
}else{
|
||||
err = req.rpcProcessor.Unmarshal(byteReturns, reply)
|
||||
if err != nil {
|
||||
Err = ConvertError(err)
|
||||
log.Error("returns data cannot be Unmarshal",log.Uint64("seq",callSeq),log.ErrorAttr("error",err))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
ReleaseRpcRequest(req)
|
||||
v := client.RemovePending(callSeq)
|
||||
if v == nil {
|
||||
log.Error("rpcClient cannot find seq",log.Uint64("seq",callSeq))
|
||||
return
|
||||
}
|
||||
|
||||
if len(Err) == 0 {
|
||||
v.Err = nil
|
||||
v.DoOK()
|
||||
} else {
|
||||
log.Error(Err.Error())
|
||||
v.DoError(Err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
err := rpcHandler.PushRpcRequest(req)
|
||||
if err != nil {
|
||||
log.Error(err.Error())
|
||||
pCall.DoError(err)
|
||||
ReleaseRpcRequest(req)
|
||||
}
|
||||
|
||||
return pCall
|
||||
}
|
||||
|
||||
func (server *Server) selfNodeRpcHandlerAsyncGo(timeout time.Duration,client *Client, callerRpcHandler IRpcHandler, noReply bool, handlerName string, serviceMethod string, args interface{}, reply interface{}, callback reflect.Value,cancelable bool) (CancelRpc,error) {
|
||||
rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
|
||||
if rpcHandler == nil {
|
||||
err := errors.New("service method " + serviceMethod + " not config!")
|
||||
log.Error(err.Error())
|
||||
return emptyCancelRpc,err
|
||||
}
|
||||
|
||||
_, processor := GetProcessorType(args)
|
||||
iParam,err := processor.Clone(args)
|
||||
if err != nil {
|
||||
errM := errors.New("RpcHandler " + handlerName + "."+serviceMethod+" deep copy inParam is error:" + err.Error())
|
||||
log.Error(errM.Error())
|
||||
return emptyCancelRpc,errM
|
||||
}
|
||||
|
||||
req := MakeRpcRequest(processor, 0, 0, serviceMethod, noReply, nil)
|
||||
req.inParam = iParam
|
||||
req.localReply = reply
|
||||
|
||||
cancelRpc := emptyCancelRpc
|
||||
var callSeq uint64
|
||||
if noReply == false {
|
||||
callSeq = client.generateSeq()
|
||||
pCall := MakeCall()
|
||||
pCall.Seq = callSeq
|
||||
pCall.rpcHandler = callerRpcHandler
|
||||
pCall.callback = &callback
|
||||
pCall.Reply = reply
|
||||
pCall.ServiceMethod = serviceMethod
|
||||
pCall.TimeOut = timeout
|
||||
client.AddPending(pCall)
|
||||
rpcCancel := RpcCancel{CallSeq: callSeq,Cli: client}
|
||||
cancelRpc = rpcCancel.CancelRpc
|
||||
|
||||
req.requestHandle = func(Returns interface{}, Err RpcError) {
|
||||
v := client.RemovePending(callSeq)
|
||||
if v == nil {
|
||||
log.SError("rpcClient cannot find seq ",callSeq, " in pending")
|
||||
ReleaseRpcRequest(req)
|
||||
return
|
||||
}
|
||||
@@ -311,70 +438,23 @@ func (server *Server) selfNodeRpcHandlerGo(processor IRpcProcessor, client *Clie
|
||||
} else {
|
||||
v.Err = Err
|
||||
}
|
||||
v.done <- v
|
||||
ReleaseRpcRequest(req)
|
||||
}
|
||||
}
|
||||
|
||||
err := rpcHandler.PushRpcRequest(req)
|
||||
if err != nil {
|
||||
ReleaseRpcRequest(req)
|
||||
pCall.Err = err
|
||||
pCall.done <- pCall
|
||||
}
|
||||
|
||||
return pCall
|
||||
}
|
||||
|
||||
func (server *Server) selfNodeRpcHandlerAsyncGo(client *Client, callerRpcHandler IRpcHandler, noReply bool, handlerName string, serviceMethod string, args interface{}, reply interface{}, callback reflect.Value) error {
|
||||
rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
|
||||
if rpcHandler == nil {
|
||||
err := errors.New("service method " + serviceMethod + " not config!")
|
||||
log.SError(err.Error())
|
||||
return err
|
||||
}
|
||||
|
||||
_, processor := GetProcessorType(args)
|
||||
req := MakeRpcRequest(processor, 0, 0, serviceMethod, noReply, nil)
|
||||
req.inParam = args
|
||||
req.localReply = reply
|
||||
|
||||
if noReply == false {
|
||||
callSeq := client.generateSeq()
|
||||
pCall := MakeCall()
|
||||
pCall.Seq = callSeq
|
||||
pCall.rpcHandler = callerRpcHandler
|
||||
pCall.callback = &callback
|
||||
pCall.Reply = reply
|
||||
pCall.ServiceMethod = serviceMethod
|
||||
client.AddPending(pCall)
|
||||
req.requestHandle = func(Returns interface{}, Err RpcError) {
|
||||
v := client.RemovePending(callSeq)
|
||||
if v == nil {
|
||||
log.SError("rpcClient cannot find seq ", pCall.Seq, " in pending")
|
||||
//ReleaseCall(pCall)
|
||||
ReleaseRpcRequest(req)
|
||||
return
|
||||
}
|
||||
if len(Err) == 0 {
|
||||
pCall.Err = nil
|
||||
} else {
|
||||
pCall.Err = Err
|
||||
}
|
||||
|
||||
if Returns != nil {
|
||||
pCall.Reply = Returns
|
||||
v.Reply = Returns
|
||||
}
|
||||
pCall.rpcHandler.PushRpcResponse(pCall)
|
||||
v.rpcHandler.PushRpcResponse(v)
|
||||
ReleaseRpcRequest(req)
|
||||
}
|
||||
}
|
||||
|
||||
err := rpcHandler.PushRpcRequest(req)
|
||||
err = rpcHandler.PushRpcRequest(req)
|
||||
if err != nil {
|
||||
ReleaseRpcRequest(req)
|
||||
return err
|
||||
if callSeq > 0 {
|
||||
client.RemovePending(callSeq)
|
||||
}
|
||||
return emptyCancelRpc,err
|
||||
}
|
||||
|
||||
return nil
|
||||
return cancelRpc,nil
|
||||
}
|
||||
|
||||
@@ -10,11 +10,13 @@ import (
|
||||
"github.com/duanhf2012/origin/log"
|
||||
rpcHandle "github.com/duanhf2012/origin/rpc"
|
||||
"github.com/duanhf2012/origin/util/timer"
|
||||
"github.com/duanhf2012/origin/concurrent"
|
||||
)
|
||||
|
||||
const InitModuleId = 1e9
|
||||
|
||||
type IModule interface {
|
||||
concurrent.IConcurrent
|
||||
SetModuleId(moduleId uint32) bool
|
||||
GetModuleId() uint32
|
||||
AddModule(module IModule) (uint32, error)
|
||||
@@ -56,6 +58,7 @@ type Module struct {
|
||||
|
||||
//事件管道
|
||||
eventHandler event.IEventHandler
|
||||
concurrent.IConcurrent
|
||||
}
|
||||
|
||||
func (m *Module) SetModuleId(moduleId uint32) bool {
|
||||
@@ -105,6 +108,7 @@ func (m *Module) AddModule(module IModule) (uint32, error) {
|
||||
pAddModule.moduleName = reflect.Indirect(reflect.ValueOf(module)).Type().Name()
|
||||
pAddModule.eventHandler = event.NewEventHandler()
|
||||
pAddModule.eventHandler.Init(m.eventHandler.GetEventProcessor())
|
||||
pAddModule.IConcurrent = m.IConcurrent
|
||||
err := module.OnInit()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
@@ -113,7 +117,7 @@ func (m *Module) AddModule(module IModule) (uint32, error) {
|
||||
m.child[module.GetModuleId()] = module
|
||||
m.ancestor.getBaseModule().(*Module).descendants[module.GetModuleId()] = module
|
||||
|
||||
log.SDebug("Add module ", module.GetModuleName(), " completed")
|
||||
log.Debug("Add module "+module.GetModuleName()+ " completed")
|
||||
return module.GetModuleId(), nil
|
||||
}
|
||||
|
||||
@@ -127,7 +131,7 @@ func (m *Module) ReleaseModule(moduleId uint32) {
|
||||
|
||||
pModule.self.OnRelease()
|
||||
pModule.GetEventHandler().Destroy()
|
||||
log.SDebug("Release module ", pModule.GetModuleName())
|
||||
log.Debug("Release module "+ pModule.GetModuleName())
|
||||
for pTimer := range pModule.mapActiveTimer {
|
||||
pTimer.Cancel()
|
||||
}
|
||||
@@ -273,14 +277,19 @@ func (m *Module) SafeNewTicker(tickerId *uint64, d time.Duration, AdditionData i
|
||||
}
|
||||
|
||||
func (m *Module) CancelTimerId(timerId *uint64) bool {
|
||||
if timerId==nil || *timerId == 0 {
|
||||
log.Warning("timerId is invalid")
|
||||
return false
|
||||
}
|
||||
|
||||
if m.mapActiveIdTimer == nil {
|
||||
log.SError("mapActiveIdTimer is nil")
|
||||
log.Error("mapActiveIdTimer is nil")
|
||||
return false
|
||||
}
|
||||
|
||||
t, ok := m.mapActiveIdTimer[*timerId]
|
||||
if ok == false {
|
||||
log.SError("cannot find timer id ", timerId)
|
||||
log.Stack("cannot find timer id ", log.Uint64("timerId",*timerId))
|
||||
return false
|
||||
}
|
||||
|
||||
|
||||
@@ -7,27 +7,28 @@ import (
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/profiler"
|
||||
"github.com/duanhf2012/origin/rpc"
|
||||
originSync "github.com/duanhf2012/origin/util/sync"
|
||||
"github.com/duanhf2012/origin/util/timer"
|
||||
"reflect"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"github.com/duanhf2012/origin/concurrent"
|
||||
)
|
||||
|
||||
|
||||
var closeSig chan bool
|
||||
var timerDispatcherLen = 100000
|
||||
var maxServiceEventChannelNum = 2000000
|
||||
|
||||
type IService interface {
|
||||
concurrent.IConcurrent
|
||||
Init(iService IService,getClientFun rpc.FuncRpcClient,getServerFun rpc.FuncRpcServer,serviceCfg interface{})
|
||||
Wait()
|
||||
Stop()
|
||||
Start()
|
||||
|
||||
OnSetup(iService IService)
|
||||
OnInit() error
|
||||
OnStart()
|
||||
OnRetire()
|
||||
OnRelease()
|
||||
|
||||
SetName(serviceName string)
|
||||
@@ -40,27 +41,27 @@ type IService interface {
|
||||
|
||||
SetEventChannelNum(num int)
|
||||
OpenProfiler()
|
||||
}
|
||||
|
||||
// eventPool的内存池,缓存Event
|
||||
var maxServiceEventChannel = 2000000
|
||||
var eventPool = originSync.NewPoolEx(make(chan originSync.IPoolData, maxServiceEventChannel), func() originSync.IPoolData {
|
||||
return &event.Event{}
|
||||
})
|
||||
SetRetire() //mark the service as retired
IsRetire() bool //whether the service has retired
|
||||
}
|
||||
|
||||
type Service struct {
|
||||
Module
|
||||
|
||||
rpcHandler rpc.RpcHandler //rpc
|
||||
name string //service name
|
||||
wg sync.WaitGroup
|
||||
serviceCfg interface{}
|
||||
goroutineNum int32
|
||||
startStatus bool
|
||||
retire int32
|
||||
eventProcessor event.IEventProcessor
|
||||
profiler *profiler.Profiler //性能分析器
|
||||
nodeEventLister rpc.INodeListener
|
||||
discoveryServiceLister rpc.IDiscoveryServiceListener
|
||||
chanEvent chan event.IEvent
|
||||
closeSig chan struct{}
|
||||
}
|
||||
|
||||
// RpcConnEvent Node结点连接事件
|
||||
@@ -77,10 +78,7 @@ type DiscoveryServiceEvent struct{
|
||||
}
|
||||
|
||||
func SetMaxServiceChannel(maxEventChannel int){
|
||||
maxServiceEventChannel = maxEventChannel
|
||||
eventPool = originSync.NewPoolEx(make(chan originSync.IPoolData, maxServiceEventChannel), func() originSync.IPoolData {
|
||||
return &event.Event{}
|
||||
})
|
||||
maxServiceEventChannelNum = maxEventChannel
|
||||
}
|
||||
|
||||
func (rpcEventData *DiscoveryServiceEvent) GetEventType() event.EventType{
|
||||
@@ -100,14 +98,28 @@ func (s *Service) OnSetup(iService IService){
|
||||
func (s *Service) OpenProfiler() {
|
||||
s.profiler = profiler.RegProfiler(s.GetName())
|
||||
if s.profiler==nil {
|
||||
log.SFatal("rofiler.RegProfiler ",s.GetName()," fail.")
|
||||
log.Fatal("rofiler.RegProfiler "+s.GetName()+" fail.")
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Service) IsRetire() bool{
|
||||
return atomic.LoadInt32(&s.retire) != 0
|
||||
}
|
||||
|
||||
func (s *Service) SetRetire(){
|
||||
atomic.StoreInt32(&s.retire,1)
|
||||
|
||||
ev := event.NewEvent()
|
||||
ev.Type = event.Sys_Event_Retire
|
||||
|
||||
s.pushEvent(ev)
|
||||
}
|
||||
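SetRetire flips an atomic flag and then delivers a Sys_Event_Retire event on the service's own event loop, so OnRetire runs on the service goroutine. A hedged sketch of how a concrete service might react; GameService is hypothetical, and only the framework hooks shown (SetRetire/IsRetire/OnRetire, log.Info, log.String) come from origin:

```go
type GameService struct {
	service.Service
}

// OnRetire runs on the service goroutine after SetRetire pushed Sys_Event_Retire.
func (gs *GameService) OnRetire() {
	// Typical use: stop accepting new sessions and let in-flight work drain.
	// Other code paths can poll gs.IsRetire() to refuse new requests.
	log.Info("service retiring, refusing new work", log.String("service", gs.GetName()))
}
```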
|
||||
func (s *Service) Init(iService IService,getClientFun rpc.FuncRpcClient,getServerFun rpc.FuncRpcServer,serviceCfg interface{}) {
|
||||
s.closeSig = make(chan struct{})
|
||||
s.dispatcher =timer.NewDispatcher(timerDispatcherLen)
|
||||
if s.chanEvent == nil {
|
||||
s.chanEvent = make(chan event.IEvent,maxServiceEventChannel)
|
||||
s.chanEvent = make(chan event.IEvent,maxServiceEventChannelNum)
|
||||
}
|
||||
|
||||
s.rpcHandler.InitRpcHandler(iService.(rpc.IRpcHandler),getClientFun,getServerFun,iService.(rpc.IRpcHandlerChannel))
|
||||
@@ -123,40 +135,56 @@ func (s *Service) Init(iService IService,getClientFun rpc.FuncRpcClient,getServe
|
||||
s.eventProcessor.Init(s)
|
||||
s.eventHandler = event.NewEventHandler()
|
||||
s.eventHandler.Init(s.eventProcessor)
|
||||
s.Module.IConcurrent = &concurrent.Concurrent{}
|
||||
}
|
||||
|
||||
|
||||
func (s *Service) Start() {
|
||||
s.startStatus = true
|
||||
var waitRun sync.WaitGroup
|
||||
|
||||
for i:=int32(0);i< s.goroutineNum;i++{
|
||||
s.wg.Add(1)
|
||||
waitRun.Add(1)
|
||||
go func(){
|
||||
log.Info(s.GetName()+" service is running",)
|
||||
waitRun.Done()
|
||||
s.Run()
|
||||
}()
|
||||
}
|
||||
|
||||
waitRun.Wait()
|
||||
}
|
||||
|
||||
func (s *Service) Run() {
|
||||
log.SDebug("Start running Service ", s.GetName())
|
||||
defer s.wg.Done()
|
||||
var bStop = false
|
||||
|
||||
concurrent := s.IConcurrent.(*concurrent.Concurrent)
|
||||
concurrentCBChannel := concurrent.GetCallBackChannel()
|
||||
|
||||
s.self.(IService).OnStart()
|
||||
for{
|
||||
var analyzer *profiler.Analyzer
|
||||
select {
|
||||
case <- closeSig:
|
||||
case <- s.closeSig:
|
||||
bStop = true
|
||||
concurrent.Close()
|
||||
case cb:=<-concurrentCBChannel:
|
||||
concurrent.DoCallback(cb)
|
||||
case ev := <- s.chanEvent:
|
||||
switch ev.GetEventType() {
|
||||
case event.Sys_Event_Retire:
|
||||
log.Info("service OnRetire",log.String("servceName",s.GetName()))
|
||||
s.self.(IService).OnRetire()
|
||||
case event.ServiceRpcRequestEvent:
|
||||
cEvent,ok := ev.(*event.Event)
|
||||
if ok == false {
|
||||
log.SError("Type event conversion error")
|
||||
log.Error("Type event conversion error")
|
||||
break
|
||||
}
|
||||
rpcRequest,ok := cEvent.Data.(*rpc.RpcRequest)
|
||||
if ok == false {
|
||||
log.SError("Type *rpc.RpcRequest conversion error")
|
||||
log.Error("Type *rpc.RpcRequest conversion error")
|
||||
break
|
||||
}
|
||||
if s.profiler!=nil {
|
||||
@@ -168,16 +196,16 @@ func (s *Service) Run() {
|
||||
analyzer.Pop()
|
||||
analyzer = nil
|
||||
}
|
||||
eventPool.Put(cEvent)
|
||||
event.DeleteEvent(cEvent)
|
||||
case event.ServiceRpcResponseEvent:
|
||||
cEvent,ok := ev.(*event.Event)
|
||||
if ok == false {
|
||||
log.SError("Type event conversion error")
|
||||
log.Error("Type event conversion error")
|
||||
break
|
||||
}
|
||||
rpcResponseCB,ok := cEvent.Data.(*rpc.Call)
|
||||
if ok == false {
|
||||
log.SError("Type *rpc.Call conversion error")
|
||||
log.Error("Type *rpc.Call conversion error")
|
||||
break
|
||||
}
|
||||
if s.profiler!=nil {
|
||||
@@ -188,7 +216,7 @@ func (s *Service) Run() {
|
||||
analyzer.Pop()
|
||||
analyzer = nil
|
||||
}
|
||||
eventPool.Put(cEvent)
|
||||
event.DeleteEvent(cEvent)
|
||||
default:
|
||||
if s.profiler!=nil {
|
||||
analyzer = s.profiler.Push("[SEvent]"+strconv.Itoa(int(ev.GetEventType())))
|
||||
@@ -235,11 +263,11 @@ func (s *Service) Release(){
|
||||
buf := make([]byte, 4096)
|
||||
l := runtime.Stack(buf, false)
|
||||
errString := fmt.Sprint(r)
|
||||
log.SError("core dump info[",errString,"]\n",string(buf[:l]))
|
||||
log.Dump(string(buf[:l]),log.String("error",errString))
|
||||
}
|
||||
}()
|
||||
|
||||
s.self.OnRelease()
|
||||
log.SDebug("Release Service ", s.GetName())
|
||||
}
|
||||
|
||||
func (s *Service) OnRelease(){
|
||||
@@ -249,8 +277,11 @@ func (s *Service) OnInit() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *Service) Wait(){
|
||||
func (s *Service) Stop(){
|
||||
log.Info("stop "+s.GetName()+" service ")
|
||||
close(s.closeSig)
|
||||
s.wg.Wait()
|
||||
log.Info(s.GetName()+" service has been stopped")
|
||||
}
|
||||
|
||||
func (s *Service) GetServiceCfg()interface{}{
|
||||
@@ -294,7 +325,7 @@ func (s *Service) OnDiscoverServiceEvent(ev event.IEvent){
|
||||
if event.IsDiscovery {
|
||||
s.discoveryServiceLister.OnDiscoveryService(event.NodeId,event.ServiceName)
|
||||
}else{
|
||||
s.discoveryServiceLister.OnUnDiscoveryService(event.NodeId,event.ServiceName)
|
||||
s.discoveryServiceLister.OnUnDiscoveryService(event.NodeId)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -320,9 +351,8 @@ func (s *Service) UnRegDiscoverListener(rpcLister rpc.INodeListener) {
|
||||
UnRegDiscoveryServiceEventFun(s.GetName())
|
||||
}
|
||||
|
||||
|
||||
func (s *Service) PushRpcRequest(rpcRequest *rpc.RpcRequest) error{
|
||||
ev := eventPool.Get().(*event.Event)
|
||||
ev := event.NewEvent()
|
||||
ev.Type = event.ServiceRpcRequestEvent
|
||||
ev.Data = rpcRequest
|
||||
|
||||
@@ -330,7 +360,7 @@ func (s *Service) PushRpcRequest(rpcRequest *rpc.RpcRequest) error{
|
||||
}
|
||||
|
||||
func (s *Service) PushRpcResponse(call *rpc.Call) error{
|
||||
ev := eventPool.Get().(*event.Event)
|
||||
ev := event.NewEvent()
|
||||
ev.Type = event.ServiceRpcResponseEvent
|
||||
ev.Data = call
|
||||
|
||||
@@ -342,9 +372,9 @@ func (s *Service) PushEvent(ev event.IEvent) error{
|
||||
}
|
||||
|
||||
func (s *Service) pushEvent(ev event.IEvent) error{
|
||||
if len(s.chanEvent) >= maxServiceEventChannel {
|
||||
if len(s.chanEvent) >= maxServiceEventChannelNum {
|
||||
err := errors.New("The event channel in the service is full")
|
||||
log.SError(err.Error())
|
||||
log.Error(err.Error())
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -371,10 +401,13 @@ func (s *Service) SetEventChannelNum(num int){
|
||||
func (s *Service) SetGoRoutineNum(goroutineNum int32) bool {
|
||||
//已经开始状态不允许修改协程数量,打开性能分析器不允许开多线程
|
||||
if s.startStatus == true || s.profiler!=nil {
|
||||
log.SError("open profiler mode is not allowed to set Multi-coroutine.")
|
||||
log.Error("open profiler mode is not allowed to set Multi-coroutine.")
|
||||
return false
|
||||
}
|
||||
|
||||
s.goroutineNum = goroutineNum
|
||||
return true
|
||||
}
|
||||
|
||||
func (s *Service) OnRetire(){
|
||||
}
|
||||
@@ -19,9 +19,7 @@ func init(){
|
||||
setupServiceList = []IService{}
|
||||
}
|
||||
|
||||
func Init(chanCloseSig chan bool) {
|
||||
closeSig=chanCloseSig
|
||||
|
||||
func Init() {
|
||||
for _,s := range setupServiceList {
|
||||
err := s.OnInit()
|
||||
if err != nil {
|
||||
@@ -57,8 +55,14 @@ func Start(){
|
||||
}
|
||||
}
|
||||
|
||||
func WaitStop(){
|
||||
func StopAllService(){
|
||||
for i := len(setupServiceList) - 1; i >= 0; i-- {
|
||||
setupServiceList[i].Wait()
|
||||
setupServiceList[i].Stop()
|
||||
}
|
||||
}
|
||||
|
||||
func NotifyAllServiceRetire(){
|
||||
for i := len(setupServiceList) - 1; i >= 0; i-- {
|
||||
setupServiceList[i].SetRetire()
|
||||
}
|
||||
}
|
||||
@@ -43,7 +43,15 @@ func (slf *SyncHttpResponse) Get(timeoutMs int) HttpResponse {
|
||||
}
|
||||
}
|
||||
|
||||
func (m *HttpClientModule) Init(maxpool int, proxyUrl string) {
|
||||
func (m *HttpClientModule) InitHttpClient(transport http.RoundTripper,timeout time.Duration,checkRedirect func(req *http.Request, via []*http.Request) error){
|
||||
m.client = &http.Client{
|
||||
Transport: transport,
|
||||
Timeout: timeout,
|
||||
CheckRedirect: checkRedirect,
|
||||
}
|
||||
}
|
||||
|
||||
func (m *HttpClientModule) Init(proxyUrl string, maxpool int, dialTimeout time.Duration,dialKeepAlive time.Duration,idleConnTimeout time.Duration,timeout time.Duration) {
|
||||
type ProxyFun func(_ *http.Request) (*url.URL, error)
|
||||
var proxyFun ProxyFun
|
||||
if proxyUrl != "" {
|
||||
@@ -55,16 +63,16 @@ func (m *HttpClientModule) Init(maxpool int, proxyUrl string) {
|
||||
m.client = &http.Client{
|
||||
Transport: &http.Transport{
|
||||
DialContext: (&net.Dialer{
|
||||
Timeout: 5 * time.Second,
|
||||
KeepAlive: 30 * time.Second,
|
||||
Timeout: dialTimeout,
|
||||
KeepAlive: dialKeepAlive,
|
||||
}).DialContext,
|
||||
MaxIdleConns: maxpool,
|
||||
MaxIdleConnsPerHost: maxpool,
|
||||
IdleConnTimeout: 60 * time.Second,
|
||||
IdleConnTimeout: idleConnTimeout,
|
||||
Proxy: proxyFun,
|
||||
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
|
||||
},
|
||||
Timeout: 5 * time.Second,
|
||||
Timeout: timeout,
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
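A hedged example of calling the widened Init signature; the pool size and timeouts below are arbitrary illustrations, not recommended values:

```go
var httpMod HttpClientModule

httpMod.Init(
	"",             // proxyUrl: empty means no proxy
	100,            // maxpool: MaxIdleConns / MaxIdleConnsPerHost
	5*time.Second,  // dialTimeout
	30*time.Second, // dialKeepAlive
	60*time.Second, // idleConnTimeout
	10*time.Second, // overall client timeout
)
```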
@@ -68,34 +68,39 @@ func (s *Session) NextSeq(db string, collection string, id interface{}) (int, er
|
||||
|
||||
after := options.After
|
||||
updateOpts := options.FindOneAndUpdateOptions{ReturnDocument: &after}
|
||||
err := s.Client.Database(db).Collection(collection).FindOneAndUpdate(ctxTimeout, bson.M{"_id": id}, bson.M{"$inc": bson.M{"Seq": 1}},&updateOpts).Decode(&res)
|
||||
err := s.Client.Database(db).Collection(collection).FindOneAndUpdate(ctxTimeout, bson.M{"_id": id}, bson.M{"$inc": bson.M{"Seq": 1}}, &updateOpts).Decode(&res)
|
||||
return res.Seq, err
|
||||
}
|
||||
|
||||
//indexKeys[索引][每个索引key字段]
|
||||
func (s *Session) EnsureIndex(db string, collection string, indexKeys [][]string, bBackground bool,sparse bool) error {
|
||||
return s.ensureIndex(db, collection, indexKeys, bBackground, false,sparse)
|
||||
// indexKeys[索引][每个索引key字段]
|
||||
func (s *Session) EnsureIndex(db string, collection string, indexKeys [][]string, bBackground bool, sparse bool, asc bool) error {
|
||||
return s.ensureIndex(db, collection, indexKeys, bBackground, false, sparse, asc)
|
||||
}
|
||||
|
||||
//indexKeys[索引][每个索引key字段]
|
||||
func (s *Session) EnsureUniqueIndex(db string, collection string, indexKeys [][]string, bBackground bool,sparse bool) error {
|
||||
return s.ensureIndex(db, collection, indexKeys, bBackground, true,sparse)
|
||||
// indexKeys[索引][每个索引key字段]
|
||||
func (s *Session) EnsureUniqueIndex(db string, collection string, indexKeys [][]string, bBackground bool, sparse bool, asc bool) error {
|
||||
return s.ensureIndex(db, collection, indexKeys, bBackground, true, sparse, asc)
|
||||
}
|
||||
|
||||
//keys[索引][每个索引key字段]
|
||||
func (s *Session) ensureIndex(db string, collection string, indexKeys [][]string, bBackground bool, unique bool,sparse bool) error {
|
||||
// keys[索引][每个索引key字段]
|
||||
func (s *Session) ensureIndex(db string, collection string, indexKeys [][]string, bBackground bool, unique bool, sparse bool, asc bool) error {
|
||||
var indexes []mongo.IndexModel
|
||||
for _, keys := range indexKeys {
|
||||
keysDoc := bsonx.Doc{}
|
||||
for _, key := range keys {
|
||||
keysDoc = keysDoc.Append(key, bsonx.Int32(1))
|
||||
if asc {
|
||||
keysDoc = keysDoc.Append(key, bsonx.Int32(1))
|
||||
} else {
|
||||
keysDoc = keysDoc.Append(key, bsonx.Int32(-1))
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
options:= options.Index().SetUnique(unique).SetBackground(bBackground)
|
||||
options := options.Index().SetUnique(unique).SetBackground(bBackground)
|
||||
if sparse == true {
|
||||
options.SetSparse(true)
|
||||
}
|
||||
indexes = append(indexes, mongo.IndexModel{Keys: keysDoc, Options:options })
|
||||
indexes = append(indexes, mongo.IndexModel{Keys: keysDoc, Options: options})
|
||||
}
|
||||
|
||||
ctxTimeout, cancel := context.WithTimeout(context.Background(), s.maxOperatorTimeOut)
|
||||
|
||||
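With the extra `asc` flag, callers now choose the index direction per call. A sketch under assumed names (the `game` database and `mail` collection are placeholders):

```go
s := mongoModule.TakeSession()

// Compound index on {PlayerId, CreateTime}, built in background,
// non-sparse, ascending (asc=true); pass false for a descending index.
if err := s.EnsureIndex("game", "mail",
	[][]string{{"PlayerId", "CreateTime"}},
	true,  // bBackground
	false, // sparse
	true,  // asc
); err != nil {
	log.SError("EnsureIndex fail ", err.Error())
}
```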
@@ -1,4 +1,4 @@
|
||||
package mysqlmondule
|
||||
package mysqlmodule
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
package mysqlmondule
|
||||
package mysqlmodule
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
@@ -25,6 +25,7 @@ type CustomerSubscriber struct {
|
||||
customerId string
|
||||
|
||||
isStop int32 //退出标记
|
||||
topicCache []TopicData // 从消息队列中取出来的消息的缓存
|
||||
}
|
||||
|
||||
const DefaultOneBatchQuantity = 1000
|
||||
@@ -79,6 +80,7 @@ func (cs *CustomerSubscriber) trySetSubscriberBaseInfo(rpcHandler rpc.IRpcHandle
|
||||
cs.StartIndex = uint64(zeroTime.Unix() << 32)
|
||||
}
|
||||
|
||||
cs.topicCache = make([]TopicData, oneBatchQuantity)
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -102,11 +104,11 @@ func (cs *CustomerSubscriber) UnSubscribe() {
|
||||
func (cs *CustomerSubscriber) LoadLastIndex() {
|
||||
for {
|
||||
if atomic.LoadInt32(&cs.isStop) != 0 {
|
||||
log.SRelease("topic ", cs.topic, " out of subscription")
|
||||
log.Info("topic ", cs.topic, " out of subscription")
|
||||
break
|
||||
}
|
||||
|
||||
log.SRelease("customer ", cs.customerId, " start load last index ")
|
||||
log.Info("customer ", cs.customerId, " start load last index ")
|
||||
lastIndex, ret := cs.subscriber.dataPersist.LoadCustomerIndex(cs.topic, cs.customerId)
|
||||
if ret == true {
|
||||
if lastIndex > 0 {
|
||||
@@ -114,18 +116,18 @@ func (cs *CustomerSubscriber) LoadLastIndex() {
|
||||
} else {
|
||||
//否则直接使用客户端发回来的
|
||||
}
|
||||
log.SRelease("customer ", cs.customerId, " load finish,start index is ", cs.StartIndex)
|
||||
log.Info("customer ", cs.customerId, " load finish,start index is ", cs.StartIndex)
|
||||
break
|
||||
}
|
||||
|
||||
log.SRelease("customer ", cs.customerId, " load last index is fail...")
|
||||
log.Info("customer ", cs.customerId, " load last index is fail...")
|
||||
time.Sleep(5 * time.Second)
|
||||
}
|
||||
}
|
||||
|
||||
func (cs *CustomerSubscriber) SubscribeRun() {
|
||||
defer cs.subscriber.queueWait.Done()
|
||||
log.SRelease("topic ", cs.topic, " start subscription")
|
||||
log.Info("topic ", cs.topic, " start subscription")
|
||||
|
||||
//加载之前的位置
|
||||
if cs.subscribeMethod == MethodLast {
|
||||
@@ -134,7 +136,7 @@ func (cs *CustomerSubscriber) SubscribeRun() {
|
||||
|
||||
for {
|
||||
if atomic.LoadInt32(&cs.isStop) != 0 {
|
||||
log.SRelease("topic ", cs.topic, " out of subscription")
|
||||
log.Info("topic ", cs.topic, " out of subscription")
|
||||
break
|
||||
}
|
||||
|
||||
@@ -144,26 +146,26 @@ func (cs *CustomerSubscriber) SubscribeRun() {
|
||||
|
||||
//todo 检测退出
|
||||
if cs.subscribe() == false {
|
||||
log.SRelease("topic ", cs.topic, " out of subscription")
|
||||
log.Info("topic ", cs.topic, " out of subscription")
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
//删除订阅关系
|
||||
cs.subscriber.removeCustomer(cs.customerId, cs)
|
||||
log.SRelease("topic ", cs.topic, " unsubscription")
|
||||
log.Info("topic ", cs.topic, " unsubscription")
|
||||
}
|
||||
|
||||
func (cs *CustomerSubscriber) subscribe() bool {
|
||||
//先从内存中查找
|
||||
topicData, ret := cs.subscriber.queue.FindData(cs.StartIndex, cs.oneBatchQuantity)
|
||||
topicData, ret := cs.subscriber.queue.FindData(cs.StartIndex+1, cs.oneBatchQuantity, cs.topicCache[:0])
|
||||
if ret == true {
|
||||
cs.publishToCustomer(topicData)
|
||||
return true
|
||||
}
|
||||
|
||||
|
||||
//从持久化数据中来找
|
||||
topicData = cs.subscriber.dataPersist.FindTopicData(cs.topic, cs.StartIndex, int64(cs.oneBatchQuantity))
|
||||
topicData = cs.subscriber.dataPersist.FindTopicData(cs.topic, cs.StartIndex, int64(cs.oneBatchQuantity),cs.topicCache[:0])
|
||||
return cs.publishToCustomer(topicData)
|
||||
}
|
||||
|
||||
@@ -188,7 +190,7 @@ func (cs *CustomerSubscriber) publishToCustomer(topicData []TopicData) bool {
|
||||
|
||||
if len(topicData) == 0 {
|
||||
//没有任何数据待一秒吧
|
||||
time.Sleep(time.Millisecond * 100)
|
||||
time.Sleep(time.Second * 1)
|
||||
return true
|
||||
}
|
||||
|
||||
@@ -211,7 +213,7 @@ func (cs *CustomerSubscriber) publishToCustomer(topicData []TopicData) bool {
|
||||
}
|
||||
|
||||
//推送数据
|
||||
err := cs.CallNode(cs.fromNodeId, cs.callBackRpcMethod, &dbQueuePublishReq, &dbQueuePushRes)
|
||||
err := cs.CallNodeWithTimeout(4*time.Minute,cs.fromNodeId, cs.callBackRpcMethod, &dbQueuePublishReq, &dbQueuePushRes)
|
||||
if err != nil {
|
||||
time.Sleep(time.Second * 1)
|
||||
continue
|
||||
|
||||
@@ -49,13 +49,22 @@ func (mq *MemoryQueue) findData(startPos int32, startIndex uint64, limit int32)
|
||||
if findStartPos <= mq.tail {
|
||||
findEndPos = mq.tail + 1
|
||||
} else {
|
||||
findEndPos = int32(cap(mq.topicQueue))
|
||||
findEndPos = int32(len(mq.topicQueue))
|
||||
}
|
||||
|
||||
if findStartPos >= findEndPos {
|
||||
return nil, false
|
||||
}
|
||||
|
||||
// 要取的Seq 比内存中最小的数据的Seq还小,那么需要返回错误
|
||||
if mq.topicQueue[findStartPos].Seq > startIndex {
|
||||
return nil, false
|
||||
}
|
||||
|
||||
//二分查找位置
|
||||
pos := int32(algorithms.BiSearch(mq.topicQueue[findStartPos:findEndPos], startIndex, 1))
|
||||
if pos == -1 {
|
||||
return nil, true
|
||||
return nil, false
|
||||
}
|
||||
|
||||
pos += findStartPos
|
||||
@@ -69,29 +78,31 @@ func (mq *MemoryQueue) findData(startPos int32, startIndex uint64, limit int32)
|
||||
}
|
||||
|
||||
// FindData 返回参数[]TopicData 表示查找到的数据,nil表示无数据。bool表示是否不应该在内存中来查
|
||||
func (mq *MemoryQueue) FindData(startIndex uint64, limit int32) ([]TopicData, bool) {
|
||||
func (mq *MemoryQueue) FindData(startIndex uint64, limit int32, dataQueue []TopicData) ([]TopicData, bool) {
|
||||
mq.locker.RLock()
|
||||
defer mq.locker.RUnlock()
|
||||
|
||||
//队列为空时,应该从数据库查找
|
||||
if mq.head == mq.tail {
|
||||
return nil, false
|
||||
}
|
||||
|
||||
/*
|
||||
//先判断startIndex是否比第一个元素要大
|
||||
headTopic := (mq.head + 1) % int32(len(mq.topicQueue))
|
||||
//此时需要从持久化数据中取
|
||||
if startIndex+1 > mq.topicQueue[headTopic].Seq {
|
||||
return nil, false
|
||||
} else if mq.head < mq.tail {
|
||||
// 队列没有折叠
|
||||
datas,ret := mq.findData(mq.head + 1, startIndex, limit)
|
||||
if ret {
|
||||
dataQueue = append(dataQueue, datas...)
|
||||
}
|
||||
return dataQueue, ret
|
||||
} else {
|
||||
// 折叠先找后面的部分
|
||||
datas,ret := mq.findData(mq.head+1, startIndex, limit)
|
||||
if ret {
|
||||
dataQueue = append(dataQueue, datas...)
|
||||
return dataQueue, ret
|
||||
}
|
||||
*/
|
||||
|
||||
retData, ret := mq.findData(mq.head+1, startIndex, limit)
|
||||
if mq.head <= mq.tail || ret == true {
|
||||
return retData, true
|
||||
// 后面没找到,从前面开始找
|
||||
datas,ret = mq.findData(0, startIndex, limit)
|
||||
dataQueue = append(dataQueue, datas...)
|
||||
return dataQueue, ret
|
||||
}
|
||||
|
||||
//如果是正常head在后,尾在前,从数组0下标开始找到tail
|
||||
return mq.findData(0, startIndex, limit)
|
||||
}
|
||||
|
||||
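FindData treats the queue as a ring: when head > tail the live entries wrap around the end of the slice, so the lookup scans the tail segment first and only then the wrapped front segment. A toy linear-scan model of that control flow (the real code uses BiSearch and TopicData):

```go
// findInRing returns the index of the first live entry whose seq is >= want,
// or -1. With head > tail the live region is [head+1, len) followed by [0, tail].
func findInRing(queue []uint64, head, tail int, want uint64) int {
	scan := func(from, to int) int {
		for i := from; i < to; i++ {
			if queue[i] >= want {
				return i
			}
		}
		return -1
	}

	if head <= tail { // not folded: one contiguous region
		return scan(head+1, tail+1)
	}
	if pos := scan(head+1, len(queue)); pos != -1 { // back half first
		return pos
	}
	return scan(0, tail+1) // then the wrapped front half
}
```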
@@ -15,8 +15,8 @@ type QueueDataPersist interface {
|
||||
OnExit()
|
||||
OnReceiveTopicData(topic string, topicData []TopicData) //当收到推送过来的数据时
|
||||
OnPushTopicDataToCustomer(topic string, topicData []TopicData) //当推送数据到Customer时回调
|
||||
PersistTopicData(topic string, topicData []TopicData, retryCount int) ([]TopicData, bool) //持久化数据,失败则返回false,上层会重复尝试,直到成功,建议在函数中加入次数,超过次数则返回true
|
||||
FindTopicData(topic string, startIndex uint64, limit int64) []TopicData //查找数据,参数bool代表数据库查找是否成功
|
||||
PersistTopicData(topic string, topicData []TopicData, retryCount int) ([]TopicData, []TopicData, bool) //持久化数据,失败则返回false,上层会重复尝试,直到成功,建议在函数中加入次数,超过次数则返回true
|
||||
FindTopicData(topic string, startIndex uint64, limit int64, topicBuff []TopicData) []TopicData //查找数据,参数bool代表数据库查找是否成功
|
||||
LoadCustomerIndex(topic string, customerId string) (uint64, bool) //false时代表获取失败,一般是读取错误,会进行重试。如果不存在时,返回(0,true)
|
||||
GetIndex(topicData *TopicData) uint64 //通过topic数据获取进度索引号
|
||||
PersistIndex(topic string, customerId string, index uint64) //持久化进度索引号
|
||||
@@ -63,7 +63,7 @@ func (ms *MessageQueueService) ReadCfg() error {
|
||||
maxProcessTopicBacklogNum, ok := mapDBServiceCfg["MaxProcessTopicBacklogNum"]
|
||||
if ok == false {
|
||||
ms.maxProcessTopicBacklogNum = DefaultMaxTopicBacklogNum
|
||||
log.SRelease("MaxProcessTopicBacklogNum config is set to the default value of ", maxProcessTopicBacklogNum)
|
||||
log.Info("MaxProcessTopicBacklogNum config is set to the default value of ", maxProcessTopicBacklogNum)
|
||||
} else {
|
||||
ms.maxProcessTopicBacklogNum = int32(maxProcessTopicBacklogNum.(float64))
|
||||
}
|
||||
@@ -71,7 +71,7 @@ func (ms *MessageQueueService) ReadCfg() error {
|
||||
memoryQueueLen, ok := mapDBServiceCfg["MemoryQueueLen"]
|
||||
if ok == false {
|
||||
ms.memoryQueueLen = DefaultMemoryQueueLen
|
||||
log.SRelease("MemoryQueueLen config is set to the default value of ", DefaultMemoryQueueLen)
|
||||
log.Info("MemoryQueueLen config is set to the default value of ", DefaultMemoryQueueLen)
|
||||
} else {
|
||||
ms.memoryQueueLen = int32(memoryQueueLen.(float64))
|
||||
}
|
||||
|
||||
@@ -1,18 +1,49 @@
|
||||
package messagequeueservice
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/service"
|
||||
"github.com/duanhf2012/origin/sysmodule/mongodbmodule"
|
||||
"go.mongodb.org/mongo-driver/bson"
|
||||
"go.mongodb.org/mongo-driver/mongo/options"
|
||||
"sunserver/common/util"
|
||||
"time"
|
||||
)
|
||||
|
||||
const MaxDays = 180

type DataType interface {
	int | uint | int64 | uint64 | float32 | float64 | int32 | uint32 | int16 | uint16
}

func convertToNumber[DType DataType](val interface{}) (error, DType) {
	switch val.(type) {
	case int64:
		return nil, DType(val.(int64))
	case int:
		return nil, DType(val.(int))
	case uint:
		return nil, DType(val.(uint))
	case uint64:
		return nil, DType(val.(uint64))
	case float32:
		return nil, DType(val.(float32))
	case float64:
		return nil, DType(val.(float64))
	case int32:
		return nil, DType(val.(int32))
	case uint32:
		return nil, DType(val.(uint32))
	case int16:
		return nil, DType(val.(int16))
	case uint16:
		return nil, DType(val.(uint16))
	}

	return errors.New("unsupported type"), 0
}
|
||||
|
||||
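A usage sketch for convertToNumber, assuming it sits in this package with `fmt` imported; the values are illustrative. It normalizes whichever concrete numeric type the BSON decoder happened to produce:

```go
func exampleConvert() {
	var raw interface{} = int32(42) // e.g. an _id decoded by the driver as int32

	err, seq := convertToNumber[uint64](raw)
	if err != nil {
		fmt.Println("unsupported:", err)
		return
	}
	fmt.Println(seq) // 42

	// Anything outside the DataType constraint yields an error and zero value.
	err, _ = convertToNumber[uint64]("not a number")
	fmt.Println(err) // unsupported type
}
```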
type MongoPersist struct {
|
||||
service.Module
|
||||
mongo mongodbmodule.MongoModule
|
||||
@@ -20,8 +51,6 @@ type MongoPersist struct {
|
||||
url string //连接url
|
||||
dbName string //数据库名称
|
||||
retryCount int //落地数据库重试次数
|
||||
|
||||
topic []TopicData //用于临时缓存
|
||||
}
|
||||
|
||||
const CustomerCollectName = "SysCustomer"
|
||||
@@ -48,7 +77,7 @@ func (mp *MongoPersist) OnInit() error {
|
||||
keys = append(keys, "Customer", "Topic")
|
||||
IndexKey = append(IndexKey, keys)
|
||||
s := mp.mongo.TakeSession()
|
||||
if err := s.EnsureUniqueIndex(mp.dbName, CustomerCollectName, IndexKey, true, true); err != nil {
|
||||
if err := s.EnsureUniqueIndex(mp.dbName, CustomerCollectName, IndexKey, true, true,true); err != nil {
|
||||
log.SError("EnsureUniqueIndex is fail ", err.Error())
|
||||
return err
|
||||
}
|
||||
@@ -85,14 +114,6 @@ func (mp *MongoPersist) ReadCfg() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (mp *MongoPersist) getTopicBuff(limit int) []TopicData {
|
||||
if cap(mp.topic) < limit {
|
||||
mp.topic = make([]TopicData, limit)
|
||||
}
|
||||
|
||||
return mp.topic[:0]
|
||||
}
|
||||
|
||||
func (mp *MongoPersist) OnExit() {
|
||||
}
|
||||
|
||||
@@ -123,7 +144,6 @@ func (mp *MongoPersist) OnReceiveTopicData(topic string, topicData []TopicData)
|
||||
|
||||
// OnPushTopicDataToCustomer 当推送数据到Customer时回调
|
||||
func (mp *MongoPersist) OnPushTopicDataToCustomer(topic string, topicData []TopicData) {
|
||||
|
||||
}
|
||||
|
||||
// PersistTopicData 持久化数据
|
||||
@@ -142,20 +162,25 @@ func (mp *MongoPersist) persistTopicData(collectionName string, topicData []Topi
|
||||
|
||||
_, err := s.Collection(mp.dbName, collectionName).InsertMany(ctx, documents)
|
||||
if err != nil {
|
||||
log.SError("PersistTopicData InsertMany fail,collect name is ", collectionName)
|
||||
log.SError("PersistTopicData InsertMany fail,collect name is ", collectionName," error:",err.Error())
|
||||
|
||||
//失败最大重试数量
|
||||
return retryCount >= mp.retryCount
|
||||
}
|
||||
|
||||
//log.SRelease("+++++++++====", time.Now().UnixNano())
|
||||
return true
|
||||
}
|
||||
|
||||
func (mp *MongoPersist) IsSameDay(timestamp1 int64,timestamp2 int64) bool{
|
||||
t1 := time.Unix(timestamp1, 0)
|
||||
t2 := time.Unix(timestamp2, 0)
|
||||
return t1.Year() == t2.Year() && t1.Month() == t2.Month()&&t1.Day() == t2.Day()
|
||||
}
|
||||
|
||||
// PersistTopicData 持久化数据
|
||||
func (mp *MongoPersist) PersistTopicData(topic string, topicData []TopicData, retryCount int) ([]TopicData, bool) {
|
||||
func (mp *MongoPersist) PersistTopicData(topic string, topicData []TopicData, retryCount int) ([]TopicData, []TopicData, bool) {
|
||||
if len(topicData) == 0 {
|
||||
return nil, true
|
||||
return nil, nil,true
|
||||
}
|
||||
|
||||
preDate := topicData[0].Seq >> 32
|
||||
@@ -163,7 +188,7 @@ func (mp *MongoPersist) PersistTopicData(topic string, topicData []TopicData, re
|
||||
for findPos = 1; findPos < len(topicData); findPos++ {
|
||||
newDate := topicData[findPos].Seq >> 32
|
||||
//说明换天了
|
||||
if preDate != newDate {
|
||||
if mp.IsSameDay(int64(preDate),int64(newDate)) == false {
|
||||
break
|
||||
}
|
||||
}
|
||||
@@ -172,15 +197,15 @@ func (mp *MongoPersist) PersistTopicData(topic string, topicData []TopicData, re
|
||||
ret := mp.persistTopicData(collectName, topicData[:findPos], retryCount)
|
||||
//如果失败,下次重试
|
||||
if ret == false {
|
||||
return nil, false
|
||||
return nil, nil, false
|
||||
}
|
||||
|
||||
//如果成功
|
||||
return topicData[findPos:len(topicData)], true
|
||||
return topicData[findPos:len(topicData)], topicData[0:findPos], true
|
||||
}
|
||||
|
||||
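PersistTopicData now cuts a batch at the first calendar-day boundary (the upper 32 bits of Seq hold a unix timestamp) so each day lands in its own collection, and it reports both the unsaved remainder and the saved prefix. A sketch of just the splitting step, assuming it lives next to TopicData; the helper name is illustrative:

```go
// splitAtDayBoundary returns the leading entries that share the batch's
// starting day and the remainder that belongs to later days.
func splitAtDayBoundary(batch []TopicData) (sameDay, rest []TopicData) {
	if len(batch) == 0 {
		return nil, nil
	}
	first := int64(batch[0].Seq >> 32)
	for i := 1; i < len(batch); i++ {
		ts := int64(batch[i].Seq >> 32)
		t1, t2 := time.Unix(first, 0), time.Unix(ts, 0)
		if t1.Year() != t2.Year() || t1.YearDay() != t2.YearDay() {
			return batch[:i], batch[i:]
		}
	}
	return batch, nil
}
```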
// FindTopicData 查找数据
|
||||
func (mp *MongoPersist) findTopicData(topic string, startIndex uint64, limit int64) ([]TopicData, bool) {
|
||||
func (mp *MongoPersist) findTopicData(topic string, startIndex uint64, limit int64,topicBuff []TopicData) ([]TopicData, bool) {
|
||||
s := mp.mongo.TakeSession()
|
||||
|
||||
|
||||
@@ -222,7 +247,6 @@ func (mp *MongoPersist) findTopicData(topic string, startIndex uint64, limit int
|
||||
}
|
||||
|
||||
//序列化返回
|
||||
topicBuff := mp.getTopicBuff(int(limit))
|
||||
for i := 0; i < len(res); i++ {
|
||||
rawData, errM := bson.Marshal(res[i])
|
||||
if errM != nil {
|
||||
@@ -257,7 +281,7 @@ func (mp *MongoPersist) getCollectCount(topic string,today string) (int64 ,error
|
||||
}
|
||||
|
||||
// FindTopicData 查找数据
|
||||
func (mp *MongoPersist) FindTopicData(topic string, startIndex uint64, limit int64) []TopicData {
|
||||
func (mp *MongoPersist) FindTopicData(topic string, startIndex uint64, limit int64,topicBuff []TopicData) []TopicData {
|
||||
//某表找不到,一直往前找,找到当前置为止
|
||||
for days := 1; days <= MaxDays; days++ {
|
||||
//是否可以跳天
|
||||
@@ -281,7 +305,7 @@ func (mp *MongoPersist) FindTopicData(topic string, startIndex uint64, limit int
|
||||
}
|
||||
|
||||
//从startIndex开始一直往后查
|
||||
topicData, isSucc := mp.findTopicData(topic, startIndex, limit)
|
||||
topicData, isSucc := mp.findTopicData(topic, startIndex, limit,topicBuff)
|
||||
//有数据或者数据库出错时返回,返回后,会进行下一轮的查询遍历
|
||||
if len(topicData) > 0 || isSucc == false {
|
||||
return topicData
|
||||
@@ -370,7 +394,7 @@ func (mp *MongoPersist) GetIndex(topicData *TopicData) uint64 {
|
||||
|
||||
for _, e := range document {
|
||||
if e.Key == "_id" {
|
||||
errC, seq := util.ConvertToNumber[uint64](e.Value)
|
||||
errC, seq := convertToNumber[uint64](e.Value)
|
||||
if errC != nil {
|
||||
log.Error("value is error:%s,%+v, ", errC.Error(), e.Value)
|
||||
}
|
||||
@@ -394,8 +418,7 @@ func (mp *MongoPersist) PersistIndex(topic string, customerId string, index uint
|
||||
|
||||
ctx, cancel := s.GetDefaultContext()
|
||||
defer cancel()
|
||||
ret, err := s.Collection(mp.dbName, CustomerCollectName).UpdateOne(ctx, condition, updata, UpdateOptionsOpts...)
|
||||
fmt.Println(ret)
|
||||
_, err := s.Collection(mp.dbName, CustomerCollectName).UpdateOne(ctx, condition, updata, UpdateOptionsOpts...)
|
||||
if err != nil {
|
||||
log.SError("PersistIndex fail :", err.Error())
|
||||
}
|
||||
|
||||
@@ -27,7 +27,7 @@ func (ss *Subscriber) PushTopicDataToQueue(topic string, topics []TopicData) {
|
||||
}
|
||||
}
|
||||
|
||||
func (ss *Subscriber) PersistTopicData(topic string, topics []TopicData, retryCount int) ([]TopicData, bool) {
|
||||
func (ss *Subscriber) PersistTopicData(topic string, topics []TopicData, retryCount int) ([]TopicData, []TopicData, bool) {
|
||||
return ss.dataPersist.PersistTopicData(topic, topics, retryCount)
|
||||
}
|
||||
|
||||
@@ -56,9 +56,9 @@ func (ss *Subscriber) TopicSubscribe(rpcHandler rpc.IRpcHandler, subScribeType r
|
||||
}
|
||||
|
||||
if ok == true {
|
||||
log.SRelease("repeat subscription for customer ", customerId)
|
||||
log.Info("repeat subscription for customer ", customerId)
|
||||
} else {
|
||||
log.SRelease("subscription for customer ", customerId)
|
||||
log.Info("subscription for customer ", customerId)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@@ -93,7 +93,7 @@ func (tr *TopicRoom) Stop() {
|
||||
func (tr *TopicRoom) topicRoomRun() {
|
||||
defer tr.queueWait.Done()
|
||||
|
||||
log.SRelease("topic room ", tr.topic, " is running..")
|
||||
log.Info("topic room ", tr.topic, " is running..")
|
||||
for {
|
||||
if atomic.LoadInt32(&tr.isStop) != 0 {
|
||||
break
|
||||
@@ -113,25 +113,28 @@ func (tr *TopicRoom) topicRoomRun() {
|
||||
}
|
||||
|
||||
//如果落地失败,最大重试maxTryPersistNum次数
|
||||
var ret bool
|
||||
for j := 0; j < maxTryPersistNum; {
|
||||
for retryCount := 0; retryCount < maxTryPersistNum; {
|
||||
//持久化处理
|
||||
stagingBuff, ret = tr.PersistTopicData(tr.topic, stagingBuff, j+1)
|
||||
//如果存档成功,并且有后续批次,则继续存档
|
||||
if ret == true && len(stagingBuff) > 0 {
|
||||
//二次存档不计次数
|
||||
continue
|
||||
}
|
||||
|
||||
//计数增加一次,并且等待100ms,继续重试
|
||||
j += 1
|
||||
if ret == false {
|
||||
stagingBuff, savedBuff, ret := tr.PersistTopicData(tr.topic, stagingBuff, retryCount+1)
|
||||
|
||||
if ret == true {
|
||||
// 1. 把成功存储的数据放入内存中
|
||||
if len(savedBuff) > 0 {
|
||||
tr.PushTopicDataToQueue(tr.topic, savedBuff)
|
||||
}
|
||||
|
||||
// 2. 如果存档成功,并且有后续批次,则继续存档
|
||||
if ret == true && len(stagingBuff) > 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
// 3. 成功了,跳出
|
||||
break
|
||||
} else {
|
||||
//计数增加一次,并且等待100ms,继续重试
|
||||
retryCount++
|
||||
time.Sleep(time.Millisecond * 100)
|
||||
continue
|
||||
}
|
||||
|
||||
tr.PushTopicDataToQueue(tr.topic, stagingBuff)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
@@ -142,5 +145,5 @@ func (tr *TopicRoom) topicRoomRun() {
|
||||
}
|
||||
tr.customerLocker.Unlock()
|
||||
|
||||
log.SRelease("topic room ", tr.topic, " is stop")
|
||||
log.Info("topic room ", tr.topic, " is stop")
|
||||
}
|
||||
|
||||
@@ -6,9 +6,9 @@ import (
|
||||
"github.com/duanhf2012/origin/rpc"
|
||||
"github.com/duanhf2012/origin/service"
|
||||
"github.com/duanhf2012/origin/sysmodule/mongodbmodule"
|
||||
"github.com/duanhf2012/origin/util/coroutine"
|
||||
"go.mongodb.org/mongo-driver/bson"
|
||||
"go.mongodb.org/mongo-driver/mongo/options"
|
||||
"runtime"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
@@ -18,10 +18,11 @@ const batchRemoveNum = 128 //一切删除的最大数量
|
||||
|
||||
// RankDataDB 排行表数据
|
||||
type RankDataDB struct {
|
||||
Id uint64 `bson:"_id,omitempty"`
|
||||
RefreshTime int64 `bson:"RefreshTime,omitempty"`
|
||||
SortData []int64 `bson:"SortData,omitempty"`
|
||||
Data []byte `bson:"Data,omitempty"`
|
||||
Id uint64 `bson:"_id"`
|
||||
RefreshTime int64 `bson:"RefreshTime"`
|
||||
SortData []int64 `bson:"SortData"`
|
||||
Data []byte `bson:"Data"`
|
||||
ExData []int64 `bson:"ExData"`
|
||||
}
|
||||
|
||||
// MongoPersist持久化Module
|
||||
@@ -70,7 +71,9 @@ func (mp *MongoPersist) OnInit() error {
|
||||
}
|
||||
|
||||
//开启协程
|
||||
coroutine.GoRecover(mp.persistCoroutine,-1)
|
||||
mp.waitGroup.Add(1)
|
||||
go mp.persistCoroutine()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -139,13 +142,13 @@ func (mp *MongoPersist) OnSetupRank(manual bool,rankSkip *RankSkip) error{
|
||||
return nil
|
||||
}
|
||||
|
||||
log.SRelease("start load rank ",rankSkip.GetRankName()," from mongodb.")
|
||||
log.Info("start load rank ",rankSkip.GetRankName()," from mongodb.")
|
||||
err := mp.loadFromDB(rankSkip.GetRankID(),rankSkip.GetRankName())
|
||||
if err != nil {
|
||||
log.SError("load from db is fail :%s",err.Error())
|
||||
return err
|
||||
}
|
||||
log.SRelease("finish load rank ",rankSkip.GetRankName()," from mongodb.")
|
||||
log.Info("finish load rank ",rankSkip.GetRankName()," from mongodb.")
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -186,6 +189,9 @@ func (mp *MongoPersist) loadFromDB(rankId uint64,rankCollectName string) error{
|
||||
rankData.Data = rankDataDB.Data
|
||||
rankData.Key = rankDataDB.Id
|
||||
rankData.SortData = rankDataDB.SortData
|
||||
for _,eData := range rankDataDB.ExData{
|
||||
rankData.ExData = append(rankData.ExData,&rpc.ExtendIncData{InitValue:eData})
|
||||
}
|
||||
|
||||
//更新到排行榜
|
||||
rankSkip.UpsetRank(&rankData,rankDataDB.RefreshTime,true)
|
||||
@@ -256,9 +262,8 @@ func (mp *MongoPersist) JugeTimeoutSave() bool{
|
||||
}
|
||||
|
||||
func (mp *MongoPersist) persistCoroutine(){
|
||||
mp.waitGroup.Add(1)
|
||||
defer mp.waitGroup.Done()
|
||||
for atomic.LoadInt32(&mp.stop)==0 || mp.hasPersistData(){
|
||||
for atomic.LoadInt32(&mp.stop)==0 {
|
||||
//间隔时间sleep
|
||||
time.Sleep(time.Second*1)
|
||||
|
||||
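The change above moves waitGroup.Add(1) out of the goroutine and into OnInit, so a shutdown cannot call Wait() before the coroutine has registered itself. A small self-contained sketch of that Add-before-go pattern combined with an atomic stop flag (the worker type and timings are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

type worker struct {
	stop      int32
	waitGroup sync.WaitGroup
}

// start registers with the WaitGroup before the goroutine runs, as the patch
// above does, so a Wait() racing with startup cannot return too early.
func (w *worker) start() {
	w.waitGroup.Add(1)
	go func() {
		defer w.waitGroup.Done()
		for atomic.LoadInt32(&w.stop) == 0 {
			time.Sleep(10 * time.Millisecond) // stand-in for one persistence pass
		}
	}()
}

// shutdown flips the stop flag and waits for the worker to drain.
func (w *worker) shutdown() {
	atomic.StoreInt32(&w.stop, 1)
	w.waitGroup.Wait()
}

func main() {
	w := &worker{}
	w.start()
	w.shutdown()
	fmt.Println("worker drained")
}
```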
@@ -287,6 +292,15 @@ func (mp *MongoPersist) hasPersistData() bool{
|
||||
}
|
||||
|
||||
func (mp *MongoPersist) saveToDB(){
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
buf := make([]byte, 4096)
|
||||
l := runtime.Stack(buf, false)
|
||||
errString := fmt.Sprint(r)
|
||||
log.Dump(string(buf[:l]),log.String("error",errString))
|
||||
}
|
||||
}()
|
||||
|
||||
//1.copy数据
|
||||
mp.Lock()
|
||||
mapRemoveRankData := mp.mapRemoveRankData
|
||||
@@ -343,7 +357,7 @@ func (mp *MongoPersist) removeRankData(rankId uint64,keys []uint64) bool {
|
||||
|
||||
func (mp *MongoPersist) upsertToDB(collectName string,rankData *RankData) error{
|
||||
condition := bson.D{{"_id", rankData.Key}}
|
||||
upsert := bson.M{"_id":rankData.Key,"RefreshTime": rankData.refreshTimestamp, "SortData": rankData.SortData, "Data": rankData.Data}
|
||||
upsert := bson.M{"_id":rankData.Key,"RefreshTime": rankData.RefreshTimestamp, "SortData": rankData.SortData, "Data": rankData.Data,"ExData":rankData.ExData}
|
||||
update := bson.M{"$set": upsert}
|
||||
|
||||
s := mp.mongo.TakeSession()
|
||||
|
||||
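The upsert above builds a $set document keyed by _id. Below is a hedged sketch of the same shape written directly against the official mongo-driver, bypassing origin's TakeSession wrapper; the collection handle and field values are assumptions for illustration only:

```go
package persist

import (
	"context"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// upsertRankRow writes one rank row with upsert semantics, mirroring the
// filter/$set shape used in upsertToDB above.
func upsertRankRow(ctx context.Context, coll *mongo.Collection, key uint64,
	refresh int64, sortData []int64, data []byte, exData []int64) error {
	filter := bson.D{{"_id", key}}
	update := bson.M{"$set": bson.M{
		"_id":         key,
		"RefreshTime": refresh,
		"SortData":    sortData,
		"Data":        data,
		"ExData":      exData,
	}}
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()
	// SetUpsert(true) inserts the row when the _id does not exist yet.
	_, err := coll.UpdateOne(ctx, filter, update, options.Update().SetUpsert(true))
	return err
}
```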
@@ -14,8 +14,12 @@ var RankDataPool = sync.NewPoolEx(make(chan sync.IPoolData, 10240), func() sync.
|
||||
})
|
||||
|
||||
type RankData struct {
|
||||
*rpc.RankData
|
||||
refreshTimestamp int64 //刷新时间
|
||||
Key uint64
|
||||
SortData []int64
|
||||
Data []byte
|
||||
ExData []int64
|
||||
|
||||
RefreshTimestamp int64 //刷新时间
|
||||
//bRelease bool
|
||||
ref bool
|
||||
compareFunc func(other skip.Comparator) int
|
||||
@@ -27,8 +31,15 @@ func NewRankData(isDec bool, data *rpc.RankData,refreshTimestamp int64) *RankDat
|
||||
if isDec {
|
||||
ret.compareFunc = ret.desCompare
|
||||
}
|
||||
ret.RankData = data
|
||||
ret.refreshTimestamp = refreshTimestamp
|
||||
ret.Key = data.Key
|
||||
ret.SortData = data.SortData
|
||||
ret.Data = data.Data
|
||||
|
||||
for _,d := range data.ExData{
|
||||
ret.ExData = append(ret.ExData,d.InitValue+d.IncreaseValue)
|
||||
}
|
||||
|
||||
ret.RefreshTimestamp = refreshTimestamp
|
||||
|
||||
return ret
|
||||
}
|
||||
|
||||
@@ -2,13 +2,15 @@ package rankservice
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"time"
|
||||
|
||||
"github.com/duanhf2012/origin/log"
|
||||
"github.com/duanhf2012/origin/rpc"
|
||||
"github.com/duanhf2012/origin/service"
|
||||
"time"
|
||||
)
|
||||
|
||||
const PreMapRankSkipLen = 10
|
||||
|
||||
type RankService struct {
|
||||
service.Service
|
||||
|
||||
@@ -61,11 +63,11 @@ func (rs *RankService) RPC_ManualAddRankSkip(addInfo *rpc.AddRankList, addResult
|
||||
continue
|
||||
}
|
||||
|
||||
newSkip := NewRankSkip(addRankListData.RankId,addRankListData.RankName,addRankListData.IsDec, transformLevel(addRankListData.SkipListLevel), addRankListData.MaxRank,time.Duration(addRankListData.ExpireMs)*time.Millisecond)
|
||||
newSkip := NewRankSkip(addRankListData.RankId, addRankListData.RankName, addRankListData.IsDec, transformLevel(addRankListData.SkipListLevel), addRankListData.MaxRank, time.Duration(addRankListData.ExpireMs)*time.Millisecond)
|
||||
newSkip.SetupRankModule(rs.rankModule)
|
||||
|
||||
rs.mapRankSkip[addRankListData.RankId] = newSkip
|
||||
rs.rankModule.OnSetupRank(true,newSkip)
|
||||
rs.rankModule.OnSetupRank(true, newSkip)
|
||||
}
|
||||
|
||||
addResult.AddCount = 1
|
||||
@@ -82,6 +84,52 @@ func (rs *RankService) RPC_UpsetRank(upsetInfo *rpc.UpsetRankData, upsetResult *
|
||||
addCount, updateCount := rankSkip.UpsetRankList(upsetInfo.RankDataList)
|
||||
upsetResult.AddCount = addCount
|
||||
upsetResult.ModifyCount = updateCount
|
||||
|
||||
if upsetInfo.FindNewRank == true {
|
||||
for _, rdata := range upsetInfo.RankDataList {
|
||||
_, rank := rankSkip.GetRankNodeData(rdata.Key)
|
||||
upsetResult.NewRank = append(upsetResult.NewRank, &rpc.RankInfo{Key: rdata.Key, Rank: rank})
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// RPC_IncreaseRankData 增量更新排行扩展数据
|
||||
func (rs *RankService) RPC_IncreaseRankData(changeRankData *rpc.IncreaseRankData, changeRankDataRet *rpc.IncreaseRankDataRet) error {
|
||||
rankSkip, ok := rs.mapRankSkip[changeRankData.RankId]
|
||||
if ok == false || rankSkip == nil {
|
||||
return fmt.Errorf("RPC_ChangeRankData[", changeRankData.RankId, "] no this rank id")
|
||||
}
|
||||
|
||||
ret := rankSkip.ChangeExtendData(changeRankData)
|
||||
if ret == false {
|
||||
return fmt.Errorf("RPC_ChangeRankData[", changeRankData.RankId, "] no this key ", changeRankData.Key)
|
||||
}
|
||||
|
||||
if changeRankData.ReturnRankData == true {
|
||||
rankData, rank := rankSkip.GetRankNodeData(changeRankData.Key)
|
||||
changeRankDataRet.PosData = &rpc.RankPosData{}
|
||||
changeRankDataRet.PosData.Rank = rank
|
||||
|
||||
changeRankDataRet.PosData.Key = rankData.Key
|
||||
changeRankDataRet.PosData.Data = rankData.Data
|
||||
changeRankDataRet.PosData.SortData = rankData.SortData
|
||||
changeRankDataRet.PosData.ExtendData = rankData.ExData
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// RPC_UpdateRankData 更新排行数据
|
||||
func (rs *RankService) RPC_UpdateRankData(updateRankData *rpc.UpdateRankData, updateRankDataRet *rpc.UpdateRankDataRet) error {
|
||||
rankSkip, ok := rs.mapRankSkip[updateRankData.RankId]
|
||||
if ok == false || rankSkip == nil {
|
||||
updateRankDataRet.Ret = false
|
||||
return nil
|
||||
}
|
||||
|
||||
updateRankDataRet.Ret = rankSkip.UpdateRankData(updateRankData)
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -114,6 +162,7 @@ func (rs *RankService) RPC_FindRankDataByKey(findInfo *rpc.FindRankDataByKey, fi
|
||||
findResult.Key = findRankData.Key
|
||||
findResult.SortData = findRankData.SortData
|
||||
findResult.Rank = rank
|
||||
findResult.ExtendData = findRankData.ExData
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@@ -131,6 +180,7 @@ func (rs *RankService) RPC_FindRankDataByRank(findInfo *rpc.FindRankDataByRank,
|
||||
findResult.Key = findRankData.Key
|
||||
findResult.SortData = findRankData.SortData
|
||||
findResult.Rank = rankPos
|
||||
findResult.ExtendData = findRankData.ExData
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@@ -139,7 +189,7 @@ func (rs *RankService) RPC_FindRankDataByRank(findInfo *rpc.FindRankDataByRank,
|
||||
func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, findResult *rpc.RankDataList) error {
|
||||
rankObj, ok := rs.mapRankSkip[findInfo.RankId]
|
||||
if ok == false || rankObj == nil {
|
||||
err := fmt.Errorf("not config rank %d",findInfo.RankId)
|
||||
err := fmt.Errorf("not config rank %d", findInfo.RankId)
|
||||
log.SError(err.Error())
|
||||
return err
|
||||
}
|
||||
@@ -151,7 +201,7 @@ func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, find
|
||||
}
|
||||
|
||||
//查询附带的key
|
||||
if findInfo.Key!= 0 {
|
||||
if findInfo.Key != 0 {
|
||||
findRankData, rank := rankObj.GetRankNodeData(findInfo.Key)
|
||||
if findRankData != nil {
|
||||
findResult.KeyRank = &rpc.RankPosData{}
|
||||
@@ -159,6 +209,7 @@ func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, find
|
||||
findResult.KeyRank.Key = findRankData.Key
|
||||
findResult.KeyRank.SortData = findRankData.SortData
|
||||
findResult.KeyRank.Rank = rank
|
||||
findResult.KeyRank.ExtendData = findRankData.ExData
|
||||
}
|
||||
}
|
||||
|
||||
@@ -193,12 +244,12 @@ func (rs *RankService) dealCfg() error {
|
||||
}
|
||||
|
||||
rankId, okId := mapCfg["RankID"].(float64)
|
||||
if okId == false || uint64(rankId)==0 {
|
||||
if okId == false || uint64(rankId) == 0 {
|
||||
return fmt.Errorf("RankService SortCfg data must has RankID[number]")
|
||||
}
|
||||
|
||||
rankName, okId := mapCfg["RankName"].(string)
|
||||
if okId == false || len(rankName)==0 {
|
||||
if okId == false || len(rankName) == 0 {
|
||||
return fmt.Errorf("RankService SortCfg data must has RankName[string]")
|
||||
}
|
||||
|
||||
@@ -207,11 +258,10 @@ func (rs *RankService) dealCfg() error {
|
||||
maxRank, _ := mapCfg["MaxRank"].(float64)
|
||||
expireMs, _ := mapCfg["ExpireMs"].(float64)
|
||||
|
||||
|
||||
newSkip := NewRankSkip(uint64(rankId),rankName,isDec, transformLevel(int32(level)), uint64(maxRank),time.Duration(expireMs)*time.Millisecond)
|
||||
newSkip := NewRankSkip(uint64(rankId), rankName, isDec, transformLevel(int32(level)), uint64(maxRank), time.Duration(expireMs)*time.Millisecond)
|
||||
newSkip.SetupRankModule(rs.rankModule)
|
||||
rs.mapRankSkip[uint64(rankId)] = newSkip
|
||||
err := rs.rankModule.OnSetupRank(false,newSkip)
|
||||
err := rs.rankModule.OnSetupRank(false, newSkip)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -219,5 +269,3 @@ func (rs *RankService) dealCfg() error {
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
|
||||
@@ -2,20 +2,21 @@ package rankservice
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"time"
|
||||
|
||||
"github.com/duanhf2012/origin/rpc"
|
||||
"github.com/duanhf2012/origin/util/algorithms/skip"
|
||||
"time"
|
||||
)
|
||||
|
||||
type RankSkip struct {
|
||||
rankId uint64 //排行榜ID
|
||||
rankName string //排行榜名称
|
||||
isDes bool //是否为降序 true:降序 false:升序
|
||||
skipList *skip.SkipList //跳表
|
||||
mapRankData map[uint64]*RankData //排行数据map
|
||||
maxLen uint64 //排行数据长度
|
||||
expireMs time.Duration //有效时间
|
||||
rankModule IRankModule
|
||||
rankId uint64 //排行榜ID
|
||||
rankName string //排行榜名称
|
||||
isDes bool //是否为降序 true:降序 false:升序
|
||||
skipList *skip.SkipList //跳表
|
||||
mapRankData map[uint64]*RankData //排行数据map
|
||||
maxLen uint64 //排行数据长度
|
||||
expireMs time.Duration //有效时间
|
||||
rankModule IRankModule
|
||||
rankDataExpire rankDataHeap
|
||||
}
|
||||
|
||||
@@ -28,7 +29,7 @@ const (
|
||||
)
|
||||
|
||||
// NewRankSkip 创建排行榜
|
||||
func NewRankSkip(rankId uint64,rankName string,isDes bool, level interface{}, maxLen uint64,expireMs time.Duration) *RankSkip {
|
||||
func NewRankSkip(rankId uint64, rankName string, isDes bool, level interface{}, maxLen uint64, expireMs time.Duration) *RankSkip {
|
||||
rs := &RankSkip{}
|
||||
|
||||
rs.rankId = rankId
|
||||
@@ -38,17 +39,17 @@ func NewRankSkip(rankId uint64,rankName string,isDes bool, level interface{}, ma
|
||||
rs.mapRankData = make(map[uint64]*RankData, 10240)
|
||||
rs.maxLen = maxLen
|
||||
rs.expireMs = expireMs
|
||||
rs.rankDataExpire.Init(int32(maxLen),expireMs)
|
||||
rs.rankDataExpire.Init(int32(maxLen), expireMs)
|
||||
|
||||
return rs
|
||||
}
|
||||
|
||||
func (rs *RankSkip) pickExpireKey(){
|
||||
func (rs *RankSkip) pickExpireKey() {
|
||||
if rs.expireMs == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
for i:=1;i<=MaxPickExpireNum;i++{
|
||||
for i := 1; i <= MaxPickExpireNum; i++ {
|
||||
key := rs.rankDataExpire.PopExpireKey()
|
||||
if key == 0 {
|
||||
return
|
||||
@@ -79,46 +80,211 @@ func (rs *RankSkip) GetRankLen() uint64 {
|
||||
|
||||
func (rs *RankSkip) UpsetRankList(upsetRankData []*rpc.RankData) (addCount int32, modifyCount int32) {
|
||||
for _, upsetData := range upsetRankData {
|
||||
changeType := rs.UpsetRank(upsetData,time.Now().UnixNano(),false)
|
||||
if changeType == RankDataAdd{
|
||||
addCount+=1
|
||||
} else if changeType == RankDataUpdate{
|
||||
modifyCount+=1
|
||||
}
|
||||
changeType := rs.UpsetRank(upsetData, time.Now().UnixNano(), false)
|
||||
if changeType == RankDataAdd {
|
||||
addCount += 1
|
||||
} else if changeType == RankDataUpdate {
|
||||
modifyCount += 1
|
||||
}
|
||||
}
|
||||
|
||||
rs.pickExpireKey()
|
||||
return
|
||||
}
|
||||
|
||||
func (rs *RankSkip) InsertDataOnNonExistent(changeRankData *rpc.IncreaseRankData) bool {
|
||||
if changeRankData.InsertDataOnNonExistent == false {
|
||||
return false
|
||||
}
|
||||
|
||||
var upsetData rpc.RankData
|
||||
upsetData.Key = changeRankData.Key
|
||||
upsetData.Data = changeRankData.InitData
|
||||
upsetData.SortData = changeRankData.InitSortData
|
||||
|
||||
for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(upsetData.SortData); i++ {
|
||||
upsetData.SortData[i] += changeRankData.IncreaseSortData[i]
|
||||
}
|
||||
|
||||
for _, val := range changeRankData.Extend {
|
||||
upsetData.ExData = append(upsetData.ExData, &rpc.ExtendIncData{InitValue: val.InitValue, IncreaseValue: val.IncreaseValue})
|
||||
}
|
||||
|
||||
//强制设计指定值
|
||||
for _, setData := range changeRankData.SetSortAndExtendData {
|
||||
if setData.IsSortData == true {
|
||||
if int(setData.Pos) >= len(upsetData.SortData) {
|
||||
return false
|
||||
}
|
||||
upsetData.SortData[setData.Pos] = setData.Data
|
||||
} else {
|
||||
if int(setData.Pos) < len(upsetData.ExData) {
|
||||
upsetData.ExData[setData.Pos].IncreaseValue = 0
|
||||
upsetData.ExData[setData.Pos].InitValue = setData.Data
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
refreshTimestamp := time.Now().UnixNano()
|
||||
newRankData := NewRankData(rs.isDes, &upsetData, refreshTimestamp)
|
||||
rs.skipList.Insert(newRankData)
|
||||
rs.mapRankData[upsetData.Key] = newRankData
|
||||
|
||||
//刷新有效期和存档数据
|
||||
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
|
||||
rs.rankModule.OnChangeRankData(rs, newRankData)
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
func (rs *RankSkip) UpdateRankData(updateRankData *rpc.UpdateRankData) bool {
|
||||
rankNode, ok := rs.mapRankData[updateRankData.Key]
|
||||
if ok == false {
|
||||
return false
|
||||
}
|
||||
|
||||
rankNode.Data = updateRankData.Data
|
||||
rs.rankDataExpire.PushOrRefreshExpireKey(updateRankData.Key, time.Now().UnixNano())
|
||||
rs.rankModule.OnChangeRankData(rs, rankNode)
|
||||
return true
|
||||
}
|
||||
|
||||
func (rs *RankSkip) ChangeExtendData(changeRankData *rpc.IncreaseRankData) bool {
|
||||
rankNode, ok := rs.mapRankData[changeRankData.Key]
|
||||
if ok == false {
|
||||
return rs.InsertDataOnNonExistent(changeRankData)
|
||||
}
|
||||
|
||||
//先判断是不是有修改
|
||||
bChange := false
|
||||
for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(rankNode.SortData); i++ {
|
||||
if changeRankData.IncreaseSortData[i] != 0 {
|
||||
bChange = true
|
||||
}
|
||||
}
|
||||
|
||||
if bChange == false {
|
||||
for _, setSortAndExtendData := range changeRankData.SetSortAndExtendData {
|
||||
if setSortAndExtendData.IsSortData == true {
|
||||
bChange = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
//如果有改变,删除原有的数据,重新刷新到跳表
|
||||
rankData := rankNode
|
||||
refreshTimestamp := time.Now().UnixNano()
|
||||
if bChange == true {
|
||||
//copy数据
|
||||
var upsetData rpc.RankData
|
||||
upsetData.Key = rankNode.Key
|
||||
upsetData.Data = rankNode.Data
|
||||
upsetData.SortData = append(upsetData.SortData, rankNode.SortData...)
|
||||
|
||||
for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(upsetData.SortData); i++ {
|
||||
if changeRankData.IncreaseSortData[i] != 0 {
|
||||
upsetData.SortData[i] += changeRankData.IncreaseSortData[i]
|
||||
}
|
||||
}
|
||||
|
||||
for _, setData := range changeRankData.SetSortAndExtendData {
|
||||
if setData.IsSortData == true {
|
||||
if int(setData.Pos) < len(upsetData.SortData) {
|
||||
upsetData.SortData[setData.Pos] = setData.Data
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
rankData = NewRankData(rs.isDes, &upsetData, refreshTimestamp)
|
||||
rankData.ExData = append(rankData.ExData, rankNode.ExData...)
|
||||
|
||||
//从排行榜中删除
|
||||
rs.skipList.Delete(rankNode)
|
||||
ReleaseRankData(rankNode)
|
||||
|
||||
rs.skipList.Insert(rankData)
|
||||
rs.mapRankData[upsetData.Key] = rankData
|
||||
}
|
||||
|
||||
//增长扩展参数
|
||||
for i := 0; i < len(changeRankData.Extend); i++ {
|
||||
if i < len(rankData.ExData) {
|
||||
//直接增长
|
||||
rankData.ExData[i] += changeRankData.Extend[i].IncreaseValue
|
||||
} else {
|
||||
//如果不存在的扩展位置,append补充,并按IncreaseValue增长
|
||||
rankData.ExData = append(rankData.ExData, changeRankData.Extend[i].InitValue+changeRankData.Extend[i].IncreaseValue)
|
||||
}
|
||||
}
|
||||
|
||||
//设置固定值
|
||||
for _, setData := range changeRankData.SetSortAndExtendData {
|
||||
if setData.IsSortData == false {
|
||||
if int(setData.Pos) < len(rankData.ExData) {
|
||||
rankData.ExData[setData.Pos] = setData.Data
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
rs.rankDataExpire.PushOrRefreshExpireKey(rankData.Key, refreshTimestamp)
|
||||
rs.rankModule.OnChangeRankData(rs, rankData)
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
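ChangeExtendData never edits a node's sort values in place: when they change, the old node is deleted and a fresh one is inserted so the skip list stays ordered. A minimal self-contained sketch of that delete-then-insert discipline, using a sorted slice as a stand-in for the skip list:

```go
package main

import (
	"fmt"
	"sort"
)

// reScore removes the old score and re-inserts the new one, keeping the
// slice sorted, the same delete-then-insert step ChangeExtendData applies
// to the skip list whenever a sort value changes.
func reScore(scores []int, oldScore, newScore int) []int {
	if i := sort.SearchInts(scores, oldScore); i < len(scores) && scores[i] == oldScore {
		scores = append(scores[:i], scores[i+1:]...) // delete the stale entry
	}
	i := sort.SearchInts(scores, newScore)
	scores = append(scores, 0)
	copy(scores[i+1:], scores[i:])
	scores[i] = newScore // insert at the new ordered position
	return scores
}

func main() {
	fmt.Println(reScore([]int{10, 20, 30}, 20, 35)) // [10 30 35]
}
```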
// UpsetRank 更新玩家排行数据,返回变化后的数据及变化类型
|
||||
func (rs *RankSkip) UpsetRank(upsetData *rpc.RankData,refreshTimestamp int64,fromLoad bool) RankDataChangeType {
|
||||
func (rs *RankSkip) UpsetRank(upsetData *rpc.RankData, refreshTimestamp int64, fromLoad bool) RankDataChangeType {
|
||||
rankNode, ok := rs.mapRankData[upsetData.Key]
|
||||
if ok == true {
|
||||
//增长扩展数据
|
||||
for i := 0; i < len(upsetData.ExData); i++ {
|
||||
if i < len(rankNode.ExData) {
|
||||
//直接增长
|
||||
rankNode.ExData[i] += upsetData.ExData[i].IncreaseValue
|
||||
} else {
|
||||
//如果不存在的扩展位置,append补充,并按IncreaseValue增长
|
||||
rankNode.ExData = append(rankNode.ExData, upsetData.ExData[i].InitValue+upsetData.ExData[i].IncreaseValue)
|
||||
}
|
||||
}
|
||||
|
||||
//找到的情况对比排名数据是否有变化,无变化进行data更新,有变化则进行删除更新
|
||||
if compareIsEqual(rankNode.SortData, upsetData.SortData) {
|
||||
rankNode.Data = upsetData.GetData()
|
||||
rankNode.refreshTimestamp = refreshTimestamp
|
||||
rankNode.RefreshTimestamp = refreshTimestamp
|
||||
|
||||
if fromLoad == false {
|
||||
rs.rankModule.OnChangeRankData(rs,rankNode)
|
||||
rs.rankModule.OnChangeRankData(rs, rankNode)
|
||||
}
|
||||
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key,refreshTimestamp)
|
||||
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
|
||||
return RankDataUpdate
|
||||
}
|
||||
|
||||
if upsetData.Data == nil {
|
||||
upsetData.Data = rankNode.Data
|
||||
}
|
||||
|
||||
//设置额外数据
|
||||
for idx, exValue := range rankNode.ExData {
|
||||
currentIncreaseValue := int64(0)
|
||||
if idx < len(upsetData.ExData) {
|
||||
currentIncreaseValue = upsetData.ExData[idx].IncreaseValue
|
||||
}
|
||||
|
||||
upsetData.ExData = append(upsetData.ExData, &rpc.ExtendIncData{
|
||||
InitValue: exValue,
|
||||
IncreaseValue: currentIncreaseValue,
|
||||
})
|
||||
}
|
||||
|
||||
rs.skipList.Delete(rankNode)
|
||||
ReleaseRankData(rankNode)
|
||||
|
||||
newRankData := NewRankData(rs.isDes, upsetData,refreshTimestamp)
|
||||
newRankData := NewRankData(rs.isDes, upsetData, refreshTimestamp)
|
||||
rs.skipList.Insert(newRankData)
|
||||
rs.mapRankData[upsetData.Key] = newRankData
|
||||
|
||||
//刷新有效期
|
||||
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key,refreshTimestamp)
|
||||
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
|
||||
|
||||
if fromLoad == false {
|
||||
rs.rankModule.OnChangeRankData(rs, newRankData)
|
||||
@@ -127,10 +293,11 @@ func (rs *RankSkip) UpsetRank(upsetData *rpc.RankData,refreshTimestamp int64,fro
|
||||
}
|
||||
|
||||
if rs.checkInsertAndReplace(upsetData) {
|
||||
newRankData := NewRankData(rs.isDes, upsetData,refreshTimestamp)
|
||||
newRankData := NewRankData(rs.isDes, upsetData, refreshTimestamp)
|
||||
|
||||
rs.skipList.Insert(newRankData)
|
||||
rs.mapRankData[upsetData.Key] = newRankData
|
||||
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key,refreshTimestamp)
|
||||
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
|
||||
|
||||
if fromLoad == false {
|
||||
rs.rankModule.OnEnterRank(rs, newRankData)
|
||||
@@ -152,7 +319,7 @@ func (rs *RankSkip) DeleteRankData(delKeys []uint64) int32 {
|
||||
continue
|
||||
}
|
||||
|
||||
removeRankData+=1
|
||||
removeRankData += 1
|
||||
rs.skipList.Delete(rankData)
|
||||
delete(rs.mapRankData, rankData.Key)
|
||||
rs.rankDataExpire.RemoveExpireKey(rankData.Key)
|
||||
@@ -172,13 +339,13 @@ func (rs *RankSkip) GetRankNodeData(findKey uint64) (*RankData, uint64) {
|
||||
|
||||
rs.pickExpireKey()
|
||||
_, index := rs.skipList.GetWithPosition(rankNode)
|
||||
return rankNode, index+1
|
||||
return rankNode, index + 1
|
||||
}
|
||||
|
||||
// GetRankNodeDataByRank 按名次获取,返回排名节点与名次
|
||||
func (rs *RankSkip) GetRankNodeDataByRank(rank uint64) (*RankData, uint64) {
|
||||
rs.pickExpireKey()
|
||||
rankNode := rs.skipList.ByPosition(rank-1)
|
||||
rankNode := rs.skipList.ByPosition(rank - 1)
|
||||
if rankNode == nil {
|
||||
return nil, 0
|
||||
}
|
||||
@@ -189,12 +356,12 @@ func (rs *RankSkip) GetRankNodeDataByRank(rank uint64) (*RankData, uint64) {
|
||||
// GetRankKeyPrevToLimit 获取key前count名的数据
|
||||
func (rs *RankSkip) GetRankKeyPrevToLimit(findKey, count uint64, result *rpc.RankDataList) error {
|
||||
if rs.GetRankLen() <= 0 {
|
||||
return fmt.Errorf("rank[", rs.rankId, "] no data")
|
||||
return fmt.Errorf("rank[%d] no data", rs.rankId)
|
||||
}
|
||||
|
||||
findData, ok := rs.mapRankData[findKey]
|
||||
if ok == false {
|
||||
return fmt.Errorf("rank[", rs.rankId, "] no data")
|
||||
return fmt.Errorf("rank[%d] no data", rs.rankId)
|
||||
}
|
||||
|
||||
_, rankPos := rs.skipList.GetWithPosition(findData)
|
||||
@@ -203,10 +370,11 @@ func (rs *RankSkip) GetRankKeyPrevToLimit(findKey, count uint64, result *rpc.Ran
|
||||
for iter.Prev() && iterCount < count {
|
||||
rankData := iter.Value().(*RankData)
|
||||
result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
|
||||
Key: rankData.Key,
|
||||
Rank: rankPos - iterCount+1,
|
||||
SortData: rankData.SortData,
|
||||
Data: rankData.Data,
|
||||
Key: rankData.Key,
|
||||
Rank: rankPos - iterCount + 1,
|
||||
SortData: rankData.SortData,
|
||||
Data: rankData.Data,
|
||||
ExtendData: rankData.ExData,
|
||||
})
|
||||
iterCount++
|
||||
}
|
||||
@@ -217,12 +385,12 @@ func (rs *RankSkip) GetRankKeyPrevToLimit(findKey, count uint64, result *rpc.Ran
|
||||
// GetRankKeyNextToLimit 获取key后count名的数据
|
||||
func (rs *RankSkip) GetRankKeyNextToLimit(findKey, count uint64, result *rpc.RankDataList) error {
|
||||
if rs.GetRankLen() <= 0 {
|
||||
return fmt.Errorf("rank[", rs.rankId, "] no data")
|
||||
return fmt.Errorf("rank[%d] no data", rs.rankId)
|
||||
}
|
||||
|
||||
findData, ok := rs.mapRankData[findKey]
|
||||
if ok == false {
|
||||
return fmt.Errorf("rank[", rs.rankId, "] no data")
|
||||
return fmt.Errorf("rank[%d] no data", rs.rankId)
|
||||
}
|
||||
|
||||
_, rankPos := rs.skipList.GetWithPosition(findData)
|
||||
@@ -231,10 +399,11 @@ func (rs *RankSkip) GetRankKeyNextToLimit(findKey, count uint64, result *rpc.Ran
|
||||
for iter.Next() && iterCount < count {
|
||||
rankData := iter.Value().(*RankData)
|
||||
result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
|
||||
Key: rankData.Key,
|
||||
Rank: rankPos + iterCount+1,
|
||||
SortData: rankData.SortData,
|
||||
Data: rankData.Data,
|
||||
Key: rankData.Key,
|
||||
Rank: rankPos + iterCount + 1,
|
||||
SortData: rankData.SortData,
|
||||
Data: rankData.Data,
|
||||
ExtendData: rankData.ExData,
|
||||
})
|
||||
iterCount++
|
||||
}
|
||||
@@ -259,10 +428,11 @@ func (rs *RankSkip) GetRankDataFromToLimit(startPos, count uint64, result *rpc.R
|
||||
for iter.Next() && iterCount < count {
|
||||
rankData := iter.Value().(*RankData)
|
||||
result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
|
||||
Key: rankData.Key,
|
||||
Rank: iterCount + startPos+1,
|
||||
SortData: rankData.SortData,
|
||||
Data: rankData.Data,
|
||||
Key: rankData.Key,
|
||||
Rank: iterCount + startPos + 1,
|
||||
SortData: rankData.SortData,
|
||||
Data: rankData.Data,
|
||||
ExtendData: rankData.ExData,
|
||||
})
|
||||
iterCount++
|
||||
}
|
||||
@@ -301,4 +471,3 @@ func (rs *RankSkip) checkInsertAndReplace(upsetData *rpc.RankData) bool {
|
||||
ReleaseRankData(lastRankData)
|
||||
return true
|
||||
}
|
||||
|
||||
|
||||
@@ -8,10 +8,11 @@ import (
|
||||
"github.com/duanhf2012/origin/network/processor"
|
||||
"github.com/duanhf2012/origin/node"
|
||||
"github.com/duanhf2012/origin/service"
|
||||
"sync/atomic"
|
||||
"sync"
|
||||
"time"
|
||||
"github.com/duanhf2012/origin/util/bytespool"
|
||||
"runtime"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
)
|
||||
|
||||
type TcpService struct {
|
||||
@@ -52,18 +53,9 @@ type Client struct {
|
||||
}
|
||||
|
||||
func (tcpService *TcpService) genId() uint64 {
|
||||
if node.GetNodeId()>MaxNodeId{
|
||||
panic("nodeId exceeds the maximum!")
|
||||
}
|
||||
|
||||
newSeed := atomic.AddUint32(&seed,1) % MaxSeed
|
||||
nowTime := uint64(time.Now().Unix())%MaxTime
|
||||
return (uint64(node.GetNodeId())<<50)|(nowTime<<19)|uint64(newSeed)
|
||||
}
|
||||
|
||||
|
||||
func GetNodeId(agentId uint64) int {
|
||||
return int(agentId>>50)
|
||||
return (uint64(node.GetNodeId()%MaxNodeId)<<50)|(nowTime<<19)|uint64(newSeed)
|
||||
}
|
||||
|
||||
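genId packs the node id into the top bits (shift 50), the current unix seconds into the middle, and a per-second sequence into the low 19 bits, which is why GetNodeId can recover the node with a plain right shift. A sketch of that layout; the Max* constants here are assumptions chosen to match the shift widths, not necessarily origin's actual values:

```go
package main

import "fmt"

// Bit layout: bits 63..50 node id, bits 49..19 unix seconds (mod maxTime),
// bits 18..0 per-second sequence.
const (
	maxNodeId = 1 << 14 // 14 bits remain above bit 50
	maxTime   = 1 << 31 // 31 bits of seconds between bits 19 and 49
	maxSeed   = 1 << 19 // 19 low bits for the sequence
)

func packAgentId(nodeId uint64, seconds uint64, seq uint32) uint64 {
	return ((nodeId % maxNodeId) << 50) | ((seconds % maxTime) << 19) | uint64(seq%maxSeed)
}

// nodeIdOf recovers the node id from the top bits, like GetNodeId above.
func nodeIdOf(agentId uint64) int {
	return int(agentId >> 50)
}

func main() {
	id := packAgentId(3, 1700000000, 42)
	fmt.Println(id, nodeIdOf(id)) // node id 3 recovered from the top bits
}
```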
func (tcpService *TcpService) OnInit() error{
|
||||
@@ -90,6 +82,10 @@ func (tcpService *TcpService) OnInit() error{
|
||||
if ok == true {
|
||||
tcpService.tcpServer.LittleEndian = LittleEndian.(bool)
|
||||
}
|
||||
LenMsgLen,ok := tcpCfg["LenMsgLen"]
|
||||
if ok == true {
|
||||
tcpService.tcpServer.LenMsgLen = int(LenMsgLen.(float64))
|
||||
}
|
||||
MinMsgLen,ok := tcpCfg["MinMsgLen"]
|
||||
if ok == true {
|
||||
tcpService.tcpServer.MinMsgLen = uint32(MinMsgLen.(float64))
|
||||
@@ -166,7 +162,7 @@ func (slf *Client) Run() {
|
||||
buf := make([]byte, 4096)
|
||||
l := runtime.Stack(buf, false)
|
||||
errString := fmt.Sprint(r)
|
||||
log.SError("core dump info[",errString,"]\n",string(buf[:l]))
|
||||
log.Dump(string(buf[:l]),log.String("error",errString))
|
||||
}
|
||||
}()
|
||||
|
||||
@@ -179,7 +175,7 @@ func (slf *Client) Run() {
|
||||
slf.tcpConn.SetReadDeadline(slf.tcpService.tcpServer.ReadDeadline)
|
||||
bytes,err := slf.tcpConn.ReadMsg()
|
||||
if err != nil {
|
||||
log.SDebug("read client id ",slf.id," is error:",err.Error())
|
||||
log.Debug("read client failed",log.ErrorAttr("error",err),log.Uint64("clientId",slf.id))
|
||||
break
|
||||
}
|
||||
data,err:=slf.tcpService.process.Unmarshal(slf.id,bytes)
|
||||
@@ -273,14 +269,14 @@ func (tcpService *TcpService) GetConnNum() int {
|
||||
return connNum
|
||||
}
|
||||
|
||||
func (server *TcpService) SetNetMempool(mempool network.INetMempool){
|
||||
func (server *TcpService) SetNetMempool(mempool bytespool.IBytesMempool){
|
||||
server.tcpServer.SetNetMempool(mempool)
|
||||
}
|
||||
|
||||
func (server *TcpService) GetNetMempool() network.INetMempool{
|
||||
func (server *TcpService) GetNetMempool() bytespool.IBytesMempool {
|
||||
return server.tcpServer.GetNetMempool()
|
||||
}
|
||||
|
||||
func (server *TcpService) ReleaseNetMem(byteBuff []byte) {
|
||||
server.tcpServer.GetNetMempool().ReleaseByteSlice(byteBuff)
|
||||
server.tcpServer.GetNetMempool().ReleaseBytes(byteBuff)
|
||||
}
|
||||
|
||||
@@ -1,12 +1,12 @@
|
||||
package network
|
||||
package bytespool
|
||||
|
||||
import (
|
||||
"sync"
|
||||
)
|
||||
|
||||
type INetMempool interface {
|
||||
MakeByteSlice(size int) []byte
|
||||
ReleaseByteSlice(byteBuff []byte) bool
|
||||
type IBytesMempool interface {
|
||||
MakeBytes(size int) []byte
|
||||
ReleaseBytes(byteBuff []byte) bool
|
||||
}
|
||||
|
||||
type memAreaPool struct {
|
||||
@@ -16,7 +16,7 @@ type memAreaPool struct {
|
||||
pool []sync.Pool
|
||||
}
|
||||
|
||||
var memAreaPoolList = [3]*memAreaPool{&memAreaPool{minAreaValue: 1, maxAreaValue: 4096, growthValue: 512}, &memAreaPool{minAreaValue: 4097, maxAreaValue: 40960, growthValue: 4096}, &memAreaPool{minAreaValue: 40961, maxAreaValue: 417792, growthValue: 16384}}
|
||||
var memAreaPoolList = [4]*memAreaPool{&memAreaPool{minAreaValue: 1, maxAreaValue: 4096, growthValue: 512}, &memAreaPool{minAreaValue: 4097, maxAreaValue: 40960, growthValue: 4096}, &memAreaPool{minAreaValue: 40961, maxAreaValue: 417792, growthValue: 16384}, &memAreaPool{minAreaValue: 417793, maxAreaValue: 1925120, growthValue: 65536}}
|
||||
|
||||
func init() {
|
||||
for i := 0; i < len(memAreaPoolList); i++ {
|
||||
@@ -34,7 +34,6 @@ func (areaPool *memAreaPool) makePool() {
|
||||
for i := 0; i < poolLen; i++ {
|
||||
memSize := (areaPool.minAreaValue - 1) + (i+1)*areaPool.growthValue
|
||||
areaPool.pool[i] = sync.Pool{New: func() interface{} {
|
||||
//fmt.Println("make memsize:",memSize)
|
||||
return make([]byte, memSize)
|
||||
}}
|
||||
}
|
||||
@@ -69,7 +68,7 @@ func (areaPool *memAreaPool) releaseByteSlice(byteBuff []byte) bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (areaPool *memAreaPool) MakeByteSlice(size int) []byte {
|
||||
func (areaPool *memAreaPool) MakeBytes(size int) []byte {
|
||||
for i := 0; i < len(memAreaPoolList); i++ {
|
||||
if size <= memAreaPoolList[i].maxAreaValue {
|
||||
return memAreaPoolList[i].makeByteSlice(size)
|
||||
@@ -79,7 +78,7 @@ func (areaPool *memAreaPool) MakeByteSlice(size int) []byte {
|
||||
return make([]byte, size)
|
||||
}
|
||||
|
||||
func (areaPool *memAreaPool) ReleaseByteSlice(byteBuff []byte) bool {
|
||||
func (areaPool *memAreaPool) ReleaseBytes(byteBuff []byte) bool {
|
||||
for i := 0; i < len(memAreaPoolList); i++ {
|
||||
if cap(byteBuff) <= memAreaPoolList[i].maxAreaValue {
|
||||
return memAreaPoolList[i].releaseByteSlice(byteBuff)
|
||||
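The rename above turns the network mempool into a general bytespool.IBytesMempool with MakeBytes/ReleaseBytes. Below is a minimal single-size implementation of that interface for illustration; the real memAreaPool keeps several size classes and routes each request by its requested length:

```go
package bytespoolexample

import "sync"

// IBytesMempool mirrors the renamed interface above.
type IBytesMempool interface {
	MakeBytes(size int) []byte
	ReleaseBytes(byteBuff []byte) bool
}

// fixedPool is a single-size illustration backed by one sync.Pool.
type fixedPool struct {
	size int
	pool sync.Pool
}

func newFixedPool(size int) *fixedPool {
	return &fixedPool{
		size: size,
		pool: sync.Pool{New: func() interface{} { return make([]byte, size) }},
	}
}

func (p *fixedPool) MakeBytes(size int) []byte {
	if size > p.size {
		return make([]byte, size) // too big for the pool, allocate directly
	}
	return p.pool.Get().([]byte)[:size]
}

func (p *fixedPool) ReleaseBytes(byteBuff []byte) bool {
	if cap(byteBuff) != p.size {
		return false // not one of ours, let the GC handle it
	}
	p.pool.Put(byteBuff[:p.size])
	return true
}
```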
@@ -1,6 +1,8 @@
|
||||
package math
|
||||
|
||||
import "github.com/duanhf2012/origin/log"
|
||||
import (
|
||||
"github.com/duanhf2012/origin/log"
|
||||
)
|
||||
|
||||
type NumberType interface {
|
||||
int | int8 | int16 | int32 | int64 | float32 | float64 | uint | uint8 | uint16 | uint32 | uint64
|
||||
@@ -38,41 +40,90 @@ func Abs[NumType SignedNumberType](Num NumType) NumType {
|
||||
return Num
|
||||
}
|
||||
|
||||
|
||||
func Add[NumType NumberType](number1 NumType, number2 NumType) NumType {
|
||||
func AddSafe[NumType NumberType](number1 NumType, number2 NumType) (NumType, bool) {
|
||||
ret := number1 + number2
|
||||
if number2> 0 && ret < number1 {
|
||||
log.SStack("Calculation overflow , number1 is ",number1," number2 is ",number2)
|
||||
}else if (number2<0 && ret > number1){
|
||||
log.SStack("Calculation overflow , number1 is ",number1," number2 is ",number2)
|
||||
if number2 > 0 && ret < number1 {
|
||||
log.Stack("Calculation overflow", log.Any("number1", number1), log.Any("number2", number2))
|
||||
return ret, false
|
||||
} else if number2 < 0 && ret > number1 {
|
||||
log.Stack("Calculation overflow", log.Any("number1", number1), log.Any("number2", number2))
|
||||
return ret, false
|
||||
}
|
||||
|
||||
return ret, true
|
||||
}
|
||||
|
||||
func SubSafe[NumType NumberType](number1 NumType, number2 NumType) (NumType, bool) {
|
||||
ret := number1 - number2
|
||||
if number2 > 0 && ret > number1 {
|
||||
log.Stack("Calculation overflow", log.Any("number1", number1), log.Any("number2", number2))
|
||||
return ret, false
|
||||
} else if number2 < 0 && ret < number1 {
|
||||
log.Stack("Calculation overflow", log.Any("number1", number1), log.Any("number2", number2))
|
||||
return ret, false
|
||||
}
|
||||
|
||||
return ret, true
|
||||
}
|
||||
|
||||
func MulSafe[NumType NumberType](number1 NumType, number2 NumType) (NumType, bool) {
|
||||
ret := number1 * number2
|
||||
if number1 == 0 || number2 == 0 {
|
||||
return ret, true
|
||||
}
|
||||
|
||||
if ret/number2 == number1 {
|
||||
return ret, true
|
||||
}
|
||||
|
||||
log.Stack("Calculation overflow", log.Any("number1", number1), log.Any("number2", number2))
|
||||
return ret, false
|
||||
}
|
||||
|
||||
func Add[NumType NumberType](number1 NumType, number2 NumType) NumType {
|
||||
ret, _ := AddSafe(number1, number2)
|
||||
return ret
|
||||
}
|
||||
|
||||
func Sub[NumType NumberType](number1 NumType, number2 NumType) NumType {
|
||||
ret := number1 - number2
|
||||
if number2> 0 && ret > number1 {
|
||||
log.SStack("Calculation overflow , number1 is ",number1," number2 is ",number2)
|
||||
}else if (number2<0 && ret < number1){
|
||||
log.SStack("Calculation overflow , number1 is ",number1," number2 is ",number2)
|
||||
}
|
||||
|
||||
ret, _ := SubSafe(number1, number2)
|
||||
return ret
|
||||
}
|
||||
|
||||
|
||||
func Mul[NumType NumberType](number1 NumType, number2 NumType) NumType {
|
||||
ret := number1 * number2
|
||||
if number1 == 0 || number2 == 0 {
|
||||
return ret
|
||||
}
|
||||
|
||||
if ret / number2 == number1 {
|
||||
return ret
|
||||
}
|
||||
|
||||
log.SStack("Calculation overflow , number1 is ",number1," number2 is ",number2)
|
||||
ret, _ := MulSafe(number1, number2)
|
||||
return ret
|
||||
}
|
||||
|
||||
// 安全的求比例
|
||||
func PercentRateSafe[NumType NumberType, OutNumType NumberType](maxValue int64, rate NumType, numbers ...NumType) (OutNumType, bool) {
|
||||
// 比例不能为负数
|
||||
if rate < 0 {
|
||||
log.Stack("rate must not positive")
|
||||
return 0, false
|
||||
}
|
||||
|
||||
if rate == 0 {
|
||||
// 比例为0
|
||||
return 0, true
|
||||
}
|
||||
|
||||
ret := int64(rate)
|
||||
for _, number := range numbers {
|
||||
number64 := int64(number)
|
||||
result, success := MulSafe(number64, ret)
|
||||
if !success {
|
||||
// 基数*比例越界了,int64都越界了,没办法了
|
||||
return 0, false
|
||||
}
|
||||
|
||||
ret = result
|
||||
}
|
||||
|
||||
ret = ret / 10000
|
||||
if ret > maxValue {
|
||||
return 0, false
|
||||
}
|
||||
|
||||
return OutNumType(ret), true
|
||||
}
|
||||
|
||||
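The *Safe variants report overflow through a second return value instead of only logging, and PercentRateSafe treats the rate as 1/10000ths capped by maxValue. A usage sketch; the import path is assumed from the file location and the logger is expected to be initialised elsewhere:

```go
package main

import (
	"fmt"
	"math"

	originmath "github.com/duanhf2012/origin/util/math" // assumed path of package math above
)

func main() {
	// Overflow is reported through the second return value instead of wrapping silently.
	if sum, ok := originmath.AddSafe(int64(math.MaxInt64), 1); !ok {
		fmt.Println("overflow detected, partial result:", sum)
	}

	// PercentRateSafe applies a rate expressed in 1/10000ths: 2500 means 25%.
	// maxValue caps the result; the bool reports overflow or cap violations.
	bonus, ok := originmath.PercentRateSafe[int64, int64](1_000_000, 2500, 400)
	fmt.Println(bonus, ok) // 100 true under these assumptions
}
```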
util/queue/deque.go (new file, +413 lines)
@@ -0,0 +1,413 @@
|
||||
package queue
|
||||
|
||||
// minCapacity is the smallest capacity that deque may have. Must be power of 2
|
||||
// for bitwise modulus: x % n == x & (n - 1).
|
||||
const minCapacity = 16
|
||||
|
||||
// Deque represents a single instance of the deque data structure. A Deque
|
||||
// instance contains items of the type specified by the type argument.
|
||||
type Deque[T any] struct {
|
||||
buf []T
|
||||
head int
|
||||
tail int
|
||||
count int
|
||||
minCap int
|
||||
}
|
||||
|
||||
// New creates a new Deque, optionally setting the current and minimum capacity
|
||||
// when non-zero values are given for these. The Deque instance returned
|
||||
// operates on items of the type specified by the type argument. For example,
|
||||
// to create a Deque that contains strings,
|
||||
//
|
||||
// stringDeque := deque.New[string]()
|
||||
//
|
||||
// To create a Deque with capacity to store 2048 ints without resizing, and
|
||||
// that will not resize below space for 32 items when removing items:
|
||||
// d := deque.New[int](2048, 32)
|
||||
//
|
||||
// To create a Deque that has not yet allocated memory, but after it does will
|
||||
// never resize to have space for less than 64 items:
|
||||
// d := deque.New[int](0, 64)
|
||||
//
|
||||
// Any size values supplied here are rounded up to the nearest power of 2.
|
||||
func New[T any](size ...int) *Deque[T] {
|
||||
var capacity, minimum int
|
||||
if len(size) >= 1 {
|
||||
capacity = size[0]
|
||||
if len(size) >= 2 {
|
||||
minimum = size[1]
|
||||
}
|
||||
}
|
||||
|
||||
minCap := minCapacity
|
||||
for minCap < minimum {
|
||||
minCap <<= 1
|
||||
}
|
||||
|
||||
var buf []T
|
||||
if capacity != 0 {
|
||||
bufSize := minCap
|
||||
for bufSize < capacity {
|
||||
bufSize <<= 1
|
||||
}
|
||||
buf = make([]T, bufSize)
|
||||
}
|
||||
|
||||
return &Deque[T]{
|
||||
buf: buf,
|
||||
minCap: minCap,
|
||||
}
|
||||
}
|
||||
|
||||
// Cap returns the current capacity of the Deque. If q is nil, q.Cap() is zero.
|
||||
func (q *Deque[T]) Cap() int {
|
||||
if q == nil {
|
||||
return 0
|
||||
}
|
||||
return len(q.buf)
|
||||
}
|
||||
|
||||
// Len returns the number of elements currently stored in the queue. If q is
|
||||
// nil, q.Len() is zero.
|
||||
func (q *Deque[T]) Len() int {
|
||||
if q == nil {
|
||||
return 0
|
||||
}
|
||||
return q.count
|
||||
}
|
||||
|
||||
// PushBack appends an element to the back of the queue. Implements FIFO when
|
||||
// elements are removed with PopFront(), and LIFO when elements are removed
|
||||
// with PopBack().
|
||||
func (q *Deque[T]) PushBack(elem T) {
|
||||
q.growIfFull()
|
||||
|
||||
q.buf[q.tail] = elem
|
||||
// Calculate new tail position.
|
||||
q.tail = q.next(q.tail)
|
||||
q.count++
|
||||
}
|
||||
|
||||
// PushFront prepends an element to the front of the queue.
|
||||
func (q *Deque[T]) PushFront(elem T) {
|
||||
q.growIfFull()
|
||||
|
||||
// Calculate new head position.
|
||||
q.head = q.prev(q.head)
|
||||
q.buf[q.head] = elem
|
||||
q.count++
|
||||
}
|
||||
|
||||
// PopFront removes and returns the element from the front of the queue.
|
||||
// Implements FIFO when used with PushBack(). If the queue is empty, the call
|
||||
// panics.
|
||||
func (q *Deque[T]) PopFront() T {
|
||||
if q.count <= 0 {
|
||||
panic("deque: PopFront() called on empty queue")
|
||||
}
|
||||
ret := q.buf[q.head]
|
||||
var zero T
|
||||
q.buf[q.head] = zero
|
||||
// Calculate new head position.
|
||||
q.head = q.next(q.head)
|
||||
q.count--
|
||||
|
||||
q.shrinkIfExcess()
|
||||
return ret
|
||||
}
|
||||
|
||||
// PopBack removes and returns the element from the back of the queue.
|
||||
// Implements LIFO when used with PushBack(). If the queue is empty, the call
|
||||
// panics.
|
||||
func (q *Deque[T]) PopBack() T {
|
||||
if q.count <= 0 {
|
||||
panic("deque: PopBack() called on empty queue")
|
||||
}
|
||||
|
||||
// Calculate new tail position
|
||||
q.tail = q.prev(q.tail)
|
||||
|
||||
// Remove value at tail.
|
||||
ret := q.buf[q.tail]
|
||||
var zero T
|
||||
q.buf[q.tail] = zero
|
||||
q.count--
|
||||
|
||||
q.shrinkIfExcess()
|
||||
return ret
|
||||
}
|
||||
|
||||
// Front returns the element at the front of the queue. This is the element
|
||||
// that would be returned by PopFront(). This call panics if the queue is
|
||||
// empty.
|
||||
func (q *Deque[T]) Front() T {
|
||||
if q.count <= 0 {
|
||||
panic("deque: Front() called when empty")
|
||||
}
|
||||
return q.buf[q.head]
|
||||
}
|
||||
|
||||
// Back returns the element at the back of the queue. This is the element that
|
||||
// would be returned by PopBack(). This call panics if the queue is empty.
|
||||
func (q *Deque[T]) Back() T {
|
||||
if q.count <= 0 {
|
||||
panic("deque: Back() called when empty")
|
||||
}
|
||||
return q.buf[q.prev(q.tail)]
|
||||
}
|
||||
|
||||
// At returns the element at index i in the queue without removing the element
|
||||
// from the queue. This method accepts only non-negative index values. At(0)
|
||||
// refers to the first element and is the same as Front(). At(Len()-1) refers
|
||||
// to the last element and is the same as Back(). If the index is invalid, the
|
||||
// call panics.
|
||||
//
|
||||
// The purpose of At is to allow Deque to serve as a more general purpose
|
||||
// circular buffer, where items are only added to and removed from the ends of
|
||||
// the deque, but may be read from any place within the deque. Consider the
|
||||
// case of a fixed-size circular log buffer: A new entry is pushed onto one end
|
||||
// and when full the oldest is popped from the other end. All the log entries
|
||||
// in the buffer must be readable without altering the buffer contents.
|
||||
func (q *Deque[T]) At(i int) T {
|
||||
if i < 0 || i >= q.count {
|
||||
panic("deque: At() called with index out of range")
|
||||
}
|
||||
// bitwise modulus
|
||||
return q.buf[(q.head+i)&(len(q.buf)-1)]
|
||||
}
|
||||
|
||||
// Set puts the element at index i in the queue. Set shares the same purpose
|
||||
// as At() but performs the opposite operation. The index i is the same index
|
||||
// defined by At(). If the index is invalid, the call panics.
|
||||
func (q *Deque[T]) Set(i int, elem T) {
|
||||
if i < 0 || i >= q.count {
|
||||
panic("deque: Set() called with index out of range")
|
||||
}
|
||||
// bitwise modulus
|
||||
q.buf[(q.head+i)&(len(q.buf)-1)] = elem
|
||||
}
|
||||
|
||||
// Clear removes all elements from the queue, but retains the current capacity.
|
||||
// This is useful when repeatedly reusing the queue at high frequency to avoid
|
||||
// GC during reuse. The queue will not be resized smaller as long as items are
|
||||
// only added. Only when items are removed is the queue subject to getting
|
||||
// resized smaller.
|
||||
func (q *Deque[T]) Clear() {
|
||||
// bitwise modulus
|
||||
modBits := len(q.buf) - 1
|
||||
var zero T
|
||||
for h := q.head; h != q.tail; h = (h + 1) & modBits {
|
||||
q.buf[h] = zero
|
||||
}
|
||||
q.head = 0
|
||||
q.tail = 0
|
||||
q.count = 0
|
||||
}
|
||||
|
||||
// Rotate rotates the deque n steps front-to-back. If n is negative, rotates
|
||||
// back-to-front. Having Deque provide Rotate() avoids resizing that could
|
||||
// happen if implementing rotation using only Pop and Push methods. If q.Len()
|
||||
// is one or less, or q is nil, then Rotate does nothing.
|
||||
func (q *Deque[T]) Rotate(n int) {
|
||||
if q.Len() <= 1 {
|
||||
return
|
||||
}
|
||||
// Rotating a multiple of q.count is same as no rotation.
|
||||
n %= q.count
|
||||
if n == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
modBits := len(q.buf) - 1
|
||||
// If no empty space in buffer, only move head and tail indexes.
|
||||
if q.head == q.tail {
|
||||
// Calculate new head and tail using bitwise modulus.
|
||||
q.head = (q.head + n) & modBits
|
||||
q.tail = q.head
|
||||
return
|
||||
}
|
||||
|
||||
var zero T
|
||||
|
||||
if n < 0 {
|
||||
// Rotate back to front.
|
||||
for ; n < 0; n++ {
|
||||
// Calculate new head and tail using bitwise modulus.
|
||||
q.head = (q.head - 1) & modBits
|
||||
q.tail = (q.tail - 1) & modBits
|
||||
// Put tail value at head and remove value at tail.
|
||||
q.buf[q.head] = q.buf[q.tail]
|
||||
q.buf[q.tail] = zero
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Rotate front to back.
|
||||
for ; n > 0; n-- {
|
||||
// Put head value at tail and remove value at head.
|
||||
q.buf[q.tail] = q.buf[q.head]
|
||||
q.buf[q.head] = zero
|
||||
// Calculate new head and tail using bitwise modulus.
|
||||
q.head = (q.head + 1) & modBits
|
||||
q.tail = (q.tail + 1) & modBits
|
||||
}
|
||||
}
|
||||
|
||||
// Index returns the index into the Deque of the first item satisfying f(item),
|
||||
// or -1 if none do. If q is nil, then -1 is always returned. Search is linear
|
||||
// starting with index 0.
|
||||
func (q *Deque[T]) Index(f func(T) bool) int {
|
||||
if q.Len() > 0 {
|
||||
modBits := len(q.buf) - 1
|
||||
for i := 0; i < q.count; i++ {
|
||||
if f(q.buf[(q.head+i)&modBits]) {
|
||||
return i
|
||||
}
|
||||
}
|
||||
}
|
||||
return -1
|
||||
}
|
||||
|
||||
// RIndex is the same as Index, but searches from Back to Front. The index
|
||||
// returned is from Front to Back, where index 0 is the index of the item
|
||||
// returned by Front().
|
||||
func (q *Deque[T]) RIndex(f func(T) bool) int {
|
||||
if q.Len() > 0 {
|
||||
modBits := len(q.buf) - 1
|
||||
for i := q.count - 1; i >= 0; i-- {
|
||||
if f(q.buf[(q.head+i)&modBits]) {
|
||||
return i
|
||||
}
|
||||
}
|
||||
}
|
||||
return -1
|
||||
}
|
||||
|
||||
// Insert is used to insert an element into the middle of the queue, before the
|
||||
// element at the specified index. Insert(0,e) is the same as PushFront(e) and
|
||||
// Insert(Len(),e) is the same as PushBack(e). Accepts only non-negative index
|
||||
// values, and panics if index is out of range.
|
||||
//
|
||||
// Important: Deque is optimized for O(1) operations at the ends of the queue,
|
||||
// not for operations in the middle. Complexity of this function is
|
||||
// constant plus linear in the lesser of the distances between the index and
|
||||
// either of the ends of the queue.
|
||||
func (q *Deque[T]) Insert(at int, item T) {
|
||||
if at < 0 || at > q.count {
|
||||
panic("deque: Insert() called with index out of range")
|
||||
}
|
||||
if at*2 < q.count {
|
||||
q.PushFront(item)
|
||||
front := q.head
|
||||
for i := 0; i < at; i++ {
|
||||
next := q.next(front)
|
||||
q.buf[front], q.buf[next] = q.buf[next], q.buf[front]
|
||||
front = next
|
||||
}
|
||||
return
|
||||
}
|
||||
swaps := q.count - at
|
||||
q.PushBack(item)
|
||||
back := q.prev(q.tail)
|
||||
for i := 0; i < swaps; i++ {
|
||||
prev := q.prev(back)
|
||||
q.buf[back], q.buf[prev] = q.buf[prev], q.buf[back]
|
||||
back = prev
|
||||
}
|
||||
}
|
||||
|
||||
// Remove removes and returns an element from the middle of the queue, at the
|
||||
// specified index. Remove(0) is the same as PopFront() and Remove(Len()-1) is
|
||||
// the same as PopBack(). Accepts only non-negative index values, and panics if
|
||||
// index is out of range.
|
||||
//
|
||||
// Important: Deque is optimized for O(1) operations at the ends of the queue,
|
||||
// not for operations in the middle. Complexity of this function is
|
||||
// constant plus linear in the lesser of the distances between the index and
|
||||
// either of the ends of the queue.
|
||||
func (q *Deque[T]) Remove(at int) T {
|
||||
if at < 0 || at >= q.Len() {
|
||||
panic("deque: Remove() called with index out of range")
|
||||
}
|
||||
|
||||
rm := (q.head + at) & (len(q.buf) - 1)
|
||||
if at*2 < q.count {
|
||||
for i := 0; i < at; i++ {
|
||||
prev := q.prev(rm)
|
||||
q.buf[prev], q.buf[rm] = q.buf[rm], q.buf[prev]
|
||||
rm = prev
|
||||
}
|
||||
return q.PopFront()
|
||||
}
|
||||
swaps := q.count - at - 1
|
||||
for i := 0; i < swaps; i++ {
|
||||
next := q.next(rm)
|
||||
q.buf[rm], q.buf[next] = q.buf[next], q.buf[rm]
|
||||
rm = next
|
||||
}
|
||||
return q.PopBack()
|
||||
}
|
||||
|
||||
// SetMinCapacity sets a minimum capacity of 2^minCapacityExp. If the value of
|
||||
// the minimum capacity is less than or equal to the minimum allowed, then
|
||||
// capacity is set to the minimum allowed. This may be called at any time to set
|
||||
// a new minimum capacity.
|
||||
//
|
||||
// Setting a larger minimum capacity may be used to prevent resizing when the
|
||||
// number of stored items changes frequently across a wide range.
|
||||
func (q *Deque[T]) SetMinCapacity(minCapacityExp uint) {
|
||||
if 1<<minCapacityExp > minCapacity {
|
||||
q.minCap = 1 << minCapacityExp
|
||||
} else {
|
||||
q.minCap = minCapacity
|
||||
}
|
||||
}
|
||||
|
||||
// prev returns the previous buffer position wrapping around buffer.
|
||||
func (q *Deque[T]) prev(i int) int {
|
||||
return (i - 1) & (len(q.buf) - 1) // bitwise modulus
|
||||
}
|
||||
|
||||
// next returns the next buffer position wrapping around buffer.
|
||||
func (q *Deque[T]) next(i int) int {
|
||||
return (i + 1) & (len(q.buf) - 1) // bitwise modulus
|
||||
}
|
||||
|
||||
// growIfFull resizes up if the buffer is full.
|
||||
func (q *Deque[T]) growIfFull() {
|
||||
if q.count != len(q.buf) {
|
||||
return
|
||||
}
|
||||
if len(q.buf) == 0 {
|
||||
if q.minCap == 0 {
|
||||
q.minCap = minCapacity
|
||||
}
|
||||
q.buf = make([]T, q.minCap)
|
||||
return
|
||||
}
|
||||
q.resize()
|
||||
}
|
||||
|
||||
// shrinkIfExcess resizes down if the buffer is 1/4 full.
|
||||
func (q *Deque[T]) shrinkIfExcess() {
|
||||
if len(q.buf) > q.minCap && (q.count<<2) == len(q.buf) {
|
||||
q.resize()
|
||||
}
|
||||
}
|
||||
|
||||
// resize resizes the deque to fit exactly twice its current contents. This is
|
||||
// used to grow the queue when it is full, and also to shrink it when it is
|
||||
// only a quarter full.
|
||||
func (q *Deque[T]) resize() {
|
||||
newBuf := make([]T, q.count<<1)
|
||||
if q.tail > q.head {
|
||||
copy(newBuf, q.buf[q.head:q.tail])
|
||||
} else {
|
||||
n := copy(newBuf, q.buf[q.head:])
|
||||
copy(newBuf[n:], q.buf[:q.tail])
|
||||
}
|
||||
|
||||
q.head = 0
|
||||
q.tail = q.count
|
||||
q.buf = newBuf
|
||||
}
|
||||
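util/queue/deque.go is a generic ring-buffer deque: PushBack/PopFront gives FIFO, PushBack/PopBack gives LIFO, and At reads without removing. A short usage sketch, assuming the package is importable at the path shown in the file header:

```go
package main

import (
	"fmt"

	"github.com/duanhf2012/origin/util/queue" // assumed module path for util/queue
)

func main() {
	// FIFO use: push at the back, pop from the front.
	q := queue.New[int](0, 32) // no allocation yet, never shrinks below 32 slots
	for i := 1; i <= 3; i++ {
		q.PushBack(i)
	}
	fmt.Println(q.PopFront(), q.PopFront()) // 1 2

	// LIFO use on the same structure: PushBack + PopBack.
	q.PushBack(99)
	fmt.Println(q.PopBack()) // 99

	// Random access without removal.
	fmt.Println(q.Len(), q.At(0)) // 1 3
}
```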
util/queue/deque_test.go (new file, +836 lines)
@@ -0,0 +1,836 @@
|
||||
package queue
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
"unicode"
|
||||
)
|
||||
|
||||
func TestEmpty(t *testing.T) {
|
||||
q := New[string]()
|
||||
if q.Len() != 0 {
|
||||
t.Error("q.Len() =", q.Len(), "expect 0")
|
||||
}
|
||||
if q.Cap() != 0 {
|
||||
t.Error("expected q.Cap() == 0")
|
||||
}
|
||||
idx := q.Index(func(item string) bool {
|
||||
return true
|
||||
})
|
||||
if idx != -1 {
|
||||
t.Error("should return -1 index for nil deque")
|
||||
}
|
||||
idx = q.RIndex(func(item string) bool {
|
||||
return true
|
||||
})
|
||||
if idx != -1 {
|
||||
t.Error("should return -1 index for nil deque")
|
||||
}
|
||||
}
|
||||
|
||||
func TestNil(t *testing.T) {
|
||||
var q *Deque[int]
|
||||
if q.Len() != 0 {
|
||||
t.Error("expected q.Len() == 0")
|
||||
}
|
||||
if q.Cap() != 0 {
|
||||
t.Error("expected q.Cap() == 0")
|
||||
}
|
||||
q.Rotate(5)
|
||||
idx := q.Index(func(item int) bool {
|
||||
return true
|
||||
})
|
||||
if idx != -1 {
|
||||
t.Error("should return -1 index for nil deque")
|
||||
}
|
||||
idx = q.RIndex(func(item int) bool {
|
||||
return true
|
||||
})
|
||||
if idx != -1 {
|
||||
t.Error("should return -1 index for nil deque")
|
||||
}
|
||||
}
|
||||
|
||||
func TestFrontBack(t *testing.T) {
|
||||
var q Deque[string]
|
||||
q.PushBack("foo")
|
||||
q.PushBack("bar")
|
||||
q.PushBack("baz")
|
||||
if q.Front() != "foo" {
|
||||
t.Error("wrong value at front of queue")
|
||||
}
|
||||
if q.Back() != "baz" {
|
||||
t.Error("wrong value at back of queue")
|
||||
}
|
||||
|
||||
if q.PopFront() != "foo" {
|
||||
t.Error("wrong value removed from front of queue")
|
||||
}
|
||||
if q.Front() != "bar" {
|
||||
t.Error("wrong value remaining at front of queue")
|
||||
}
|
||||
if q.Back() != "baz" {
|
||||
t.Error("wrong value remaining at back of queue")
|
||||
}
|
||||
|
||||
if q.PopBack() != "baz" {
|
||||
t.Error("wrong value removed from back of queue")
|
||||
}
|
||||
if q.Front() != "bar" {
|
||||
t.Error("wrong value remaining at front of queue")
|
||||
}
|
||||
if q.Back() != "bar" {
|
||||
t.Error("wrong value remaining at back of queue")
|
||||
}
|
||||
}
|
||||
|
||||
func TestGrowShrinkBack(t *testing.T) {
|
||||
var q Deque[int]
|
||||
size := minCapacity * 2
|
||||
|
||||
for i := 0; i < size; i++ {
|
||||
if q.Len() != i {
|
||||
t.Error("q.Len() =", q.Len(), "expected", i)
|
||||
}
|
||||
q.PushBack(i)
|
||||
}
|
||||
bufLen := len(q.buf)
|
||||
|
||||
// Remove from back.
|
||||
for i := size; i > 0; i-- {
|
||||
if q.Len() != i {
|
||||
t.Error("q.Len() =", q.Len(), "expected", i)
|
||||
}
|
||||
x := q.PopBack()
|
||||
if x != i-1 {
|
||||
t.Error("q.PopBack() =", x, "expected", i-1)
|
||||
}
|
||||
}
|
||||
if q.Len() != 0 {
|
||||
t.Error("q.Len() =", q.Len(), "expected 0")
|
||||
}
|
||||
if len(q.buf) == bufLen {
|
||||
t.Error("queue buffer did not shrink")
|
||||
}
|
||||
}
|
||||
|
||||
func TestGrowShrinkFront(t *testing.T) {
|
||||
var q Deque[int]
|
||||
size := minCapacity * 2
|
||||
|
||||
for i := 0; i < size; i++ {
|
||||
if q.Len() != i {
|
||||
t.Error("q.Len() =", q.Len(), "expected", i)
|
||||
}
|
||||
q.PushBack(i)
|
||||
}
|
||||
bufLen := len(q.buf)
|
||||
|
||||
// Remove from Front
|
||||
for i := 0; i < size; i++ {
|
||||
if q.Len() != size-i {
|
||||
t.Error("q.Len() =", q.Len(), "expected", minCapacity*2-i)
|
||||
}
|
||||
x := q.PopFront()
|
||||
if x != i {
|
||||
t.Error("q.PopBack() =", x, "expected", i)
|
||||
}
|
||||
}
|
||||
if q.Len() != 0 {
|
||||
t.Error("q.Len() =", q.Len(), "expected 0")
|
||||
}
|
||||
if len(q.buf) == bufLen {
|
||||
t.Error("queue buffer did not shrink")
|
||||
}
|
||||
}
|
||||
|
||||
func TestSimple(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
if q.Front() != 0 {
|
||||
t.Fatalf("expected 0 at front, got %d", q.Front())
|
||||
}
|
||||
if q.Back() != minCapacity-1 {
|
||||
t.Fatalf("expected %d at back, got %d", minCapacity-1, q.Back())
|
||||
}
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
if q.Front() != i {
|
||||
t.Error("peek", i, "had value", q.Front())
|
||||
}
|
||||
x := q.PopFront()
|
||||
if x != i {
|
||||
t.Error("remove", i, "had value", x)
|
||||
}
|
||||
}
|
||||
|
||||
q.Clear()
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
q.PushFront(i)
|
||||
}
|
||||
for i := minCapacity - 1; i >= 0; i-- {
|
||||
x := q.PopFront()
|
||||
if x != i {
|
||||
t.Error("remove", i, "had value", x)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestBufferWrap(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
|
||||
for i := 0; i < 3; i++ {
|
||||
q.PopFront()
|
||||
q.PushBack(minCapacity + i)
|
||||
}
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
if q.Front() != i+3 {
|
||||
t.Error("peek", i, "had value", q.Front())
|
||||
}
|
||||
q.PopFront()
|
||||
}
|
||||
}
|
||||
|
||||
func TestBufferWrapReverse(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
q.PushFront(i)
|
||||
}
|
||||
for i := 0; i < 3; i++ {
|
||||
q.PopBack()
|
||||
q.PushFront(minCapacity + i)
|
||||
}
|
||||
|
||||
for i := 0; i < minCapacity; i++ {
|
||||
if q.Back() != i+3 {
|
||||
t.Error("peek", i, "had value", q.Front())
|
||||
}
|
||||
q.PopBack()
|
||||
}
|
||||
}
|
||||
|
||||
func TestLen(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
if q.Len() != 0 {
|
||||
t.Error("empty queue length not 0")
|
||||
}
|
||||
|
||||
for i := 0; i < 1000; i++ {
|
||||
q.PushBack(i)
|
||||
if q.Len() != i+1 {
|
||||
t.Error("adding: queue with", i, "elements has length", q.Len())
|
||||
}
|
||||
}
|
||||
for i := 0; i < 1000; i++ {
|
||||
q.PopFront()
|
||||
if q.Len() != 1000-i-1 {
|
||||
t.Error("removing: queue with", 1000-i-i, "elements has length", q.Len())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestBack(t *testing.T) {
|
||||
var q Deque[int]
|
||||
|
||||
for i := 0; i < minCapacity+5; i++ {
|
||||
q.PushBack(i)
|
||||
if q.Back() != i {
|
||||
t.Errorf("Back returned wrong value")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNew(t *testing.T) {
|
||||
minCap := 64
|
||||
q := New[string](0, minCap)
|
||||
if q.Cap() != 0 {
|
||||
t.Fatal("should not have allowcated mem yet")
|
||||
}
|
||||
q.PushBack("foo")
|
||||
q.PopFront()
|
||||
if q.Len() != 0 {
|
||||
t.Fatal("Len() should return 0")
|
||||
}
|
||||
if q.Cap() != minCap {
|
||||
t.Fatalf("worng capactiy expected %d, got %d", minCap, q.Cap())
|
||||
}
|
||||
|
||||
curCap := 128
|
||||
q = New[string](curCap, minCap)
|
||||
if q.Cap() != curCap {
|
||||
t.Fatalf("Cap() should return %d, got %d", curCap, q.Cap())
|
||||
}
|
||||
if q.Len() != 0 {
|
||||
t.Fatalf("Len() should return 0")
|
||||
}
|
||||
q.PushBack("foo")
|
||||
if q.Cap() != curCap {
|
||||
t.Fatalf("Cap() should return %d, got %d", curCap, q.Cap())
|
||||
}
|
||||
}
|
||||
|
||||
func checkRotate(t *testing.T, size int) {
|
||||
var q Deque[int]
|
||||
for i := 0; i < size; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
|
||||
for i := 0; i < q.Len(); i++ {
|
||||
x := i
|
||||
for n := 0; n < q.Len(); n++ {
|
||||
if q.At(n) != x {
|
||||
t.Fatalf("a[%d] != %d after rotate and copy", n, x)
|
||||
}
|
||||
x++
|
||||
if x == q.Len() {
|
||||
x = 0
|
||||
}
|
||||
}
|
||||
q.Rotate(1)
|
||||
if q.Back() != i {
|
||||
t.Fatal("wrong value during rotation")
|
||||
}
|
||||
}
|
||||
for i := q.Len() - 1; i >= 0; i-- {
|
||||
q.Rotate(-1)
|
||||
if q.Front() != i {
|
||||
t.Fatal("wrong value during reverse rotation")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestRotate(t *testing.T) {
|
||||
checkRotate(t, 10)
|
||||
checkRotate(t, minCapacity)
|
||||
checkRotate(t, minCapacity+minCapacity/2)
|
||||
|
||||
var q Deque[int]
|
||||
for i := 0; i < 10; i++ {
|
||||
q.PushBack(i)
|
||||
}
|
||||
q.Rotate(11)
|
||||
if q.Front() != 1 {
|
||||
t.Error("rotating 11 places should have been same as one")
|
||||
}
|
||||
q.Rotate(-21)
|
||||
if q.Front() != 0 {
|
||||
t.Error("rotating -21 places should have been same as one -1")
|
||||
}
|
||||
q.Rotate(q.Len())
|
||||
if q.Front() != 0 {
|
||||
t.Error("should not have rotated")
|
||||
}
|
||||
q.Clear()
|
||||
q.PushBack(0)
|
||||
q.Rotate(13)
|
||||
if q.Front() != 0 {
|
||||
t.Error("should not have rotated")
|
||||
}
|
||||
}
|
||||
|
||||
func TestAt(t *testing.T) {
	var q Deque[int]

	for i := 0; i < 1000; i++ {
		q.PushBack(i)
	}

	// Front to back.
	for j := 0; j < q.Len(); j++ {
		if q.At(j) != j {
			t.Errorf("index %d doesn't contain %d", j, j)
		}
	}

	// Back to front.
	for j := 1; j <= q.Len(); j++ {
		if q.At(q.Len()-j) != q.Len()-j {
			t.Errorf("index %d doesn't contain %d", q.Len()-j, q.Len()-j)
		}
	}
}

func TestSet(t *testing.T) {
	var q Deque[int]

	for i := 0; i < 1000; i++ {
		q.PushBack(i)
		q.Set(i, i+50)
	}

	// Front to back.
	for j := 0; j < q.Len(); j++ {
		if q.At(j) != j+50 {
			t.Errorf("index %d doesn't contain %d", j, j+50)
		}
	}
}

func TestClear(t *testing.T) {
	var q Deque[int]

	for i := 0; i < 100; i++ {
		q.PushBack(i)
	}
	if q.Len() != 100 {
		t.Error("push: queue with 100 elements has length", q.Len())
	}
	cap := len(q.buf)
	q.Clear()
	if q.Len() != 0 {
		t.Error("empty queue length not 0 after clear")
	}
	if len(q.buf) != cap {
		t.Error("queue capacity changed after clear")
	}

	// Check that there are no remaining references after Clear().
	for i := 0; i < len(q.buf); i++ {
		if q.buf[i] != 0 {
			t.Error("queue has non-zero deleted elements after Clear()")
			break
		}
	}
}

func TestIndex(t *testing.T) {
	var q Deque[rune]
	for _, x := range "Hello, 世界" {
		q.PushBack(x)
	}
	idx := q.Index(func(item rune) bool {
		c := item
		return unicode.Is(unicode.Han, c)
	})
	if idx != 7 {
		t.Fatal("Expected index 7, got", idx)
	}
	idx = q.Index(func(item rune) bool {
		c := item
		return c == 'H'
	})
	if idx != 0 {
		t.Fatal("Expected index 0, got", idx)
	}
	idx = q.Index(func(item rune) bool {
		return false
	})
	if idx != -1 {
		t.Fatal("Expected index -1, got", idx)
	}
}

func TestRIndex(t *testing.T) {
	var q Deque[rune]
	for _, x := range "Hello, 世界" {
		q.PushBack(x)
	}
	idx := q.RIndex(func(item rune) bool {
		c := item
		return unicode.Is(unicode.Han, c)
	})
	if idx != 8 {
		t.Fatal("Expected index 8, got", idx)
	}
	idx = q.RIndex(func(item rune) bool {
		c := item
		return c == 'H'
	})
	if idx != 0 {
		t.Fatal("Expected index 0, got", idx)
	}
	idx = q.RIndex(func(item rune) bool {
		return false
	})
	if idx != -1 {
		t.Fatal("Expected index -1, got", idx)
	}
}

func TestInsert(t *testing.T) {
	q := new(Deque[rune])
	for _, x := range "ABCDEFG" {
		q.PushBack(x)
	}
	q.Insert(4, 'x') // ABCDxEFG
	if q.At(4) != 'x' {
		t.Error("expected x at position 4, got", q.At(4))
	}

	q.Insert(2, 'y') // AByCDxEFG
	if q.At(2) != 'y' {
		t.Error("expected y at position 2")
	}
	if q.At(5) != 'x' {
		t.Error("expected x at position 5")
	}

	q.Insert(0, 'b') // bAByCDxEFG
	if q.Front() != 'b' {
		t.Error("expected b inserted at front, got", q.Front())
	}

	q.Insert(q.Len(), 'e') // bAByCDxEFGe

	for i, x := range "bAByCDxEFGe" {
		if q.PopFront() != x {
			t.Error("expected", x, "at position", i)
		}
	}

	qs := New[string](16)

	for i := 0; i < qs.Cap(); i++ {
		qs.PushBack(fmt.Sprint(i))
	}
	// deque:  0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
	// buffer: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
	for i := 0; i < qs.Cap()/2; i++ {
		qs.PopFront()
	}
	// deque:  8 9 10 11 12 13 14 15
	// buffer: [_,_,_,_,_,_,_,_,8,9,10,11,12,13,14,15]
	for i := 0; i < qs.Cap()/4; i++ {
		qs.PushBack(fmt.Sprint(qs.Cap() + i))
	}
	// deque:  8 9 10 11 12 13 14 15 16 17 18 19
	// buffer: [16,17,18,19,_,_,_,_,8,9,10,11,12,13,14,15]

	at := qs.Len() - 2
	qs.Insert(at, "x")
	// deque:  8 9 10 11 12 13 14 15 16 17 x 18 19
	// buffer: [16,17,x,18,19,_,_,_,8,9,10,11,12,13,14,15]
	if qs.At(at) != "x" {
		t.Error("expected x at position", at)
	}

	qs.Insert(2, "y")
	// deque:  8 9 y 10 11 12 13 14 15 16 17 x 18 19
	// buffer: [16,17,x,18,19,_,_,8,9,y,10,11,12,13,14,15]
	if qs.At(2) != "y" {
		t.Error("expected y at position 2")
	}
	if qs.At(at+1) != "x" {
		t.Error("expected x at position", at+1)
	}

	qs.Insert(0, "b")
	// deque:  b 8 9 y 10 11 12 13 14 15 16 17 x 18 19
	// buffer: [16,17,x,18,19,_,b,8,9,y,10,11,12,13,14,15]
	if qs.Front() != "b" {
		t.Error("expected b inserted at front, got", qs.Front())
	}

	qs.Insert(qs.Len(), "e")
	if qs.Cap() != qs.Len() {
		t.Fatal("Expected full buffer")
	}
	// deque:  b 8 9 y 10 11 12 13 14 15 16 17 x 18 19 e
	// buffer: [16,17,x,18,19,e,b,8,9,y,10,11,12,13,14,15]
	for i, x := range []string{"16", "17", "x", "18", "19", "e", "b", "8", "9", "y", "10", "11", "12", "13", "14", "15"} {
		if qs.buf[i] != x {
			t.Error("expected", x, "at buffer position", i)
		}
	}
	for i, x := range []string{"b", "8", "9", "y", "10", "11", "12", "13", "14", "15", "16", "17", "x", "18", "19", "e"} {
		if qs.Front() != x {
			t.Error("expected", x, "at position", i, "got", qs.Front())
		}
		qs.PopFront()
	}
}

func TestRemove(t *testing.T) {
	q := new(Deque[rune])
	for _, x := range "ABCDEFG" {
		q.PushBack(x)
	}

	if q.Remove(4) != 'E' { // ABCDFG
		t.Error("expected E from position 4")
	}

	if q.Remove(2) != 'C' { // ABDFG
		t.Error("expected C at position 2")
	}
	if q.Back() != 'G' {
		t.Error("expected G at back")
	}

	if q.Remove(0) != 'A' { // BDFG
		t.Error("expected to remove A from front")
	}
	if q.Front() != 'B' {
		t.Error("expected B at front")
	}

	if q.Remove(q.Len()-1) != 'G' { // BDF
		t.Error("expected to remove G from back")
	}
	if q.Back() != 'F' {
		t.Error("expected F at back")
	}

	if q.Len() != 3 {
		t.Error("wrong length")
	}
}

func TestFrontBackOutOfRangePanics(t *testing.T) {
	const msg = "should panic when peeking empty queue"
	var q Deque[int]
	assertPanics(t, msg, func() {
		q.Front()
	})
	assertPanics(t, msg, func() {
		q.Back()
	})

	q.PushBack(1)
	q.PopFront()

	assertPanics(t, msg, func() {
		q.Front()
	})
	assertPanics(t, msg, func() {
		q.Back()
	})
}

func TestPopFrontOutOfRangePanics(t *testing.T) {
	var q Deque[int]

	assertPanics(t, "should panic when removing empty queue", func() {
		q.PopFront()
	})

	q.PushBack(1)
	q.PopFront()

	assertPanics(t, "should panic when removing emptied queue", func() {
		q.PopFront()
	})
}

func TestPopBackOutOfRangePanics(t *testing.T) {
	var q Deque[int]

	assertPanics(t, "should panic when removing empty queue", func() {
		q.PopBack()
	})

	q.PushBack(1)
	q.PopBack()

	assertPanics(t, "should panic when removing emptied queue", func() {
		q.PopBack()
	})
}

func TestAtOutOfRangePanics(t *testing.T) {
	var q Deque[int]

	q.PushBack(1)
	q.PushBack(2)
	q.PushBack(3)

	assertPanics(t, "should panic when negative index", func() {
		q.At(-4)
	})

	assertPanics(t, "should panic when index greater than length", func() {
		q.At(4)
	})
}

func TestSetOutOfRangePanics(t *testing.T) {
	var q Deque[int]

	q.PushBack(1)
	q.PushBack(2)
	q.PushBack(3)

	assertPanics(t, "should panic when negative index", func() {
		q.Set(-4, 1)
	})

	assertPanics(t, "should panic when index greater than length", func() {
		q.Set(4, 1)
	})
}

func TestInsertOutOfRangePanics(t *testing.T) {
	q := new(Deque[string])

	assertPanics(t, "should panic when inserting out of range", func() {
		q.Insert(1, "X")
	})

	q.PushBack("A")

	assertPanics(t, "should panic when inserting at negative index", func() {
		q.Insert(-1, "Y")
	})

	assertPanics(t, "should panic when inserting out of range", func() {
		q.Insert(2, "B")
	})
}

func TestRemoveOutOfRangePanics(t *testing.T) {
	q := new(Deque[string])

	assertPanics(t, "should panic when removing from empty queue", func() {
		q.Remove(0)
	})

	q.PushBack("A")

	assertPanics(t, "should panic when removing at negative index", func() {
		q.Remove(-1)
	})

	assertPanics(t, "should panic when removing out of range", func() {
		q.Remove(1)
	})
}

func TestSetMinCapacity(t *testing.T) {
	var q Deque[string]
	exp := uint(8)
	q.SetMinCapacity(exp)
	q.PushBack("A")
	if q.minCap != 1<<exp {
		t.Fatal("wrong minimum capacity")
	}
	if len(q.buf) != 1<<exp {
		t.Fatal("wrong buffer size")
	}
	q.PopBack()
	if q.minCap != 1<<exp {
		t.Fatal("wrong minimum capacity")
	}
	if len(q.buf) != 1<<exp {
		t.Fatal("wrong buffer size")
	}
	q.SetMinCapacity(0)
	if q.minCap != minCapacity {
		t.Fatal("wrong minimum capacity")
	}
}

func assertPanics(t *testing.T, name string, f func()) {
	defer func() {
		if r := recover(); r == nil {
			t.Errorf("%s: didn't panic as expected", name)
		}
	}()

	f()
}

func BenchmarkPushFront(b *testing.B) {
	var q Deque[int]
	for i := 0; i < b.N; i++ {
		q.PushFront(i)
	}
}

func BenchmarkPushBack(b *testing.B) {
	var q Deque[int]
	for i := 0; i < b.N; i++ {
		q.PushBack(i)
	}
}

func BenchmarkSerial(b *testing.B) {
	var q Deque[int]
	for i := 0; i < b.N; i++ {
		q.PushBack(i)
	}
	for i := 0; i < b.N; i++ {
		q.PopFront()
	}
}

func BenchmarkSerialReverse(b *testing.B) {
	var q Deque[int]
	for i := 0; i < b.N; i++ {
		q.PushFront(i)
	}
	for i := 0; i < b.N; i++ {
		q.PopBack()
	}
}

func BenchmarkRotate(b *testing.B) {
	q := new(Deque[int])
	for i := 0; i < b.N; i++ {
		q.PushBack(i)
	}
	b.ResetTimer()
	// N complete rotations on length N - 1.
	for i := 0; i < b.N; i++ {
		q.Rotate(b.N - 1)
	}
}

func BenchmarkInsert(b *testing.B) {
	q := new(Deque[int])
	for i := 0; i < b.N; i++ {
		q.PushBack(i)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		q.Insert(q.Len()/2, -i)
	}
}

func BenchmarkRemove(b *testing.B) {
	q := new(Deque[int])
	for i := 0; i < b.N; i++ {
		q.PushBack(i)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		q.Remove(q.Len() / 2)
	}
}

func BenchmarkYoyo(b *testing.B) {
	var q Deque[int]
	for i := 0; i < b.N; i++ {
		for j := 0; j < 65536; j++ {
			q.PushBack(j)
		}
		for j := 0; j < 65536; j++ {
			q.PopFront()
		}
	}
}

func BenchmarkYoyoFixed(b *testing.B) {
	var q Deque[int]
	q.SetMinCapacity(16)
	for i := 0; i < b.N; i++ {
		for j := 0; j < 65536; j++ {
			q.PushBack(j)
		}
		for j := 0; j < 65536; j++ {
			q.PopFront()
		}
	}
}

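The tests and benchmarks above exercise the generic Deque API end to end: PushBack/PushFront, PopFront/PopBack, Front/Back, At/Set, Insert/Remove, Rotate, Index/RIndex, Clear, and SetMinCapacity. Below is a minimal usage sketch of that API; the import path is an assumption for illustration (point it at wherever this generic deque lives in your module), while the calls themselves mirror what the tests use.

```go
package main

import (
	"fmt"

	// Hypothetical import path; adjust to where the generic Deque shown in
	// these tests is vendored in your project.
	"github.com/duanhf2012/origin/util/deque"
)

func main() {
	var q deque.Deque[int] // the zero value is ready to use, as in the tests
	for i := 0; i < 5; i++ {
		q.PushBack(i) // 0 1 2 3 4
	}
	q.PushFront(-1) // -1 0 1 2 3 4
	q.Rotate(2)     // moves two items from front to back: 1 2 3 4 -1 0

	fmt.Println(q.Front(), q.Back(), q.Len()) // 1 0 6

	for q.Len() > 0 {
		q.PopFront() // drain front to back
	}
}
```
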
@@ -69,6 +69,13 @@ func (pq *PriorityQueue) Pop() *Item {
 	return heap.Pop(&pq.priorityQueueSlice).(*Item)
 }
 
+func (pq *PriorityQueue) GetHighest() *Item{
+	if len(pq.priorityQueueSlice)>0 {
+		return pq.priorityQueueSlice[0]
+	}
+
+	return nil
+}
 func (pq *PriorityQueue) Len() int {
 	return len(pq.priorityQueueSlice)
 }

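The added GetHighest method lets a caller peek at the highest-priority item without removing it, while Pop continues to remove items via container/heap. A hedged sketch of the peek-then-pop pattern this enables follows; the isDue callback and the popDueItems helper are illustrative only and are not part of the diff.

```go
// Sketch only: pq is assumed to be an already-populated *PriorityQueue
// (construction and push are outside this diff).
func popDueItems(pq *PriorityQueue, isDue func(*Item) bool) []*Item {
	var due []*Item
	for pq.Len() > 0 {
		top := pq.GetHighest() // inspect the top item, leave it in the queue
		if top == nil || !isDue(top) {
			break // nothing (more) due; keep the rest queued
		}
		due = append(due, pq.Pop()) // now actually remove it
	}
	return due
}
```

This is the usual shape for scheduling queues: check the head cheaply on every tick, and only pop once it is actually ready.
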
25 util/sysprocess/process.go Normal file
@@ -0,0 +1,25 @@
package sysprocess

import (
	"github.com/shirou/gopsutil/process"
	"os"
)

// GetProcessNameByPID returns the executable name of the process with the
// given PID, using gopsutil.
func GetProcessNameByPID(pid int32) (string, error) {
	proc, err := process.NewProcess(pid)
	if err != nil {
		return "", err
	}

	processName, err := proc.Name()
	if err != nil {
		return "", err
	}

	return processName, nil
}

// GetMyProcessName returns the name of the current process.
func GetMyProcessName() (string, error) {
	return GetProcessNameByPID(int32(os.Getpid()))
}

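A short usage sketch for the new sysprocess helpers; the import path assumes the repository's module path, and the printed messages are illustrative.

```go
package main

import (
	"fmt"
	"os"

	"github.com/duanhf2012/origin/util/sysprocess"
)

func main() {
	// Name of the current process (wraps gopsutil's Process.Name()).
	name, err := sysprocess.GetMyProcessName()
	if err != nil {
		fmt.Println("cannot resolve own process name:", err)
		return
	}
	fmt.Println("running as:", name)

	// Same lookup by PID; here we pass our own PID for demonstration.
	byPid, err := sysprocess.GetProcessNameByPID(int32(os.Getpid()))
	if err == nil {
		fmt.Println("lookup by pid:", byPid)
	}
}
```
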
@@ -7,6 +7,7 @@ import (
 	"reflect"
 	"runtime"
 	"time"
+	"sync/atomic"
 )
 
 // ITimer
@@ -29,7 +30,7 @@ type OnAddTimer func(timer ITimer)
 // Timer
 type Timer struct {
 	Id        uint64
-	cancelled bool          // whether the timer has been cancelled
+	cancelled int32         // whether the timer has been cancelled
 	C         chan ITimer   // timer channel
 	interval  time.Duration // interval (for repeating timers)
 	fireTime  time.Time     // fire time
@@ -131,7 +132,7 @@ func (t *Timer) Do() {
 			buf := make([]byte, 4096)
 			l := runtime.Stack(buf, false)
 			errString := fmt.Sprint(r)
-			log.SError("core dump info[", errString, "]\n", string(buf[:l]))
+			log.Dump(string(buf[:l]),log.String("error",errString))
 		}
 	}()
 
@@ -171,12 +172,12 @@ func (t *Timer) GetInterval() time.Duration {
 }
 
 func (t *Timer) Cancel() {
-	t.cancelled = true
+	atomic.StoreInt32(&t.cancelled,1)
 }
 
 // IsActive reports whether the timer has not been cancelled.
 func (t *Timer) IsActive() bool {
-	return !t.cancelled
+	return atomic.LoadInt32(&t.cancelled) == 0
 }
 
 func (t *Timer) GetName() string {
@@ -217,7 +218,7 @@ func (c *Cron) Do() {
 			buf := make([]byte, 4096)
 			l := runtime.Stack(buf, false)
 			errString := fmt.Sprint(r)
-			log.SError("core dump info[", errString, "]\n", string(buf[:l]))
+			log.Dump(string(buf[:l]),log.String("error",errString))
 		}
 	}()
 
@@ -273,7 +274,7 @@ func (c *Ticker) Do() {
 			buf := make([]byte, 4096)
 			l := runtime.Stack(buf, false)
 			errString := fmt.Sprint(r)
-			log.SError("core dump info[", errString, "]\n", string(buf[:l]))
+			log.Dump(string(buf[:l]),log.String("error",errString))
 		}
 	}()
 
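This change replaces the plain cancelled bool with an int32 driven by sync/atomic, so Cancel (called from user goroutines) and IsActive (read on the timer's own firing path) no longer race. Below is a minimal, self-contained sketch of the same flag pattern, assuming nothing beyond the standard library; it is not the Timer implementation itself.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// cancelFlag mirrors the pattern adopted above: one goroutine sets the flag,
// another reads it, with no data race and no mutex.
type cancelFlag struct {
	cancelled int32
}

func (c *cancelFlag) Cancel()        { atomic.StoreInt32(&c.cancelled, 1) }
func (c *cancelFlag) IsActive() bool { return atomic.LoadInt32(&c.cancelled) == 0 }

func main() {
	var flag cancelFlag

	go func() {
		time.Sleep(10 * time.Millisecond)
		flag.Cancel() // concurrent cancellation from another goroutine
	}()

	for flag.IsActive() {
		time.Sleep(time.Millisecond)
	}
	fmt.Println("cancelled; the timer callback would be skipped here")
}
```

On Go 1.19+ the same idea can be written with atomic.Bool; an int32 field keeps the change minimal relative to the existing struct.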