Compare commits

..

31 Commits

Author SHA1 Message Date
duanhf2012
a61979e985 Improve message queue service persistence 2023-05-17 11:26:29 +08:00
duanhf2012
6de25d1c6d Improve rankservice error returns 2023-05-09 14:34:33 +08:00
duanhf2012
b392617d6e Improve performance monitoring and rankservice persistence 2023-05-09 14:06:17 +08:00
duanhf2012
92fdb7860c Improve service RPC within the local node 2023-05-04 17:53:42 +08:00
duanhf2012
f78d0d58be Improve RPC and rankservice persistence 2023-05-04 17:35:40 +08:00
duanhf2012
5675681ab1 Improve the concurrent and rpc modules 2023-05-04 14:21:29 +08:00
duanhf2012
ddeaaf7d77 Improve the concurrent module 2023-04-11 10:29:06 +08:00
duanhf2012
1174b47475 Extend the IService interface with IConcurrent 2023-04-04 16:36:05 +08:00
duanhf2012
18fff3b567 Improve the concurrent module: the task's return value now controls whether the callback runs 2023-03-31 15:12:27 +08:00
duanhf2012
7ab6c88f9c Clean up and improve rpc 2023-03-23 10:06:41 +08:00
duanhf2012
6b64de06a2 Add packet length-field configuration to TcpService 2023-03-22 14:59:22 +08:00
duanhf2012
95b153f8cf Improve automatic calculation of the network packet length field 2023-03-20 15:20:04 +08:00
duanhf2012
f3ff09b90f Improve RPC call error logging
Require that configured services are installed
Improve node removal when a node disconnects
2023-03-17 12:09:00 +08:00
duanhf2012
f9738fb9d0 Merge branch 'master' of https://github.com/duanhf2012/origin 2023-03-06 15:57:33 +08:00
duanhf2012
91e773aa8c Document that the RPC function naming convention also supports RPCFunctionName 2023-03-06 15:57:23 +08:00
origin
c9b96404f4 Merge pull request #873 from duanhf2012/dependabot/go_modules/golang.org/x/crypto-0.1.0
Bump golang.org/x/crypto from 0.0.0-20201216223049-8b5274cf687f to 0.1.0
2023-03-06 15:45:40 +08:00
duanhf2012
aaae63a674 Add support for the RPCXXX naming format for RPC functions 2023-03-06 15:41:51 +08:00
duanhf2012
47dc21aee1 Report an error when RPC reply parameters do not match the request parameters 2023-03-06 11:47:23 +08:00
dependabot[bot]
4d09532801 Bump golang.org/x/crypto from 0.0.0-20201216223049-8b5274cf687f to 0.1.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.0.0-20201216223049-8b5274cf687f to 0.1.0.
- [Release notes](https://github.com/golang/crypto/releases)
- [Commits](https://github.com/golang/crypto/commits/v0.1.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-06 02:26:33 +00:00
origin
d3ad7fc898 Merge pull request #871 from duanhf2012/dependabot/go_modules/golang.org/x/text-0.3.8
Bump golang.org/x/text from 0.3.6 to 0.3.8
2023-03-06 10:08:56 +08:00
dependabot[bot]
ba2b0568b2 Bump golang.org/x/text from 0.3.6 to 0.3.8
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.3.6 to 0.3.8.
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.3.6...v0.3.8)

---
updated-dependencies:
- dependency-name: golang.org/x/text
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-23 07:30:17 +00:00
duanhf2012
5a3600bd62 Document service setup, start, and stop order 2023-02-22 15:43:56 +08:00
duanhf2012
4783d05e75 Improve the concurrency module 2023-02-22 15:13:51 +08:00
duanhf2012
8cc1b1afcb Expand the concurrent function call documentation 2023-02-22 10:49:01 +08:00
duanhf2012
53d9392901 Add "Concurrent function calls" documentation 2023-02-22 10:45:17 +08:00
duanhf2012
8111b12da5 Add asynchronous function execution 2023-02-22 09:53:50 +08:00
duanhf2012
0ebbe0e31d Improve service start/stop ordering 2023-02-16 15:59:07 +08:00
duanhf2012
e326e342f2 Add ranking extension data and data modification 2023-02-13 17:04:36 +08:00
duanhf2012
a7c6b45764 Improve local-node and cross-node RPC structure & simplify the raw RPC interface 2023-01-31 13:50:41 +08:00
duanhf2012
541abd93b4 Merge branch 'master' of https://github.com/duanhf2012/origin 2023-01-29 10:16:23 +08:00
duanhf2012
60064cbba6 Improve the network module 2022-12-27 14:20:18 +08:00
43 changed files with 5388 additions and 918 deletions

README.md

@@ -1,10 +1,10 @@
Introduction to the origin game server engine
=============================================
origin is a distributed open-source game server engine written in Go (golang). It suits the development of all kinds of game servers, including H5 (HTML5) game servers.
Problems origin solves:
* Like the design of the Go language itself, origin's overall design always aims for a simple, easy-to-use model that enables rapid development.
* Server architectures can be laid out quickly and flexibly according to business needs.
* It exploits multiple cores by placing different services on different nodes, which cooperate efficiently.
@@ -12,12 +12,16 @@ Problems origin solves:
* A rich, robust utility library.
Hello world!
------------
Let's build an origin server step by step. First download the [origin engine](https://github.com/duanhf2012/origin "origin engine"), or run:
```go
go get -v -u github.com/duanhf2012/origin
```
It is downloaded into your GOPATH; create main.go under src with the following content:
```go
package main
@@ -29,16 +33,20 @@ func main() {
node.Start()
}
```
The above is only skeleton code; see Chapter 1 for concrete run parameters and configuration.
An origin process creates one node object and calls Start to run it. You can also download the origin engine sample directly:
```
go get -v -u github.com/duanhf2012/originserver
```
All explanations in this document are based on that sample.
The three core origin objects
-----------------------------
* Node: each Node can be thought of as one origin process.
* Service: an independent service, i.e. a large functional module. It is a child of a Node, created and then installed into the Node object. Services can expose RPC and other external-facing functionality.
* Module: the smallest unit in origin. We strongly recommend splitting all business logic into small Modules; the engine monitors the running state of every service and Module, for example detecting slow handlers and dead-loop functions. Modules can form a tree, and a Service is itself a kind of Module.
@@ -46,7 +54,8 @@ The three core origin objects
The core cluster configuration lives in the config/cluster directory; e.g. config/cluster in github.com/duanhf2012/originserver contains cluster.json and service.json.
cluster.json:
------------
```
{
"NodeList":[
@@ -70,21 +79,26 @@ cluster.json:
}
]
```
---
The configuration above defines two node server programs:
* NodeId: the node Id of the origin process; must be unique.
* Private: whether the node is private. If true, other nodes will not discover it, but it can still run on its own.
* ListenAddr: listen address of the RPC service.
* MaxRpcParamLen: maximum length of an RPC parameter packet. Optional; by default a single RPC call supports up to 4294967295 bytes (the uint32 maximum).
* NodeName: node name.
* remark: remark, optional.
* ServiceList: the services this Node owns. Note that origin installs and initializes them in the configured order; services are stopped in the reverse order.
---
In the start command originserver -start nodeid=1, nodeid selects which services to load from this configuration.
Run originserver -help for more options.
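The ServiceList note above states that services are initialized in the configured order and stopped in reverse. A minimal self-contained sketch of that pattern (illustrative names, not origin's actual API):

```go
package main

import "fmt"

// startAll "installs" services in the configured order.
func startAll(services []string) (started []string) {
	for _, s := range services {
		started = append(started, s) // OnInit/OnStart run in config order
	}
	return started
}

// stopAll tears them down in the reverse order, as the note describes.
func stopAll(started []string) (stopped []string) {
	for i := len(started) - 1; i >= 0; i-- {
		stopped = append(stopped, started[i]) // OnRelease runs last-to-first
	}
	return stopped
}

func main() {
	order := startAll([]string{"TestService1", "TestService2"})
	fmt.Println(order, stopAll(order))
}
```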
service.json:
-------------
```
{
"Global": {
@@ -103,7 +117,7 @@ service.json:
"Keyfile":""
}
]
},
"TcpService":{
"ListenAddr":"0.0.0.0:9030",
@@ -160,10 +174,12 @@ service.json:
}
```
---
The configuration has three parts: Global, Service, and NodeService. Global is global configuration, readable from any service via cluster.GetCluster().GetGlobalCfg(). NodeService holds per-node service configuration: at startup, the entry matching the nodeid is looked up there, and if none is found, the shared Service section is used as a fallback.
**HttpService configuration**
* ListenAddr: HTTP listen address
* ReadTimeout: network read timeout, in milliseconds
* WriteTimeout: network write timeout, in milliseconds
@@ -172,6 +188,7 @@ service.json:
* CAFile: certificate file; if HTTPS termination is handled by a web server proxy in front, this can be ignored
**TcpService configuration**
* ListenAddr: listen address
* MaxConnNum: maximum number of connections
* PendingWriteNum: maximum size of the outbound network queue
@@ -180,20 +197,21 @@ service.json:
* MaxMsgLen: maximum packet length
**WSService configuration**
* ListenAddr: listen address
* MaxConnNum: maximum number of connections
* PendingWriteNum: maximum size of the outbound network queue
* MaxMsgLen: maximum packet length
---
Chapter 1: origin basics
------------------------
See simple_service in github.com/duanhf2012/originserver, which creates two services, TestService1.go and TestService2.go.
simple_service/TestService1.go:
```
package simple_service
@@ -223,7 +241,9 @@ func (slf *TestService1) OnInit() error {
```
simple_service/TestService2.go:
```
import (
"github.com/duanhf2012/origin/node"
@@ -263,6 +283,7 @@ func main(){
```
* config/cluster/cluster.json:
```
{
"NodeList":[
@@ -279,6 +300,7 @@ func main(){
```
After building, running produces:
```
#originserver -start nodeid=1
TestService1 OnInit.
@@ -286,13 +308,15 @@ TestService2 OnInit.
```
Chapter 2: common Service features
----------------------------------
Timers
------
Timed tasks are among the most commonly used features in development; origin provides two timing styles.
The first is AfterFunc, which triggers a callback after a given interval; see simple_service/TestService2.go:
```
func (slf *TestService2) OnInit() error {
fmt.Printf("TestService2 OnInit.\n")
@@ -305,10 +329,11 @@ func (slf *TestService2) OnSecondTick(){
slf.AfterFunc(time.Second*1,slf.OnSecondTick)
}
```
The log now prints "tick." once per second. AfterFunc is one-shot: to fire again, the timer must be re-armed inside the callback.
The second style works like the Linux crontab command:
```
func (slf *TestService2) OnInit() error {
@@ -327,27 +352,29 @@ func (slf *TestService2) OnCron(cron *timer.Cron){
fmt.Printf(":A minute passed!\n")
}
```
When this runs, it prints ":A minute passed!" each time the minute changes.
Enabling multi-goroutine mode
-----------------------------
By design, every origin service runs in a single goroutine, so business logic never has to worry about thread safety, which greatly reduces development effort. Some scenarios, however, do not need that guarantee and do need concurrency. For example, a service that only handles database operations blocks while waiting on the database; with a single goroutine, the service's database operations are processed one at a time in a queue, which is too slow. In such cases you can enable this mode and choose the number of worker goroutines:
```
func (slf *TestService1) OnInit() error {
fmt.Printf("TestService1 OnInit.\n")
//enable multi-goroutine mode: 10 goroutines process requests concurrently
slf.SetGoRoutineNum(10)
return nil
}
```
Performance monitoring
----------------------
When developing a large system, code-quality issues often cause slow handlers or dead loops; this feature detects them. Usage:
```
@@ -382,6 +409,7 @@ func main(){
}
```
Above, GetProfiler().SetOverTime and slf.GetProfiler().SetMaxOverTimer set the monitoring thresholds,
and main.go enables the performance reporter with a 10-second reporting interval. Because the timer in the example contains a dead loop, a report like the following is produced:
@@ -390,10 +418,11 @@ process count 0,take time 0 Milliseconds,average 0 Milliseconds/per.
too slow process:Timer_orginserver/simple_service.(*TestService1).Loop-fm is take 38003 Milliseconds
which points directly at the Loop function in the TestService1 service.
Listening for node connect and disconnect events
------------------------------------------------
Some business logic needs to know when a node disconnects; register a listener like this:
```
func (ts *TestService) OnInit() error{
ts.RegRpcListener(ts)
@@ -408,13 +437,14 @@ func (ts *TestService) OnNodeDisconnect(nodeId int){
}
```
Chapter 3: using Modules
------------------------
Module creation and destruction
-------------------------------
A Service can be considered a kind of Module; it has all Module functionality. See originserver/simple_module/TestService3.go in the sample code.
```
package simple_module
@@ -476,7 +506,9 @@ func (slf *TestService3) OnInit() error {
}
```
OnInit builds a linear module chain TestService3->module1->module2. AddModule returns the Module's Id; auto-generated Ids start at 1e17, and you can also assign your own internal Ids. When ReleaseModule releases module1, module2 is released as well, and the OnRelease functions are called automatically. The log order is:
```
Module1 OnInit.
Module2 OnInit.
@@ -484,14 +516,16 @@ module1 id is 100000000000000001, module2 id is 100000000000000002
Module2 Release.
Module1 Release.
```
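The release cascade shown in the log above can be sketched as a tree walk (illustrative only, not origin's Module implementation): releasing a module first releases its children, then itself.

```go
package main

import "fmt"

// module is a toy stand-in for an origin Module holding child modules.
type module struct {
	name     string
	children []*module
}

// release records the release order: subtree first, then this module,
// matching the "Module2 Release." then "Module1 Release." log order.
func (m *module) release(order *[]string) {
	for _, c := range m.children {
		c.release(order)
	}
	*order = append(*order, m.name)
}

func main() {
	module2 := &module{name: "Module2"}
	module1 := &module{name: "Module1", children: []*module{module2}}
	var order []string
	module1.release(&order)
	fmt.Println(order) // [Module2 Module1]
}
```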
Timers are available inside Modules too; see the timer section of Chapter 2.
Chapter 4: using events
-----------------------
Events are an important part of origin: they deliver notifications between services, or between a service and a module, within the same node. The built-in services such as TcpService and HttpService are implemented on top of events. It is a classic observer pattern. The event package has two interfaces: event.IEventProcessor, which provides registration and unregistration, and event.IEventHandler, which provides broadcasting.
In simple_event/TestService4.go:
```
package simple_event
@@ -535,6 +569,7 @@ func (slf *TestService4) TriggerEvent(){
```
In simple_event/TestService5.go:
```
package simple_event
@@ -590,19 +625,24 @@ func (slf *TestService5) OnServiceEvent(ev event.IEvent){
```
Ten seconds after the program starts, slf.TriggerEvent is called to broadcast the event, and TestService5 receives:
```
OnServiceEvent type :1001 data:event data.
OnModuleEvent type :1001 data:event data.
```
For the listener registered in the TestModule above, the subscription is unregistered automatically when the Module is released.
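The observer flow described above — handlers register for an event type, a broadcast notifies every listener — can be reduced to a minimal sketch (hypothetical types, not origin's event.IEventProcessor API):

```go
package main

import "fmt"

type event struct {
	Type int
	Data string
}

// processor keeps listeners per event type, like an event processor would.
type processor struct {
	listeners map[int][]func(event)
}

func (p *processor) register(typ int, h func(event)) {
	if p.listeners == nil {
		p.listeners = map[int][]func(event){}
	}
	p.listeners[typ] = append(p.listeners[typ], h)
}

// broadcast notifies every handler registered for the event's type.
func (p *processor) broadcast(ev event) {
	for _, h := range p.listeners[ev.Type] {
		h(ev)
	}
}

func main() {
	var p processor
	p.register(1001, func(ev event) {
		fmt.Println("OnServiceEvent type:", ev.Type, "data:", ev.Data)
	})
	p.broadcast(event{Type: 1001, Data: "event data"})
}
```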
Chapter 5: using RPC
--------------------
RPC is the main way services talk to each other. It supports calls across processes (nodes), and a specific nodeid can also be targeted. Example:
simple_rpc/TestService6.go:
```go
package simple_rpc
import (
@@ -627,6 +667,7 @@ type InputData struct {
B int
}
// Note: RPC function names must have the form RPC_FunctionName or RPCFunctionName; RPC_Sum below could equally be written RPCSum
func (slf *TestService6) RPC_Sum(input *InputData,output *int) error{
*output = input.A+input.B
return nil
@@ -635,6 +676,7 @@ func (slf *TestService6) RPC_Sum(input *InputData,output *int) error{
```
simple_rpc/TestService7.go:
```
package simple_rpc
@@ -709,11 +751,82 @@ func (slf *TestService7) GoTest(){
}
```
You can configure TestService6 on a different Node, e.g. NodeId 2. Within one subnet, origin makes the call transparently; developers only need to think about the relationships between Services, which is also the core consideration in your server architecture design.
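RPC_Sum's (request pointer, reply pointer) error signature mirrors Go's standard net/rpc convention. For comparison, here is the same Sum served in-process with the standard library over a pipe — this is plain net/rpc, not origin's RPC:

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// InputData mirrors the request struct used by RPC_Sum above.
type InputData struct{ A, B int }

type Arith struct{}

// Sum follows the (args pointer, reply pointer) error shape net/rpc requires.
func (a *Arith) Sum(input *InputData, output *int) error {
	*output = input.A + input.B
	return nil
}

// callSum wires a client and server together over an in-process pipe.
func callSum(a, b int) int {
	cliConn, srvConn := net.Pipe()
	srv := rpc.NewServer()
	if err := srv.Register(new(Arith)); err != nil {
		panic(err)
	}
	go srv.ServeConn(srvConn)
	client := rpc.NewClient(cliConn)
	defer client.Close()
	var sum int
	if err := client.Call("Arith.Sum", &InputData{A: a, B: b}, &sum); err != nil {
		panic(err)
	}
	return sum
}

func main() {
	fmt.Println(callSum(1, 2)) // 3
}
```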
Chapter 6: concurrent function calls
------------------------------------
It is common to offload tasks to other goroutines for concurrent execution and then run a callback back on the service's worker goroutine once they finish. Usage is simple: first enable the feature:
```
//size the goroutine pool from the CPU count; suggested: (1) CPU-bound work: 1.0 (2) I/O-bound work: 2.0 or higher
slf.OpenConcurrentByNumCPU(1.0)
//or set explicit limits: at least 5 goroutines, at most 10, task channel cap 1000000
//origin scales the pool between the min and max based on task volume
//slf.OpenConcurrent(5, 10, 1000000)
```
Example usage:
```
func (slf *TestService13) testAsyncDo() {
	var context struct {
		data int64
	}
	//1. Basic use:
	//the first function runs in the goroutine pool; when it finishes, a
	//completion event is queued back to the service's worker goroutine,
	//where the second function runs, so touching service state there is safe.
	slf.AsyncDo(func() bool {
		//runs in the goroutine pool
		context.data = 100
		return true
	}, func(err error) {
		//runs in the service goroutine
		fmt.Print(context.data) //prints 100
	})
	//2. Queue-ordered execution:
	//the first argument is a queue Id; tasks with the same queue Id are executed
	//one after another in the pool. Both calls below pass queueId 1, so they
	//line up in queue 1 and run sequentially.
	queueId := int64(1)
	for i := 0; i < 2; i++ {
		slf.AsyncDoByQueue(queueId, func() bool {
			//called twice, but the two executions are serialized
			return true
		}, func(err error) {
			//runs in the service goroutine
		})
	}
	//3. Either function argument may be nil.
	//with a nil first function, the second is simply deferred
	slf.AsyncDo(nil, func(err error) {
		//runs on the service goroutine
	})
	//with a nil second function, the first runs in the pool with no callback
	slf.AsyncDo(func() bool {
		return true
	}, nil)
	//4. The task's return value controls whether the callback runs
	slf.AsyncDo(func() bool {
		//returning false skips the second function; true runs it
		return false
	}, func(err error) {
		//never executed here
	})
}
```
Chapter 7: configuring service discovery
----------------------------------------
By default, origin determines which Services each node has by reading the full node configuration. The engine also supports dynamic service discovery via the built-in DiscoveryMaster service, which acts as the center, and ServiceDiscoveryClient, which fetches all node and service information in the origin network from the DiscoveryMaster. See those two service implementations for details. To use it, add the following to the cluster configuration:
```
{
"MasterDiscoveryNode": [{
@@ -727,8 +840,8 @@ By default, origin determines which Services each node has by reading the
"ListenAddr": "127.0.0.1:8801",
"MaxRpcParamLen": 409600
}],
"NodeList": [{
"NodeId": 1,
"ListenAddr": "127.0.0.1:8801",
@@ -741,6 +854,7 @@ By default, origin determines which Services each node has by reading the
}]
}
```
Two new fields appear above: MasterDiscoveryNode and DiscoveryService.
MasterDiscoveryNode configures node Id 1 as a service-discovery Master listening on ListenAddr 127.0.0.1:8801. Node 2 is also a discovery Master, but additionally carries "NeighborService":["HttpGateService"]. When "NeighborService" lists concrete services, that node is a neighbor Master: the currently running node filters the HttpGateService service from it, does not sync its own public services upward, and the neighbor relationship is one-way.
@@ -748,14 +862,13 @@ MasterDiscoveryNode configures node Id 1 as the discovery Master, listening
NeighborService is useful when there are multiple Master-centered networks and services must be discovered across them.
DiscoveryService filters for the TestService8 service in the origin network; note that if DiscoveryService is not configured, no filtering takes place.
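To make the neighbor setup concrete, a cluster.json fragment shaped like the description above (field names come from the text; the node Ids and addresses are placeholders):

```
{
  "MasterDiscoveryNode": [
    { "NodeId": 1, "ListenAddr": "127.0.0.1:8801" },
    { "NodeId": 2, "ListenAddr": "127.0.0.1:8802", "NeighborService": ["HttpGateService"] }
  ],
  "NodeList": [
    { "NodeId": 3, "ListenAddr": "127.0.0.1:8803", "DiscoveryService": ["TestService8"] }
  ]
}
```

Node 2's NeighborService entry makes it a neighbor Master: only HttpGateService is pulled from it, and nothing is pushed back.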
Chapter 8: using HttpService
----------------------------
HttpService is origin's built-in HTTP service, handling the common GET and POST methods and URL routing.
simple_http/TestHttpService.go:
```
package simple_http
@@ -825,15 +938,16 @@ func (slf *TestHttpService) HttpPost(session *sysservice.HttpSession){
}
```
Note: add import _ "orginserver/simple_service" to main.go, and add the service to ServiceList in config/cluster/cluster.json.
Chapter 9: using TcpService
---------------------------
TcpService is origin's built-in TCP service. Custom message formats are supported by implementing the network.Processor interface; a processor for protobuf, the most common choice, is already built in.
simple_tcp/TestTcpService.go:
```
package simple_tcp
@@ -901,9 +1015,9 @@ func (slf *TestTcpService) OnRequest (clientid uint64,msg proto.Message){
}
```
Chapter 10: other system modules
--------------------------------
* sysservice/wsservice.go: WebSocket support, used much like TcpService
* sysmodule/DBModule.go: MySQL database access
* sysmodule/RedisModule.go: Redis access
@@ -912,9 +1026,9 @@ func (slf *TestTcpService) OnRequest (clientid uint64,msg proto.Message){
* util: common utilities such as uuid, hash, md5, and goroutine wrappers
* https://github.com/duanhf2012/originservice: additional services built on origin; currently includes a firebase push wrapper.
Notes
-----
**If you find it useful, please star. Thanks!**
**Welcome to join the origin server development QQ group: 168306674; any questions will be answered promptly.**
@@ -924,6 +1038,7 @@ func (slf *TestTcpService) OnRequest (clientid uint64,msg proto.Message){
[The server is maintained by an individual; if this project helps you, you can donate here. Thanks!](http://www.cppblog.com/images/cppblog_com/API/21416/r_pay.jpg "Thanks!")
Special thanks to the following sponsors:
```
咕咕兽
_

View File

@@ -26,7 +26,7 @@ type NodeInfo struct {
Private bool
ListenAddr string
MaxRpcParamLen uint32 //maximum RPC parameter length
ServiceList []string //ordered list of all services
PublicServiceList []string //publicly exposed services
DiscoveryService []string //services filtered in discovery; no filtering if unset
NeighborService []string
@@ -110,15 +110,13 @@ func (cls *Cluster) DelNode(nodeId int, immediately bool) {
break
}
rpc.client.Lock()
//a node that is still connected is not dropped proactively; only nodes without a live connection are removed
if rpc.client.IsConnected() {
nodeInfo.status = Discard
rpc.client.Unlock()
log.SRelease("Discard node ", nodeInfo.NodeId, " ", nodeInfo.ListenAddr)
return
}
rpc.client.Unlock()
break
}
@@ -194,20 +192,17 @@ func (cls *Cluster) serviceDiscoverySetNodeInfo(nodeInfo *NodeInfo) {
if _, rpcInfoOK := cls.mapRpc[nodeInfo.NodeId]; rpcInfoOK == true {
return
}
rpcInfo := NodeRpcInfo{}
rpcInfo.nodeInfo = *nodeInfo
rpcInfo.client = &rpc.Client{}
rpcInfo.client.TriggerRpcEvent = cls.triggerRpcEvent
rpcInfo.client.Connect(nodeInfo.NodeId, nodeInfo.ListenAddr, nodeInfo.MaxRpcParamLen)
rpcInfo.client =rpc.NewRClient(nodeInfo.NodeId, nodeInfo.ListenAddr, nodeInfo.MaxRpcParamLen,cls.triggerRpcEvent)
cls.mapRpc[nodeInfo.NodeId] = rpcInfo
}
func (cls *Cluster) buildLocalRpc() {
rpcInfo := NodeRpcInfo{}
rpcInfo.nodeInfo = cls.localNodeInfo
rpcInfo.client = &rpc.Client{}
rpcInfo.client.Connect(rpcInfo.nodeInfo.NodeId, "", 0)
rpcInfo.client = rpc.NewLClient(rpcInfo.nodeInfo.NodeId)
cls.mapRpc[cls.localNodeInfo.NodeId] = rpcInfo
}
@@ -253,8 +248,9 @@ func (cls *Cluster) checkDynamicDiscovery(localNodeId int) (bool, bool) {
return localMaster, hasMaster
}
func (cls *Cluster) appendService(serviceName string, bPublicService bool) {
cls.localNodeInfo.ServiceList = append(cls.localNodeInfo.ServiceList, serviceName)
func (cls *Cluster) AddDynamicDiscoveryService(serviceName string, bPublicService bool) {
addServiceList := append([]string{},serviceName)
cls.localNodeInfo.ServiceList = append(addServiceList,cls.localNodeInfo.ServiceList...)
if bPublicService {
cls.localNodeInfo.PublicServiceList = append(cls.localNodeInfo.PublicServiceList, serviceName)
}
@@ -298,11 +294,10 @@ func (cls *Cluster) SetupServiceDiscovery(localNodeId int, setupServiceFun Setup
//2. For dynamic service discovery, install the local discovery services
cls.serviceDiscovery = getDynamicDiscovery()
cls.AddDynamicDiscoveryService(DynamicDiscoveryClientName, true)
if localMaster == true {
cls.appendService(DynamicDiscoveryMasterName, false)
cls.AddDynamicDiscoveryService(DynamicDiscoveryMasterName, false)
}
cls.appendService(DynamicDiscoveryClientName, true)
}
func (cls *Cluster) FindRpcHandler(serviceName string) rpc.IRpcHandler {
@@ -358,10 +353,10 @@ func (cls *Cluster) IsNodeConnected(nodeId int) bool {
return pClient != nil && pClient.IsConnected()
}
func (cls *Cluster) triggerRpcEvent(bConnect bool, clientSeq uint32, nodeId int) {
func (cls *Cluster) triggerRpcEvent(bConnect bool, clientId uint32, nodeId int) {
cls.locker.Lock()
nodeInfo, ok := cls.mapRpc[nodeId]
if ok == false || nodeInfo.client == nil || nodeInfo.client.GetClientSeq() != clientSeq {
if ok == false || nodeInfo.client == nil || nodeInfo.client.GetClientId() != clientId {
cls.locker.Unlock()
return
}
@@ -383,7 +378,6 @@ func (cls *Cluster) triggerRpcEvent(bConnect bool, clientSeq uint32, nodeId int)
}
}
func (cls *Cluster) TriggerDiscoveryEvent(bDiscovery bool, nodeId int, serviceName []string) {
cls.rpcEventLocker.Lock()
defer cls.rpcEventLocker.Unlock()

View File

@@ -60,6 +60,21 @@ func (ds *DynamicDiscoveryMaster) addNodeInfo(nodeInfo *rpc.NodeInfo) {
ds.nodeInfo = append(ds.nodeInfo, nodeInfo)
}
func (ds *DynamicDiscoveryMaster) removeNodeInfo(nodeId int32) {
if _,ok:= ds.mapNodeInfo[nodeId];ok == false {
return
}
for i:=0;i<len(ds.nodeInfo);i++ {
if ds.nodeInfo[i].NodeId == nodeId {
ds.nodeInfo = append(ds.nodeInfo[:i],ds.nodeInfo[i+1:]...)
break
}
}
delete(ds.mapNodeInfo,nodeId)
}
func (ds *DynamicDiscoveryMaster) OnInit() error {
ds.mapNodeInfo = make(map[int32]struct{}, 20)
ds.RegRpcListener(ds)
@@ -103,6 +118,8 @@ func (ds *DynamicDiscoveryMaster) OnNodeDisconnect(nodeId int) {
return
}
ds.removeNodeInfo(int32(nodeId))
var notifyDiscover rpc.SubscribeDiscoverNotify
notifyDiscover.MasterNodeId = int32(cluster.GetLocalNodeInfo().NodeId)
notifyDiscover.DelNodeId = int32(nodeId)

concurrent/concurrent.go (new file, 93 lines)

@@ -0,0 +1,93 @@
package concurrent
import (
"errors"
"runtime"
"github.com/duanhf2012/origin/log"
)
const defaultMaxTaskChannelNum = 1000000
type IConcurrent interface {
OpenConcurrentByNumCPU(cpuMul float32)
OpenConcurrent(minGoroutineNum int32, maxGoroutineNum int32, maxTaskChannelNum int)
AsyncDoByQueue(queueId int64, fn func() bool, cb func(err error))
AsyncDo(f func() bool, cb func(err error))
}
type Concurrent struct {
dispatch
tasks chan task
cbChannel chan func(error)
}
/*
cpuMul is a multiplier on the CPU count.
Suggested: (1) CPU-bound work: 1 (2) I/O-bound work: 2 or higher
*/
func (c *Concurrent) OpenConcurrentByNumCPU(cpuNumMul float32) {
goroutineNum := int32(float32(runtime.NumCPU())*cpuNumMul + 1)
c.OpenConcurrent(goroutineNum, goroutineNum, defaultMaxTaskChannelNum)
}
func (c *Concurrent) OpenConcurrent(minGoroutineNum int32, maxGoroutineNum int32, maxTaskChannelNum int) {
c.tasks = make(chan task, maxTaskChannelNum)
c.cbChannel = make(chan func(error), maxTaskChannelNum)
//start the dispatcher
c.dispatch.open(minGoroutineNum, maxGoroutineNum, c.tasks, c.cbChannel)
}
func (c *Concurrent) AsyncDo(f func() bool, cb func(err error)) {
c.AsyncDoByQueue(0, f, cb)
}
func (c *Concurrent) AsyncDoByQueue(queueId int64, fn func() bool, cb func(err error)) {
if cap(c.tasks) == 0 {
panic("not open concurrent")
}
if fn == nil && cb == nil {
log.SStack("fn and cb is nil")
return
}
if fn == nil {
c.pushAsyncDoCallbackEvent(cb)
return
}
if queueId != 0 {
queueId = queueId % maxTaskQueueSessionId+1
}
select {
case c.tasks <- task{queueId, fn, cb}:
default:
log.SError("tasks channel is full")
if cb != nil {
c.pushAsyncDoCallbackEvent(func(err error) {
cb(errors.New("tasks channel is full"))
})
}
return
}
}
func (c *Concurrent) Close() {
if cap(c.tasks) == 0 {
return
}
log.SRelease("wait close concurrent")
c.dispatch.close()
log.SRelease("concurrent has successfully exited")
}
func (c *Concurrent) GetCallBackChannel() chan func(error) {
return c.cbChannel
}

concurrent/dispatch.go (new file, 196 lines)

@@ -0,0 +1,196 @@
package concurrent
import (
"sync"
"sync/atomic"
"time"
"fmt"
"runtime"
"github.com/duanhf2012/origin/log"
"github.com/duanhf2012/origin/util/queue"
)
var idleTimeout = int64(2 * time.Second)
const maxTaskQueueSessionId = 10000
type dispatch struct {
minConcurrentNum int32
maxConcurrentNum int32
queueIdChannel chan int64
workerQueue chan task
tasks chan task
idle bool
workerNum int32
cbChannel chan func(error)
mapTaskQueueSession map[int64]*queue.Deque[task]
waitWorker sync.WaitGroup
waitDispatch sync.WaitGroup
}
func (d *dispatch) open(minGoroutineNum int32, maxGoroutineNum int32, tasks chan task, cbChannel chan func(error)) {
d.minConcurrentNum = minGoroutineNum
d.maxConcurrentNum = maxGoroutineNum
d.tasks = tasks
d.mapTaskQueueSession = make(map[int64]*queue.Deque[task], maxTaskQueueSessionId)
d.workerQueue = make(chan task)
d.cbChannel = cbChannel
d.queueIdChannel = make(chan int64, cap(tasks))
d.waitDispatch.Add(1)
go d.run()
}
func (d *dispatch) run() {
defer d.waitDispatch.Done()
timeout := time.NewTimer(time.Duration(atomic.LoadInt64(&idleTimeout)))
for {
select {
case queueId := <-d.queueIdChannel:
d.processqueueEvent(queueId)
default:
select {
case t, ok := <-d.tasks:
if ok == false {
return
}
d.processTask(&t)
case queueId := <-d.queueIdChannel:
d.processqueueEvent(queueId)
case <-timeout.C:
d.processTimer()
if atomic.LoadInt32(&d.minConcurrentNum) == -1 && len(d.tasks) == 0 {
atomic.StoreInt64(&idleTimeout,int64(time.Millisecond * 10))
}
timeout.Reset(time.Duration(atomic.LoadInt64(&idleTimeout)))
}
}
if atomic.LoadInt32(&d.minConcurrentNum) == -1 && d.workerNum == 0 {
d.waitWorker.Wait()
d.cbChannel <- nil
return
}
}
}
func (d *dispatch) processTimer() {
if d.idle == true && d.workerNum > atomic.LoadInt32(&d.minConcurrentNum) {
d.processIdle()
}
d.idle = true
}
func (d *dispatch) processqueueEvent(queueId int64) {
d.idle = false
queueSession := d.mapTaskQueueSession[queueId]
if queueSession == nil {
return
}
queueSession.PopFront()
if queueSession.Len() == 0 {
return
}
t := queueSession.Front()
d.executeTask(&t)
}
func (d *dispatch) executeTask(t *task) {
select {
case d.workerQueue <- *t:
return
default:
if d.workerNum < d.maxConcurrentNum {
var work worker
work.start(&d.waitWorker, t, d)
return
}
}
d.workerQueue <- *t
}
func (d *dispatch) processTask(t *task) {
d.idle = false
//handle a task bound to a queue
if t.queueId != 0 {
queueSession := d.mapTaskQueueSession[t.queueId]
if queueSession == nil {
queueSession = &queue.Deque[task]{}
d.mapTaskQueueSession[t.queueId] = queueSession
}
//if nothing from this queue is currently executing, run it immediately
if queueSession.Len() == 0 {
d.executeTask(t)
}
queueSession.PushBack(*t)
return
}
//ordinary (unqueued) task
d.executeTask(t)
}
func (d *dispatch) processIdle() {
select {
case d.workerQueue <- task{}:
d.workerNum--
default:
}
}
func (d *dispatch) pushQueueTaskFinishEvent(queueId int64) {
d.queueIdChannel <- queueId
}
func (c *dispatch) pushAsyncDoCallbackEvent(cb func(err error)) {
if cb == nil {
//no callback required
return
}
c.cbChannel <- cb
}
func (d *dispatch) close() {
atomic.StoreInt32(&d.minConcurrentNum, -1)
breakFor:
for {
select {
case cb := <-d.cbChannel:
if cb == nil {
break breakFor
}
cb(nil)
}
}
d.waitDispatch.Wait()
}
func (d *dispatch) DoCallback(cb func(err error)) {
defer func() {
if r := recover(); r != nil {
buf := make([]byte, 4096)
l := runtime.Stack(buf, false)
errString := fmt.Sprint(r)
log.SError("core dump info[", errString, "]\n", string(buf[:l]))
}
}()
cb(nil)
}

concurrent/worker.go (new file, 79 lines)

@@ -0,0 +1,79 @@
package concurrent
import (
"sync"
"errors"
"fmt"
"runtime"
"github.com/duanhf2012/origin/log"
)
type task struct {
queueId int64
fn func() bool
cb func(err error)
}
type worker struct {
*dispatch
}
func (t *task) isExistTask() bool {
return t.fn == nil
}
func (w *worker) start(waitGroup *sync.WaitGroup, t *task, d *dispatch) {
w.dispatch = d
d.workerNum += 1
waitGroup.Add(1)
go w.run(waitGroup, *t)
}
func (w *worker) run(waitGroup *sync.WaitGroup, t task) {
defer waitGroup.Done()
w.exec(&t)
for {
select {
case tw := <-w.workerQueue:
if tw.isExistTask() {
//exit goroutine
log.SRelease("worker goroutine exit")
return
}
w.exec(&tw)
}
}
}
func (w *worker) exec(t *task) {
defer func() {
if r := recover(); r != nil {
buf := make([]byte, 4096)
l := runtime.Stack(buf, false)
errString := fmt.Sprint(r)
cb := t.cb
t.cb = func(err error) {
cb(errors.New(errString))
}
w.endCallFun(true,t)
log.SError("core dump info[", errString, "]\n", string(buf[:l]))
}
}()
w.endCallFun(t.fn(),t)
}
func (w *worker) endCallFun(isDocallBack bool,t *task) {
if isDocallBack {
w.pushAsyncDoCallbackEvent(t.cb)
}
if t.queueId != 0 {
w.pushQueueTaskFinishEvent(t.queueId)
}
}

View File

@@ -7,7 +7,6 @@ import (
"sync"
)
//event receiver
type EventCallBack func(event IEvent)
@@ -229,7 +228,6 @@ func (processor *EventProcessor) EventHandler(ev IEvent) {
}
}
func (processor *EventProcessor) castEvent(event IEvent){
if processor.mapListenerEvent == nil {
log.SError("mapListenerEvent not init!")
@@ -246,3 +244,4 @@ func (processor *EventProcessor) castEvent(event IEvent){
proc.PushEvent(event)
}
}

event/eventpool.go (new file, 24 lines)

@@ -0,0 +1,24 @@
package event
import "github.com/duanhf2012/origin/util/sync"
// eventPool caches Event objects
const defaultMaxEventChannelNum = 2000000
var eventPool = sync.NewPoolEx(make(chan sync.IPoolData, defaultMaxEventChannelNum), func() sync.IPoolData {
return &Event{}
})
func NewEvent() *Event{
return eventPool.Get().(*Event)
}
func DeleteEvent(event IEvent){
eventPool.Put(event.(sync.IPoolData))
}
func SetEventPoolSize(eventPoolSize int){
eventPool = sync.NewPoolEx(make(chan sync.IPoolData, eventPoolSize), func() sync.IPoolData {
return &Event{}
})
}

View File

@@ -12,7 +12,11 @@ const (
Sys_Event_WebSocket EventType = -5
Sys_Event_Node_Event EventType = -6
Sys_Event_DiscoverService EventType = -7
Sys_Event_DiscardGoroutine EventType = -8
Sys_Event_QueueTaskFinish EventType = -9
Sys_Event_User_Define EventType = 1
)

go.mod

@@ -23,8 +23,8 @@ require (
github.com/xdg-go/scram v1.0.2 // indirect
github.com/xdg-go/stringprep v1.0.2 // indirect
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d // indirect
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f // indirect
golang.org/x/crypto v0.1.0 // indirect
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 // indirect
golang.org/x/text v0.3.6 // indirect
golang.org/x/text v0.4.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)

go.sum

@@ -58,8 +58,9 @@ go.mongodb.org/mongo-driver v1.9.1/go.mod h1:0sQWfOeY63QTntERDJJ/0SuKK0T1uVSgKCu
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f h1:aZp0e2vLN4MToVqnjNEYEtrEA8RH8U8FN1CU7JgqsPU=
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@@ -79,8 +80,8 @@ golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXR
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190531172133-b3315ee88b7d/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=

View File

@@ -78,7 +78,6 @@ func (pbRawProcessor *PBRawProcessor) SetRawMsgHandler(handle RawMessageHandler)
func (pbRawProcessor *PBRawProcessor) MakeRawMsg(msgType uint16,msg []byte,pbRawPackInfo *PBRawPackInfo) {
pbRawPackInfo.typ = msgType
pbRawPackInfo.rawMsg = msg
//return &PBRawPackInfo{typ:msgType,rawMsg:msg}
}
func (pbRawProcessor *PBRawProcessor) UnknownMsgRoute(clientId uint64,msg interface{}){

View File

@@ -17,17 +17,11 @@ type IProcessor interface {
}
type IRawProcessor interface {
SetByteOrder(littleEndian bool)
MsgRoute(clientId uint64,msg interface{}) error
Unmarshal(clientId uint64,data []byte) (interface{}, error)
Marshal(clientId uint64,msg interface{}) ([]byte, error)
IProcessor
SetByteOrder(littleEndian bool)
SetRawMsgHandler(handle RawMessageHandler)
MakeRawMsg(msgType uint16,msg []byte,pbRawPackInfo *PBRawPackInfo)
UnknownMsgRoute(clientId uint64,msg interface{})
ConnectedRoute(clientId uint64)
DisConnectedRoute(clientId uint64)
SetUnknownMsgHandler(unknownMessageHandler UnknownRawMessageHandler)
SetConnectedHandler(connectHandler RawConnectHandler)
SetDisConnectedHandler(disconnectHandler RawConnectHandler)

View File

@@ -34,7 +34,6 @@ func (areaPool *memAreaPool) makePool() {
for i := 0; i < poolLen; i++ {
memSize := (areaPool.minAreaValue - 1) + (i+1)*areaPool.growthValue
areaPool.pool[i] = sync.Pool{New: func() interface{} {
//fmt.Println("make memsize:",memSize)
return make([]byte, memSize)
}}
}

View File

@@ -22,11 +22,7 @@ type TCPClient struct {
closeFlag bool
// msg parser
LenMsgLen int
MinMsgLen uint32
MaxMsgLen uint32
LittleEndian bool
msgParser *MsgParser
MsgParser
}
func (client *TCPClient) Start() {
@@ -69,14 +65,24 @@ func (client *TCPClient) init() {
log.SFatal("client is running")
}
if client.MinMsgLen == 0 {
client.MinMsgLen = Default_MinMsgLen
}
if client.MaxMsgLen == 0 {
client.MaxMsgLen = Default_MaxMsgLen
}
if client.LenMsgLen ==0 {
client.LenMsgLen = Default_LenMsgLen
}
maxMsgLen := client.MsgParser.getMaxMsgLen(client.LenMsgLen)
if client.MaxMsgLen > maxMsgLen {
client.MaxMsgLen = maxMsgLen
log.SRelease("invalid MaxMsgLen, reset to ", maxMsgLen)
}
client.cons = make(ConnSet)
client.closeFlag = false
// msg parser
msgParser := NewMsgParser()
msgParser.SetMsgLen(client.LenMsgLen, client.MinMsgLen, client.MaxMsgLen)
msgParser.SetByteOrder(client.LittleEndian)
client.msgParser = msgParser
client.MsgParser.init()
}
func (client *TCPClient) GetCloseFlag() bool{
@@ -120,7 +126,7 @@ reconnect:
client.cons[conn] = struct{}{}
client.Unlock()
tcpConn := newTCPConn(conn, client.PendingWriteNum, client.msgParser,client.WriteDeadline)
tcpConn := newTCPConn(conn, client.PendingWriteNum, &client.MsgParser,client.WriteDeadline)
agent := client.NewAgent(tcpConn)
agent.Run()

View File

@@ -1,11 +1,12 @@
package network
import (
"errors"
"github.com/duanhf2012/origin/log"
"net"
"sync"
"sync/atomic"
"time"
"errors"
)
type ConnSet map[net.Conn]struct{}
@@ -14,7 +15,7 @@ type TCPConn struct {
sync.Mutex
conn net.Conn
writeChan chan []byte
closeFlag bool
closeFlag int32
msgParser *MsgParser
}
@@ -49,7 +50,7 @@ func newTCPConn(conn net.Conn, pendingWriteNum int, msgParser *MsgParser,writeDe
conn.Close()
tcpConn.Lock()
freeChannel(tcpConn)
tcpConn.closeFlag = true
atomic.StoreInt32(&tcpConn.closeFlag,1)
tcpConn.Unlock()
}()
@@ -60,9 +61,9 @@ func (tcpConn *TCPConn) doDestroy() {
tcpConn.conn.(*net.TCPConn).SetLinger(0)
tcpConn.conn.Close()
if !tcpConn.closeFlag {
if atomic.LoadInt32(&tcpConn.closeFlag)==0 {
close(tcpConn.writeChan)
tcpConn.closeFlag = true
atomic.StoreInt32(&tcpConn.closeFlag,1)
}
}
@@ -76,12 +77,12 @@ func (tcpConn *TCPConn) Destroy() {
func (tcpConn *TCPConn) Close() {
tcpConn.Lock()
defer tcpConn.Unlock()
if tcpConn.closeFlag {
if atomic.LoadInt32(&tcpConn.closeFlag)==1 {
return
}
tcpConn.doWrite(nil)
tcpConn.closeFlag = true
atomic.StoreInt32(&tcpConn.closeFlag,1)
}
func (tcpConn *TCPConn) GetRemoteIp() string {
@@ -104,7 +105,7 @@ func (tcpConn *TCPConn) doWrite(b []byte) error{
func (tcpConn *TCPConn) Write(b []byte) error{
tcpConn.Lock()
defer tcpConn.Unlock()
if tcpConn.closeFlag || b == nil {
if atomic.LoadInt32(&tcpConn.closeFlag)==1 || b == nil {
tcpConn.ReleaseReadMsg(b)
return errors.New("conn is closed")
}
@@ -133,14 +134,14 @@ func (tcpConn *TCPConn) ReleaseReadMsg(byteBuff []byte){
}
func (tcpConn *TCPConn) WriteMsg(args ...[]byte) error {
if tcpConn.closeFlag == true {
if atomic.LoadInt32(&tcpConn.closeFlag) == 1 {
return errors.New("conn is closed")
}
return tcpConn.msgParser.Write(tcpConn, args...)
}
func (tcpConn *TCPConn) WriteRawMsg(args []byte) error {
if tcpConn.closeFlag == true {
if atomic.LoadInt32(&tcpConn.closeFlag) == 1 {
return errors.New("conn is closed")
}
@@ -149,7 +150,7 @@ func (tcpConn *TCPConn) WriteRawMsg(args []byte) error {
func (tcpConn *TCPConn) IsConnected() bool {
return tcpConn.closeFlag == false
return atomic.LoadInt32(&tcpConn.closeFlag) == 0
}
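This hunk converts `closeFlag` from a `bool` guarded by the mutex into an `int32` driven by `sync/atomic`, so `IsConnected` can be checked without taking the lock. A standalone sketch of the pattern; unlike the patch, it uses `CompareAndSwapInt32` so the close-once transition itself is lock-free (illustrative type, not the framework's `TCPConn`):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// conn models the closeFlag handling from TCPConn above.
type conn struct {
	closeFlag int32 // 0 = open, 1 = closed; accessed only via sync/atomic
}

// Close is safe to call from multiple goroutines; only the first call
// observes the 0->1 transition and should perform the actual teardown.
func (c *conn) Close() (first bool) {
	return atomic.CompareAndSwapInt32(&c.closeFlag, 0, 1)
}

// IsConnected reads the flag without holding any lock.
func (c *conn) IsConnected() bool {
	return atomic.LoadInt32(&c.closeFlag) == 0
}

func main() {
	c := &conn{}
	fmt.Println(c.IsConnected()) // true
	fmt.Println(c.Close())       // true: this call closed it
	fmt.Println(c.Close())       // false: already closed
	fmt.Println(c.IsConnected()) // false
}
```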
func (tcpConn *TCPConn) SetReadDeadline(d time.Duration) {


@@ -11,62 +11,36 @@ import (
// | len | data |
// --------------
type MsgParser struct {
lenMsgLen int
minMsgLen uint32
maxMsgLen uint32
littleEndian bool
LenMsgLen int
MinMsgLen uint32
MaxMsgLen uint32
LittleEndian bool
INetMempool
}
func NewMsgParser() *MsgParser {
p := new(MsgParser)
p.lenMsgLen = 2
p.minMsgLen = 1
p.maxMsgLen = 4096
p.littleEndian = false
p.INetMempool = NewMemAreaPool()
return p
}
// It's dangerous to call the method on reading or writing
func (p *MsgParser) SetMsgLen(lenMsgLen int, minMsgLen uint32, maxMsgLen uint32) {
if lenMsgLen == 1 || lenMsgLen == 2 || lenMsgLen == 4 {
p.lenMsgLen = lenMsgLen
}
if minMsgLen != 0 {
p.minMsgLen = minMsgLen
}
if maxMsgLen != 0 {
p.maxMsgLen = maxMsgLen
}
var max uint32
switch p.lenMsgLen {
func (p *MsgParser) getMaxMsgLen(lenMsgLen int) uint32 {
switch p.LenMsgLen {
case 1:
max = math.MaxUint8
return math.MaxUint8
case 2:
max = math.MaxUint16
return math.MaxUint16
case 4:
max = math.MaxUint32
}
if p.minMsgLen > max {
p.minMsgLen = max
}
if p.maxMsgLen > max {
p.maxMsgLen = max
return math.MaxUint32
default:
panic("LenMsgLen value must be 1 or 2 or 4")
}
}
// It's dangerous to call the method on reading or writing
func (p *MsgParser) SetByteOrder(littleEndian bool) {
p.littleEndian = littleEndian
func (p *MsgParser) init(){
p.INetMempool = NewMemAreaPool()
}
// goroutine safe
func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
var b [4]byte
bufMsgLen := b[:p.lenMsgLen]
bufMsgLen := b[:p.LenMsgLen]
// read len
if _, err := io.ReadFull(conn, bufMsgLen); err != nil {
@@ -75,17 +49,17 @@ func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
// parse len
var msgLen uint32
switch p.lenMsgLen {
switch p.LenMsgLen {
case 1:
msgLen = uint32(bufMsgLen[0])
case 2:
if p.littleEndian {
if p.LittleEndian {
msgLen = uint32(binary.LittleEndian.Uint16(bufMsgLen))
} else {
msgLen = uint32(binary.BigEndian.Uint16(bufMsgLen))
}
case 4:
if p.littleEndian {
if p.LittleEndian {
msgLen = binary.LittleEndian.Uint32(bufMsgLen)
} else {
msgLen = binary.BigEndian.Uint32(bufMsgLen)
@@ -93,9 +67,9 @@ func (p *MsgParser) Read(conn *TCPConn) ([]byte, error) {
}
// check len
if msgLen > p.maxMsgLen {
if msgLen > p.MaxMsgLen {
return nil, errors.New("message too long")
} else if msgLen < p.minMsgLen {
} else if msgLen < p.MinMsgLen {
return nil, errors.New("message too short")
}
@@ -118,26 +92,26 @@ func (p *MsgParser) Write(conn *TCPConn, args ...[]byte) error {
}
// check len
if msgLen > p.maxMsgLen {
if msgLen > p.MaxMsgLen {
return errors.New("message too long")
} else if msgLen < p.minMsgLen {
} else if msgLen < p.MinMsgLen {
return errors.New("message too short")
}
//msg := make([]byte, uint32(p.lenMsgLen)+msgLen)
msg := p.MakeByteSlice(p.lenMsgLen+int(msgLen))
msg := p.MakeByteSlice(p.LenMsgLen+int(msgLen))
// write len
switch p.lenMsgLen {
switch p.LenMsgLen {
case 1:
msg[0] = byte(msgLen)
case 2:
if p.littleEndian {
if p.LittleEndian {
binary.LittleEndian.PutUint16(msg, uint16(msgLen))
} else {
binary.BigEndian.PutUint16(msg, uint16(msgLen))
}
case 4:
if p.littleEndian {
if p.LittleEndian {
binary.LittleEndian.PutUint32(msg, msgLen)
} else {
binary.BigEndian.PutUint32(msg, msgLen)
@@ -145,7 +119,7 @@ func (p *MsgParser) Write(conn *TCPConn, args ...[]byte) error {
}
// write data
l := p.lenMsgLen
l := p.LenMsgLen
for i := 0; i < len(args); i++ {
copy(msg[l:], args[i])
l += len(args[i])


@@ -7,14 +7,16 @@ import (
"time"
)
const Default_ReadDeadline = time.Second*30 //30s
const Default_WriteDeadline = time.Second*30 //30s
const Default_MaxConnNum = 3000
const Default_PendingWriteNum = 10000
const Default_LittleEndian = false
const Default_MinMsgLen = 2
const Default_MaxMsgLen = 65535
const(
Default_ReadDeadline = time.Second*30 //default read deadline: 30s
Default_WriteDeadline = time.Second*30 //default write deadline: 30s
Default_MaxConnNum = 1000000 //default maximum connection count
Default_PendingWriteNum = 100000 //per-connection write channel capacity
Default_LittleEndian = false //default byte order (big-endian)
Default_MinMsgLen = 2 //minimum message length: 2 bytes
Default_LenMsgLen = 2 //length field in the header occupies 2 bytes
Default_MaxMsgLen = 65535 //maximum message length
)
type TCPServer struct {
Addr string
@@ -22,6 +24,7 @@ type TCPServer struct {
PendingWriteNum int
ReadDeadline time.Duration
WriteDeadline time.Duration
NewAgent func(*TCPConn) Agent
ln net.Listener
conns ConnSet
@@ -29,14 +32,7 @@ type TCPServer struct {
wgLn sync.WaitGroup
wgConns sync.WaitGroup
// msg parser
LenMsgLen int
MinMsgLen uint32
MaxMsgLen uint32
LittleEndian bool
msgParser *MsgParser
netMemPool INetMempool
MsgParser
}
func (server *TCPServer) Start() {
@@ -54,14 +50,15 @@ func (server *TCPServer) init() {
server.MaxConnNum = Default_MaxConnNum
log.SRelease("invalid MaxConnNum, reset to ", server.MaxConnNum)
}
if server.PendingWriteNum <= 0 {
server.PendingWriteNum = Default_PendingWriteNum
log.SRelease("invalid PendingWriteNum, reset to ", server.PendingWriteNum)
}
if server.MinMsgLen <= 0 {
server.MinMsgLen = Default_MinMsgLen
log.SRelease("invalid MinMsgLen, reset to ", server.MinMsgLen)
if server.LenMsgLen <= 0 {
server.LenMsgLen = Default_LenMsgLen
log.SRelease("invalid LenMsgLen, reset to ", server.LenMsgLen)
}
if server.MaxMsgLen <= 0 {
@@ -69,10 +66,22 @@ func (server *TCPServer) init() {
log.SRelease("invalid MaxMsgLen, reset to ", server.MaxMsgLen)
}
maxMsgLen := server.MsgParser.getMaxMsgLen(server.LenMsgLen)
if server.MaxMsgLen > maxMsgLen {
server.MaxMsgLen = maxMsgLen
log.SRelease("invalid MaxMsgLen, reset to ", maxMsgLen)
}
if server.MinMsgLen <= 0 {
server.MinMsgLen = Default_MinMsgLen
log.SRelease("invalid MinMsgLen, reset to ", server.MinMsgLen)
}
if server.WriteDeadline == 0 {
server.WriteDeadline = Default_WriteDeadline
log.SRelease("invalid WriteDeadline, reset to ", server.WriteDeadline.Seconds(),"s")
}
if server.ReadDeadline == 0 {
server.ReadDeadline = Default_ReadDeadline
log.SRelease("invalid ReadDeadline, reset to ", server.ReadDeadline.Seconds(),"s")
@@ -84,24 +93,15 @@ func (server *TCPServer) init() {
server.ln = ln
server.conns = make(ConnSet)
// msg parser
msgParser := NewMsgParser()
if msgParser.INetMempool == nil {
msgParser.INetMempool = NewMemAreaPool()
}
msgParser.SetMsgLen(server.LenMsgLen, server.MinMsgLen, server.MaxMsgLen)
msgParser.SetByteOrder(server.LittleEndian)
server.msgParser = msgParser
server.MsgParser.init()
}
func (server *TCPServer) SetNetMempool(mempool INetMempool){
server.msgParser.INetMempool = mempool
server.INetMempool = mempool
}
func (server *TCPServer) GetNetMempool() INetMempool{
return server.msgParser.INetMempool
return server.INetMempool
}
func (server *TCPServer) run() {
@@ -127,6 +127,7 @@ func (server *TCPServer) run() {
}
return
}
conn.(*net.TCPConn).SetNoDelay(true)
tempDelay = 0
@@ -137,16 +138,16 @@ func (server *TCPServer) run() {
log.SWarning("too many connections")
continue
}
server.conns[conn] = struct{}{}
server.mutexConns.Unlock()
server.wgConns.Add(1)
tcpConn := newTCPConn(conn, server.PendingWriteNum, server.msgParser,server.WriteDeadline)
tcpConn := newTCPConn(conn, server.PendingWriteNum, &server.MsgParser,server.WriteDeadline)
agent := server.NewAgent(tcpConn)
go func() {
agent.Run()
// cleanup
tcpConn.Close()
server.mutexConns.Lock()


@@ -22,7 +22,6 @@ import (
"time"
)
var closeSig chan bool
var sig chan os.Signal
var nodeId int
var preSetupService []service.IService //services registered before setup
@@ -40,8 +39,6 @@ const(
)
func init() {
closeSig = make(chan bool, 1)
sig = make(chan os.Signal, 3)
signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM, syscall.Signal(10))
@@ -155,21 +152,28 @@ func initNode(id int) {
return
}
//2.setup service
for _, s := range preSetupService {
//skip services absent from the cluster configuration
if cluster.GetCluster().IsConfigService(s.GetName()) == false {
continue
//2.set up services in the configured order
serviceOrder := cluster.GetCluster().GetLocalNodeInfo().ServiceList
for _,serviceName:= range serviceOrder{
bSetup := false
for _, s := range preSetupService {
if s.GetName() != serviceName {
continue
}
bSetup = true
pServiceCfg := cluster.GetCluster().GetServiceCfg(s.GetName())
s.Init(s, cluster.GetRpcClient, cluster.GetRpcServer, pServiceCfg)
service.Setup(s)
}
pServiceCfg := cluster.GetCluster().GetServiceCfg(s.GetName())
s.Init(s, cluster.GetRpcClient, cluster.GetRpcServer, pServiceCfg)
service.Setup(s)
if bSetup == false {
log.SFatal("Service name "+serviceName+" configuration error")
}
}
//3.initialize services
service.Init(closeSig)
service.Init()
}
func initLog() error {
@@ -274,8 +278,7 @@ func startNode(args interface{}) error {
}
cluster.GetCluster().Stop()
//7.退出
close(closeSig)
service.WaitStop()
service.StopAllService()
log.SRelease("Server is stop.")
return nil
@@ -292,9 +295,9 @@ func GetService(serviceName string) service.IService {
return service.GetService(serviceName)
}
func SetConfigDir(configDir string) {
configDir = configDir
cluster.SetConfigDir(configDir)
func SetConfigDir(cfgDir string) {
configDir = cfgDir
cluster.SetConfigDir(cfgDir)
}
func GetConfigDir() string {


@@ -193,9 +193,11 @@ func Report() {
record = prof.record
prof.record = list.New()
callNum := prof.callNum
totalCostTime := prof.totalCostTime
prof.stackLocker.RUnlock()
DefaultReportFunction(name,prof.callNum,prof.totalCostTime,record)
DefaultReportFunction(name,callNum,totalCostTime,record)
}
}


@@ -3,91 +3,63 @@ package rpc
import (
"container/list"
"errors"
"fmt"
"github.com/duanhf2012/origin/log"
"github.com/duanhf2012/origin/network"
"math"
"reflect"
"runtime"
"strconv"
"sync"
"sync/atomic"
"time"
)
type Client struct {
clientSeq uint32
id int
bSelfNode bool
network.TCPClient
conn *network.TCPConn
const(
DefaultRpcConnNum = 1
DefaultRpcLenMsgLen = 4
DefaultRpcMinMsgLen = 2
DefaultMaxCheckCallRpcCount = 1000
DefaultMaxPendingWriteNum = 200000
DefaultConnectInterval = 2*time.Second
DefaultCheckRpcCallTimeoutInterval = 5*time.Second
DefaultRpcTimeout = 15*time.Second
)
var clientSeq uint32
type IRealClient interface {
SetConn(conn *network.TCPConn)
Close(waitDone bool)
AsyncCall(rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{}) error
Go(rpcHandler IRpcHandler, noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call
RawGo(rpcHandler IRpcHandler,processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, rawArgs []byte, reply interface{}) *Call
IsConnected() bool
Run()
OnClose()
}
type Client struct {
clientId uint32
nodeId int
pendingLock sync.RWMutex
startSeq uint64
pending map[uint64]*list.Element
pendingTimer *list.List
callRpcTimeout time.Duration
maxCheckCallRpcCount int
TriggerRpcEvent
IRealClient
}
const MaxCheckCallRpcCount = 1000
const MaxPendingWriteNum = 200000
const ConnectInterval = 2*time.Second
var clientSeq uint32
func (client *Client) NewClientAgent(conn *network.TCPConn) network.Agent {
client.conn = conn
client.ResetPending()
client.SetConn(conn)
return client
}
func (client *Client) Connect(id int, addr string, maxRpcParamLen uint32) error {
client.clientSeq = atomic.AddUint32(&clientSeq, 1)
client.id = id
client.Addr = addr
client.maxCheckCallRpcCount = MaxCheckCallRpcCount
client.callRpcTimeout = 15 * time.Second
client.ConnectInterval = ConnectInterval
client.PendingWriteNum = MaxPendingWriteNum
client.AutoReconnect = true
client.ConnNum = 1
client.LenMsgLen = 4
client.MinMsgLen = 2
client.ReadDeadline = Default_ReadWriteDeadline
client.WriteDeadline = Default_ReadWriteDeadline
if maxRpcParamLen > 0 {
client.MaxMsgLen = maxRpcParamLen
} else {
client.MaxMsgLen = math.MaxUint32
}
client.NewAgent = client.NewClientAgent
client.LittleEndian = LittleEndian
client.ResetPending()
go client.startCheckRpcCallTimer()
if addr == "" {
client.bSelfNode = true
return nil
}
client.Start()
return nil
}
func (client *Client) startCheckRpcCallTimer() {
for {
time.Sleep(5 * time.Second)
client.checkRpcCallTimeout()
}
}
func (client *Client) makeCallFail(call *Call) {
client.removePending(call.Seq)
func (bc *Client) makeCallFail(call *Call) {
bc.removePending(call.Seq)
if call.callback != nil && call.callback.IsValid() {
call.rpcHandler.PushRpcResponse(call)
} else {
@@ -95,29 +67,38 @@ func (client *Client) makeCallFail(call *Call) {
}
}
func (client *Client) checkRpcCallTimeout() {
now := time.Now()
func (bc *Client) checkRpcCallTimeout() {
for{
time.Sleep(DefaultCheckRpcCallTimeoutInterval)
now := time.Now()
for i := 0; i < client.maxCheckCallRpcCount; i++ {
client.pendingLock.Lock()
pElem := client.pendingTimer.Front()
if pElem == nil {
client.pendingLock.Unlock()
for i := 0; i < bc.maxCheckCallRpcCount; i++ {
bc.pendingLock.Lock()
if bc.pendingTimer == nil {
bc.pendingLock.Unlock()
break
}
pElem := bc.pendingTimer.Front()
if pElem == nil {
bc.pendingLock.Unlock()
break
}
pCall := pElem.Value.(*Call)
if now.Sub(pCall.callTime) > bc.callRpcTimeout {
strTimeout := strconv.FormatInt(int64(bc.callRpcTimeout/time.Second), 10)
pCall.Err = errors.New("RPC call takes more than " + strTimeout + " seconds")
bc.makeCallFail(pCall)
bc.pendingLock.Unlock()
continue
}
bc.pendingLock.Unlock()
break
}
pCall := pElem.Value.(*Call)
if now.Sub(pCall.callTime) > client.callRpcTimeout {
strTimeout := strconv.FormatInt(int64(client.callRpcTimeout/time.Second), 10)
pCall.Err = errors.New("RPC call takes more than " + strTimeout + " seconds")
client.makeCallFail(pCall)
client.pendingLock.Unlock()
continue
}
client.pendingLock.Unlock()
}
}
func (client *Client) ResetPending() {
func (client *Client) InitPending() {
client.pendingLock.Lock()
if client.pending != nil {
for _, v := range client.pending {
@@ -131,235 +112,62 @@ func (client *Client) ResetPending() {
client.pendingLock.Unlock()
}
func (client *Client) AddPending(call *Call) {
client.pendingLock.Lock()
func (bc *Client) AddPending(call *Call) {
bc.pendingLock.Lock()
call.callTime = time.Now()
elemTimer := client.pendingTimer.PushBack(call)
client.pending[call.Seq] = elemTimer //if the send below fails, the entry stays here until the timeout sweep removes it
client.pendingLock.Unlock()
elemTimer := bc.pendingTimer.PushBack(call)
bc.pending[call.Seq] = elemTimer //if the send below fails, the entry stays here until the timeout sweep removes it
bc.pendingLock.Unlock()
}
func (client *Client) RemovePending(seq uint64) *Call {
if seq == 0 {
func (bc *Client) RemovePending(seq uint64) *Call {
if seq == 0 {
return nil
}
client.pendingLock.Lock()
call := client.removePending(seq)
client.pendingLock.Unlock()
bc.pendingLock.Lock()
call := bc.removePending(seq)
bc.pendingLock.Unlock()
return call
}
func (client *Client) removePending(seq uint64) *Call {
v, ok := client.pending[seq]
func (bc *Client) removePending(seq uint64) *Call {
v, ok := bc.pending[seq]
if ok == false {
return nil
}
call := v.Value.(*Call)
client.pendingTimer.Remove(v)
delete(client.pending, seq)
bc.pendingTimer.Remove(v)
delete(bc.pending, seq)
return call
}
func (client *Client) FindPending(seq uint64) *Call {
func (bc *Client) FindPending(seq uint64) *Call {
if seq == 0 {
return nil
}
client.pendingLock.Lock()
v, ok := client.pending[seq]
bc.pendingLock.Lock()
v, ok := bc.pending[seq]
if ok == false {
client.pendingLock.Unlock()
bc.pendingLock.Unlock()
return nil
}
pCall := v.Value.(*Call)
client.pendingLock.Unlock()
bc.pendingLock.Unlock()
return pCall
}
func (client *Client) generateSeq() uint64 {
return atomic.AddUint64(&client.startSeq, 1)
func (bc *Client) generateSeq() uint64 {
return atomic.AddUint64(&bc.startSeq, 1)
}
func (client *Client) AsyncCall(rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{}) error {
processorType, processor := GetProcessorType(args)
InParam, herr := processor.Marshal(args)
if herr != nil {
return herr
}
seq := client.generateSeq()
request := MakeRpcRequest(processor, seq, 0, serviceMethod, false, InParam)
bytes, err := processor.Marshal(request.RpcRequestData)
ReleaseRpcRequest(request)
if err != nil {
return err
}
if client.conn == nil {
return errors.New("Rpc server is disconnected, call " + serviceMethod)
}
call := MakeCall()
call.Reply = replyParam
call.callback = &callback
call.rpcHandler = rpcHandler
call.ServiceMethod = serviceMethod
call.Seq = seq
client.AddPending(call)
err = client.conn.WriteMsg([]byte{uint8(processorType)}, bytes)
if err != nil {
client.RemovePending(call.Seq)
ReleaseCall(call)
return err
}
return nil
func (client *Client) GetNodeId() int {
return client.nodeId
}
func (client *Client) RawGo(processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, args []byte, reply interface{}) *Call {
call := MakeCall()
call.ServiceMethod = serviceMethod
call.Reply = reply
call.Seq = client.generateSeq()
request := MakeRpcRequest(processor, call.Seq, rpcMethodId, serviceMethod, noReply, args)
bytes, err := processor.Marshal(request.RpcRequestData)
ReleaseRpcRequest(request)
if err != nil {
call.Seq = 0
call.Err = err
return call
}
if client.conn == nil {
call.Seq = 0
call.Err = errors.New(serviceMethod + " call failed, rpc client is disconnected")
return call
}
if noReply == false {
client.AddPending(call)
}
err = client.conn.WriteMsg([]byte{uint8(processor.GetProcessorType())}, bytes)
if err != nil {
client.RemovePending(call.Seq)
call.Seq = 0
call.Err = err
}
return call
}
func (client *Client) Go(noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
_, processor := GetProcessorType(args)
InParam, err := processor.Marshal(args)
if err != nil {
call := MakeCall()
call.Err = err
return call
}
return client.RawGo(processor, noReply, 0, serviceMethod, InParam, reply)
}
func (client *Client) Run() {
defer func() {
if r := recover(); r != nil {
buf := make([]byte, 4096)
l := runtime.Stack(buf, false)
errString := fmt.Sprint(r)
log.SError("core dump info[", errString, "]\n", string(buf[:l]))
}
}()
client.TriggerRpcEvent(true, client.GetClientSeq(), client.GetId())
for {
bytes, err := client.conn.ReadMsg()
if err != nil {
log.SError("rpcClient ", client.Addr, " ReadMsg error:", err.Error())
return
}
processor := GetProcessor(bytes[0])
if processor == nil {
client.conn.ReleaseReadMsg(bytes)
log.SError("rpcClient ", client.Addr, " ReadMsg head error: unsupported processor type")
return
}
//1.parse the response head
response := RpcResponse{}
response.RpcResponseData = processor.MakeRpcResponse(0, "", nil)
err = processor.Unmarshal(bytes[1:], response.RpcResponseData)
client.conn.ReleaseReadMsg(bytes)
if err != nil {
processor.ReleaseRpcResponse(response.RpcResponseData)
log.SError("rpcClient Unmarshal head error:", err.Error())
continue
}
v := client.RemovePending(response.RpcResponseData.GetSeq())
if v == nil {
log.SError("rpcClient cannot find seq ", response.RpcResponseData.GetSeq(), " in pending")
} else {
v.Err = nil
if len(response.RpcResponseData.GetReply()) > 0 {
err = processor.Unmarshal(response.RpcResponseData.GetReply(), v.Reply)
if err != nil {
log.SError("rpcClient Unmarshal body error:", err.Error())
v.Err = err
}
}
if response.RpcResponseData.GetErr() != nil {
v.Err = response.RpcResponseData.GetErr()
}
if v.callback != nil && v.callback.IsValid() {
v.rpcHandler.PushRpcResponse(v)
} else {
v.done <- v
}
}
processor.ReleaseRpcResponse(response.RpcResponseData)
}
}
func (client *Client) OnClose() {
client.TriggerRpcEvent(false, client.GetClientSeq(), client.GetId())
}
func (client *Client) IsConnected() bool {
return client.bSelfNode || (client.conn != nil && client.conn.IsConnected() == true)
}
func (client *Client) GetId() int {
return client.id
}
func (client *Client) Close(waitDone bool) {
client.TCPClient.Close(waitDone)
client.pendingLock.Lock()
for {
pElem := client.pendingTimer.Front()
if pElem == nil {
break
}
pCall := pElem.Value.(*Call)
pCall.Err = errors.New("node is disconnected")
client.makeCallFail(pCall)
}
client.pendingLock.Unlock()
}
func (client *Client) GetClientSeq() uint32 {
return client.clientSeq
func (client *Client) GetClientId() uint32 {
return client.clientId
}


@@ -3,6 +3,7 @@ package rpc
import (
"github.com/duanhf2012/origin/util/sync"
"github.com/gogo/protobuf/proto"
"fmt"
)
type GoGoPBProcessor struct {
@@ -40,7 +41,10 @@ func (slf *GoGoPBProcessor) Marshal(v interface{}) ([]byte, error){
}
func (slf *GoGoPBProcessor) Unmarshal(data []byte, msg interface{}) error{
protoMsg := msg.(proto.Message)
protoMsg,ok := msg.(proto.Message)
if ok == false {
return fmt.Errorf("%+v is not of proto.Message type",msg)
}
return proto.Unmarshal(data, protoMsg)
}
@@ -73,6 +77,15 @@ func (slf *GoGoPBProcessor) GetProcessorType() RpcProcessorType{
return RpcProcessorGoGoPB
}
func (slf *GoGoPBProcessor) Clone(src interface{}) (interface{},error){
srcMsg,ok := src.(proto.Message)
if ok == false {
return nil,fmt.Errorf("param is not of proto.Message type")
}
return proto.Clone(srcMsg),nil
}
func (slf *GoGoPBRpcRequestData) IsNoReply() bool{
return slf.GetNoReply()
}
@@ -91,5 +104,3 @@ func (slf *GoGoPBRpcResponseData) GetErr() *RpcError {


@@ -3,6 +3,7 @@ package rpc
import (
"github.com/duanhf2012/origin/util/sync"
jsoniter "github.com/json-iterator/go"
"reflect"
)
var json = jsoniter.ConfigCompatibleWithStandardLibrary
@@ -119,6 +120,22 @@ func (jsonRpcResponseData *JsonRpcResponseData) GetReply() []byte{
}
func (jsonProcessor *JsonProcessor) Clone(src interface{}) (interface{},error){
dstValue := reflect.New(reflect.ValueOf(src).Type().Elem())
bytes,err := json.Marshal(src)
if err != nil {
return nil,err
}
dst := dstValue.Interface()
err = json.Unmarshal(bytes,dst)
if err != nil {
return nil,err
}
return dst,nil
}
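This `Clone` builds a fresh value of the source's concrete type with `reflect.New`, then deep-copies by round-tripping through JSON. The same approach with the standard `encoding/json` (the framework uses jsoniter, which is API-compatible for these calls; `rankData` is an illustrative type):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// clone deep-copies src (a non-nil pointer to a JSON-serialisable value) by
// allocating a new value of the same type and round-tripping through JSON.
func clone(src interface{}) (interface{}, error) {
	dstValue := reflect.New(reflect.ValueOf(src).Type().Elem())
	b, err := json.Marshal(src)
	if err != nil {
		return nil, err
	}
	dst := dstValue.Interface()
	if err = json.Unmarshal(b, dst); err != nil {
		return nil, err
	}
	return dst, nil
}

type rankData struct {
	Key      uint64
	SortData []int64
}

func main() {
	src := &rankData{Key: 7, SortData: []int64{3, 1}}
	dstI, _ := clone(src)
	dst := dstI.(*rankData)
	dst.SortData[0] = 99 // mutating the copy...
	fmt.Println(src.SortData[0], dst.SortData[0]) // 3 99: ...leaves the source intact
}
```

Round-tripping through a serialiser is slower than a hand-written copy, but it needs no per-type code and naturally copies nested slices and maps.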

rpc/lclient.go (new file, 133 lines)

@@ -0,0 +1,133 @@
package rpc
import (
"errors"
"github.com/duanhf2012/origin/log"
"github.com/duanhf2012/origin/network"
"reflect"
"strings"
"sync/atomic"
)
//client for the local node
type LClient struct {
selfClient *Client
}
func (rc *LClient) Lock(){
}
func (rc *LClient) Unlock(){
}
func (lc *LClient) Run(){
}
func (lc *LClient) OnClose(){
}
func (lc *LClient) IsConnected() bool {
return true
}
func (lc *LClient) SetConn(conn *network.TCPConn){
}
func (lc *LClient) Close(waitDone bool){
}
func (lc *LClient) Go(rpcHandler IRpcHandler,noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
pLocalRpcServer := rpcHandler.GetRpcServer()()
//check whether the call targets the same service
findIndex := strings.Index(serviceMethod, ".")
if findIndex == -1 {
sErr := errors.New("Call serviceMethod " + serviceMethod + " is invalid")
log.SError(sErr.Error())
call := MakeCall()
call.DoError(sErr)
return call
}
serviceName := serviceMethod[:findIndex]
if serviceName == rpcHandler.GetName() { //自己服务调用
//dispatch through this service's own rpcHandler
err := pLocalRpcServer.myselfRpcHandlerGo(lc.selfClient,serviceName, serviceMethod, args, requestHandlerNull,reply)
call := MakeCall()
if err != nil {
call.DoError(err)
return call
}
call.DoOK()
return call
}
//dispatch to another rpcHandler on this node
return pLocalRpcServer.selfNodeRpcHandlerGo(nil, lc.selfClient, noReply, serviceName, 0, serviceMethod, args, reply, nil)
}
func (rc *LClient) RawGo(rpcHandler IRpcHandler,processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceName string, rawArgs []byte, reply interface{}) *Call {
pLocalRpcServer := rpcHandler.GetRpcServer()()
call := MakeCall()
call.ServiceMethod = serviceName
call.Reply = reply
//the service is calling itself
if serviceName == rpcHandler.GetName() {
err := pLocalRpcServer.myselfRpcHandlerGo(rc.selfClient,serviceName, serviceName, rawArgs, requestHandlerNull,nil)
call.Err = err
call.done <- call
return call
}
//dispatch to another rpcHandler on this node
return pLocalRpcServer.selfNodeRpcHandlerGo(processor,rc.selfClient, true, serviceName, rpcMethodId, serviceName, nil, nil, rawArgs)
}
func (lc *LClient) AsyncCall(rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, reply interface{}) error {
pLocalRpcServer := rpcHandler.GetRpcServer()()
//check whether the call targets the same service
findIndex := strings.Index(serviceMethod, ".")
if findIndex == -1 {
err := errors.New("Call serviceMethod " + serviceMethod + " is invalid")
callback.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
log.SError(err.Error())
return nil
}
serviceName := serviceMethod[:findIndex]
//dispatch through this service's own rpcHandler
if serviceName == rpcHandler.GetName() { //自己服务调用
return pLocalRpcServer.myselfRpcHandlerGo(lc.selfClient,serviceName, serviceMethod, args,callback ,reply)
}
//dispatch to another rpcHandler on this node
err := pLocalRpcServer.selfNodeRpcHandlerAsyncGo(lc.selfClient, rpcHandler, false, serviceName, serviceMethod, args, reply, callback)
if err != nil {
callback.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
}
return nil
}
func NewLClient(nodeId int) *Client{
client := &Client{}
client.clientId = atomic.AddUint32(&clientSeq, 1)
client.nodeId = nodeId
client.maxCheckCallRpcCount = DefaultMaxCheckCallRpcCount
client.callRpcTimeout = DefaultRpcTimeout
lClient := &LClient{}
lClient.selfClient = client
client.IRealClient = lClient
client.InitPending()
go client.checkRpcCallTimeout()
return client
}
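`LClient` routes a `"Service.Method"` string by splitting on the first dot with `strings.Index`: calls into the handler's own service short-circuit through `myselfRpcHandlerGo`, anything else goes through the node-local dispatcher. The parsing step in isolation (hypothetical helper name):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// splitServiceMethod splits "Service.Method" the same way LClient does,
// rejecting strings that contain no dot.
func splitServiceMethod(serviceMethod string) (service, method string, err error) {
	findIndex := strings.Index(serviceMethod, ".")
	if findIndex == -1 {
		return "", "", errors.New("invalid serviceMethod " + serviceMethod)
	}
	return serviceMethod[:findIndex], serviceMethod[findIndex+1:], nil
}

func main() {
	svc, m, err := splitServiceMethod("RankService.RPC_UpsetRank")
	fmt.Println(svc, m, err) // RankService RPC_UpsetRank <nil>
	_, _, err = splitServiceMethod("NoDotHere")
	fmt.Println(err != nil) // true
}
```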


@@ -1,6 +1,7 @@
package rpc
type IRpcProcessor interface {
Clone(src interface{}) (interface{},error)
Marshal(v interface{}) ([]byte, error) //a custom buffer may be supplied; pass nil to let the system allocate automatically
Unmarshal(data []byte, v interface{}) error
MakeRpcRequest(seq uint64,rpcMethodId uint32,serviceMethod string,noReply bool,inParam []byte) IRpcRequestData

File diff suppressed because it is too large.


@@ -2,19 +2,48 @@ syntax = "proto3";
package rpc;
option go_package = ".;rpc";
// RankData ranking entry
message RankData {
uint64 Key = 1; //primary key of the entry
repeated int64 SortData = 2; //values that take part in ranking
bytes Data = 3; //payload that does not take part in ranking
message SetSortAndExtendData{
bool IsSortData = 1; //true: modify the Sort field; false: modify the Extend data
int32 Pos = 2; //sort position
int64 Data = 3; //sort value
}
//incremental update
message IncreaseRankData {
uint64 RankId = 1; //rank list ID
uint64 Key = 2; //primary key of the entry
repeated ExtendIncData Extend = 3; //extend data
repeated int64 IncreaseSortData = 4;//sort values to increment
repeated SetSortAndExtendData SetSortAndExtendData = 5;//sort values to set
bool ReturnRankData = 6; //look up the latest rank; otherwise the Rank field is not returned
bool InsertDataOnNonExistent = 7; //true: existing entries are not updated, missing keys are inserted with InitData and InitSortData; false: InitData and InitSortData are ignored
bytes InitData = 8; //payload that does not take part in ranking
repeated int64 InitSortData = 9; //values that take part in ranking
}
message IncreaseRankDataRet{
RankPosData PosData = 1;
}
//refreshes only the payload data of a rank-list entry
message UpdateRankData {
uint64 RankId = 1; //rank list ID
uint64 Key = 2; //primary key of the entry
bytes Data = 3; //payload
}
message UpdateRankDataRet {
bool Ret = 1;
}
// RankPosData ranking entry returned by queries
message RankPosData {
uint64 Key = 1; //primary key of the entry
uint64 Rank = 2; //rank position
uint64 Rank = 2; //rank position
repeated int64 SortData = 3; //values that take part in ranking
bytes Data = 4; //payload that does not take part in ranking
repeated int64 ExtendData = 5; //extend data
}
// RankList rank-list data
@@ -31,6 +60,22 @@ message RankList {
message UpsetRankData {
uint64 RankId = 1; //rank list ID
repeated RankData RankDataList = 2; //ranking entries
bool FindNewRank = 3; //whether to look up the latest rank
}
message ExtendIncData {
int64 InitValue = 1;
int64 IncreaseValue = 2;
}
// RankData ranking entry
message RankData {
uint64 Key = 1; //primary key of the entry
repeated int64 SortData = 2; //values that take part in ranking
bytes Data = 3; //payload that does not take part in ranking
repeated ExtendIncData ExData = 4; //extended incremental data
}
// DeleteByKey deletes rank-list entries
@@ -71,9 +116,15 @@ message RankDataList {
RankPosData KeyRank = 3; //rank query result for the accompanying Key
}
message RankInfo{
uint64 Key = 1;
uint64 Rank = 2;
}
// RankResult
message RankResult {
int32 AddCount = 1;//number of entries added
int32 ModifyCount = 2; //number of entries modified
int32 RemoveCount = 3;//number of entries removed
repeated RankInfo NewRank = 4; //new rank positions; populated only when UpsetRankData.FindNewRank is true
}

rpc/rclient.go (new file, 268 lines)

@@ -0,0 +1,268 @@
package rpc
import (
"errors"
"fmt"
"github.com/duanhf2012/origin/log"
"github.com/duanhf2012/origin/network"
"math"
"reflect"
"runtime"
"sync/atomic"
)
//client for a connection to a remote node
type RClient struct {
selfClient *Client
network.TCPClient
conn *network.TCPConn
TriggerRpcConnEvent
}
func (rc *RClient) IsConnected() bool {
rc.Lock()
defer rc.Unlock()
return rc.conn != nil && rc.conn.IsConnected() == true
}
func (rc *RClient) GetConn() *network.TCPConn{
rc.Lock()
conn := rc.conn
rc.Unlock()
return conn
}
func (rc *RClient) SetConn(conn *network.TCPConn){
rc.Lock()
rc.conn = conn
rc.Unlock()
}
func (rc *RClient) Go(rpcHandler IRpcHandler,noReply bool, serviceMethod string, args interface{}, reply interface{}) *Call {
_, processor := GetProcessorType(args)
InParam, err := processor.Marshal(args)
if err != nil {
log.SError(err.Error())
call := MakeCall()
call.DoError(err)
return call
}
return rc.RawGo(rpcHandler,processor, noReply, 0, serviceMethod, InParam, reply)
}
func (rc *RClient) RawGo(rpcHandler IRpcHandler,processor IRpcProcessor, noReply bool, rpcMethodId uint32, serviceMethod string, rawArgs []byte, reply interface{}) *Call {
call := MakeCall()
call.ServiceMethod = serviceMethod
call.Reply = reply
call.Seq = rc.selfClient.generateSeq()
request := MakeRpcRequest(processor, call.Seq, rpcMethodId, serviceMethod, noReply, rawArgs)
bytes, err := processor.Marshal(request.RpcRequestData)
ReleaseRpcRequest(request)
if err != nil {
call.Seq = 0
log.SError(err.Error())
call.DoError(err)
return call
}
conn := rc.GetConn()
if conn == nil || conn.IsConnected()==false {
call.Seq = 0
sErr := errors.New(serviceMethod + " call failed, rpc client is disconnected")
log.SError(sErr.Error())
call.DoError(sErr)
return call
}
if noReply == false {
rc.selfClient.AddPending(call)
}
err = conn.WriteMsg([]byte{uint8(processor.GetProcessorType())}, bytes)
if err != nil {
rc.selfClient.RemovePending(call.Seq)
log.SError(err.Error())
call.Seq = 0
call.DoError(err)
}
return call
}
func (rc *RClient) AsyncCall(rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{}) error {
err := rc.asyncCall(rpcHandler, serviceMethod, callback, args, replyParam)
if err != nil {
callback.Call([]reflect.Value{reflect.ValueOf(replyParam), reflect.ValueOf(err)})
}
return nil
}
func (rc *RClient) asyncCall(rpcHandler IRpcHandler, serviceMethod string, callback reflect.Value, args interface{}, replyParam interface{}) error {
processorType, processor := GetProcessorType(args)
InParam, herr := processor.Marshal(args)
if herr != nil {
return herr
}
seq := rc.selfClient.generateSeq()
request := MakeRpcRequest(processor, seq, 0, serviceMethod, false, InParam)
bytes, err := processor.Marshal(request.RpcRequestData)
ReleaseRpcRequest(request)
if err != nil {
return err
}
conn := rc.GetConn()
if conn == nil || conn.IsConnected()==false {
return errors.New("Rpc server is disconnected, call " + serviceMethod)
}
call := MakeCall()
call.Reply = replyParam
call.callback = &callback
call.rpcHandler = rpcHandler
call.ServiceMethod = serviceMethod
call.Seq = seq
rc.selfClient.AddPending(call)
err = conn.WriteMsg([]byte{uint8(processorType)}, bytes)
if err != nil {
rc.selfClient.RemovePending(call.Seq)
ReleaseCall(call)
return err
}
return nil
}
func (rc *RClient) Run() {
defer func() {
if r := recover(); r != nil {
buf := make([]byte, 4096)
l := runtime.Stack(buf, false)
errString := fmt.Sprint(r)
log.SError("core dump info[", errString, "]\n", string(buf[:l]))
}
}()
rc.TriggerRpcConnEvent(true, rc.selfClient.GetClientId(), rc.selfClient.GetNodeId())
for {
bytes, err := rc.conn.ReadMsg()
if err != nil {
log.SError("rpcClient ", rc.Addr, " ReadMsg error:", err.Error())
return
}
processor := GetProcessor(bytes[0])
if processor == nil {
rc.conn.ReleaseReadMsg(bytes)
log.SError("rpcClient ", rc.Addr, " ReadMsg head error: unknown processor type")
return
}
//1. parse the head
response := RpcResponse{}
response.RpcResponseData = processor.MakeRpcResponse(0, "", nil)
err = processor.Unmarshal(bytes[1:], response.RpcResponseData)
rc.conn.ReleaseReadMsg(bytes)
if err != nil {
processor.ReleaseRpcResponse(response.RpcResponseData)
log.SError("rpcClient Unmarshal head error:", err.Error())
continue
}
v := rc.selfClient.RemovePending(response.RpcResponseData.GetSeq())
if v == nil {
log.SError("rpcClient cannot find seq ", response.RpcResponseData.GetSeq(), " in pending")
} else {
v.Err = nil
if len(response.RpcResponseData.GetReply()) > 0 {
err = processor.Unmarshal(response.RpcResponseData.GetReply(), v.Reply)
if err != nil {
log.SError("rpcClient Unmarshal body error:", err.Error())
v.Err = err
}
}
if response.RpcResponseData.GetErr() != nil {
v.Err = response.RpcResponseData.GetErr()
}
if v.callback != nil && v.callback.IsValid() {
v.rpcHandler.PushRpcResponse(v)
} else {
v.done <- v
}
}
processor.ReleaseRpcResponse(response.RpcResponseData)
}
}
func (rc *RClient) OnClose() {
rc.TriggerRpcConnEvent(false, rc.selfClient.GetClientId(), rc.selfClient.GetNodeId())
}
func NewRClient(nodeId int, addr string, maxRpcParamLen uint32,triggerRpcConnEvent TriggerRpcConnEvent) *Client{
client := &Client{}
client.clientId = atomic.AddUint32(&clientSeq, 1)
client.nodeId = nodeId
client.maxCheckCallRpcCount = DefaultMaxCheckCallRpcCount
client.callRpcTimeout = DefaultRpcTimeout
c:= &RClient{}
c.selfClient = client
c.Addr = addr
c.ConnectInterval = DefaultConnectInterval
c.PendingWriteNum = DefaultMaxPendingWriteNum
c.AutoReconnect = true
c.TriggerRpcConnEvent = triggerRpcConnEvent
c.ConnNum = DefaultRpcConnNum
c.LenMsgLen = DefaultRpcLenMsgLen
c.MinMsgLen = DefaultRpcMinMsgLen
c.ReadDeadline = Default_ReadWriteDeadline
c.WriteDeadline = Default_ReadWriteDeadline
c.LittleEndian = LittleEndian
c.NewAgent = client.NewClientAgent
if maxRpcParamLen > 0 {
c.MaxMsgLen = maxRpcParamLen
} else {
c.MaxMsgLen = math.MaxUint32
}
client.IRealClient = c
client.InitPending()
go client.checkRpcCallTimeout()
c.Start()
return client
}
func (rc *RClient) Close(waitDone bool) {
rc.TCPClient.Close(waitDone)
rc.selfClient.pendingLock.Lock()
for {
pElem := rc.selfClient.pendingTimer.Front()
if pElem == nil {
break
}
pCall := pElem.Value.(*Call)
pCall.Err = errors.New("nodeId is disconnected")
rc.selfClient.makeCallFail(pCall)
}
rc.selfClient.pendingLock.Unlock()
}

View File

@@ -51,12 +51,6 @@ type IRpcResponseData interface {
GetReply() []byte
}
type IRawInputArgs interface {
GetRawData() []byte //get the raw data
DoFree() //processing finished, reclaim the memory
DoEscape() //escaped, the GC reclaims it automatically
}
type RpcHandleFinder interface {
FindRpcHandler(serviceMethod string) IRpcHandler
}
@@ -108,6 +102,15 @@ func (rpcResponse *RpcResponse) Clear() *RpcResponse{
return rpcResponse
}
func (call *Call) DoError(err error){
call.Err = err
call.done <- call
}
func (call *Call) DoOK(){
call.done <- call
}
func (call *Call) Clear() *Call{
call.Seq = 0
call.ServiceMethod = ""

View File

@@ -6,7 +6,6 @@ import (
"github.com/duanhf2012/origin/log"
"reflect"
"runtime"
"strconv"
"strings"
"unicode"
"unicode/utf8"
@@ -17,6 +16,7 @@ const maxClusterNode int = 128
type FuncRpcClient func(nodeId int, serviceMethod string, client []*Client) (error, int)
type FuncRpcServer func() *Server
var nilError = reflect.Zero(reflect.TypeOf((*error)(nil)).Elem())
type RpcError string
@@ -45,10 +45,7 @@ type RpcMethodInfo struct {
rpcProcessorType RpcProcessorType
}
type RawRpcCallBack interface {
Unmarshal(data []byte) (interface{}, error)
CB(data interface{})
}
type RawRpcCallBack func(rawData []byte)
type IRpcHandlerChannel interface {
PushRpcResponse(call *Call) error
@@ -67,7 +64,7 @@ type RpcHandler struct {
pClientList []*Client
}
type TriggerRpcEvent func(bConnect bool, clientSeq uint32, nodeId int)
type TriggerRpcConnEvent func(bConnect bool, clientSeq uint32, nodeId int)
type INodeListener interface {
OnNodeConnected(nodeId int)
OnNodeDisconnect(nodeId int)
@@ -92,10 +89,11 @@ type IRpcHandler interface {
AsyncCallNode(nodeId int, serviceMethod string, args interface{}, callback interface{}) error
CallNode(nodeId int, serviceMethod string, args interface{}, reply interface{}) error
GoNode(nodeId int, serviceMethod string, args interface{}) error
RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs IRawInputArgs) error
RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs []byte) error
CastGo(serviceMethod string, args interface{}) error
IsSingleCoroutine() bool
UnmarshalInParam(rpcProcessor IRpcProcessor, serviceMethod string, rawRpcMethodId uint32, inParam []byte) (interface{}, error)
GetRpcServer() FuncRpcServer
}
func reqHandlerNull(Returns interface{}, Err RpcError) {
@@ -140,7 +138,7 @@ func (handler *RpcHandler) isExportedOrBuiltinType(t reflect.Type) bool {
func (handler *RpcHandler) suitableMethods(method reflect.Method) error {
//only methods whose names start with RPC_ can be called
if strings.Index(method.Name, "RPC_") != 0 {
if strings.Index(method.Name, "RPC_") != 0 && strings.Index(method.Name, "RPC") != 0 {
return nil
}
@@ -244,8 +242,13 @@ func (handler *RpcHandler) HandlerRpcRequest(request *RpcRequest) {
log.SError("RpcHandler cannot find request rpc id ", rawRpcId)
return
}
rawData,ok := request.inParam.([]byte)
if ok == false {
log.SError("RpcHandler "+handler.rpcHandler.GetName(), " cannot convert in param to []byte, rpc id ", rawRpcId)
return
}
v.CB(request.inParam)
v(rawData)
return
}
@@ -288,14 +291,16 @@ func (handler *RpcHandler) HandlerRpcRequest(request *RpcRequest) {
request.requestHandle(nil, RpcError(rErr))
return
}
requestHandle := request.requestHandle
returnValues := v.method.Func.Call(paramList)
errInter := returnValues[0].Interface()
if errInter != nil {
err = errInter.(error)
}
if request.requestHandle != nil && v.hasResponder == false {
request.requestHandle(oParam.Interface(), ConvertError(err))
if v.hasResponder == false && requestHandle != nil {
requestHandle(oParam.Interface(), ConvertError(err))
}
}
@@ -427,36 +432,8 @@ func (handler *RpcHandler) goRpc(processor IRpcProcessor, bCast bool, nodeId int
}
//2. call via rpcClient
//if the target service is on the local node
for i := 0; i < count; i++ {
if pClientList[i].bSelfNode == true {
pLocalRpcServer := handler.funcRpcServer()
//check whether it is the same service
findIndex := strings.Index(serviceMethod, ".")
if findIndex == -1 {
sErr := errors.New("Call serviceMethod " + serviceMethod + " is invalid")
log.SError(sErr.Error())
err = sErr
continue
}
serviceName := serviceMethod[:findIndex]
if serviceName == handler.rpcHandler.GetName() { //calling our own service
//invoke our own rpcHandler processor
return pLocalRpcServer.myselfRpcHandlerGo(pClientList[i],serviceName, serviceMethod, args, requestHandlerNull,nil)
}
//processor of another rpcHandler
pCall := pLocalRpcServer.selfNodeRpcHandlerGo(processor, pClientList[i], true, serviceName, 0, serviceMethod, args, nil, nil)
if pCall.Err != nil {
err = pCall.Err
}
pClientList[i].RemovePending(pCall.Seq)
ReleaseCall(pCall)
continue
}
//cross-node call
pCall := pClientList[i].Go(true, serviceMethod, args, nil)
pCall := pClientList[i].Go(handler.rpcHandler,true, serviceMethod, args, nil)
if pCall.Err != nil {
err = pCall.Err
}
@@ -482,38 +459,9 @@ func (handler *RpcHandler) callRpc(nodeId int, serviceMethod string, args interf
return errors.New("cannot call more than 1 node")
}
//2. call via rpcClient
//if the target service is on the local node
pClient := pClientList[0]
if pClient.bSelfNode == true {
pLocalRpcServer := handler.funcRpcServer()
//check whether it is the same service
findIndex := strings.Index(serviceMethod, ".")
if findIndex == -1 {
err := errors.New("Call serviceMethod " + serviceMethod + " is invalid")
log.SError(err.Error())
return err
}
serviceName := serviceMethod[:findIndex]
if serviceName == handler.rpcHandler.GetName() { //calling our own service
//invoke our own rpcHandler processor
return pLocalRpcServer.myselfRpcHandlerGo(pClient,serviceName, serviceMethod, args,requestHandlerNull, reply)
}
//processor of another rpcHandler
pCall := pLocalRpcServer.selfNodeRpcHandlerGo(nil, pClient, false, serviceName, 0, serviceMethod, args, reply, nil)
err = pCall.Done().Err
pClient.RemovePending(pCall.Seq)
ReleaseCall(pCall)
return err
}
pCall := pClient.Go(handler.rpcHandler,false, serviceMethod, args, reply)
//cross-node call
pCall := pClient.Go(false, serviceMethod, args, reply)
if pCall.Err != nil {
err = pCall.Err
ReleaseCall(pCall)
return err
}
err = pCall.Done().Err
pClient.RemovePending(pCall.Seq)
ReleaseCall(pCall)
@@ -541,12 +489,15 @@ func (handler *RpcHandler) asyncCallRpc(nodeId int, serviceMethod string, args i
}
reply := reflect.New(fVal.Type().In(0).Elem()).Interface()
var pClientList [maxClusterNode]*Client
var pClientList [2]*Client
err, count := handler.funcRpcClient(nodeId, serviceMethod, pClientList[:])
if count == 0 || err != nil {
strNodeId := strconv.Itoa(nodeId)
if err == nil {
err = errors.New("cannot find rpcClient from nodeId " + strNodeId + " " + serviceMethod)
if nodeId > 0 {
err = fmt.Errorf("cannot find %s from nodeId %d", serviceMethod, nodeId)
} else {
err = fmt.Errorf("no %s service found in the origin network", serviceMethod)
}
}
fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
log.SError("Call serviceMethod error: ", err.Error())
@@ -563,35 +514,9 @@ func (handler *RpcHandler) asyncCallRpc(nodeId int, serviceMethod string, args i
//2. call via rpcClient
//if the target service is on the local node
pClient := pClientList[0]
if pClient.bSelfNode == true {
pLocalRpcServer := handler.funcRpcServer()
//check whether it is the same service
findIndex := strings.Index(serviceMethod, ".")
if findIndex == -1 {
err := errors.New("Call serviceMethod " + serviceMethod + " is invalid")
fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
log.SError(err.Error())
return nil
}
serviceName := serviceMethod[:findIndex]
//invoke our own rpcHandler processor
if serviceName == handler.rpcHandler.GetName() { //calling our own service
return pLocalRpcServer.myselfRpcHandlerGo(pClient,serviceName, serviceMethod, args,fVal ,reply)
}
pClient.AsyncCall(handler.rpcHandler, serviceMethod, fVal, args, reply)
//processor of another rpcHandler
err = pLocalRpcServer.selfNodeRpcHandlerAsyncGo(pClient, handler, false, serviceName, serviceMethod, args, reply, fVal)
if err != nil {
fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
}
return nil
}
//cross-node call
err = pClient.AsyncCall(handler, serviceMethod, fVal, args, reply)
if err != nil {
fVal.Call([]reflect.Value{reflect.ValueOf(reply), reflect.ValueOf(err)})
}
return nil
}
@@ -631,16 +556,14 @@ func (handler *RpcHandler) CastGo(serviceMethod string, args interface{}) error
return handler.goRpc(nil, true, 0, serviceMethod, args)
}
func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs IRawInputArgs) error {
func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId int, rpcMethodId uint32, serviceName string, rawArgs []byte) error {
processor := GetProcessor(uint8(rpcProcessorType))
err, count := handler.funcRpcClient(nodeId, serviceName, handler.pClientList)
if count == 0 || err != nil {
//args.DoGc()
if err == nil {
err = fmt.Errorf("cannot find rpcClient from nodeId %d", nodeId)
}
log.SError("Call serviceMethod ", serviceName, " error:", err.Error())
return err
}
if count > 1 {
//args.DoGc()
err := errors.New("cannot call more than 1 node")
log.SError(err.Error())
return err
@@ -649,32 +572,12 @@ func (handler *RpcHandler) RawGoNode(rpcProcessorType RpcProcessorType, nodeId i
//2. call via rpcClient
//if the target service is on the local node
for i := 0; i < count; i++ {
if handler.pClientList[i].bSelfNode == true {
pLocalRpcServer := handler.funcRpcServer()
//invoke our own rpcHandler processor
if serviceName == handler.rpcHandler.GetName() { //calling our own service
err := pLocalRpcServer.myselfRpcHandlerGo(handler.pClientList[i],serviceName, serviceName, rawArgs.GetRawData(), requestHandlerNull,nil)
//args.DoGc()
return err
}
//processor of another rpcHandler
pCall := pLocalRpcServer.selfNodeRpcHandlerGo(processor, handler.pClientList[i], true, serviceName, rpcMethodId, serviceName, nil, nil, rawArgs.GetRawData())
rawArgs.DoEscape()
if pCall.Err != nil {
err = pCall.Err
}
handler.pClientList[i].RemovePending(pCall.Seq)
ReleaseCall(pCall)
continue
}
//cross-node call
pCall := handler.pClientList[i].RawGo(processor, true, rpcMethodId, serviceName, rawArgs.GetRawData(), nil)
rawArgs.DoFree()
pCall := handler.pClientList[i].RawGo(handler.rpcHandler,processor, true, rpcMethodId, serviceName, rawArgs, nil)
if pCall.Err != nil {
err = pCall.Err
}
handler.pClientList[i].RemovePending(pCall.Seq)
ReleaseCall(pCall)
}
@@ -688,23 +591,7 @@ func (handler *RpcHandler) RegRawRpc(rpcMethodId uint32, rawRpcCB RawRpcCallBack
func (handler *RpcHandler) UnmarshalInParam(rpcProcessor IRpcProcessor, serviceMethod string, rawRpcMethodId uint32, inParam []byte) (interface{}, error) {
if rawRpcMethodId > 0 {
v, ok := handler.mapRawFunctions[rawRpcMethodId]
if ok == false {
strRawRpcMethodId := strconv.FormatUint(uint64(rawRpcMethodId), 10)
err := errors.New("RpcHandler cannot find request rpc id " + strRawRpcMethodId)
log.SError(err.Error())
return nil, err
}
msg, err := v.Unmarshal(inParam)
if err != nil {
strRawRpcMethodId := strconv.FormatUint(uint64(rawRpcMethodId), 10)
err := errors.New("RpcHandler cannot Unmarshal rpc id " + strRawRpcMethodId)
log.SError(err.Error())
return nil, err
}
return msg, err
return inParam,nil
}
v, ok := handler.mapFunctions[serviceMethod]
@@ -717,3 +604,8 @@ func (handler *RpcHandler) UnmarshalInParam(rpcProcessor IRpcProcessor, serviceM
err = rpcProcessor.Unmarshal(inParam, param)
return param, err
}
func (handler *RpcHandler) GetRpcServer() FuncRpcServer{
return handler.funcRpcServer
}

View File

@@ -19,7 +19,6 @@ const (
RpcProcessorGoGoPB RpcProcessorType = 1
)
//var processor IRpcProcessor = &JsonProcessor{}
var arrayProcessor = []IRpcProcessor{&JsonProcessor{}, &GoGoPBProcessor{}}
var arrayProcessorLen uint8 = 2
var LittleEndian bool
@@ -72,7 +71,6 @@ func (server *Server) Start(listenAddr string, maxRpcParamLen uint32) {
}
server.rpcServer.Addr = ":" + splitAddr[1]
server.rpcServer.LenMsgLen = 4 //uint16
server.rpcServer.MinMsgLen = 2
if maxRpcParamLen > 0 {
server.rpcServer.MaxMsgLen = maxRpcParamLen
@@ -86,6 +84,8 @@ func (server *Server) Start(listenAddr string, maxRpcParamLen uint32) {
server.rpcServer.LittleEndian = LittleEndian
server.rpcServer.WriteDeadline = Default_ReadWriteDeadline
server.rpcServer.ReadDeadline = Default_ReadWriteDeadline
server.rpcServer.LenMsgLen = DefaultRpcLenMsgLen
server.rpcServer.Start()
}
@@ -148,7 +148,6 @@ func (agent *RpcAgent) Run() {
ReleaseRpcRequest(req)
continue
} else {
//will close tcpconn
ReleaseRpcRequest(req)
break
}
@@ -245,48 +244,39 @@ func (server *Server) myselfRpcHandlerGo(client *Client,handlerName string, serv
log.SError(err.Error())
return err
}
return rpcHandler.CallMethod(client,serviceMethod, args,callBack, reply)
}
func (server *Server) selfNodeRpcHandlerGo(processor IRpcProcessor, client *Client, noReply bool, handlerName string, rpcMethodId uint32, serviceMethod string, args interface{}, reply interface{}, rawArgs []byte) *Call {
pCall := MakeCall()
pCall.Seq = client.generateSeq()
rpcHandler := server.rpcHandleFinder.FindRpcHandler(handlerName)
if rpcHandler == nil {
err := errors.New("service method " + serviceMethod + " is not configured")
log.SError(err.Error())
pCall.Seq = 0
pCall.Err = errors.New("service method " + serviceMethod + " not config!")
pCall.done <- pCall
log.SError(pCall.Err.Error())
pCall.DoError(err)
return pCall
}
var iParam interface{}
if processor == nil {
_, processor = GetProcessorType(args)
}
if args != nil {
inParamValue := reflect.New(reflect.ValueOf(args).Type().Elem())
//args
//deep-copy the input parameter
iParam = inParamValue.Interface()
bytes,err := processor.Marshal(args)
if err == nil {
err = processor.Unmarshal(bytes,iParam)
}
var err error
iParam,err = processor.Clone(args)
if err != nil {
sErr := errors.New("RpcHandler " + handlerName + "." + serviceMethod + " deep copy inParam error: " + err.Error())
log.SError(sErr.Error())
pCall.Seq = 0
pCall.Err = errors.New("RpcHandler " + handlerName + "."+serviceMethod+" deep copy inParam is error:" + err.Error())
pCall.done <- pCall
log.SError(pCall.Err.Error())
pCall.DoError(sErr)
return pCall
}
@@ -299,9 +289,10 @@ func (server *Server) selfNodeRpcHandlerGo(processor IRpcProcessor, client *Clie
var err error
req.inParam, err = rpcHandler.UnmarshalInParam(processor, serviceMethod, rpcMethodId, rawArgs)
if err != nil {
log.SError(err.Error())
pCall.Seq = 0
pCall.DoError(err)
ReleaseRpcRequest(req)
pCall.Err = err
pCall.done <- pCall
return pCall
}
}
@@ -313,38 +304,40 @@ func (server *Server) selfNodeRpcHandlerGo(processor IRpcProcessor, client *Clie
if reply != nil && Returns != reply && Returns != nil {
byteReturns, err := req.rpcProcessor.Marshal(Returns)
if err != nil {
log.SError("returns data cannot be marshal ", callSeq)
ReleaseRpcRequest(req)
}
err = req.rpcProcessor.Unmarshal(byteReturns, reply)
if err != nil {
log.SError("returns data cannot be Unmarshal ", callSeq)
ReleaseRpcRequest(req)
Err = ConvertError(err)
log.SError("returns data cannot be marshal,callSeq is ", callSeq," error is ",err.Error())
}else{
err = req.rpcProcessor.Unmarshal(byteReturns, reply)
if err != nil {
Err = ConvertError(err)
log.SError("returns data cannot be Unmarshal,callSeq is ", callSeq," error is ",err.Error())
}
}
}
ReleaseRpcRequest(req)
v := client.RemovePending(callSeq)
if v == nil {
log.SError("rpcClient cannot find seq ",callSeq, " in pending")
ReleaseRpcRequest(req)
return
}
if len(Err) == 0 {
v.Err = nil
v.DoOK()
} else {
v.Err = Err
log.SError(Err.Error())
v.DoError(Err)
}
v.done <- v
ReleaseRpcRequest(req)
}
}
err := rpcHandler.PushRpcRequest(req)
if err != nil {
log.SError(err.Error())
pCall.DoError(err)
ReleaseRpcRequest(req)
pCall.Err = err
pCall.done <- pCall
}
return pCall
@@ -359,15 +352,7 @@ func (server *Server) selfNodeRpcHandlerAsyncGo(client *Client, callerRpcHandler
}
_, processor := GetProcessorType(args)
inParamValue := reflect.New(reflect.ValueOf(args).Type().Elem())
//args
//deep-copy the input parameter
iParam := inParamValue.Interface()
bytes,err := processor.Marshal(args)
if err == nil {
err = processor.Unmarshal(bytes,iParam)
}
iParam,err := processor.Clone(args)
if err != nil {
errM := errors.New("RpcHandler " + handlerName + "." + serviceMethod + " deep copy inParam error: " + err.Error())
log.SError(errM.Error())

View File

@@ -10,11 +10,13 @@ import (
"github.com/duanhf2012/origin/log"
rpcHandle "github.com/duanhf2012/origin/rpc"
"github.com/duanhf2012/origin/util/timer"
"github.com/duanhf2012/origin/concurrent"
)
const InitModuleId = 1e9
type IModule interface {
concurrent.IConcurrent
SetModuleId(moduleId uint32) bool
GetModuleId() uint32
AddModule(module IModule) (uint32, error)
@@ -56,6 +58,7 @@ type Module struct {
//event channel
eventHandler event.IEventHandler
concurrent.IConcurrent
}
func (m *Module) SetModuleId(moduleId uint32) bool {
@@ -105,6 +108,7 @@ func (m *Module) AddModule(module IModule) (uint32, error) {
pAddModule.moduleName = reflect.Indirect(reflect.ValueOf(module)).Type().Name()
pAddModule.eventHandler = event.NewEventHandler()
pAddModule.eventHandler.Init(m.eventHandler.GetEventProcessor())
pAddModule.IConcurrent = m.IConcurrent
err := module.OnInit()
if err != nil {
return 0, err
@@ -273,6 +277,11 @@ func (m *Module) SafeNewTicker(tickerId *uint64, d time.Duration, AdditionData i
}
func (m *Module) CancelTimerId(timerId *uint64) bool {
if timerId==nil || *timerId == 0 {
log.SWarning("timerId is invalid")
return false
}
if m.mapActiveIdTimer == nil {
log.SError("mapActiveIdTimer is nil")
return false
@@ -280,7 +289,7 @@ func (m *Module) CancelTimerId(timerId *uint64) bool {
t, ok := m.mapActiveIdTimer[*timerId]
if ok == false {
log.SError("cannot find timer id ", timerId)
log.SStack("cannot find timer id ", *timerId)
return false
}

View File

@@ -7,22 +7,22 @@ import (
"github.com/duanhf2012/origin/log"
"github.com/duanhf2012/origin/profiler"
"github.com/duanhf2012/origin/rpc"
originSync "github.com/duanhf2012/origin/util/sync"
"github.com/duanhf2012/origin/util/timer"
"reflect"
"runtime"
"strconv"
"sync"
"sync/atomic"
"github.com/duanhf2012/origin/concurrent"
)
var closeSig chan bool
var timerDispatcherLen = 100000
var maxServiceEventChannelNum = 2000000
type IService interface {
concurrent.IConcurrent
Init(iService IService,getClientFun rpc.FuncRpcClient,getServerFun rpc.FuncRpcServer,serviceCfg interface{})
Wait()
Stop()
Start()
OnSetup(iService IService)
@@ -42,14 +42,9 @@ type IService interface {
OpenProfiler()
}
// eventPool memory pool, caches Event objects
var maxServiceEventChannel = 2000000
var eventPool = originSync.NewPoolEx(make(chan originSync.IPoolData, maxServiceEventChannel), func() originSync.IPoolData {
return &event.Event{}
})
type Service struct {
Module
rpcHandler rpc.RpcHandler //rpc
name string //service name
wg sync.WaitGroup
@@ -61,6 +56,7 @@ type Service struct {
nodeEventLister rpc.INodeListener
discoveryServiceLister rpc.IDiscoveryServiceListener
chanEvent chan event.IEvent
closeSig chan struct{}
}
// RpcConnEvent node connection event
@@ -77,10 +73,7 @@ type DiscoveryServiceEvent struct{
}
func SetMaxServiceChannel(maxEventChannel int){
maxServiceEventChannel = maxEventChannel
eventPool = originSync.NewPoolEx(make(chan originSync.IPoolData, maxServiceEventChannel), func() originSync.IPoolData {
return &event.Event{}
})
maxServiceEventChannelNum = maxEventChannel
}
func (rpcEventData *DiscoveryServiceEvent) GetEventType() event.EventType{
@@ -105,9 +98,10 @@ func (s *Service) OpenProfiler() {
}
func (s *Service) Init(iService IService,getClientFun rpc.FuncRpcClient,getServerFun rpc.FuncRpcServer,serviceCfg interface{}) {
s.closeSig = make(chan struct{})
s.dispatcher =timer.NewDispatcher(timerDispatcherLen)
if s.chanEvent == nil {
s.chanEvent = make(chan event.IEvent,maxServiceEventChannel)
s.chanEvent = make(chan event.IEvent,maxServiceEventChannelNum)
}
s.rpcHandler.InitRpcHandler(iService.(rpc.IRpcHandler),getClientFun,getServerFun,iService.(rpc.IRpcHandlerChannel))
@@ -123,29 +117,42 @@ func (s *Service) Init(iService IService,getClientFun rpc.FuncRpcClient,getServe
s.eventProcessor.Init(s)
s.eventHandler = event.NewEventHandler()
s.eventHandler.Init(s.eventProcessor)
s.Module.IConcurrent = &concurrent.Concurrent{}
}
func (s *Service) Start() {
s.startStatus = true
var waitRun sync.WaitGroup
for i:=int32(0);i< s.goroutineNum;i++{
s.wg.Add(1)
waitRun.Add(1)
go func(){
log.SRelease(s.GetName(), " service is running")
waitRun.Done()
s.Run()
}()
}
waitRun.Wait()
}
func (s *Service) Run() {
log.SDebug("Start running Service ", s.GetName())
defer s.wg.Done()
var bStop = false
concurrent := s.IConcurrent.(*concurrent.Concurrent)
concurrentCBChannel := concurrent.GetCallBackChannel()
s.self.(IService).OnStart()
for{
var analyzer *profiler.Analyzer
select {
case <- closeSig:
case <- s.closeSig:
bStop = true
concurrent.Close()
case cb:=<-concurrentCBChannel:
concurrent.DoCallback(cb)
case ev := <- s.chanEvent:
switch ev.GetEventType() {
case event.ServiceRpcRequestEvent:
@@ -168,7 +175,7 @@ func (s *Service) Run() {
analyzer.Pop()
analyzer = nil
}
eventPool.Put(cEvent)
event.DeleteEvent(cEvent)
case event.ServiceRpcResponseEvent:
cEvent,ok := ev.(*event.Event)
if ok == false {
@@ -188,7 +195,7 @@ func (s *Service) Run() {
analyzer.Pop()
analyzer = nil
}
eventPool.Put(cEvent)
event.DeleteEvent(cEvent)
default:
if s.profiler!=nil {
analyzer = s.profiler.Push("[SEvent]"+strconv.Itoa(int(ev.GetEventType())))
@@ -238,8 +245,8 @@ func (s *Service) Release(){
log.SError("core dump info[",errString,"]\n",string(buf[:l]))
}
}()
s.self.OnRelease()
log.SDebug("Release Service ", s.GetName())
}
func (s *Service) OnRelease(){
@@ -249,8 +256,11 @@ func (s *Service) OnInit() error {
return nil
}
func (s *Service) Wait(){
func (s *Service) Stop(){
log.SRelease("stop ",s.GetName()," service ")
close(s.closeSig)
s.wg.Wait()
log.SRelease(s.GetName()," service has been stopped")
}
func (s *Service) GetServiceCfg()interface{}{
@@ -320,9 +330,8 @@ func (s *Service) UnRegDiscoverListener(rpcLister rpc.INodeListener) {
UnRegDiscoveryServiceEventFun(s.GetName())
}
func (s *Service) PushRpcRequest(rpcRequest *rpc.RpcRequest) error{
ev := eventPool.Get().(*event.Event)
ev := event.NewEvent()
ev.Type = event.ServiceRpcRequestEvent
ev.Data = rpcRequest
@@ -330,7 +339,7 @@ func (s *Service) PushRpcRequest(rpcRequest *rpc.RpcRequest) error{
}
func (s *Service) PushRpcResponse(call *rpc.Call) error{
ev := eventPool.Get().(*event.Event)
ev := event.NewEvent()
ev.Type = event.ServiceRpcResponseEvent
ev.Data = call
@@ -342,7 +351,7 @@ func (s *Service) PushEvent(ev event.IEvent) error{
}
func (s *Service) pushEvent(ev event.IEvent) error{
if len(s.chanEvent) >= maxServiceEventChannel {
if len(s.chanEvent) >= maxServiceEventChannelNum {
err := errors.New("The event channel in the service is full")
log.SError(err.Error())
return err

View File

@@ -19,9 +19,7 @@ func init(){
setupServiceList = []IService{}
}
func Init(chanCloseSig chan bool) {
closeSig=chanCloseSig
func Init() {
for _,s := range setupServiceList {
err := s.OnInit()
if err != nil {
@@ -57,8 +55,8 @@ func Start(){
}
}
func WaitStop(){
func StopAllService(){
for i := len(setupServiceList) - 1; i >= 0; i-- {
setupServiceList[i].Wait()
setupServiceList[i].Stop()
}
}

View File

@@ -1,18 +1,49 @@
package messagequeueservice
import (
"errors"
"fmt"
"github.com/duanhf2012/origin/log"
"github.com/duanhf2012/origin/service"
"github.com/duanhf2012/origin/sysmodule/mongodbmodule"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo/options"
"sunserver/common/util"
"time"
)
const MaxDays = 180
type DataType interface {
int | uint | int64 | uint64 | float32 | float64 | int32 | uint32 | int16 | uint16
}
func convertToNumber[DType DataType](val interface{}) (error, DType) {
switch v := val.(type) {
case int64:
return nil, DType(v)
case int:
return nil, DType(v)
case uint:
return nil, DType(v)
case uint64:
return nil, DType(v)
case float32:
return nil, DType(v)
case float64:
return nil, DType(v)
case int32:
return nil, DType(v)
case uint32:
return nil, DType(v)
case int16:
return nil, DType(v)
case uint16:
return nil, DType(v)
}
return errors.New("unsupported type"), 0
}
type MongoPersist struct {
service.Module
mongo mongodbmodule.MongoModule
@@ -363,7 +394,7 @@ func (mp *MongoPersist) GetIndex(topicData *TopicData) uint64 {
for _, e := range document {
if e.Key == "_id" {
errC, seq := util.ConvertToNumber[uint64](e.Value)
errC, seq := convertToNumber[uint64](e.Value)
if errC != nil {
log.Error("value is error: %s, %+v", errC.Error(), e.Value)
}

View File

@@ -6,9 +6,9 @@ import (
"github.com/duanhf2012/origin/rpc"
"github.com/duanhf2012/origin/service"
"github.com/duanhf2012/origin/sysmodule/mongodbmodule"
"github.com/duanhf2012/origin/util/coroutine"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo/options"
"runtime"
"sync"
"sync/atomic"
"time"
@@ -18,10 +18,11 @@ const batchRemoveNum = 128 //maximum number of entries removed in one batch
// RankDataDB rank table data
type RankDataDB struct {
Id uint64 `bson:"_id,omitempty"`
RefreshTime int64 `bson:"RefreshTime,omitempty"`
SortData []int64 `bson:"SortData,omitempty"`
Data []byte `bson:"Data,omitempty"`
Id uint64 `bson:"_id"`
RefreshTime int64 `bson:"RefreshTime"`
SortData []int64 `bson:"SortData"`
Data []byte `bson:"Data"`
ExData []int64 `bson:"ExData"`
}
// MongoPersist persistence Module
@@ -70,7 +71,9 @@ func (mp *MongoPersist) OnInit() error {
}
//start the persistence goroutine
coroutine.GoRecover(mp.persistCoroutine,-1)
mp.waitGroup.Add(1)
go mp.persistCoroutine()
return nil
}
@@ -186,6 +189,9 @@ func (mp *MongoPersist) loadFromDB(rankId uint64,rankCollectName string) error{
rankData.Data = rankDataDB.Data
rankData.Key = rankDataDB.Id
rankData.SortData = rankDataDB.SortData
for _,eData := range rankDataDB.ExData{
rankData.ExData = append(rankData.ExData,&rpc.ExtendIncData{InitValue:eData})
}
//update into the rank list
rankSkip.UpsetRank(&rankData,rankDataDB.RefreshTime,true)
@@ -256,7 +262,6 @@ func (mp *MongoPersist) JugeTimeoutSave() bool{
}
func (mp *MongoPersist) persistCoroutine(){
mp.waitGroup.Add(1)
defer mp.waitGroup.Done()
for atomic.LoadInt32(&mp.stop)==0 || mp.hasPersistData(){
//sleep for the interval
@@ -287,6 +292,15 @@ func (mp *MongoPersist) hasPersistData() bool{
}
func (mp *MongoPersist) saveToDB(){
defer func() {
if r := recover(); r != nil {
buf := make([]byte, 4096)
l := runtime.Stack(buf, false)
errString := fmt.Sprint(r)
log.SError(" Core dump info[", errString, "]\n", string(buf[:l]))
}
}()
//1. copy the data
mp.Lock()
mapRemoveRankData := mp.mapRemoveRankData
@@ -343,7 +357,7 @@ func (mp *MongoPersist) removeRankData(rankId uint64,keys []uint64) bool {
func (mp *MongoPersist) upsertToDB(collectName string,rankData *RankData) error{
condition := bson.D{{"_id", rankData.Key}}
upsert := bson.M{"_id":rankData.Key,"RefreshTime": rankData.refreshTimestamp, "SortData": rankData.SortData, "Data": rankData.Data}
upsert := bson.M{"_id":rankData.Key,"RefreshTime": rankData.refreshTimestamp, "SortData": rankData.SortData, "Data": rankData.Data,"ExData":rankData.ExData}
update := bson.M{"$set": upsert}
s := mp.mongo.TakeSession()

View File

@@ -14,7 +14,11 @@ var RankDataPool = sync.NewPoolEx(make(chan sync.IPoolData, 10240), func() sync.
})
type RankData struct {
*rpc.RankData
Key uint64
SortData []int64
Data []byte
ExData []int64
refreshTimestamp int64 //refresh time
//bRelease bool
ref bool
@@ -27,7 +31,14 @@ func NewRankData(isDec bool, data *rpc.RankData,refreshTimestamp int64) *RankDat
if isDec {
ret.compareFunc = ret.desCompare
}
ret.RankData = data
ret.Key = data.Key
ret.SortData = data.SortData
ret.Data = data.Data
for _,d := range data.ExData{
ret.ExData = append(ret.ExData,d.InitValue+d.IncreaseValue)
}
ret.refreshTimestamp = refreshTimestamp
return ret

View File

@@ -2,13 +2,15 @@ package rankservice
import (
"fmt"
"time"
"github.com/duanhf2012/origin/log"
"github.com/duanhf2012/origin/rpc"
"github.com/duanhf2012/origin/service"
"time"
)
const PreMapRankSkipLen = 10
type RankService struct {
service.Service
@@ -61,11 +63,11 @@ func (rs *RankService) RPC_ManualAddRankSkip(addInfo *rpc.AddRankList, addResult
continue
}
newSkip := NewRankSkip(addRankListData.RankId,addRankListData.RankName,addRankListData.IsDec, transformLevel(addRankListData.SkipListLevel), addRankListData.MaxRank,time.Duration(addRankListData.ExpireMs)*time.Millisecond)
newSkip := NewRankSkip(addRankListData.RankId, addRankListData.RankName, addRankListData.IsDec, transformLevel(addRankListData.SkipListLevel), addRankListData.MaxRank, time.Duration(addRankListData.ExpireMs)*time.Millisecond)
newSkip.SetupRankModule(rs.rankModule)
rs.mapRankSkip[addRankListData.RankId] = newSkip
rs.rankModule.OnSetupRank(true,newSkip)
rs.rankModule.OnSetupRank(true, newSkip)
}
addResult.AddCount = 1
@@ -82,6 +84,52 @@ func (rs *RankService) RPC_UpsetRank(upsetInfo *rpc.UpsetRankData, upsetResult *
addCount, updateCount := rankSkip.UpsetRankList(upsetInfo.RankDataList)
upsetResult.AddCount = addCount
upsetResult.ModifyCount = updateCount
if upsetInfo.FindNewRank == true {
for _, rdata := range upsetInfo.RankDataList {
_, rank := rankSkip.GetRankNodeData(rdata.Key)
upsetResult.NewRank = append(upsetResult.NewRank, &rpc.RankInfo{Key: rdata.Key, Rank: rank})
}
}
return nil
}
// RPC_IncreaseRankData incrementally updates rank extend data
func (rs *RankService) RPC_IncreaseRankData(changeRankData *rpc.IncreaseRankData, changeRankDataRet *rpc.IncreaseRankDataRet) error {
rankSkip, ok := rs.mapRankSkip[changeRankData.RankId]
if ok == false || rankSkip == nil {
return fmt.Errorf("RPC_IncreaseRankData[%d] no such rank id", changeRankData.RankId)
}
ret := rankSkip.ChangeExtendData(changeRankData)
if ret == false {
return fmt.Errorf("RPC_IncreaseRankData[%d] no such key %d", changeRankData.RankId, changeRankData.Key)
}
if changeRankData.ReturnRankData == true {
rankData, rank := rankSkip.GetRankNodeData(changeRankData.Key)
changeRankDataRet.PosData = &rpc.RankPosData{}
changeRankDataRet.PosData.Rank = rank
changeRankDataRet.PosData.Key = rankData.Key
changeRankDataRet.PosData.Data = rankData.Data
changeRankDataRet.PosData.SortData = rankData.SortData
changeRankDataRet.PosData.ExtendData = rankData.ExData
}
return nil
}
// RPC_UpdateRankData updates the stored Data of a rank entry
func (rs *RankService) RPC_UpdateRankData(updateRankData *rpc.UpdateRankData, updateRankDataRet *rpc.UpdateRankDataRet) error {
rankSkip, ok := rs.mapRankSkip[updateRankData.RankId]
if ok == false || rankSkip == nil {
updateRankDataRet.Ret = false
return nil
}
updateRankDataRet.Ret = rankSkip.UpdateRankData(updateRankData)
return nil
}
@@ -114,6 +162,7 @@ func (rs *RankService) RPC_FindRankDataByKey(findInfo *rpc.FindRankDataByKey, fi
findResult.Key = findRankData.Key
findResult.SortData = findRankData.SortData
findResult.Rank = rank
findResult.ExtendData = findRankData.ExData
}
return nil
}
@@ -131,6 +180,7 @@ func (rs *RankService) RPC_FindRankDataByRank(findInfo *rpc.FindRankDataByRank,
findResult.Key = findRankData.Key
findResult.SortData = findRankData.SortData
findResult.Rank = rankPos
findResult.ExtendData = findRankData.ExData
}
return nil
}
@@ -139,7 +189,7 @@ func (rs *RankService) RPC_FindRankDataByRank(findInfo *rpc.FindRankDataByRank,
func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, findResult *rpc.RankDataList) error {
rankObj, ok := rs.mapRankSkip[findInfo.RankId]
if ok == false || rankObj == nil {
err := fmt.Errorf("not config rank %d",findInfo.RankId)
err := fmt.Errorf("not config rank %d", findInfo.RankId)
log.SError(err.Error())
return err
}
@@ -151,7 +201,7 @@ func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, find
}
// also look up the rank of the optional key attached to the query
if findInfo.Key!= 0 {
if findInfo.Key != 0 {
findRankData, rank := rankObj.GetRankNodeData(findInfo.Key)
if findRankData != nil {
findResult.KeyRank = &rpc.RankPosData{}
@@ -159,6 +209,7 @@ func (rs *RankService) RPC_FindRankDataList(findInfo *rpc.FindRankDataList, find
findResult.KeyRank.Key = findRankData.Key
findResult.KeyRank.SortData = findRankData.SortData
findResult.KeyRank.Rank = rank
findResult.KeyRank.ExtendData = findRankData.ExData
}
}
@@ -193,12 +244,12 @@ func (rs *RankService) dealCfg() error {
}
rankId, okId := mapCfg["RankID"].(float64)
if okId == false || uint64(rankId)==0 {
if okId == false || uint64(rankId) == 0 {
return fmt.Errorf("RankService SortCfg data must has RankID[number]")
}
rankName, okId := mapCfg["RankName"].(string)
if okId == false || len(rankName)==0 {
if okId == false || len(rankName) == 0 {
return fmt.Errorf("RankService SortCfg data must has RankName[string]")
}
@@ -207,11 +258,10 @@ func (rs *RankService) dealCfg() error {
maxRank, _ := mapCfg["MaxRank"].(float64)
expireMs, _ := mapCfg["ExpireMs"].(float64)
newSkip := NewRankSkip(uint64(rankId),rankName,isDec, transformLevel(int32(level)), uint64(maxRank),time.Duration(expireMs)*time.Millisecond)
newSkip := NewRankSkip(uint64(rankId), rankName, isDec, transformLevel(int32(level)), uint64(maxRank), time.Duration(expireMs)*time.Millisecond)
newSkip.SetupRankModule(rs.rankModule)
rs.mapRankSkip[uint64(rankId)] = newSkip
err := rs.rankModule.OnSetupRank(false,newSkip)
err := rs.rankModule.OnSetupRank(false, newSkip)
if err != nil {
return err
}
@@ -219,5 +269,3 @@ func (rs *RankService) dealCfg() error {
return nil
}


@@ -2,20 +2,21 @@ package rankservice
import (
"fmt"
"time"
"github.com/duanhf2012/origin/rpc"
"github.com/duanhf2012/origin/util/algorithms/skip"
"time"
)
type RankSkip struct {
rankId uint64 //rank list ID
rankName string //rank list name
isDes bool //true = descending, false = ascending
skipList *skip.SkipList //skip list
mapRankData map[uint64]*RankData //rank data indexed by key
maxLen uint64 //maximum number of entries
expireMs time.Duration //entry time-to-live
rankModule IRankModule
rankId uint64 // rank list ID
rankName string // rank list name
isDes bool // true = descending, false = ascending
skipList *skip.SkipList // skip list
mapRankData map[uint64]*RankData // rank data indexed by key
maxLen uint64 // maximum number of entries
expireMs time.Duration // entry time-to-live
rankModule IRankModule
rankDataExpire rankDataHeap
}
@@ -28,7 +29,7 @@ const (
)
// NewRankSkip creates a rank list
func NewRankSkip(rankId uint64,rankName string,isDes bool, level interface{}, maxLen uint64,expireMs time.Duration) *RankSkip {
func NewRankSkip(rankId uint64, rankName string, isDes bool, level interface{}, maxLen uint64, expireMs time.Duration) *RankSkip {
rs := &RankSkip{}
rs.rankId = rankId
@@ -38,17 +39,17 @@ func NewRankSkip(rankId uint64,rankName string,isDes bool, level interface{}, ma
rs.mapRankData = make(map[uint64]*RankData, 10240)
rs.maxLen = maxLen
rs.expireMs = expireMs
rs.rankDataExpire.Init(int32(maxLen),expireMs)
rs.rankDataExpire.Init(int32(maxLen), expireMs)
return rs
}
func (rs *RankSkip) pickExpireKey(){
func (rs *RankSkip) pickExpireKey() {
if rs.expireMs == 0 {
return
}
for i:=1;i<=MaxPickExpireNum;i++{
for i := 1; i <= MaxPickExpireNum; i++ {
key := rs.rankDataExpire.PopExpireKey()
if key == 0 {
return
@@ -79,46 +80,211 @@ func (rs *RankSkip) GetRankLen() uint64 {
func (rs *RankSkip) UpsetRankList(upsetRankData []*rpc.RankData) (addCount int32, modifyCount int32) {
for _, upsetData := range upsetRankData {
changeType := rs.UpsetRank(upsetData,time.Now().UnixNano(),false)
if changeType == RankDataAdd{
addCount+=1
} else if changeType == RankDataUpdate{
modifyCount+=1
}
changeType := rs.UpsetRank(upsetData, time.Now().UnixNano(), false)
if changeType == RankDataAdd {
addCount += 1
} else if changeType == RankDataUpdate {
modifyCount += 1
}
}
rs.pickExpireKey()
return
}
func (rs *RankSkip) InsertDataOnNonExistent(changeRankData *rpc.IncreaseRankData) bool {
if changeRankData.InsertDataOnNonExistent == false {
return false
}
var upsetData rpc.RankData
upsetData.Key = changeRankData.Key
upsetData.Data = changeRankData.InitData
upsetData.SortData = changeRankData.InitSortData
for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(upsetData.SortData); i++ {
upsetData.SortData[i] += changeRankData.IncreaseSortData[i]
}
for _, val := range changeRankData.Extend {
upsetData.ExData = append(upsetData.ExData, &rpc.ExtendIncData{InitValue: val.InitValue, IncreaseValue: val.IncreaseValue})
}
// force-set the specified values
for _, setData := range changeRankData.SetSortAndExtendData {
if setData.IsSortData == true {
if int(setData.Pos) >= len(upsetData.SortData) {
return false
}
upsetData.SortData[setData.Pos] = setData.Data
} else {
if int(setData.Pos) < len(upsetData.ExData) {
upsetData.ExData[setData.Pos].IncreaseValue = 0
upsetData.ExData[setData.Pos].InitValue = setData.Data
}
}
}
refreshTimestamp := time.Now().UnixNano()
newRankData := NewRankData(rs.isDes, &upsetData, refreshTimestamp)
rs.skipList.Insert(newRankData)
rs.mapRankData[upsetData.Key] = newRankData
// refresh the expire time and persist the data
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
rs.rankModule.OnChangeRankData(rs, newRankData)
return true
}
func (rs *RankSkip) UpdateRankData(updateRankData *rpc.UpdateRankData) bool {
rankNode, ok := rs.mapRankData[updateRankData.Key]
if ok == false {
return false
}
rankNode.Data = updateRankData.Data
rs.rankDataExpire.PushOrRefreshExpireKey(updateRankData.Key, time.Now().UnixNano())
rs.rankModule.OnChangeRankData(rs, rankNode)
return true
}
func (rs *RankSkip) ChangeExtendData(changeRankData *rpc.IncreaseRankData) bool {
rankNode, ok := rs.mapRankData[changeRankData.Key]
if ok == false {
return rs.InsertDataOnNonExistent(changeRankData)
}
// first check whether anything actually changed
bChange := false
for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(rankNode.SortData); i++ {
if changeRankData.IncreaseSortData[i] != 0 {
bChange = true
}
}
if bChange == false {
for _, setSortAndExtendData := range changeRankData.SetSortAndExtendData {
if setSortAndExtendData.IsSortData == true {
bChange = true
}
}
}
// if the sort data changed, delete the old node and re-insert it into the skip list
rankData := rankNode
refreshTimestamp := time.Now().UnixNano()
if bChange == true {
// copy the data
var upsetData rpc.RankData
upsetData.Key = rankNode.Key
upsetData.Data = rankNode.Data
upsetData.SortData = append(upsetData.SortData, rankNode.SortData...)
for i := 0; i < len(changeRankData.IncreaseSortData) && i < len(upsetData.SortData); i++ {
if changeRankData.IncreaseSortData[i] != 0 {
upsetData.SortData[i] += changeRankData.IncreaseSortData[i]
}
}
for _, setData := range changeRankData.SetSortAndExtendData {
if setData.IsSortData == true {
if int(setData.Pos) < len(upsetData.SortData) {
upsetData.SortData[setData.Pos] = setData.Data
}
}
}
rankData = NewRankData(rs.isDes, &upsetData, refreshTimestamp)
rankData.ExData = append(rankData.ExData, rankNode.ExData...)
// remove the old node from the rank list
rs.skipList.Delete(rankNode)
ReleaseRankData(rankNode)
rs.skipList.Insert(rankData)
rs.mapRankData[upsetData.Key] = rankData
}
// increase the extend values
for i := 0; i < len(changeRankData.Extend); i++ {
if i < len(rankData.ExData) {
// increase in place
rankData.ExData[i] += changeRankData.Extend[i].IncreaseValue
} else {
// missing extend slot: append InitValue plus IncreaseValue
rankData.ExData = append(rankData.ExData, changeRankData.Extend[i].InitValue+changeRankData.Extend[i].IncreaseValue)
}
}
// force-set fixed values
for _, setData := range changeRankData.SetSortAndExtendData {
if setData.IsSortData == false {
if int(setData.Pos) < len(rankData.ExData) {
rankData.ExData[setData.Pos] = setData.Data
}
}
}
rs.rankDataExpire.PushOrRefreshExpireKey(rankData.Key, refreshTimestamp)
rs.rankModule.OnChangeRankData(rs, rankData)
return true
}
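The increase-or-append rule that ChangeExtendData and UpsetRank apply to ExData can be sketched in isolation. This is a standalone illustration with hypothetical names (`extendInc`, `applyExtend`) mirroring the role of `rpc.ExtendIncData`, not the service's actual types:

```go
package main

import "fmt"

type extendInc struct{ InitValue, IncreaseValue int64 }

// applyExtend mirrors the extend-data loop: existing slots are increased in
// place; missing slots are appended as InitValue+IncreaseValue.
func applyExtend(exData []int64, incs []extendInc) []int64 {
	for i, inc := range incs {
		if i < len(exData) {
			exData[i] += inc.IncreaseValue
		} else {
			exData = append(exData, inc.InitValue+inc.IncreaseValue)
		}
	}
	return exData
}

func main() {
	// one existing slot (100) increased by 5; one new slot seeded as 10+2
	fmt.Println(applyExtend([]int64{100}, []extendInc{{0, 5}, {10, 2}})) // [105 12]
}
```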
// UpsetRank updates a player's rank data and returns the type of change
func (rs *RankSkip) UpsetRank(upsetData *rpc.RankData,refreshTimestamp int64,fromLoad bool) RankDataChangeType {
func (rs *RankSkip) UpsetRank(upsetData *rpc.RankData, refreshTimestamp int64, fromLoad bool) RankDataChangeType {
rankNode, ok := rs.mapRankData[upsetData.Key]
if ok == true {
// increase the extend data
for i := 0; i < len(upsetData.ExData); i++ {
if i < len(rankNode.ExData) {
// increase in place
rankNode.ExData[i] += upsetData.ExData[i].IncreaseValue
} else {
// missing extend slot: append InitValue plus IncreaseValue
rankNode.ExData = append(rankNode.ExData, upsetData.ExData[i].InitValue+upsetData.ExData[i].IncreaseValue)
}
}
// if found, compare the sort data: unchanged means a data-only update; changed means delete and re-insert
if compareIsEqual(rankNode.SortData, upsetData.SortData) {
rankNode.Data = upsetData.GetData()
rankNode.refreshTimestamp = refreshTimestamp
if fromLoad == false {
rs.rankModule.OnChangeRankData(rs,rankNode)
rs.rankModule.OnChangeRankData(rs, rankNode)
}
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key,refreshTimestamp)
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
return RankDataUpdate
}
if upsetData.Data == nil {
upsetData.Data = rankNode.Data
}
// carry over the existing extend data
for idx, exValue := range rankNode.ExData {
currentIncreaseValue := int64(0)
if idx < len(upsetData.ExData) {
currentIncreaseValue = upsetData.ExData[idx].IncreaseValue
}
upsetData.ExData = append(upsetData.ExData, &rpc.ExtendIncData{
InitValue: exValue,
IncreaseValue: currentIncreaseValue,
})
}
rs.skipList.Delete(rankNode)
ReleaseRankData(rankNode)
newRankData := NewRankData(rs.isDes, upsetData,refreshTimestamp)
newRankData := NewRankData(rs.isDes, upsetData, refreshTimestamp)
rs.skipList.Insert(newRankData)
rs.mapRankData[upsetData.Key] = newRankData
// refresh the expire time
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key,refreshTimestamp)
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
if fromLoad == false {
rs.rankModule.OnChangeRankData(rs, newRankData)
@@ -127,10 +293,11 @@ func (rs *RankSkip) UpsetRank(upsetData *rpc.RankData,refreshTimestamp int64,fro
}
if rs.checkInsertAndReplace(upsetData) {
newRankData := NewRankData(rs.isDes, upsetData,refreshTimestamp)
newRankData := NewRankData(rs.isDes, upsetData, refreshTimestamp)
rs.skipList.Insert(newRankData)
rs.mapRankData[upsetData.Key] = newRankData
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key,refreshTimestamp)
rs.rankDataExpire.PushOrRefreshExpireKey(upsetData.Key, refreshTimestamp)
if fromLoad == false {
rs.rankModule.OnEnterRank(rs, newRankData)
@@ -152,7 +319,7 @@ func (rs *RankSkip) DeleteRankData(delKeys []uint64) int32 {
continue
}
removeRankData+=1
removeRankData += 1
rs.skipList.Delete(rankData)
delete(rs.mapRankData, rankData.Key)
rs.rankDataExpire.RemoveExpireKey(rankData.Key)
@@ -172,13 +339,13 @@ func (rs *RankSkip) GetRankNodeData(findKey uint64) (*RankData, uint64) {
rs.pickExpireKey()
_, index := rs.skipList.GetWithPosition(rankNode)
return rankNode, index+1
return rankNode, index + 1
}
// GetRankNodeDataByRank returns the rank node and its position for the given rank
func (rs *RankSkip) GetRankNodeDataByRank(rank uint64) (*RankData, uint64) {
rs.pickExpireKey()
rankNode := rs.skipList.ByPosition(rank-1)
rankNode := rs.skipList.ByPosition(rank - 1)
if rankNode == nil {
return nil, 0
}
@@ -189,12 +356,12 @@ func (rs *RankSkip) GetRankNodeDataByRank(rank uint64) (*RankData, uint64) {
// GetRankKeyPrevToLimit returns up to count entries ranked before the given key
func (rs *RankSkip) GetRankKeyPrevToLimit(findKey, count uint64, result *rpc.RankDataList) error {
if rs.GetRankLen() <= 0 {
return fmt.Errorf("rank[", rs.rankId, "] no data")
return fmt.Errorf("rank[%d] no data", rs.rankId)
}
findData, ok := rs.mapRankData[findKey]
if ok == false {
return fmt.Errorf("rank[", rs.rankId, "] no data")
return fmt.Errorf("rank[%d] no data", rs.rankId)
}
_, rankPos := rs.skipList.GetWithPosition(findData)
@@ -203,10 +370,11 @@ func (rs *RankSkip) GetRankKeyPrevToLimit(findKey, count uint64, result *rpc.Ran
for iter.Prev() && iterCount < count {
rankData := iter.Value().(*RankData)
result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
Key: rankData.Key,
Rank: rankPos - iterCount+1,
SortData: rankData.SortData,
Data: rankData.Data,
Key: rankData.Key,
Rank: rankPos - iterCount + 1,
SortData: rankData.SortData,
Data: rankData.Data,
ExtendData: rankData.ExData,
})
iterCount++
}
@@ -217,12 +385,12 @@ func (rs *RankSkip) GetRankKeyPrevToLimit(findKey, count uint64, result *rpc.Ran
// GetRankKeyNextToLimit returns up to count entries ranked after the given key
func (rs *RankSkip) GetRankKeyNextToLimit(findKey, count uint64, result *rpc.RankDataList) error {
if rs.GetRankLen() <= 0 {
return fmt.Errorf("rank[", rs.rankId, "] no data")
return fmt.Errorf("rank[%d] no data", rs.rankId)
}
findData, ok := rs.mapRankData[findKey]
if ok == false {
return fmt.Errorf("rank[", rs.rankId, "] no data")
return fmt.Errorf("rank[%d] no data", rs.rankId)
}
_, rankPos := rs.skipList.GetWithPosition(findData)
@@ -231,10 +399,11 @@ func (rs *RankSkip) GetRankKeyNextToLimit(findKey, count uint64, result *rpc.Ran
for iter.Next() && iterCount < count {
rankData := iter.Value().(*RankData)
result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
Key: rankData.Key,
Rank: rankPos + iterCount+1,
SortData: rankData.SortData,
Data: rankData.Data,
Key: rankData.Key,
Rank: rankPos + iterCount + 1,
SortData: rankData.SortData,
Data: rankData.Data,
ExtendData: rankData.ExData,
})
iterCount++
}
@@ -259,10 +428,11 @@ func (rs *RankSkip) GetRankDataFromToLimit(startPos, count uint64, result *rpc.R
for iter.Next() && iterCount < count {
rankData := iter.Value().(*RankData)
result.RankPosDataList = append(result.RankPosDataList, &rpc.RankPosData{
Key: rankData.Key,
Rank: iterCount + startPos+1,
SortData: rankData.SortData,
Data: rankData.Data,
Key: rankData.Key,
Rank: iterCount + startPos + 1,
SortData: rankData.SortData,
Data: rankData.Data,
ExtendData: rankData.ExData,
})
iterCount++
}
@@ -301,4 +471,3 @@ func (rs *RankSkip) checkInsertAndReplace(upsetData *rpc.RankData) bool {
ReleaseRankData(lastRankData)
return true
}


@@ -90,6 +90,10 @@ func (tcpService *TcpService) OnInit() error{
if ok == true {
tcpService.tcpServer.LittleEndian = LittleEndian.(bool)
}
LenMsgLen,ok := tcpCfg["LenMsgLen"]
if ok == true {
tcpService.tcpServer.LenMsgLen = int(LenMsgLen.(float64))
}
MinMsgLen,ok := tcpCfg["MinMsgLen"]
if ok == true {
tcpService.tcpServer.MinMsgLen = uint32(MinMsgLen.(float64))
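The `LenMsgLen` option above configures how many bytes the message-length prefix occupies, and `LittleEndian` its byte order. A minimal standalone sketch of decoding such a prefix; `readLen` is a hypothetical helper for illustration, not part of origin's network package:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readLen decodes a message-length prefix of 1, 2, or 4 bytes, honoring the
// configured endianness — the idea behind the LenMsgLen/LittleEndian options.
func readLen(prefix []byte, littleEndian bool) uint32 {
	var order binary.ByteOrder = binary.BigEndian
	if littleEndian {
		order = binary.LittleEndian
	}
	switch len(prefix) {
	case 1:
		return uint32(prefix[0])
	case 2:
		return uint32(order.Uint16(prefix))
	default:
		return order.Uint32(prefix)
	}
}

func main() {
	fmt.Println(readLen([]byte{0x00, 0x10}, false)) // big-endian 2-byte prefix: 16
	fmt.Println(readLen([]byte{0x10, 0x00}, true))  // little-endian 2-byte prefix: 16
}
```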

util/queue/deque.go Normal file

@@ -0,0 +1,413 @@
package queue
// minCapacity is the smallest capacity that a Deque may have. Must be a power
// of 2 for bitwise modulus: x % n == x & (n - 1).
const minCapacity = 16
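The bitwise-modulus identity the comment relies on can be checked directly (plain Go, no dependency on this package):

```go
package main

import "fmt"

func main() {
	const n = 16 // a power of 2, like minCapacity
	for _, x := range []int{0, 5, 16, 37, 1023} {
		// for power-of-2 n, x % n equals x & (n-1)
		fmt.Println(x%n, x&(n-1))
	}
}
```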
// Deque represents a single instance of the deque data structure. A Deque
// instance contains items of the type specified by the type argument.
type Deque[T any] struct {
buf []T
head int
tail int
count int
minCap int
}
// New creates a new Deque, optionally setting the current and minimum capacity
// when non-zero values are given for these. The returned Deque instance
// operates on items of the type specified by the type argument. For example,
// to create a Deque that contains strings,
//
// stringDeque := deque.New[string]()
//
// To create a Deque with capacity to store 2048 ints without resizing, and
// that will not resize below space for 32 items when removing items:
// d := deque.New[int](2048, 32)
//
// To create a Deque that has not yet allocated memory, but after it does will
// never resize to have space for less than 64 items:
// d := deque.New[int](0, 64)
//
// Any size values supplied here are rounded up to the nearest power of 2.
func New[T any](size ...int) *Deque[T] {
var capacity, minimum int
if len(size) >= 1 {
capacity = size[0]
if len(size) >= 2 {
minimum = size[1]
}
}
minCap := minCapacity
for minCap < minimum {
minCap <<= 1
}
var buf []T
if capacity != 0 {
bufSize := minCap
for bufSize < capacity {
bufSize <<= 1
}
buf = make([]T, bufSize)
}
return &Deque[T]{
buf: buf,
minCap: minCap,
}
}
// Cap returns the current capacity of the Deque. If q is nil, q.Cap() is zero.
func (q *Deque[T]) Cap() int {
if q == nil {
return 0
}
return len(q.buf)
}
// Len returns the number of elements currently stored in the queue. If q is
// nil, q.Len() is zero.
func (q *Deque[T]) Len() int {
if q == nil {
return 0
}
return q.count
}
// PushBack appends an element to the back of the queue. Implements FIFO when
// elements are removed with PopFront(), and LIFO when elements are removed
// with PopBack().
func (q *Deque[T]) PushBack(elem T) {
q.growIfFull()
q.buf[q.tail] = elem
// Calculate new tail position.
q.tail = q.next(q.tail)
q.count++
}
// PushFront prepends an element to the front of the queue.
func (q *Deque[T]) PushFront(elem T) {
q.growIfFull()
// Calculate new head position.
q.head = q.prev(q.head)
q.buf[q.head] = elem
q.count++
}
// PopFront removes and returns the element from the front of the queue.
// Implements FIFO when used with PushBack(). If the queue is empty, the call
// panics.
func (q *Deque[T]) PopFront() T {
if q.count <= 0 {
panic("deque: PopFront() called on empty queue")
}
ret := q.buf[q.head]
var zero T
q.buf[q.head] = zero
// Calculate new head position.
q.head = q.next(q.head)
q.count--
q.shrinkIfExcess()
return ret
}
// PopBack removes and returns the element from the back of the queue.
// Implements LIFO when used with PushBack(). If the queue is empty, the call
// panics.
func (q *Deque[T]) PopBack() T {
if q.count <= 0 {
panic("deque: PopBack() called on empty queue")
}
// Calculate new tail position
q.tail = q.prev(q.tail)
// Remove value at tail.
ret := q.buf[q.tail]
var zero T
q.buf[q.tail] = zero
q.count--
q.shrinkIfExcess()
return ret
}
// Front returns the element at the front of the queue. This is the element
// that would be returned by PopFront(). This call panics if the queue is
// empty.
func (q *Deque[T]) Front() T {
if q.count <= 0 {
panic("deque: Front() called when empty")
}
return q.buf[q.head]
}
// Back returns the element at the back of the queue. This is the element that
// would be returned by PopBack(). This call panics if the queue is empty.
func (q *Deque[T]) Back() T {
if q.count <= 0 {
panic("deque: Back() called when empty")
}
return q.buf[q.prev(q.tail)]
}
// At returns the element at index i in the queue without removing the element
// from the queue. This method accepts only non-negative index values. At(0)
// refers to the first element and is the same as Front(). At(Len()-1) refers
// to the last element and is the same as Back(). If the index is invalid, the
// call panics.
//
// The purpose of At is to allow Deque to serve as a more general purpose
// circular buffer, where items are only added to and removed from the ends of
// the deque, but may be read from any place within the deque. Consider the
// case of a fixed-size circular log buffer: A new entry is pushed onto one end
// and when full the oldest is popped from the other end. All the log entries
// in the buffer must be readable without altering the buffer contents.
func (q *Deque[T]) At(i int) T {
if i < 0 || i >= q.count {
panic("deque: At() called with index out of range")
}
// bitwise modulus
return q.buf[(q.head+i)&(len(q.buf)-1)]
}
// Set puts the element at index i in the queue. Set shares the same purpose
// as At() but performs the opposite operation. The index i is the same index
// defined by At(). If the index is invalid, the call panics.
func (q *Deque[T]) Set(i int, elem T) {
if i < 0 || i >= q.count {
panic("deque: Set() called with index out of range")
}
// bitwise modulus
q.buf[(q.head+i)&(len(q.buf)-1)] = elem
}
// Clear removes all elements from the queue, but retains the current capacity.
// This is useful when repeatedly reusing the queue at high frequency to avoid
// GC during reuse. The queue will not be resized smaller as long as items are
// only added. Only when items are removed is the queue subject to getting
// resized smaller.
func (q *Deque[T]) Clear() {
// bitwise modulus
modBits := len(q.buf) - 1
var zero T
for h := q.head; h != q.tail; h = (h + 1) & modBits {
q.buf[h] = zero
}
q.head = 0
q.tail = 0
q.count = 0
}
// Rotate rotates the deque n steps front-to-back. If n is negative, rotates
// back-to-front. Having Deque provide Rotate() avoids resizing that could
// happen if implementing rotation using only Pop and Push methods. If q.Len()
// is one or less, or q is nil, then Rotate does nothing.
func (q *Deque[T]) Rotate(n int) {
if q.Len() <= 1 {
return
}
// Rotating a multiple of q.count is same as no rotation.
n %= q.count
if n == 0 {
return
}
modBits := len(q.buf) - 1
// If no empty space in buffer, only move head and tail indexes.
if q.head == q.tail {
// Calculate new head and tail using bitwise modulus.
q.head = (q.head + n) & modBits
q.tail = q.head
return
}
var zero T
if n < 0 {
// Rotate back to front.
for ; n < 0; n++ {
// Calculate new head and tail using bitwise modulus.
q.head = (q.head - 1) & modBits
q.tail = (q.tail - 1) & modBits
// Put tail value at head and remove value at tail.
q.buf[q.head] = q.buf[q.tail]
q.buf[q.tail] = zero
}
return
}
// Rotate front to back.
for ; n > 0; n-- {
// Put head value at tail and remove value at head.
q.buf[q.tail] = q.buf[q.head]
q.buf[q.head] = zero
// Calculate new head and tail using bitwise modulus.
q.head = (q.head + 1) & modBits
q.tail = (q.tail + 1) & modBits
}
}
// Index returns the index into the Deque of the first item satisfying f(item),
// or -1 if none do. If q is nil, then -1 is always returned. Search is linear
// starting with index 0.
func (q *Deque[T]) Index(f func(T) bool) int {
if q.Len() > 0 {
modBits := len(q.buf) - 1
for i := 0; i < q.count; i++ {
if f(q.buf[(q.head+i)&modBits]) {
return i
}
}
}
return -1
}
// RIndex is the same as Index, but searches from Back to Front. The index
// returned is from Front to Back, where index 0 is the index of the item
// returned by Front().
func (q *Deque[T]) RIndex(f func(T) bool) int {
if q.Len() > 0 {
modBits := len(q.buf) - 1
for i := q.count - 1; i >= 0; i-- {
if f(q.buf[(q.head+i)&modBits]) {
return i
}
}
}
return -1
}
// Insert is used to insert an element into the middle of the queue, before the
// element at the specified index. Insert(0,e) is the same as PushFront(e) and
// Insert(Len(),e) is the same as PushBack(e). Accepts only non-negative index
// values, and panics if index is out of range.
//
// Important: Deque is optimized for O(1) operations at the ends of the queue,
// not for operations in the middle. Complexity of this function is
// constant plus linear in the lesser of the distances between the index and
// either of the ends of the queue.
func (q *Deque[T]) Insert(at int, item T) {
if at < 0 || at > q.count {
panic("deque: Insert() called with index out of range")
}
if at*2 < q.count {
q.PushFront(item)
front := q.head
for i := 0; i < at; i++ {
next := q.next(front)
q.buf[front], q.buf[next] = q.buf[next], q.buf[front]
front = next
}
return
}
swaps := q.count - at
q.PushBack(item)
back := q.prev(q.tail)
for i := 0; i < swaps; i++ {
prev := q.prev(back)
q.buf[back], q.buf[prev] = q.buf[prev], q.buf[back]
back = prev
}
}
// Remove removes and returns an element from the middle of the queue, at the
// specified index. Remove(0) is the same as PopFront() and Remove(Len()-1) is
// the same as PopBack(). Accepts only non-negative index values, and panics if
// index is out of range.
//
// Important: Deque is optimized for O(1) operations at the ends of the queue,
// not for operations in the middle. Complexity of this function is
// constant plus linear in the lesser of the distances between the index and
// either of the ends of the queue.
func (q *Deque[T]) Remove(at int) T {
if at < 0 || at >= q.Len() {
panic("deque: Remove() called with index out of range")
}
rm := (q.head + at) & (len(q.buf) - 1)
if at*2 < q.count {
for i := 0; i < at; i++ {
prev := q.prev(rm)
q.buf[prev], q.buf[rm] = q.buf[rm], q.buf[prev]
rm = prev
}
return q.PopFront()
}
swaps := q.count - at - 1
for i := 0; i < swaps; i++ {
next := q.next(rm)
q.buf[rm], q.buf[next] = q.buf[next], q.buf[rm]
rm = next
}
return q.PopBack()
}
// SetMinCapacity sets a minimum capacity of 2^minCapacityExp. If the value of
// the minimum capacity is less than or equal to the minimum allowed, then
// capacity is set to the minimum allowed. This may be called at anytime to set
// a new minimum capacity.
//
// Setting a larger minimum capacity may be used to prevent resizing when the
// number of stored items changes frequently across a wide range.
func (q *Deque[T]) SetMinCapacity(minCapacityExp uint) {
if 1<<minCapacityExp > minCapacity {
q.minCap = 1 << minCapacityExp
} else {
q.minCap = minCapacity
}
}
// prev returns the previous buffer position wrapping around buffer.
func (q *Deque[T]) prev(i int) int {
return (i - 1) & (len(q.buf) - 1) // bitwise modulus
}
// next returns the next buffer position wrapping around buffer.
func (q *Deque[T]) next(i int) int {
return (i + 1) & (len(q.buf) - 1) // bitwise modulus
}
// growIfFull resizes up if the buffer is full.
func (q *Deque[T]) growIfFull() {
if q.count != len(q.buf) {
return
}
if len(q.buf) == 0 {
if q.minCap == 0 {
q.minCap = minCapacity
}
q.buf = make([]T, q.minCap)
return
}
q.resize()
}
// shrinkIfExcess resizes down if the buffer is 1/4 full.
func (q *Deque[T]) shrinkIfExcess() {
if len(q.buf) > q.minCap && (q.count<<2) == len(q.buf) {
q.resize()
}
}
// resize resizes the deque to fit exactly twice its current contents. This is
// used to grow the queue when it is full, and also to shrink it when it is
// only a quarter full.
func (q *Deque[T]) resize() {
newBuf := make([]T, q.count<<1)
if q.tail > q.head {
copy(newBuf, q.buf[q.head:q.tail])
} else {
n := copy(newBuf, q.buf[q.head:])
copy(newBuf[n:], q.buf[:q.tail])
}
q.head = 0
q.tail = q.count
q.buf = newBuf
}

util/queue/deque_test.go Normal file

@@ -0,0 +1,836 @@
package queue
import (
"fmt"
"testing"
"unicode"
)
func TestEmpty(t *testing.T) {
q := New[string]()
if q.Len() != 0 {
t.Error("q.Len() =", q.Len(), "expect 0")
}
if q.Cap() != 0 {
t.Error("expected q.Cap() == 0")
}
idx := q.Index(func(item string) bool {
return true
})
if idx != -1 {
t.Error("should return -1 index for nil deque")
}
idx = q.RIndex(func(item string) bool {
return true
})
if idx != -1 {
t.Error("should return -1 index for nil deque")
}
}
func TestNil(t *testing.T) {
var q *Deque[int]
if q.Len() != 0 {
t.Error("expected q.Len() == 0")
}
if q.Cap() != 0 {
t.Error("expected q.Cap() == 0")
}
q.Rotate(5)
idx := q.Index(func(item int) bool {
return true
})
if idx != -1 {
t.Error("should return -1 index for nil deque")
}
idx = q.RIndex(func(item int) bool {
return true
})
if idx != -1 {
t.Error("should return -1 index for nil deque")
}
}
func TestFrontBack(t *testing.T) {
var q Deque[string]
q.PushBack("foo")
q.PushBack("bar")
q.PushBack("baz")
if q.Front() != "foo" {
t.Error("wrong value at front of queue")
}
if q.Back() != "baz" {
t.Error("wrong value at back of queue")
}
if q.PopFront() != "foo" {
t.Error("wrong value removed from front of queue")
}
if q.Front() != "bar" {
t.Error("wrong value remaining at front of queue")
}
if q.Back() != "baz" {
t.Error("wrong value remaining at back of queue")
}
if q.PopBack() != "baz" {
t.Error("wrong value removed from back of queue")
}
if q.Front() != "bar" {
t.Error("wrong value remaining at front of queue")
}
if q.Back() != "bar" {
t.Error("wrong value remaining at back of queue")
}
}
func TestGrowShrinkBack(t *testing.T) {
var q Deque[int]
size := minCapacity * 2
for i := 0; i < size; i++ {
if q.Len() != i {
t.Error("q.Len() =", q.Len(), "expected", i)
}
q.PushBack(i)
}
bufLen := len(q.buf)
// Remove from back.
for i := size; i > 0; i-- {
if q.Len() != i {
t.Error("q.Len() =", q.Len(), "expected", i)
}
x := q.PopBack()
if x != i-1 {
t.Error("q.PopBack() =", x, "expected", i-1)
}
}
if q.Len() != 0 {
t.Error("q.Len() =", q.Len(), "expected 0")
}
if len(q.buf) == bufLen {
t.Error("queue buffer did not shrink")
}
}
func TestGrowShrinkFront(t *testing.T) {
var q Deque[int]
size := minCapacity * 2
for i := 0; i < size; i++ {
if q.Len() != i {
t.Error("q.Len() =", q.Len(), "expected", i)
}
q.PushBack(i)
}
bufLen := len(q.buf)
// Remove from Front
for i := 0; i < size; i++ {
if q.Len() != size-i {
t.Error("q.Len() =", q.Len(), "expected", minCapacity*2-i)
}
x := q.PopFront()
if x != i {
t.Error("q.PopFront() =", x, "expected", i)
}
}
if q.Len() != 0 {
t.Error("q.Len() =", q.Len(), "expected 0")
}
if len(q.buf) == bufLen {
t.Error("queue buffer did not shrink")
}
}
func TestSimple(t *testing.T) {
var q Deque[int]
for i := 0; i < minCapacity; i++ {
q.PushBack(i)
}
if q.Front() != 0 {
t.Fatalf("expected 0 at front, got %d", q.Front())
}
if q.Back() != minCapacity-1 {
t.Fatalf("expected %d at back, got %d", minCapacity-1, q.Back())
}
for i := 0; i < minCapacity; i++ {
if q.Front() != i {
t.Error("peek", i, "had value", q.Front())
}
x := q.PopFront()
if x != i {
t.Error("remove", i, "had value", x)
}
}
q.Clear()
for i := 0; i < minCapacity; i++ {
q.PushFront(i)
}
for i := minCapacity - 1; i >= 0; i-- {
x := q.PopFront()
if x != i {
t.Error("remove", i, "had value", x)
}
}
}
func TestBufferWrap(t *testing.T) {
var q Deque[int]
for i := 0; i < minCapacity; i++ {
q.PushBack(i)
}
for i := 0; i < 3; i++ {
q.PopFront()
q.PushBack(minCapacity + i)
}
for i := 0; i < minCapacity; i++ {
if q.Front() != i+3 {
t.Error("peek", i, "had value", q.Front())
}
q.PopFront()
}
}
func TestBufferWrapReverse(t *testing.T) {
var q Deque[int]
for i := 0; i < minCapacity; i++ {
q.PushFront(i)
}
for i := 0; i < 3; i++ {
q.PopBack()
q.PushFront(minCapacity + i)
}
for i := 0; i < minCapacity; i++ {
if q.Back() != i+3 {
t.Error("peek", i, "had value", q.Back())
}
q.PopBack()
}
}
func TestLen(t *testing.T) {
var q Deque[int]
if q.Len() != 0 {
t.Error("empty queue length not 0")
}
for i := 0; i < 1000; i++ {
q.PushBack(i)
if q.Len() != i+1 {
t.Error("adding: queue with", i, "elements has length", q.Len())
}
}
for i := 0; i < 1000; i++ {
q.PopFront()
if q.Len() != 1000-i-1 {
t.Error("removing: queue with", 1000-i-1, "elements has length", q.Len())
}
}
}
func TestBack(t *testing.T) {
var q Deque[int]
for i := 0; i < minCapacity+5; i++ {
q.PushBack(i)
if q.Back() != i {
t.Errorf("Back returned %d, expected %d", q.Back(), i)
}
}
}
func TestNew(t *testing.T) {
minCap := 64
q := New[string](0, minCap)
if q.Cap() != 0 {
t.Fatal("should not have allocated memory yet")
}
q.PushBack("foo")
q.PopFront()
if q.Len() != 0 {
t.Fatal("Len() should return 0")
}
if q.Cap() != minCap {
t.Fatalf("wrong capacity: expected %d, got %d", minCap, q.Cap())
}
curCap := 128
q = New[string](curCap, minCap)
if q.Cap() != curCap {
t.Fatalf("Cap() should return %d, got %d", curCap, q.Cap())
}
if q.Len() != 0 {
t.Fatalf("Len() should return 0")
}
q.PushBack("foo")
if q.Cap() != curCap {
t.Fatalf("Cap() should return %d, got %d", curCap, q.Cap())
}
}
func checkRotate(t *testing.T, size int) {
var q Deque[int]
for i := 0; i < size; i++ {
q.PushBack(i)
}
for i := 0; i < q.Len(); i++ {
x := i
for n := 0; n < q.Len(); n++ {
if q.At(n) != x {
t.Fatalf("a[%d] != %d after rotate and copy", n, x)
}
x++
if x == q.Len() {
x = 0
}
}
q.Rotate(1)
if q.Back() != i {
t.Fatal("wrong value during rotation")
}
}
for i := q.Len() - 1; i >= 0; i-- {
q.Rotate(-1)
if q.Front() != i {
t.Fatal("wrong value during reverse rotation")
}
}
}
func TestRotate(t *testing.T) {
checkRotate(t, 10)
checkRotate(t, minCapacity)
checkRotate(t, minCapacity+minCapacity/2)
var q Deque[int]
for i := 0; i < 10; i++ {
q.PushBack(i)
}
q.Rotate(11)
if q.Front() != 1 {
t.Error("rotating 11 places should have been same as rotating 1")
}
q.Rotate(-21)
if q.Front() != 0 {
t.Error("rotating -21 places should have been same as rotating -1")
}
q.Rotate(q.Len())
if q.Front() != 0 {
t.Error("should not have rotated")
}
q.Clear()
q.PushBack(0)
q.Rotate(13)
if q.Front() != 0 {
t.Error("should not have rotated")
}
}
func TestAt(t *testing.T) {
var q Deque[int]
for i := 0; i < 1000; i++ {
q.PushBack(i)
}
// Front to back.
for j := 0; j < q.Len(); j++ {
if q.At(j) != j {
t.Errorf("index %d doesn't contain %d", j, j)
}
}
// Back to front
for j := 1; j <= q.Len(); j++ {
if q.At(q.Len()-j) != q.Len()-j {
t.Errorf("index %d doesn't contain %d", q.Len()-j, q.Len()-j)
}
}
}
func TestSet(t *testing.T) {
var q Deque[int]
for i := 0; i < 1000; i++ {
q.PushBack(i)
q.Set(i, i+50)
}
// Front to back.
for j := 0; j < q.Len(); j++ {
if q.At(j) != j+50 {
t.Errorf("index %d doesn't contain %d", j, j+50)
}
}
}
func TestClear(t *testing.T) {
var q Deque[int]
for i := 0; i < 100; i++ {
q.PushBack(i)
}
if q.Len() != 100 {
t.Error("push: queue with 100 elements has length", q.Len())
}
cap := len(q.buf)
q.Clear()
if q.Len() != 0 {
t.Error("empty queue length not 0 after clear")
}
if len(q.buf) != cap {
t.Error("queue capacity changed after clear")
}
// Check that there are no remaining references after Clear()
for i := 0; i < len(q.buf); i++ {
if q.buf[i] != 0 {
t.Error("queue has non-nil deleted elements after Clear()")
break
}
}
}
func TestIndex(t *testing.T) {
var q Deque[rune]
for _, x := range "Hello, 世界" {
q.PushBack(x)
}
idx := q.Index(func(item rune) bool {
c := item
return unicode.Is(unicode.Han, c)
})
if idx != 7 {
t.Fatal("Expected index 7, got", idx)
}
idx = q.Index(func(item rune) bool {
c := item
return c == 'H'
})
if idx != 0 {
t.Fatal("Expected index 0, got", idx)
}
idx = q.Index(func(item rune) bool {
return false
})
if idx != -1 {
t.Fatal("Expected index -1, got", idx)
}
}
func TestRIndex(t *testing.T) {
var q Deque[rune]
for _, x := range "Hello, 世界" {
q.PushBack(x)
}
idx := q.RIndex(func(item rune) bool {
c := item
return unicode.Is(unicode.Han, c)
})
if idx != 8 {
t.Fatal("Expected index 8, got", idx)
}
idx = q.RIndex(func(item rune) bool {
c := item
return c == 'H'
})
if idx != 0 {
t.Fatal("Expected index 0, got", idx)
}
idx = q.RIndex(func(item rune) bool {
return false
})
if idx != -1 {
t.Fatal("Expected index -1, got", idx)
}
}
func TestInsert(t *testing.T) {
q := new(Deque[rune])
for _, x := range "ABCDEFG" {
q.PushBack(x)
}
q.Insert(4, 'x') // ABCDxEFG
if q.At(4) != 'x' {
t.Error("expected x at position 4, got", q.At(4))
}
q.Insert(2, 'y') // AByCDxEFG
if q.At(2) != 'y' {
t.Error("expected y at position 2")
}
if q.At(5) != 'x' {
t.Error("expected x at position 5")
}
q.Insert(0, 'b') // bAByCDxEFG
if q.Front() != 'b' {
t.Error("expected b inserted at front, got", q.Front())
}
q.Insert(q.Len(), 'e') // bAByCDxEFGe
for i, x := range "bAByCDxEFGe" {
if q.PopFront() != x {
t.Error("expected", x, "at position", i)
}
}
qs := New[string](16)
for i := 0; i < qs.Cap(); i++ {
qs.PushBack(fmt.Sprint(i))
}
// deque: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
// buffer: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
for i := 0; i < qs.Cap()/2; i++ {
qs.PopFront()
}
// deque: 8 9 10 11 12 13 14 15
// buffer: [_,_,_,_,_,_,_,_,8,9,10,11,12,13,14,15]
for i := 0; i < qs.Cap()/4; i++ {
qs.PushBack(fmt.Sprint(qs.Cap() + i))
}
// deque: 8 9 10 11 12 13 14 15 16 17 18 19
// buffer: [16,17,18,19,_,_,_,_,8,9,10,11,12,13,14,15]
at := qs.Len() - 2
qs.Insert(at, "x")
// deque: 8 9 10 11 12 13 14 15 16 17 x 18 19
// buffer: [16,17,x,18,19,_,_,_,8,9,10,11,12,13,14,15]
if qs.At(at) != "x" {
t.Error("expected x at position", at)
}
qs.Insert(2, "y")
// deque: 8 9 y 10 11 12 13 14 15 16 17 x 18 19
// buffer: [16,17,x,18,19,_,_,8,9,y,10,11,12,13,14,15]
if qs.At(2) != "y" {
t.Error("expected y at position 2")
}
if qs.At(at+1) != "x" {
t.Error("expected x at position", at+1)
}
qs.Insert(0, "b")
// deque: b 8 9 y 10 11 12 13 14 15 16 17 x 18 19
// buffer: [16,17,x,18,19,_,b,8,9,y,10,11,12,13,14,15]
if qs.Front() != "b" {
t.Error("expected b inserted at front, got", qs.Front())
}
qs.Insert(qs.Len(), "e")
if qs.Cap() != qs.Len() {
t.Fatal("Expected full buffer")
}
// deque: b 8 9 y 10 11 12 13 14 15 16 17 x 18 19 e
// buffer: [16,17,x,18,19,e,b,8,9,y,10,11,12,13,14,15]
for i, x := range []string{"16", "17", "x", "18", "19", "e", "b", "8", "9", "y", "10", "11", "12", "13", "14", "15"} {
if qs.buf[i] != x {
t.Error("expected", x, "at buffer position", i)
}
}
for i, x := range []string{"b", "8", "9", "y", "10", "11", "12", "13", "14", "15", "16", "17", "x", "18", "19", "e"} {
if qs.Front() != x {
t.Error("expected", x, "at position", i, "got", qs.Front())
}
qs.PopFront()
}
}
func TestRemove(t *testing.T) {
q := new(Deque[rune])
for _, x := range "ABCDEFG" {
q.PushBack(x)
}
if q.Remove(4) != 'E' { // ABCDFG
t.Error("expected E from position 4")
}
if q.Remove(2) != 'C' { // ABDFG
t.Error("expected C at position 2")
}
if q.Back() != 'G' {
t.Error("expected G at back")
}
if q.Remove(0) != 'A' { // BDFG
t.Error("expected to remove A from front")
}
if q.Front() != 'B' {
t.Error("expected B at front")
}
if q.Remove(q.Len()-1) != 'G' { // BDF
t.Error("expected to remove G from back")
}
if q.Back() != 'F' {
t.Error("expected F at back")
}
if q.Len() != 3 {
t.Error("wrong length")
}
}
func TestFrontBackOutOfRangePanics(t *testing.T) {
const msg = "should panic when peeking empty queue"
var q Deque[int]
assertPanics(t, msg, func() {
q.Front()
})
assertPanics(t, msg, func() {
q.Back()
})
q.PushBack(1)
q.PopFront()
assertPanics(t, msg, func() {
q.Front()
})
assertPanics(t, msg, func() {
q.Back()
})
}
func TestPopFrontOutOfRangePanics(t *testing.T) {
var q Deque[int]
assertPanics(t, "should panic when removing empty queue", func() {
q.PopFront()
})
q.PushBack(1)
q.PopFront()
assertPanics(t, "should panic when removing emptied queue", func() {
q.PopFront()
})
}
func TestPopBackOutOfRangePanics(t *testing.T) {
var q Deque[int]
assertPanics(t, "should panic when removing empty queue", func() {
q.PopBack()
})
q.PushBack(1)
q.PopBack()
assertPanics(t, "should panic when removing emptied queue", func() {
q.PopBack()
})
}
func TestAtOutOfRangePanics(t *testing.T) {
var q Deque[int]
q.PushBack(1)
q.PushBack(2)
q.PushBack(3)
assertPanics(t, "should panic when negative index", func() {
q.At(-4)
})
assertPanics(t, "should panic when index greater than length", func() {
q.At(4)
})
}
func TestSetOutOfRangePanics(t *testing.T) {
var q Deque[int]
q.PushBack(1)
q.PushBack(2)
q.PushBack(3)
assertPanics(t, "should panic when negative index", func() {
q.Set(-4, 1)
})
assertPanics(t, "should panic when index greater than length", func() {
q.Set(4, 1)
})
}
func TestInsertOutOfRangePanics(t *testing.T) {
q := new(Deque[string])
assertPanics(t, "should panic when inserting out of range", func() {
q.Insert(1, "X")
})
q.PushBack("A")
assertPanics(t, "should panic when inserting at negative index", func() {
q.Insert(-1, "Y")
})
assertPanics(t, "should panic when inserting out of range", func() {
q.Insert(2, "B")
})
}
func TestRemoveOutOfRangePanics(t *testing.T) {
q := new(Deque[string])
assertPanics(t, "should panic when removing from empty queue", func() {
q.Remove(0)
})
q.PushBack("A")
assertPanics(t, "should panic when removing at negative index", func() {
q.Remove(-1)
})
assertPanics(t, "should panic when removing out of range", func() {
q.Remove(1)
})
}
func TestSetMinCapacity(t *testing.T) {
var q Deque[string]
exp := uint(8)
q.SetMinCapacity(exp)
q.PushBack("A")
if q.minCap != 1<<exp {
t.Fatal("wrong minimum capacity")
}
if len(q.buf) != 1<<exp {
t.Fatal("wrong buffer size")
}
q.PopBack()
if q.minCap != 1<<exp {
t.Fatal("wrong minimum capacity")
}
if len(q.buf) != 1<<exp {
t.Fatal("wrong buffer size")
}
q.SetMinCapacity(0)
if q.minCap != minCapacity {
t.Fatal("wrong minimum capacity")
}
}
func assertPanics(t *testing.T, name string, f func()) {
defer func() {
if r := recover(); r == nil {
t.Errorf("%s: didn't panic as expected", name)
}
}()
f()
}
func BenchmarkPushFront(b *testing.B) {
var q Deque[int]
for i := 0; i < b.N; i++ {
q.PushFront(i)
}
}
func BenchmarkPushBack(b *testing.B) {
var q Deque[int]
for i := 0; i < b.N; i++ {
q.PushBack(i)
}
}
func BenchmarkSerial(b *testing.B) {
var q Deque[int]
for i := 0; i < b.N; i++ {
q.PushBack(i)
}
for i := 0; i < b.N; i++ {
q.PopFront()
}
}
func BenchmarkSerialReverse(b *testing.B) {
var q Deque[int]
for i := 0; i < b.N; i++ {
q.PushFront(i)
}
for i := 0; i < b.N; i++ {
q.PopBack()
}
}
func BenchmarkRotate(b *testing.B) {
q := new(Deque[int])
for i := 0; i < b.N; i++ {
q.PushBack(i)
}
b.ResetTimer()
// N complete rotations on length N - 1.
for i := 0; i < b.N; i++ {
q.Rotate(b.N - 1)
}
}
func BenchmarkInsert(b *testing.B) {
q := new(Deque[int])
for i := 0; i < b.N; i++ {
q.PushBack(i)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
q.Insert(q.Len()/2, -i)
}
}
func BenchmarkRemove(b *testing.B) {
q := new(Deque[int])
for i := 0; i < b.N; i++ {
q.PushBack(i)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
q.Remove(q.Len() / 2)
}
}
func BenchmarkYoyo(b *testing.B) {
var q Deque[int]
for i := 0; i < b.N; i++ {
for j := 0; j < 65536; j++ {
q.PushBack(j)
}
for j := 0; j < 65536; j++ {
q.PopFront()
}
}
}
func BenchmarkYoyoFixed(b *testing.B) {
var q Deque[int]
q.SetMinCapacity(16)
for i := 0; i < b.N; i++ {
for j := 0; j < 65536; j++ {
q.PushBack(j)
}
for j := 0; j < 65536; j++ {
q.PopFront()
}
}
}


@@ -69,6 +69,13 @@ func (pq *PriorityQueue) Pop() *Item {
	return heap.Pop(&pq.priorityQueueSlice).(*Item)
}

func (pq *PriorityQueue) GetHighest() *Item {
	if len(pq.priorityQueueSlice) > 0 {
		return pq.priorityQueueSlice[0]
	}
	return nil
}

func (pq *PriorityQueue) Len() int {
	return len(pq.priorityQueueSlice)
}