The server cannot support high concurrency, and the RPC timeout rate is high #16
Comments
Hello:
We use a multi-level thread pool: the IO thread reads a complete request off the connection and hands it to a downstream worker thread pool. That is how a single connection supports multiple requests being processed at once; the sketch below illustrates the idea. If you have further questions, feel free to keep the discussion going.
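A minimal sketch of that dispatch pattern (illustrative code only, not octo-rpc source; the WorkerPool class, Submit, and handleRequest names are hypothetical): one IO thread reads whole framed requests and enqueues them, and a pool of worker threads drains the queue, so several requests from the same connection run in parallel.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Worker pool: the IO thread enqueues one task per complete request,
// so the business logic never blocks the IO thread.
class WorkerPool {
 public:
  explicit WorkerPool(size_t n) {
    for (size_t i = 0; i < n; ++i) workers_.emplace_back([this] { Run(); });
  }
  ~WorkerPool() {
    { std::lock_guard<std::mutex> lk(mu_); stop_ = true; }
    cv_.notify_all();
    for (auto &t : workers_) t.join();
  }
  void Submit(std::function<void()> task) {
    { std::lock_guard<std::mutex> lk(mu_); tasks_.push(std::move(task)); }
    cv_.notify_one();
  }
 private:
  void Run() {
    for (;;) {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> lk(mu_);
        cv_.wait(lk, [this] { return stop_ || !tasks_.empty(); });
        if (stop_ && tasks_.empty()) return;
        task = std::move(tasks_.front());
        tasks_.pop();
      }
      task();  // business logic runs here, off the IO thread
    }
  }
  std::mutex mu_;
  std::condition_variable cv_;
  std::queue<std::function<void()>> tasks_;
  std::vector<std::thread> workers_;
  bool stop_ = false;
};

// In the IO thread, once a whole framed request has been read:
//   pool.Submit([req] { handleRequest(req); /* then write the response */ });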
------------------ Original ------------------
From: "Gliushuai", Saturday, Apr 18, 2020 10:26 AM
Subject: [Meituan-Dianping/octo-rpc] The server cannot support high concurrency, and the RPC timeout rate is high (#16)
The server code is written in C++. With TNonblockingServer the business logic is still processed on a single thread. How does your C++ server code manage to process requests on a single connection in parallel? I ask because the client source code uses a single channel.
The server source is as follows:
std::shared_ptr<GeekRecallApiThriftHandler> handler(new GeekRecallApiThriftHandler(conf));
std::shared_ptr<TProcessor> processor(new GeekRecallApiThriftProcessor(handler));
// std::shared_ptr<TServerTransport> serverTransport(new TServerSocket(port));
std::shared_ptr<TNonblockingServerSocket> serverTransport(new TNonblockingServerSocket(port));
std::shared_ptr<TTransportFactory> transportFactory(new TFramedTransportFactory());
std::shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());
std::shared_ptr<ThreadManager> threadManager = ThreadManager::newSimpleThreadManager(10);
std::shared_ptr<PosixThreadFactory> threadFactory(new PosixThreadFactory());
threadManager->threadFactory(threadFactory);
threadManager->start();
boss_server = std::make_shared<TNonblockingServer>(processor, protocolFactory, serverTransport, threadManager);
Thanks for the reply. I am a Java developer and have not read through the Whale framework. Right now the server uses Thrift's own TNonblockingServer and the client calls it with Dorado, and we ran into the problem that the server does not support concurrent processing of multiple requests on a single connection. Would writing the server side with the Whale framework solve this problem?
Yes, it is supported.
@Gliushuai Thrift's own libevent-based TNonblockingServer only processes requests on a single connection sequentially, because of thread-safety problems: the resources referenced in the processing step below are not thread safe.

// Invoke the processor
processor_->process(inputProtocol_, outputProtocol_, connectionContext_);

octo-rpc solves this with thread-local storage: each thread keeps its own copies of these key data structures and operates only on those. See CthriftSvr::InitStaticThreadLocalMember for the details:

__thread boost::shared_ptr<TMemoryBuffer> *
CthriftSvr::sp_p_input_tmemorybuffer_;
__thread boost::shared_ptr<TMemoryBuffer> *
CthriftSvr::sp_p_output_tmemorybuffer_;
__thread boost::shared_ptr<TProtocol> *CthriftSvr::sp_p_input_tprotocol_;
__thread boost::shared_ptr<TProtocol> *CthriftSvr::sp_p_output_tprotocol_;
__thread boost::shared_ptr<TProcessor> *CthriftSvr::sp_p_processor_;

void CthriftSvr::InitStaticThreadLocalMember(void) {
...
}

// #L444
(*sp_p_processor_)->process(*sp_p_input_tprotocol_,
*sp_p_output_tprotocol_, 0);
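For reference, a condensed standalone sketch of the same pattern (illustrative code, not the actual octo-rpc source): it uses C++11 thread_local instead of GCC's __thread, assumes a recent Apache Thrift where these types are std::shared_ptr based, and the tl_* names are hypothetical. Each worker thread lazily builds its own buffer/protocol set, so process() can run on many threads concurrently without locks.

#include <memory>
#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/transport/TBufferTransports.h>

using apache::thrift::protocol::TBinaryProtocol;
using apache::thrift::protocol::TProtocol;
using apache::thrift::transport::TMemoryBuffer;

// One buffer/protocol set per thread, created on first use in that thread.
thread_local std::shared_ptr<TMemoryBuffer> tl_input_buffer;
thread_local std::shared_ptr<TMemoryBuffer> tl_output_buffer;
thread_local std::shared_ptr<TProtocol> tl_input_protocol;
thread_local std::shared_ptr<TProtocol> tl_output_protocol;

void InitThreadLocal() {
  if (tl_input_buffer) return;  // already initialized in this thread
  tl_input_buffer = std::make_shared<TMemoryBuffer>();
  tl_output_buffer = std::make_shared<TMemoryBuffer>();
  tl_input_protocol = std::make_shared<TBinaryProtocol>(tl_input_buffer);
  tl_output_protocol = std::make_shared<TBinaryProtocol>(tl_output_buffer);
}

// Each worker thread calls InitThreadLocal() once; with a per-thread
// TProcessor copy it can then safely run:
//   processor->process(tl_input_protocol, tl_output_protocol, nullptr);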
That is exactly right. If the thread-contention problem is not solved, requests can only be processed sequentially, which is what native Thrift does and why its concurrency cannot scale; it gets even worse when the backend computation is heavy, because each request holds a thread for a long time. One of the problems high concurrency has to solve is precisely the contention described above: octo-rpc uses thread-local variables to eliminate the contention while staying thread safe, whereas native Thrift must either take a lock or process sequentially to guarantee thread safety. The contrast is sketched below.
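To make that contrast concrete, a sketch of the lock-based alternative (hypothetical code, not from either project): guarding one shared protocol/processor set with a mutex keeps it thread safe, but serializes the connection again, which is exactly the behavior described above.

#include <mutex>

std::mutex process_mu;  // hypothetical guard for the shared, non-thread-safe set

void HandleRequestWithSharedResources() {
  // Only one request can be in here at a time, so a heavy backend
  // computation holds the lock (and stalls the connection) throughout.
  std::lock_guard<std::mutex> lk(process_mu);
  // processor_->process(inputProtocol_, outputProtocol_, connectionContext_);
}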