In Chapter 1 - Basics we took ZeroMQ for a drive, with some basic examples of the main ZeroMQ patterns: request-reply, pub-sub, and pipeline. In this chapter, we’re going to get our hands dirty and start to learn how to use these tools in real programs.
We’ll cover:
How to create and work with ZeroMQ sockets.
How to send and receive messages on sockets.
How to build your apps around ZeroMQ’s asynchronous I/O model.
How to handle multiple sockets in one thread.
How to handle fatal and nonfatal errors properly.
How to handle interrupt signals like Ctrl-C.
How to shut down a ZeroMQ application cleanly.
How to check a ZeroMQ application for memory leaks.
How to send and receive multipart messages.
How to forward messages across networks.
How to build a simple message queuing broker.
How to write multithreaded applications with ZeroMQ.
How to use ZeroMQ to signal between threads.
How to use ZeroMQ to coordinate a network of nodes.
How to create and use message envelopes for pub-sub.
Using the HWM (high-water mark) to protect against memory overflows.
To be perfectly honest, ZeroMQ does a kind of switch-and-bait on you, for which we don’t apologize. It’s for your own good and it hurts us more than it hurts you. ZeroMQ presents a familiar socket-based API, which requires great effort for us to hide a bunch of message-processing engines. However, the result will slowly fix your world view about how to design and write distributed software.
Sockets are the de facto standard API for network programming, as well as being useful for stopping your eyes from falling onto your cheeks. One thing that makes ZeroMQ especially tasty to developers is that it uses sockets and messages instead of some other arbitrary set of concepts. Kudos to Martin Sustrik for pulling this off. It turns “Message Oriented Middleware”, a phrase guaranteed to send the whole room off to Catatonia, into “Extra Spicy Sockets!”, which leaves us with a strange craving for pizza and a desire to know more.
Like a favorite dish, ZeroMQ sockets are easy to digest. Sockets have a life in four parts, just like BSD sockets:
Creating and destroying sockets, which go together to form a karmic circle of socket life (see zmq_socket(), zmq_close()).
Configuring sockets by setting options on them and checking them if necessary (see zmq_setsockopt(), zmq_getsockopt()).
Plugging sockets into the network topology by creating ZeroMQ connections to and from them (see zmq_bind(), zmq_connect()).
Using the sockets to carry data by writing and receiving messages on them (see zmq_msg_send(), zmq_msg_recv()).
Note that sockets are always void pointers, and messages (which we’ll come to very soon) are structures. So in C you pass sockets as such, but you pass addresses of messages in all functions that work with messages, like zmq_msg_send() and zmq_msg_recv(). As a mnemonic, realize that “in ZeroMQ, all your sockets belong to us”, but messages are things you actually own in your code.
Creating, destroying, and configuring sockets works as you’d expect for any object. But remember that ZeroMQ is an asynchronous, elastic fabric. This has some impact on how we plug sockets into the network topology and how we use the sockets after that.
To create a connection between two nodes, you use zmq_bind() in one node and zmq_connect() in the other. As a general rule of thumb, the node that does zmq_bind() is a “server”, sitting on a well-known network address, and the node which does zmq_connect() is a “client”, with unknown or arbitrary network addresses. Thus we say that we “bind a socket to an endpoint” and “connect a socket to an endpoint”, the endpoint being that well-known network address.
ZeroMQ connections are somewhat different from classic TCP connections. The main notable differences are:
One socket may have many outgoing and many incoming connections.
There is no zmq_accept() method. When a socket is bound to an endpoint it automatically starts accepting connections.
The network connection itself happens in the background, and ZeroMQ will automatically reconnect if the network connection is broken (e.g., if the peer disappears and then comes back).
Your application code cannot work with these connections directly; they are encapsulated under the socket.
Many architectures follow some kind of client/server model, where the server is the component that is most static, and the clients are the components that are most dynamic, i.e., they come and go the most. There are sometimes issues of addressing: servers will be visible to clients, but not necessarily vice versa. So mostly it’s obvious which node should be doing zmq_bind() (the server) and which should be doing zmq_connect() (the client). It also depends on the kind of sockets you’re using, with some exceptions for unusual network architectures. We’ll look at socket types later.
Now, imagine we start the client before we start the server. In traditional networking, we get a big red Fail flag. But ZeroMQ lets us start and stop pieces arbitrarily. As soon as the client node does zmq_connect(), the connection exists and that node can start to write messages to the socket. At some stage (hopefully before messages queue up so much that they start to get discarded, or the client blocks), the server comes alive, does a zmq_bind(), and ZeroMQ starts to deliver messages.
A server node can bind to many endpoints (that is, a combination of protocol and address) and it can do this using a single socket. This means it will accept connections across different transports, as the sketch below shows.
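For example, here is a minimal sketch of one socket bound to several endpoints at once. The socket type, the port numbers, and the "somename" label are arbitrary choices for illustration, and error handling is reduced to asserts:
//  A sketch: one REP socket accepting connections over three transports.
//  The specific ports and the "somename" endpoint are arbitrary examples.
#include <zmq.h>
#include <assert.h>

int main (void)
{
    void *context = zmq_ctx_new ();
    void *socket = zmq_socket (context, ZMQ_REP);

    assert (zmq_bind (socket, "tcp://*:5555") == 0);        //  TCP on one port
    assert (zmq_bind (socket, "tcp://*:9999") == 0);        //  TCP on a second port
    assert (zmq_bind (socket, "inproc://somename") == 0);   //  in-process endpoint

    //  Clients can now zmq_connect() to any of these endpoints
    zmq_close (socket);
    zmq_ctx_destroy (context);
    return 0;
}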
With most transports, you cannot bind to the same endpoint twice, unlike for example in UDP. The ipc transport does, however, let one process bind to an endpoint already used by a first process. It’s meant to allow a process to recover after a crash.
Although ZeroMQ tries to be neutral about which side binds and which side connects, there are differences. We’ll see these in more detail later. The upshot is that you should usually think in terms of “servers” as static parts of your topology that bind to more or less fixed endpoints, and “clients” as dynamic parts that come and go and connect to these endpoints. Then, design your application around this model. The chances that it will “just work” are much better like that.
Sockets have types. The socket type defines the semantics of the socket, its policies for routing messages inwards and outwards, queuing, etc. You can connect certain types of socket together, e.g., a publisher socket and a subscriber socket. Sockets work together in “messaging patterns”. We’ll look at this in more detail later.
It’s the ability to connect sockets in these different ways that gives ZeroMQ its basic power as a message queuing system. There are layers on top of this, such as proxies, which we’ll get to later. But essentially, with ZeroMQ you define your network architecture by plugging pieces together like a child’s construction toy.
To send and receive messages you use the zmq_msg_send() and zmq_msg_recv() methods. The names are conventional, but ZeroMQ’s I/O model is different enough from the classic TCP model that you will need time to get your head around it.
Figure 9 - TCP sockets are 1 to 1
Let’s look at the main differences between TCP sockets and ZeroMQ sockets when it comes to working with data:
ZeroMQ sockets carry messages, like UDP, rather than a stream of bytes as TCP does. A ZeroMQ message is length-specified binary data. We’ll come to messages shortly; their design is optimized for performance and so a little tricky.
ZeroMQ sockets do their I/O in a background thread. This means that messages arrive in local input queues and are sent from local output queues, no matter what your application is busy doing.
ZeroMQ sockets have one-to-N routing behavior built-in, according to the socket type.
The zmq_send() method does not actually send the message to the socket connection(s). It queues the message so that the I/O thread can send it asynchronously. It does not block except in some exception cases. So the message is not necessarily sent when zmq_send() returns to your application.
ZeroMQ provides a set of unicast transports (inproc, ipc, and tcp) and multicast transports (epgm, pgm). Multicast is an advanced technique that we’ll come to later. Don’t even start using it unless you know that your fan-out ratios will make 1-to-N unicast impossible.
For most common cases, use tcp, which is a disconnected TCP transport. It is elastic, portable, and fast enough for most cases. We call this disconnected because ZeroMQ’s tcp transport doesn’t require that the endpoint exists before you connect to it. Clients and servers can connect and bind at any time, can go and come back, and it remains transparent to applications.
The inter-process ipc transport is disconnected, like tcp. It has one limitation: it does not yet work on Windows. By convention we use endpoint names with an “.ipc” extension to avoid potential conflict with other file names. On UNIX systems, if you use ipc endpoints you need to create these with appropriate permissions otherwise they may not be shareable between processes running under different user IDs. You must also make sure all processes can access the files, e.g., by running in the same working directory.
The inter-thread transport, inproc, is a connected signaling transport. It is much faster than tcp or ipc. This transport has a specific limitation compared to tcp and ipc: the server must issue a bind before any client issues a connect. This was fixed in ZeroMQ v4.0 and later versions.
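Here is a minimal sketch of that bind-before-connect ordering, using a pair of PAIR sockets over inproc within a single process. The endpoint name is an arbitrary example and error handling is reduced to asserts:
#include <zmq.h>
#include <assert.h>

int main (void)
{
    void *context = zmq_ctx_new ();

    //  The bind side must exist first when using inproc (before v4.0)
    void *server = zmq_socket (context, ZMQ_PAIR);
    assert (zmq_bind (server, "inproc://example") == 0);

    //  Only then may the connect side attach to the same name
    void *client = zmq_socket (context, ZMQ_PAIR);
    assert (zmq_connect (client, "inproc://example") == 0);

    zmq_send (client, "ping", 4, 0);
    char buffer [5] = {0};
    zmq_recv (server, buffer, 4, 0);

    zmq_close (client);
    zmq_close (server);
    zmq_ctx_destroy (context);
    return 0;
}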
A common question that newcomers to ZeroMQ ask (it’s one I’ve asked myself) is, “how do I write an XYZ server in ZeroMQ?” For example, “how do I write an HTTP server in ZeroMQ?” The implication is that if we use normal sockets to carry HTTP requests and responses, we should be able to use ZeroMQ sockets to do the same, only much faster and better.
The answer used to be “this is not how it works”. ZeroMQ is not a neutral carrier: it imposes a framing on the transport protocols it uses. This framing is not compatible with existing protocols, which tend to use their own framing. For example, compare an HTTP request and a ZeroMQ request, both over TCP/IP.
Figure 10 - HTTP on the Wire
The HTTP request uses CR-LF as its simplest framing delimiter, whereas ZeroMQ uses a length-specified frame. So you could write an HTTP-like protocol using ZeroMQ, using for example the request-reply socket pattern. But it would not be HTTP.
Figure 11 - ZeroMQ on the Wire
Since v3.3, however, ZeroMQ has a socket option called ZMQ_ROUTER_RAW that lets you read and write data without the ZeroMQ framing. You could use this to read and write proper HTTP requests and responses. Hardeep Singh contributed this change so that he could connect to Telnet servers from his ZeroMQ application. At time of writing this is still somewhat experimental, but it shows how ZeroMQ keeps evolving to solve new problems. Maybe the next patch will be yours.
We said that ZeroMQ does I/O in a background thread. One I/O thread (for all sockets) is sufficient for all but the most extreme applications. When you create a new context, it starts with one I/O thread. The general rule of thumb is to allow one I/O thread per gigabyte of data in or out per second. To raise the number of I/O threads, use the zmq_ctx_set() call before creating any sockets:
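For example, a minimal sketch of raising the thread count; the figure of four threads here is just an assumed workload, not a recommendation:
#include <zmq.h>
#include <assert.h>

int main (void)
{
    void *context = zmq_ctx_new ();
    //  Assumption for illustration: this process moves several GB/sec,
    //  so we give the context four I/O threads instead of the default one.
    zmq_ctx_set (context, ZMQ_IO_THREADS, 4);
    assert (zmq_ctx_get (context, ZMQ_IO_THREADS) == 4);
    //  ... create sockets only after setting the option ...
    zmq_ctx_destroy (context);
    return 0;
}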
We’ve seen that one socket can handle dozens, even thousands of connections at once. This has a fundamental impact on how you write applications. A traditional networked application has one process or one thread per remote connection, and that process or thread handles one socket. ZeroMQ lets you collapse this entire structure into a single process and then break it up as necessary for scaling.
If you are using ZeroMQ for inter-thread communications only (i.e., a multithreaded application that does no external socket I/O) you can set the I/O threads to zero. It’s not a significant optimization though, more of a curiosity.
Underneath the brown paper wrapping of ZeroMQ’s socket API lies the world of messaging patterns. If you have a background in enterprise messaging, or know UDP well, these will be vaguely familiar. But to most ZeroMQ newcomers, they are a surprise. We’re so used to the TCP paradigm where a socket maps one-to-one to another node.
Let’s recap briefly what ZeroMQ does for you. It delivers blobs of data (messages) to nodes, quickly and efficiently. You can map nodes to threads, processes, or physical machines. ZeroMQ gives your applications a single socket API to work with, no matter what the actual transport (like in-process, inter-process, TCP, or multicast). It automatically reconnects to peers as they come and go. It queues messages at both sender and receiver, as needed. It limits these queues to guard processes against running out of memory. It handles socket errors. It does all I/O in background threads. It uses lock-free techniques for talking between nodes, so there are never locks, waits, semaphores, or deadlocks.
But cutting through that, it routes and queues messages according to precise recipes called patterns. It is these patterns that provide ZeroMQ’s intelligence. They encapsulate our hard-earned experience of the best ways to distribute data and work. ZeroMQ’s patterns are hard-coded but future versions may allow user-definable patterns.
ZeroMQ patterns are implemented by pairs of sockets with matching types. In other words, to understand ZeroMQ patterns you need to understand socket types and how they work together. Mostly, this just takes study; there is little that is obvious at this level.
The built-in core ZeroMQ patterns are:
Request-reply, which connects a set of clients to a set of services. This is a remote procedure call and task distribution pattern.
Pub-sub, which connects a set of publishers to a set of subscribers. This is a data distribution pattern.
Pipeline, which connects nodes in a fan-out/fan-in pattern that can have multiple steps and loops. This is a parallel task distribution and collection pattern.
Exclusive pair, which connects two sockets exclusively. This is a pattern for connecting two threads in a process, not to be confused with “normal” pairs of sockets.
We looked at the first three of these in Chapter 1 - Basics, and we’ll see the exclusive pair pattern later in this chapter. The zmq_socket() man page is fairly clear about the patterns – it’s worth reading several times until it starts to make sense. These are the socket combinations that are valid for a connect-bind pair (either side can bind):
PUB and SUB
REQ and REP
REQ and ROUTER (take care, REQ inserts an extra null frame)
DEALER and REP (take care, REP assumes a null frame)
DEALER and ROUTER
DEALER and DEALER
ROUTER and ROUTER
PUSH and PULL
PAIR and PAIR
You’ll also see references to XPUB and XSUB sockets, which we’ll come to later (they’re like raw versions of PUB and SUB). Any other combination will produce undocumented and unreliable results, and future versions of ZeroMQ will probably return errors if you try them. You can and will, of course, bridge other socket types via code, i.e., read from one socket type and write to another.
These four core patterns are cooked into ZeroMQ. They are part of the ZeroMQ API, implemented in the core C++ library, and are guaranteed to be available in all fine retail stores.
On top of those, we add high-level messaging patterns. We build these high-level patterns on top of ZeroMQ and implement them in whatever language we’re using for our application. They are not part of the core library, do not come with the ZeroMQ package, and exist in their own space as part of the ZeroMQ community. For example the Majordomo pattern, which we explore in Chapter 4 - Reliable Request-Reply Patterns, sits in the GitHub Majordomo project in the ZeroMQ organization.
One of the things we aim to provide you with in this book is a set of such high-level patterns, both small (how to handle messages sanely) and large (how to make a reliable pub-sub architecture).
The libzmq core library has in fact two APIs to send and receive messages. The zmq_send() and zmq_recv() methods that we’ve already seen and used are simple one-liners. We will use these often, but zmq_recv() is bad at dealing with arbitrary message sizes: it truncates messages to whatever buffer size you provide. So there’s a second API that works with zmq_msg_t structures; it is richer but more difficult to use. The main calls are zmq_msg_init(), zmq_msg_init_size(), and zmq_msg_init_data() to initialize a message; zmq_msg_send() and zmq_msg_recv() to send and receive one; zmq_msg_close() to release one; zmq_msg_data(), zmq_msg_size(), and zmq_msg_more() to read its content and properties; and zmq_msg_copy() and zmq_msg_move() to manipulate it.
On the wire, ZeroMQ messages are blobs of any size from zero upwards that fit in memory. You do your own serialization using protocol buffers, msgpack, JSON, or whatever else your applications need to speak. It’s wise to choose a data representation that is portable, but you can make your own decisions about trade-offs.
In memory, ZeroMQ messages are zmq_msg_t structures (or classes depending on your language). Here are the basic ground rules for using ZeroMQ messages in C (a short sketch follows the list):
You create and pass around zmq_msg_t objects, not blocks of data.
To write a message from new data, you use zmq_msg_init_size() to create a message and at the same time allocate a block of data of some size. You then fill that data using memcpy, and pass the message to zmq_msg_send().
To release (not destroy) a message, you call zmq_msg_close(). This drops a reference, and eventually ZeroMQ will destroy the message.
After you pass a message to zmq_msg_send(), ØMQ will clear the message, i.e., set the size to zero. You cannot send the same message twice, and you cannot access the message data after sending it.
These rules don’t apply if you use zmq_send() and zmq_recv(), to which you pass byte arrays, not message structures.
If you want to send the same message more than once, and it’s sizable, create a second message, initialize it using zmq_msg_init(), and then use zmq_msg_copy() to create a copy of the first message. This does not copy the data but copies a reference. You can then send the message twice (or more, if you create more copies) and the message will only be finally destroyed when the last copy is sent or closed.
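Here is a minimal sketch of those rules; it assumes a variable named socket that is already created and connected, and it omits error handling:
//  Build a message from new data, send it, and let ZeroMQ own the buffer.
//  'socket' is assumed to be a valid, connected ZeroMQ socket.
zmq_msg_t msg;
zmq_msg_init_size (&msg, 5);            //  Allocate a 5-byte message body
memcpy (zmq_msg_data (&msg), "HELLO", 5);

zmq_msg_t copy;                         //  To send the same content twice,
zmq_msg_init (&copy);                   //  take a cheap reference copy first
zmq_msg_copy (&copy, &msg);

zmq_msg_send (&msg, socket, 0);         //  After this, msg is cleared (size 0)
zmq_msg_send (&copy, socket, 0);        //  Data is destroyed after the last copy goes

//  On the receiving side:
zmq_msg_t incoming;
zmq_msg_init (&incoming);
zmq_msg_recv (&incoming, socket, 0);
//  ... use zmq_msg_data (&incoming) and zmq_msg_size (&incoming) ...
zmq_msg_close (&incoming);              //  Always close received messages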
ZeroMQ also supports multipart messages, which let you send or receive a list of frames as a single on-the-wire message. This is widely used in real applications and we’ll look at that later in this chapter and in Chapter 3 - Advanced Request-Reply Patterns.
Frames (also called “message parts” in the ZeroMQ reference manual pages) are the basic wire format for ZeroMQ messages. A frame is a length-specified block of data. The length can be zero upwards. If you’ve done any TCP programming you’ll appreciate why frames are a useful answer to the question “how much data am I supposed to read off this network socket now?”
There is a wire-level protocol called ZMTP that defines how ZeroMQ reads and writes frames on a TCP connection. If you’re interested in how this works, the spec is quite short.
Originally, a ZeroMQ message was one frame, like UDP. We later extended this with multipart messages, which are quite simply series of frames with a “more” bit set to one, followed by one with that bit set to zero. The ZeroMQ API then lets you write messages with a “more” flag and when you read messages, it lets you check if there’s “more”.
In the low-level ZeroMQ API and the reference manual, therefore, there’s some fuzziness about messages versus frames. So here’s a useful lexicon:
A message can be one or more parts.
These parts are also called “frames”.
Each part is a zmq_msg_t object.
You send and receive each part separately, in the low-level API (see the sketch after this list).
Higher-level APIs provide wrappers to send entire multipart messages.
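As a minimal sketch of the low-level API, here is how a two-frame message might be sent and read back. The frame contents are arbitrary, socket is assumed to already exist, and error handling is omitted:
//  Sending: flag every frame except the last with ZMQ_SNDMORE
zmq_msg_t frame;
zmq_msg_init_size (&frame, 3);
memcpy (zmq_msg_data (&frame), "Key", 3);
zmq_msg_send (&frame, socket, ZMQ_SNDMORE);    //  More frames follow

zmq_msg_init_size (&frame, 5);
memcpy (zmq_msg_data (&frame), "Value", 5);
zmq_msg_send (&frame, socket, 0);              //  Last frame of the message

//  Receiving: loop until zmq_msg_more() reports no further parts
int more = 1;
while (more) {
    zmq_msg_t part;
    zmq_msg_init (&part);
    zmq_msg_recv (&part, socket, 0);
    more = zmq_msg_more (&part);               //  1 if another frame follows
    //  ... process zmq_msg_data (&part) / zmq_msg_size (&part) ...
    zmq_msg_close (&part);
}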
Some other things that are worth knowing about messages:
You may send zero-length messages, e.g., for sending a signal from one thread to another.
ZeroMQ guarantees to deliver all the parts (one or more) for a message, or none of them.
ZeroMQ does not send the message (single or multipart) right away, but at some indeterminate later time. A multipart message must therefore fit in memory.
A message (single or multipart) must fit in memory. If you want to send files of arbitrary sizes, you should break them into pieces and send each piece as separate single-part messages. Using multipart data will not reduce memory consumption.
You must call zmq_msg_close() when finished with a received message, in languages that don’t automatically destroy objects when a scope closes. You don’t call this method after sending a message.
And to be repetitive, do not use zmq_msg_init_data() yet. This is a zero-copy method and is guaranteed to create trouble for you. There are far more important things to learn about ZeroMQ before you start to worry about shaving off microseconds.
This rich API can be tiresome to work with. The methods are optimized for performance, not simplicity. If you start using these you will almost definitely get them wrong until you’ve read the man pages with some care. So one of the main jobs of a good language binding is to wrap this API up in classes that are easier to use.
So far, the main loop of most examples has been:
Wait for message on socket.
Process message.
Repeat.
What if we want to read from multiple endpoints at the same time? The simplest way is to connect one socket to all the endpoints and get ZeroMQ to do the fan-in for us. This is legal if the remote endpoints are in the same pattern, but it would be wrong to connect a PULL socket to a PUB endpoint.
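As a minimal sketch of that fan-in approach, a single PULL socket can simply connect to several PUSH endpoints; the addresses here are arbitrary examples:
//  One PULL socket fanning in from several PUSH endpoints.
//  The endpoints are illustrative; any set of PUSH binders would do.
void *context = zmq_ctx_new ();
void *receiver = zmq_socket (context, ZMQ_PULL);
zmq_connect (receiver, "tcp://localhost:5557");
zmq_connect (receiver, "tcp://localhost:5558");
zmq_connect (receiver, "tcp://192.168.55.112:5557");
//  zmq_recv on 'receiver' now fair-queues messages from all three sources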
To actually read from multiple sockets all at once, use zmq_poll(). An even better way might be to wrap zmq_poll() in a framework that turns it into a nice event-driven reactor, but it’s significantly more work than we want to cover here.
Let’s start with a dirty hack, partly for the fun of not doing it right, but mainly because it lets me show you how to do nonblocking socket reads. Here is a simple example of reading from two sockets using nonblocking reads. This rather confused program acts both as a subscriber to weather updates, and a worker for parallel tasks:
msreader: Multiple socket reader in C
// Reading from multiple sockets
// This version uses a simple recv loop
#include"zhelpers.h"intmain (void)
{
// Connect to task ventilator
void *context = zmq_ctx_new ();
void *receiver = zmq_socket (context, ZMQ_PULL);
zmq_connect (receiver, "tcp://localhost:5557");
// Connect to weather server
void *subscriber = zmq_socket (context, ZMQ_SUB);
zmq_connect (subscriber, "tcp://localhost:5556");
zmq_setsockopt (subscriber, ZMQ_SUBSCRIBE, "10001 ", 6);
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while (1) {
char msg [256];
while (1) {
int size = zmq_recv (receiver, msg, 255, ZMQ_DONTWAIT);
if (size != -1) {
// Process task
}
else
    break;
}
while (1) {
int size = zmq_recv (subscriber, msg, 255, ZMQ_DONTWAIT);
if (size != -1) {
// Process weather update
}
else
    break;
}
// No activity, so sleep for 1 msec
s_sleep (1);
}
zmq_close (receiver);
zmq_close (subscriber);
zmq_ctx_destroy (context);
return 0;
}
msreader: Multiple socket reader in C++
//
// Reading from multiple sockets in C++
// This version uses a simple recv loop
//
#include"zhelpers.hpp"intmain (int argc, char *argv[])
{
// Prepare our context and sockets
zmq::context_t context(1);
// Connect to task ventilator
zmq::socket_t receiver(context, ZMQ_PULL);
receiver.connect("tcp://localhost:5557");
// Connect to weather server
zmq::socket_t subscriber(context, ZMQ_SUB);
subscriber.connect("tcp://localhost:5556");
subscriber.set(zmq::sockopt::subscribe, "10001 ");
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while (1) {
// Process any waiting tasks
bool rc;
do {
zmq::message_t task;
if ((rc = receiver.recv(&task, ZMQ_DONTWAIT)) == true) {
// process task
}
} while(rc == true);
// Process any waiting weather updates
do {
zmq::message_t update;
if ((rc = subscriber.recv(&update, ZMQ_DONTWAIT)) == true) {
// process weather update
}
} while(rc == true);
// No activity, so sleep for 1 msec
s_sleep(1);
}
return 0;
}
msreader: Multiple socket reader in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void MSReader(string[] args)
{
//
// Reading from multiple sockets
// This version uses a simple recv loop
//
// Author: metadings
//
using (var context = new ZContext())
using (var receiver = new ZSocket(context, ZSocketType.PULL))
using (var subscriber = new ZSocket(context, ZSocketType.SUB))
{
// Connect to task ventilator
receiver.Connect("tcp://127.0.0.1:5557");
// Connect to weather server
subscriber.Connect("tcp://127.0.0.1:5556");
subscriber.SetOption(ZSocketOption.SUBSCRIBE, "10001 ");
// Process messages from both sockets
// We prioritize traffic from the task ventilator
ZError error;
ZFrame frame;
while (true)
{
while (true)
{
if (null != (frame = receiver.ReceiveFrame(ZSocketFlags.DontWait, out error)))
{
// Process task
}
else
{
if (error == ZError.ETERM)
return; // Interrupted
if (error != ZError.EAGAIN)
throw new ZException(error);
break;
}
}
while (true)
{
if (null != (frame = subscriber.ReceiveFrame(ZSocketFlags.DontWait, out error)))
{
// Process weather update
}
else
{
if (error == ZError.ETERM)
return; // Interrupted
if (error != ZError.EAGAIN)
throw new ZException(error);
break;
}
}
// No activity, so sleep for 1 msec
Thread.Sleep(1);
}
}
}
}
}
msreader: Multiple socket reader in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Reading from multiple sockets in Common Lisp
;;; This version uses a simple recv loop
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.msreader
  (:nicknames #:msreader)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.msreader)

(defun main ()
  ;; Prepare our context and socket
  (zmq:with-context (context 1)
    ;; Connect to task ventilator
    (zmq:with-socket (receiver context zmq:pull)
      (zmq:connect receiver "tcp://localhost:5557")
      ;; Connect to weather server
      (zmq:with-socket (subscriber context zmq:sub)
        (zmq:connect subscriber "tcp://localhost:5556")
        (zmq:setsockopt subscriber zmq:subscribe "10001 ")
        ;; Process messages from both sockets
        ;; We prioritize traffic from the task ventilator
        (loop
          (handler-case
              (loop
                (let ((task (make-instance 'zmq:msg)))
                  (zmq:recv receiver task zmq:noblock)
                  ;; process task
                  (dump-message task)
                  (finish-output)))
            (zmq:error-again () nil))
          ;; Process any waiting weather updates
          (handler-case
              (loop
                (let ((update (make-instance 'zmq:msg)))
                  (zmq:recv subscriber update zmq:noblock)
                  ;; process weather update
                  (dump-message update)
                  (finish-output)))
            (zmq:error-again () nil))
          ;; No activity, so sleep for 1 msec
          (isys:usleep 1000)))))
  (cleanup))
msreader: Multiple socket reader in Delphi
program msreader;
//
// Reading from multiple sockets
// This version uses a simple recv loop
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
receiver,
subscriber: TZMQSocket;
rc: Integer;
task,
update: TZMQFrame;
begin
// Prepare our context and sockets
context := TZMQContext.Create;
// Connect to task ventilator
receiver := Context.Socket( stPull );
receiver.RaiseEAgain := false;
receiver.connect( 'tcp://localhost:5557' );
// Connect to weather server
subscriber := Context.Socket( stSub );
subscriber.RaiseEAgain := false;
subscriber.connect( 'tcp://localhost:5556' );
subscriber.subscribe( '10001' );
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while True do
begin
// Process any waiting tasks
repeat
task := TZMQFrame.create;
rc := receiver.recv( task, [rfDontWait] );
if rc <> -1 then
begin
// process task
end;
task.Free;
until rc = -1;
// Process any waiting weather updates
repeat
update := TZMQFrame.Create;
rc := subscriber.recv( update, [rfDontWait] );
if rc <> -1 then
begin
// process weather update
end;
update.Free;
until rc = -1;
// No activity, so sleep for 1 msec
sleep (1);
end;
// We never get here but clean up anyhow
receiver.Free;
subscriber.Free;
context.Free;
end.
msreader: Multiple socket reader in Erlang
#! /usr/bin/env escript
%%
%% Reading from multiple sockets
%% This version uses a simple recv loop
%%
main(_) ->
%% Prepare our context and sockets
{ok, Context} = erlzmq:context(),
%% Connect to task ventilator
{ok, Receiver} = erlzmq:socket(Context, pull),
ok = erlzmq:connect(Receiver, "tcp://localhost:5557"),
%% Connect to weather server
{ok, Subscriber} = erlzmq:socket(Context, sub),
ok = erlzmq:connect(Subscriber, "tcp://localhost:5556"),
ok = erlzmq:setsockopt(Subscriber, subscribe, <<"10001">>),
%% Process messages from both sockets
loop(Receiver, Subscriber),
%% We never get here but clean up anyhow
ok = erlzmq:close(Receiver),
ok = erlzmq:close(Subscriber),
ok = erlzmq:term(Context).
loop(Receiver, Subscriber) ->
%% We prioritize traffic from the task ventilator
process_tasks(Receiver),
process_weather(Subscriber),
timer:sleep(1000),
loop(Receiver, Subscriber).
process_tasks(S) ->
%% Process any waiting tasks
case erlzmq:recv(S, [noblock]) of
{error, eagain} -> ok;
{ok, Msg} ->
io:format("Procesing task: ~s~n", [Msg]),
process_tasks(S)
end.
process_weather(S) ->
%% Process any waiting weather updates
case erlzmq:recv(S, [noblock]) of
{error, eagain} -> ok;
{ok, Msg} ->
io:format("Processing weather update: ~s~n", [Msg]),
process_weather(S)
end.
msreader: Multiple socket reader in Elixir
defmodule Msreader do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:27
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, receiver} = :erlzmq.socket(context, :pull)
:ok = :erlzmq.connect(receiver, 'tcp://localhost:5557')
{:ok, subscriber} = :erlzmq.socket(context, :sub)
:ok = :erlzmq.connect(subscriber, 'tcp://localhost:5556')
:ok = :erlzmq.setsockopt(subscriber, :subscribe, "10001")
loop(receiver, subscriber)
:ok = :erlzmq.close(receiver)
:ok = :erlzmq.close(subscriber)
:ok = :erlzmq.term(context)
end
def loop(receiver, subscriber) do
process_tasks(receiver)
process_weather(subscriber)
:timer.sleep(1000)
loop(receiver, subscriber)
end
#case(:erlzmq.recv(s, [:noblock])) do
def process_tasks(s) do
case(:erlzmq.recv(s, [:dontwait])) do
{:error, :eagain} ->
:ok
{:ok, msg} ->
:io.format('Procesing task: ~s~n', [msg])
process_tasks(s)
end
end
def process_weather(s) do
case(:erlzmq.recv(s, [:dontwait])) do
{:error, :eagain} ->
:ok
{:ok, msg} ->
:io.format('Processing weather update: ~s~n', [msg])
process_weather(s)
end
end
end
Msreader.main
msreader: Multiple socket reader in F#
(*
Reading from multiple sockets
This version uses a simple recv loop
*)
#r @"bin/fszmq.dll"
open fszmq
#load "zhelpers.fs"
open Context
open Socket
let main () =
// Prepare our context and sockets
use context = new Context(1)
// Connect to task ventilator
use receiver = context |> pull
connect receiver "tcp://localhost:5557"
// Connect to weather server
use subscriber = context |> sub
connect subscriber "tcp://localhost:5556"
subscribe subscriber [ encode "10001" ]
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while true do
// Process any waiting tasks
match tryRecv receiver ZMQ.NOBLOCK with
| Some(msg) -> msg |> decode |> printfn "%s" // Process task
| None -> () // Otherwise, do nothing
// Process any waiting weather updates
match tryRecv subscriber ZMQ.NOBLOCK with
| Some(msg) -> msg |> decode |> printfn "%s" // Process weather update
| None -> () // Otherwise, do nothing
// No activity, so sleep for 1 msec
sleep 1
// We never get here
EXIT_SUCCESS
main ()
msreader: Multiple socket reader in Felix
//
// Reading from multiple sockets
// This version uses a simple recv loop
//
open ZMQ;
// Prepare our context and sockets
var context = zmq_init 1;
// Connect to task ventilator
var receiver = context.mk_socket ZMQ_PULL;
receiver.connect "tcp://localhost:5557";
// Connect to weather server
var subscriber = context.mk_socket ZMQ_SUB;
subscriber.connect "tcp://localhost:5556";
subscriber.set_opt$ zmq_subscribe "101 ";
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while true do
// Process any waiting tasks
var task = receiver.recv_string_dontwait;
while task != "" do
// process task
task = receiver.recv_string_dontwait;
done
// Process any waiting weather updates
var update = subscriber.recv_string_dontwait;
while update != "" do
// process update
update = subscriber.recv_string_dontwait;
done
Faio::sleep (sys_clock,0.001); // 1 ms
done
msreader: Multiple socket reader in Go
//
// Reading from multiple sockets
// This version uses a simple recv loop
//
package main
import (
	"fmt"
	"time"

	zmq "github.com/alecthomas/gozmq"
)

func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Connect to task ventilator
receiver, _ := context.NewSocket(zmq.PULL)
defer receiver.Close()
receiver.Connect("tcp://localhost:5557")
// Connect to weather server
subscriber, _ := context.NewSocket(zmq.SUB)
defer subscriber.Close()
subscriber.Connect("tcp://localhost:5556")
subscriber.SetSubscribe("10001")
// Process messages from both sockets
// We prioritize traffic from the task ventilator
for {
		// Process any waiting tasks from the ventilator
		for b, _ := receiver.Recv(zmq.NOBLOCK); b != nil; b, _ = receiver.Recv(zmq.NOBLOCK) {
			// fake process task
		}
		// Process any waiting updates from the weather server
		for b, _ := subscriber.Recv(zmq.NOBLOCK); b != nil; b, _ = subscriber.Recv(zmq.NOBLOCK) {
			fmt.Printf("found weather = %s\n", string(b))
		}
// No activity, so sleep for 1 msec
time.Sleep(1e6)
}
fmt.Println("done")
}
package guide;

import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZContext;
//
// Reading from multiple sockets in Java
// This version uses a simple recv loop
//
public class msreader
{
public static void main(String[] args) throws Exception
{
// Prepare our context and sockets
try (ZContext context = new ZContext()) {
// Connect to task ventilator
ZMQ.Socket receiver = context.createSocket(SocketType.PULL);
receiver.connect("tcp://localhost:5557");
// Connect to weather server
ZMQ.Socket subscriber = context.createSocket(SocketType.SUB);
subscriber.connect("tcp://localhost:5556");
subscriber.subscribe("10001 ".getBytes(ZMQ.CHARSET));
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while (!Thread.currentThread().isInterrupted()) {
// Process any waiting tasks
byte[] task;
while ((task = receiver.recv(ZMQ.DONTWAIT)) != null) {
System.out.println("process task");
}
// Process any waiting weather updates
byte[] update;
while ((update = subscriber.recv(ZMQ.DONTWAIT)) != null) {
System.out.println("process weather update");
}
// No activity, so sleep for 1 msec
Thread.sleep(1);
}
}
}
}
msreader: Multiple socket reader in Julia
#!/usr/bin/env julia

# Reading from multiple sockets
# The ZMQ.jl wrapper implements ZMQ.recv as a blocking function. Nonblocking i/o
# in Julia is typically done using coroutines (Tasks).
# The @async macro puts its enclosed expression in a Task. When the macro is
# executed, its Task gets scheduled and execution continues immediately to
# whatever follows the macro.
# Note: the msreader example in the zguide is presented as a "dirty hack"
# using the ZMQ_DONTWAIT and EAGAIN codes. Since the ZMQ.jl wrapper API
# does not expose DONTWAIT directly, this example skips the hack and instead
# provides an efficient solution.

using ZMQ
# Prepare our context and sockets
context = ZMQ.Context()
# Connect to task ventilator
receiver = Socket(context, ZMQ.PULL)
ZMQ.connect(receiver, "tcp://localhost:5557")
# Connect to weather server
subscriber = Socket(context,ZMQ.SUB)
ZMQ.connect(subscriber,"tcp://localhost:5556")
ZMQ.set_subscribe(subscriber, "10001")
while true
    # Process any waiting tasks
    @async begin
        msg = unsafe_string(ZMQ.recv(receiver))
        println(msg)
    end
    # Process any waiting weather updates
    @async begin
        msg = unsafe_string(ZMQ.recv(subscriber))
        println(msg)
    end
    # Sleep for 1 msec
    sleep(0.001)
end
msreader: Multiple socket reader in Lua
--
--  Reading from multiple sockets
--  This version uses a simple recv loop
--
--  Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
-- Prepare our context and sockets
local context = zmq.init(1)
-- Connect to task ventilator
local receiver = context:socket(zmq.PULL)
receiver:connect("tcp://localhost:5557")
-- Connect to weather server
local subscriber = context:socket(zmq.SUB)
subscriber:connect("tcp://localhost:5556")
subscriber:setopt(zmq.SUBSCRIBE, "10001 ")
-- Process messages from both sockets
-- We prioritize traffic from the task ventilator
while true do
    -- Process any waiting tasks
    local msg
    while true do
        msg = receiver:recv(zmq.NOBLOCK)
        if not msg then break end
        -- process task
    end
    -- Process any waiting weather updates
    while true do
        msg = subscriber:recv(zmq.NOBLOCK)
        if not msg then break end
        -- process weather update
    end
    -- No activity, so sleep for 1 msec
    s_sleep (1)
end
-- We never get here but clean up anyhow
receiver:close()
subscriber:close()
context:term()
msreader: Multiple socket reader in Objective-C
/* msreader.m: Reads from multiple sockets the hard way.
 * *** DON'T DO THIS - see mspoller.m for a better example. *** */
#import "ZMQObjC.h"
static NSString *const kTaskVentEndpoint = @"tcp://localhost:5557";
static NSString *const kWeatherServerEndpoint = @"tcp://localhost:5556";
#define MSEC_PER_NSEC (1000000)
int main(void)
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
ZMQContext *ctx = [[[ZMQContext alloc] initWithIOThreads:1U] autorelease];
/* Connect to task ventilator. */
ZMQSocket *receiver = [ctx socketWithType:ZMQ_PULL];
[receiver connectToEndpoint:kTaskVentEndpoint];
/* Connect to weather server. */
ZMQSocket *subscriber = [ctx socketWithType:ZMQ_SUB];
[subscriber connectToEndpoint:kWeatherServerEndpoint];
NSData *subData = [@"10001" dataUsingEncoding:NSUTF8StringEncoding];
[subscriber setData:subData forOption:ZMQ_SUBSCRIBE];
/* Process messages from both sockets, prioritizing the task vent. */
/* Could fair queue by checking each socket for activity in turn, rather
 * than continuing to service the current socket as long as it is busy. */
struct timespec msec = {0, MSEC_PER_NSEC};
for (;;) {
/* Worst case: a task is always pending and we never get to weather,
* or vice versa. In such a case, memory use would rise without
* limit if we did not ensure the objects autoreleased by a single loop
* will be autoreleased whether we leave or continue in the loop. */
NSAutoreleasePool *p;
/* Process any waiting tasks. */
for (p = [[NSAutoreleasePool alloc] init];
nil != [receiver receiveDataWithFlags:ZMQ_NOBLOCK];
[p drain], p = [[NSAutoreleasePool alloc] init]);
[p drain];
/* No waiting tasks - process any waiting weather updates. */
for (p = [[NSAutoreleasePool alloc] init];
nil != [subscriber receiveDataWithFlags:ZMQ_NOBLOCK];
[p drain], p = [[NSAutoreleasePool alloc] init]);
[p drain];
/* Nothing doing - sleep for a millisecond. */
(void)nanosleep(&msec, NULL);
}
/* NOT REACHED */
[ctx closeSockets];
[pool drain]; /* This finally releases the autoreleased context. */
return EXIT_SUCCESS;
}
msreader: Multiple socket reader in Perl
# Reading from multiple sockets in Perl
# This version uses a simple recv loop
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PULL ZMQ_SUB ZMQ_DONTWAIT);
use TryCatch;
use Time::HiRes qw(usleep);

# Connect to task ventilator
my $context = ZMQ::FFI->new();
my $receiver = $context->socket(ZMQ_PULL);
$receiver->connect('tcp://localhost:5557');

# Connect to weather server
my $subscriber = $context->socket(ZMQ_SUB);
$subscriber->connect('tcp://localhost:5556');
$subscriber->subscribe('10001');

# Process messages from both sockets
# We prioritize traffic from the task ventilator
while (1) {
PROCESS_TASK:
while (1) {
try {
my$msg = $receiver->recv(ZMQ_DONTWAIT);
# Process task
}
catch {
last PROCESS_TASK;
}
}
PROCESS_UPDATE:
while (1) {
try {
my$msg = $subscriber->recv(ZMQ_DONTWAIT);
# Process weather update
}
catch {
last PROCESS_UPDATE;
}
}
# No activity, so sleep for 1 msec
usleep(1000);
}
msreader: Multiple socket reader in PHP
<?php
/*
 * Reading from multiple sockets
 * This version uses a simple recv loop
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */

// Prepare our context and sockets
$context = new ZMQContext();
// Connect to task ventilator
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PULL);
$receiver->connect("tcp://localhost:5557");
// Connect to weather server
$subscriber = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$subscriber->connect("tcp://localhost:5556");
$subscriber->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "10001");
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while (true) {
// Process any waiting tasks
try {
for ($rc = 0; !$rc;) {
if ($rc = $receiver->recv(ZMQ::MODE_NOBLOCK)) {
// process task
}
}
} catch (ZMQSocketException $e) {
// do nothing
}
try {
// Process any waiting weather updates
for ($rc = 0; !$rc;) {
if ($rc = $subscriber->recv(ZMQ::MODE_NOBLOCK)) {
// process weather update
}
}
} catch (ZMQSocketException $e) {
// do nothing
}
// No activity, so sleep for 1 msec
usleep(1);
}
msreader: Multiple socket reader in Python
# encoding: utf-8
#
#   Reading from multiple sockets
#   This version uses a simple recv loop
#
#   Author: Jeremy Avnet (brainsik) <spork(dash)zmq(at)theory(dot)org>
#

import zmq
import time

# Prepare our context and sockets
context = zmq.Context()
# Connect to task ventilator
receiver = context.socket(zmq.PULL)
receiver.connect("tcp://localhost:5557")
# Connect to weather server
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://localhost:5556")
subscriber.setsockopt(zmq.SUBSCRIBE, b"10001")
# Process messages from both sockets
# We prioritize traffic from the task ventilator
while True:
    # Process any waiting tasks
    while True:
        try:
            msg = receiver.recv(zmq.DONTWAIT)
        except zmq.Again:
            break
        # process task
    # Process any waiting weather updates
    while True:
        try:
            msg = subscriber.recv(zmq.DONTWAIT)
        except zmq.Again:
            break
        # process weather update
    # No activity, so sleep for 1 msec
    time.sleep(0.001)
The cost of this approach is some additional latency on the first message (the sleep at the end of the loop, when there are no waiting messages to process). This would be a problem in applications where submillisecond latency was vital. Also, you need to check the documentation for nanosleep() or whatever function you use to make sure it does not busy-loop.
You can treat the sockets fairly by reading first from one, then the second rather than prioritizing them as we did in this example.
Now let’s see the same senseless little application done right, using zmq_poll():
mspoller: Multiple socket poller in C
// Reading from multiple sockets
// This version uses zmq_poll()
#include"zhelpers.h"intmain (void)
{
// Connect to task ventilator
void *context = zmq_ctx_new ();
void *receiver = zmq_socket (context, ZMQ_PULL);
zmq_connect (receiver, "tcp://localhost:5557");
// Connect to weather server
void *subscriber = zmq_socket (context, ZMQ_SUB);
zmq_connect (subscriber, "tcp://localhost:5556");
zmq_setsockopt (subscriber, ZMQ_SUBSCRIBE, "10001 ", 6);
zmq_pollitem_t items [] = {
{ receiver, 0, ZMQ_POLLIN, 0 },
{ subscriber, 0, ZMQ_POLLIN, 0 }
};
// Process messages from both sockets
while (1) {
char msg [256];
zmq_poll (items, 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
int size = zmq_recv (receiver, msg, 255, 0);
if (size != -1) {
// Process task
}
}
if (items [1].revents & ZMQ_POLLIN) {
int size = zmq_recv (subscriber, msg, 255, 0);
if (size != -1) {
// Process weather update
}
}
}
zmq_close (receiver);
zmq_close (subscriber);
zmq_ctx_destroy (context);
return 0;
}
mspoller: Multiple socket poller in C++
//
// Reading from multiple sockets in C++
// This version uses zmq_poll()
//
#include"zhelpers.hpp"intmain (int argc, char *argv[])
{
zmq::context_t context(1);
// Connect to task ventilator
zmq::socket_t receiver(context, ZMQ_PULL);
receiver.connect("tcp://localhost:5557");
// Connect to weather server
zmq::socket_t subscriber(context, ZMQ_SUB);
subscriber.connect("tcp://localhost:5556");
subscriber.set(zmq::sockopt::subscribe, "10001 ");
// Initialize poll set
zmq::pollitem_t items [] = {
{ receiver, 0, ZMQ_POLLIN, 0 },
{ subscriber, 0, ZMQ_POLLIN, 0 }
};
// Process messages from both sockets
while (1) {
zmq::message_t message;
zmq::poll (&items [0], 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
receiver.recv(&message);
// Process task
}
if (items [1].revents & ZMQ_POLLIN) {
subscriber.recv(&message);
// Process weather update
}
}
return 0;
}
mspoller: Multiple socket poller in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void MSPoller(string[] args)
{
//
// Reading from multiple sockets
// This version uses zmq_poll()
//
// Author: metadings
//
using (var context = new ZContext())
using (var receiver = new ZSocket(context, ZSocketType.PULL))
using (var subscriber = new ZSocket(context, ZSocketType.SUB))
{
// Connect to task ventilator
receiver.Connect("tcp://127.0.0.1:5557");
// Connect to weather server
subscriber.Connect("tcp://127.0.0.1:5556");
subscriber.SetOption(ZSocketOption.SUBSCRIBE, "10001 ");
var sockets = new ZSocket[] { receiver, subscriber };
var polls = new ZPollItem[] { ZPollItem.CreateReceiver(), ZPollItem.CreateReceiver() };
// Process messages from both sockets
ZError error;
ZMessage[] msg;
while (true)
{
if (sockets.PollIn(polls, out msg, out error, TimeSpan.FromMilliseconds(64)))
{
if (msg[0] != null)
{
// Process task
}
if (msg[1] != null)
{
// Process weather update
}
}
else
{
if (error == ZError.ETERM)
return; // Interrupted
if (error != ZError.EAGAIN)
throw new ZException(error);
}
}
}
}
}
}
mspoller: Multiple socket poller in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Reading from multiple sockets in Common Lisp
;;; This version uses zmq_poll()
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.mspoller
  (:nicknames #:mspoller)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.mspoller)

(defun main ()
  (zmq:with-context (context 1)
    ;; Connect to task ventilator
    (zmq:with-socket (receiver context zmq:pull)
      (zmq:connect receiver "tcp://localhost:5557")
      ;; Connect to weather server
      (zmq:with-socket (subscriber context zmq:sub)
        (zmq:connect subscriber "tcp://localhost:5556")
        (zmq:setsockopt subscriber zmq:subscribe "10001 ")
        ;; Initialize poll set
        (zmq:with-polls ((items . ((receiver . zmq:pollin)
                                   (subscriber . zmq:pollin))))
          ;; Process messages from both sockets
          (loop
            (let ((revents (zmq:poll items)))
              (when (= (first revents) zmq:pollin)
                (let ((message (make-instance 'zmq:msg)))
                  (zmq:recv receiver message)
                  ;; Process task
                  (dump-message message)
                  (finish-output)))
              (when (= (second revents) zmq:pollin)
                (let ((message (make-instance 'zmq:msg)))
                  (zmq:recv subscriber message)
                  ;; Process weather update
                  (dump-message message)
                  (finish-output)))))))))
  (cleanup))
mspoller: Multiple socket poller in Delphi
program mspoller;
//
// Reading from multiple sockets
// This version uses zmq_poll()
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
receiver,
subscriber: TZMQSocket;
i,pc: Integer;
task: TZMQFrame;
poller: TZMQPoller;
pollResult: TZMQPollItem;
begin
// Prepare our context and sockets
context := TZMQContext.Create;
// Connect to task ventilator
receiver := Context.Socket( stPull );
receiver.connect( 'tcp://localhost:5557' );
// Connect to weather server
subscriber := Context.Socket( stSub );
subscriber.connect( 'tcp://localhost:5556' );
subscriber.subscribe( '10001' );
// Initialize poll set
poller := TZMQPoller.Create( true );
poller.Register( receiver, [pePollIn] );
poller.Register( subscriber, [pePollIn] );
task := nil;
// Process messages from both sockets
while True do
begin
pc := poller.poll;
if pePollIn in poller.PollItem[0].revents then
begin
receiver.recv( task );
// Process task
FreeAndNil( task );
end;
if pePollIn in poller.PollItem[1].revents then
begin
subscriber.recv( task );
// Process task
FreeAndNil( task );
end;
end;
// We never get here
poller.Free;
receiver.Free;
subscriber.Free;
context.Free;
end.
mspoller: Multiple socket poller in Erlang
#! /usr/bin/env escript
%%
%% Reading from multiple sockets
%% This version uses active sockets
%%
main(_) ->
{ok,Context} = erlzmq:context(),
%% Connect to task ventilator
{ok, Receiver} = erlzmq:socket(Context, [pull, {active, true}]),
ok = erlzmq:connect(Receiver, "tcp://localhost:5557"),
%% Connect to weather server
{ok, Subscriber} = erlzmq:socket(Context, [sub, {active, true}]),
ok = erlzmq:connect(Subscriber, "tcp://localhost:5556"),
ok = erlzmq:setsockopt(Subscriber, subscribe, <<"10001">>),
%% Process messages from both sockets
loop(Receiver, Subscriber),
%% We never get here
ok = erlzmq:close(Receiver),
ok = erlzmq:close(Subscriber),
ok = erlzmq:term(Context).
loop(Tasks, Weather) ->
receive
{zmq, Tasks, Msg, _Flags} ->
io:format("Processing task: ~s~n",[Msg]),
loop(Tasks, Weather);
{zmq, Weather, Msg, _Flags} ->
io:format("Processing weather update: ~s~n",[Msg]),
loop(Tasks, Weather)
end.
mspoller: Multiple socket poller in Elixir
defmodule Mspoller do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:27
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, receiver} = :erlzmq.socket(context, [:pull, {:active, true}])
:ok = :erlzmq.connect(receiver, 'tcp://localhost:5557')
{:ok, subscriber} = :erlzmq.socket(context, [:sub, {:active, true}])
:ok = :erlzmq.connect(subscriber, 'tcp://localhost:5556')
:ok = :erlzmq.setsockopt(subscriber, :subscribe, "10001")
loop(receiver, subscriber)
:ok = :erlzmq.close(receiver)
:ok = :erlzmq.close(subscriber)
:ok = :erlzmq.term(context)
end
def loop(tasks, weather) do
receive do
{:zmq, ^tasks, msg, _flags} ->
:io.format('Processing task: ~s~n', [msg])
loop(tasks, weather)
{:zmq, ^weather, msg, _flags} ->
:io.format('Processing weather update: ~s~n', [msg])
loop(tasks, weather)
end
end
end
Mspoller.main
mspoller: Multiple socket poller in F#
(*
Reading from multiple sockets
This version uses zmq_poll()
*)
#r @"bin/fszmq.dll"
open fszmq
#load "zhelpers.fs"
open Context
open Socket
let main () =
use context = new Context(1)
// Connect to task ventilator
use receiver = context |> pull
connect receiver "tcp://localhost:5557"
// Connect to weather server
use subscriber = context |> sub
connect subscriber "tcp://localhost:5556"
subscribe subscriber [ encode "10001" ]
// Initialize pollset
let items =
let printNextMessage = recv >> decode >> printfn "%s"
[ Poll(ZMQ.POLLIN,receiver, fun s -> // Process task
printNextMessage s)
Poll(ZMQ.POLLIN,subscriber, fun s -> // Process weather update
printNextMessage s) ]
// Process messages from both sockets
while true do
(Polling.poll -1L items) |> ignore
// We never get here
EXIT_SUCCESS
main ()
mspoller: Multiple socket poller in Felix
//
// Reading from multiple sockets
// This version uses zmq_poll()
//
open ZMQ;
var context = zmq_init 1;
// Connect to task ventilator
var receiver = context.mk_socket ZMQ_PULL;
receiver.connect "tcp://localhost:5557";
// Connect to weather server
var subscriber = context.mk_socket ZMQ_SUB;
subscriber.connect "tcp://localhost:5556";
subscriber.set_opt$ zmq_subscribe "101 ";
// Initialize poll set
var items = varray(
zmq_poll_item (receiver, ZMQ_POLLIN),
zmq_poll_item (subscriber, ZMQ_POLLIN))
;
// Process messages from both sockets
while true do
C_hack::ignore$ poll (items, -1.0);
if (items.[0].revents & ZMQ_POLLIN).short != 0s do
var s = receiver.recv_string;
// Process task
done
if (items.[1].revents & ZMQ_POLLIN).short != 0s do
s = subscriber.recv_string;
done
done
mspoller: Multiple socket poller in Go
//
// Reading from multiple sockets
// This version uses zmq.Poll()
//
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Connect to task ventilator
receiver, _ := context.NewSocket(zmq.PULL)
defer receiver.Close()
receiver.Connect("tcp://localhost:5557")
// Connect to weather server
subscriber, _ := context.NewSocket(zmq.SUB)
defer subscriber.Close()
subscriber.Connect("tcp://localhost:5556")
subscriber.SetSubscribe("10001")
pi := zmq.PollItems{
zmq.PollItem{Socket: receiver, Events: zmq.POLLIN},
zmq.PollItem{Socket: subscriber, Events: zmq.POLLIN},
}
// Process messages from both sockets
for {
_, _ = zmq.Poll(pi, -1)
switch {
case pi[0].REvents&zmq.POLLIN != 0:
// Process task
pi[0].Socket.Recv(0) // eat the incoming message
case pi[1].REvents&zmq.POLLIN != 0:
// Process weather update
pi[1].Socket.Recv(0) // eat the incoming message
}
}
fmt.Println("done")
}
mspoller: Multiple socket poller in Haskell
{-# LANGUAGE OverloadedStrings #-}
-- Reading from multiple sockets
-- This version uses zmq_poll()
module Main where

import Control.Monad
import System.ZMQ4.Monadic

main :: IO ()
main = runZMQ $ do
    -- Connect to task ventilator
    receiver <- socket Pull
    connect receiver "tcp://localhost:5557"
    -- Connect to weather server
    subscriber <- socket Sub
    connect subscriber "tcp://localhost:5556"
    subscribe subscriber "10001 "
    -- Process messages from both sockets
    forever $
        poll (-1) [ Sock receiver [In] (Just receiver_callback)
                  , Sock subscriber [In] (Just subscriber_callback)
                  ]
  where
    -- Process task
    receiver_callback :: [Event] -> ZMQ z ()
    receiver_callback _ = return ()
    -- Process weather update
    subscriber_callback :: [Event] -> ZMQ z ()
    subscriber_callback _ = return ()
mspoller: Multiple socket poller in Lua
--
--  Reading from multiple sockets
--  This version uses :poll()
--
--  Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zmq.poller"
require"zhelpers"
local context = zmq.init(1)
-- Connect to task ventilator
local receiver = context:socket(zmq.PULL)
receiver:connect("tcp://localhost:5557")
-- Connect to weather server
local subscriber = context:socket(zmq.SUB)
subscriber:connect("tcp://localhost:5556")
subscriber:setopt(zmq.SUBSCRIBE, "10001 ", 6)
local poller = zmq.poller(2)
poller:add(receiver, zmq.POLLIN, function()
    local msg = receiver:recv()
    -- Process task
end)
poller:add(subscriber, zmq.POLLIN, function()
    local msg = subscriber:recv()
    -- Process weather update
end)
-- Process messages from both sockets
-- start poller's event loop
poller:start()
-- We never get here
receiver:close()
subscriber:close()
context:term()
mspoller: Multiple socket poller in Node.js
// Reading from multiple sockets.
// This version listens for emitted 'message' events.
var zmq = require('zeromq')
// Connect to task ventilator
var receiver = zmq.socket('pull')
receiver.on('message', function(msg) {
console.log("From Task Ventilator:", msg.toString())
})
// Connect to weather server.
var subscriber = zmq.socket('sub')
subscriber.subscribe('10001')
subscriber.on('message', function(msg) {
console.log("Weather Update:", msg.toString())
})
receiver.connect('tcp://localhost:5557')
subscriber.connect('tcp://localhost:5556')
mspoller: Multiple socket poller in Perl
# Reading from multiple sockets in Perl
# This version uses AnyEvent to poll the sockets
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PULL ZMQ_SUB);
use AnyEvent;
use EV;
# Connect to the task ventilator
my $context = ZMQ::FFI->new();
my $receiver = $context->socket(ZMQ_PULL);
$receiver->connect('tcp://localhost:5557');
# Connect to weather server
my $subscriber = $context->socket(ZMQ_SUB);
$subscriber->connect('tcp://localhost:5556');
$subscriber->subscribe('10001');
my $pull_poller = AE::io $receiver->get_fd, 0, sub {
    while ($receiver->has_pollin) {
        my $msg = $receiver->recv();
        # Process task
    }
};
my $sub_poller = AE::io $subscriber->get_fd, 0, sub {
    while ($subscriber->has_pollin) {
        my $msg = $subscriber->recv();
        # Process weather update
    }
};
EV::run;
mspoller: Multiple socket poller in PHP
<?php
/*
 * Reading from multiple sockets
 * This version uses zmq_poll()
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */
$context = new ZMQContext();
// Connect to task ventilator
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PULL);
$receiver->connect("tcp://localhost:5557");
// Connect to weather server
$subscriber = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$subscriber->connect("tcp://localhost:5556");
$subscriber->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "10001");
// Initialize poll set
$poll = new ZMQPoll();
$poll->add($receiver, ZMQ::POLL_IN);
$poll->add($subscriber, ZMQ::POLL_IN);
$readable = $writeable = array();
// Process messages from both sockets
while (true) {
$events = $poll->poll($readable, $writeable);
if ($events > 0) {
foreach ($readable as $socket) {
if ($socket === $receiver) {
$message = $socket->recv();
// Process task
} elseif ($socket === $subscriber) {
$message = $socket->recv();
// Process weather update
}
}
}
}
// We never get here
mspoller: Multiple socket poller in Python
# encoding: utf-8
#
# Reading from multiple sockets
# This version uses zmq.Poller()
#
# Author: Jeremy Avnet (brainsik) <spork(dash)zmq(at)theory(dot)org>
#
import zmq

# Prepare our context and sockets
context = zmq.Context()
# Connect to task ventilator
receiver = context.socket(zmq.PULL)
receiver.connect("tcp://localhost:5557")
# Connect to weather server
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://localhost:5556")
subscriber.setsockopt(zmq.SUBSCRIBE, b"10001")
# Initialize poll set
poller = zmq.Poller()
poller.register(receiver, zmq.POLLIN)
poller.register(subscriber, zmq.POLLIN)
# Process messages from both sockets
while True:
    try:
        socks = dict(poller.poll())
    except KeyboardInterrupt:
        break

    if receiver in socks:
        message = receiver.recv()
        # process task

    if subscriber in socks:
        message = subscriber.recv()
        # process weather update
The items structure has these four members: items 结构包含以下四个成员:
typedef struct {
void *socket; // ZeroMQ socket to poll on
int fd; // OR, native file handle to poll on
short events; // Events to poll on
short revents; // Events returned after poll
} zmq_pollitem_t;
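As a sketch of how these members are used (a minimal fragment; receiver and subscriber stand in for the PULL and SUB sockets of the mspoller example above):

zmq_pollitem_t items [] = {
    { receiver,   0, ZMQ_POLLIN, 0 },
    { subscriber, 0, ZMQ_POLLIN, 0 }
};
//  Block until at least one socket has input
zmq_poll (items, 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
    //  Process task
}
if (items [1].revents & ZMQ_POLLIN) {
    //  Process weather update
}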
ZeroMQ lets us compose a message out of several frames, giving us a “multipart message”. Realistic applications use multipart messages heavily, both for wrapping messages with address information and for simple serialization. We’ll look at reply envelopes later. ZeroMQ 允许我们将消息由多个帧组成,形成“多部分消息”。实际应用中大量使用多部分消息,既用于为消息封装地址信息,也用于简单的序列化。我们稍后会讨论回复信封。
What we’ll learn now is simply how to blindly and safely read and write multipart messages in any application (such as a proxy) that needs to forward messages without inspecting them. 我们现在要学习的是如何在任何需要转发消息而不检查消息内容的应用(例如代理)中,盲目且安全地读取和写入多部分消息。
When you work with multipart messages, each part is a zmq_msg item. E.g., if you are sending a message with five parts, you must construct, send, and destroy five zmq_msg items. You can do this in advance (and store the zmq_msg items in an array or other structure), or as you send them, one-by-one. 处理多部分消息时,每个部分都是一个 zmq_msg 项。例如,如果你发送一个包含五个部分的消息,你必须构造、发送并销毁五个 zmq_msg 项。你可以提前完成这些操作(并将 zmq_msg 项存储在数组或其他结构中),也可以在发送时逐个处理。
Here is how we send the frames in a multipart message (we receive each frame into a message object): 以下是我们如何发送多部分消息中的帧(我们将每个帧接收到一个消息对象中):
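A minimal sketch (the socket and the three example frame bodies are placeholders; real code would fill each zmq_msg_t from its own data):

//  Send a three-frame message: every frame except the last is
//  flagged ZMQ_SNDMORE; the final frame is sent with flags = 0
const char *parts [] = { "Hello", "brave", "world" };
int part_nbr;
for (part_nbr = 0; part_nbr < 3; part_nbr++) {
    zmq_msg_t message;
    zmq_msg_init_size (&message, strlen (parts [part_nbr]));
    memcpy (zmq_msg_data (&message), parts [part_nbr], strlen (parts [part_nbr]));
    zmq_msg_send (&message, socket, part_nbr < 2? ZMQ_SNDMORE: 0);
}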
Here is how we receive and process all the parts in a message, be it single part or multipart: 以下是我们如何接收并处理消息中的所有部分,无论是单部分还是多部分:
while (1) {
zmq_msg_t message;
zmq_msg_init (&message);
zmq_msg_recv (&message, socket, 0);
// Process the message frame
...
zmq_msg_close (&message);
if (!zmq_msg_more (&message))
break; // Last message frame
}
Some things to know about multipart messages: 关于多部分消息需要了解的一些事项:
When you send a multipart message, the first part (and all following parts) are only actually sent on the wire when you send the final part. 当你发送多部分消息时,第一部分(以及所有后续部分)只有在你发送最后一部分时才会真正通过网络发送。
If you are using zmq_poll(), when you receive the first part of a message, all the rest has also arrived. 如果您使用的是 zmq_poll() ,当您接收到消息的第一部分时,所有其余部分也已经到达。
You will receive all parts of a message, or none at all. 您将接收到消息的所有部分,或者一个都不会接收到。
Each part of a message is a separate zmq_msg item. 消息的每个部分都是一个独立的 zmq_msg 项。
You will receive all parts of a message whether or not you check the more property. 无论您是否检查 more 属性,您都会接收到消息的所有部分。
On sending, ZeroMQ queues message frames in memory until the last is received, then sends them all. 在发送时,ZeroMQ 会将消息帧排队存储在内存中,直到接收到最后一帧,然后一次性发送所有帧。
There is no way to cancel a partially sent message, except by closing the socket. 除了关闭套接字外,没有办法取消部分发送的消息。
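As an aside, the more check does not have to go through the message object; here is a sketch of the same test using the ZMQ_RCVMORE socket option (socket is whatever socket you just read a frame from):

//  After reading a frame, ask the socket whether more frames
//  of the same message are still to come
int more;
size_t more_size = sizeof (more);
zmq_getsockopt (socket, ZMQ_RCVMORE, &more, &more_size);
if (more) {
    //  Fetch the next frame of this message
}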
ZeroMQ aims for decentralized intelligence, but that doesn’t mean your network is empty space in the middle. It’s filled with message-aware infrastructure and quite often, we build that infrastructure with ZeroMQ. The ZeroMQ plumbing can range from tiny pipes to full-blown service-oriented brokers. The messaging industry calls this intermediation, meaning that the stuff in the middle deals with either side. In ZeroMQ, we call these proxies, queues, forwarders, devices, or brokers, depending on the context. ZeroMQ 追求去中心化智能,但这并不意味着你的网络中间是空白的。它充满了对消息感知的基础设施,而且我们经常使用 ZeroMQ 来构建这些基础设施。ZeroMQ 的管道可以从微小的管道扩展到完整的面向服务的代理。消息传递行业称之为中介,意思是中间的部分处理双方的通信。在 ZeroMQ 中,我们根据上下文将这些称为代理(proxies)、队列(queues)、转发器(forwarders)、设备(device)或代理服务器(brokers)。
This pattern is extremely common in the real world and is why our societies and economies are filled with intermediaries who have no other real function than to reduce the complexity and scaling costs of larger networks. Real-world intermediaries are typically called wholesalers, distributors, managers, and so on. 这种模式在现实世界中极为常见,这也是为什么我们的社会和经济中充满了中间商,他们的唯一真正功能就是减少更大网络的复杂性和扩展成本。现实中的中间商通常被称为批发商、分销商、经理等。
One of the problems you will hit as you design larger distributed architectures is discovery. That is, how do pieces know about each other? It’s especially difficult if pieces come and go, so we call this the “dynamic discovery problem”. 在设计更大规模的分布式架构时,你会遇到的一个问题是发现。也就是说,组件如何相互了解?如果组件不断进出,这个问题尤其困难,因此我们称之为“动态发现问题”。
There are several solutions to dynamic discovery. The simplest is to entirely avoid it by hard-coding (or configuring) the network architecture so discovery is done by hand. That is, when you add a new piece, you reconfigure the network to know about it. 针对动态发现有几种解决方案。最简单的是完全避免它,通过硬编码(或配置)网络架构来手动完成发现。也就是说,当你添加一个新组件时,需要重新配置网络以识别它。
In practice, this leads to increasingly fragile and unwieldy architectures. Let’s say you have one publisher and a hundred subscribers. You connect each subscriber to the publisher by configuring a publisher endpoint in each subscriber. That’s easy. Subscribers are dynamic; the publisher is static. Now say you add more publishers. Suddenly, it’s not so easy any more. If you continue to connect each subscriber to each publisher, the cost of avoiding dynamic discovery gets higher and higher. 在实际操作中,这会导致架构变得越来越脆弱且难以管理。假设你有一个发布者和一百个订阅者。你通过在每个订阅者中配置一个发布者端点来连接每个订阅者到发布者。这很简单。订阅者是动态的;发布者是静态的。现在假设你增加了更多的发布者。突然间,情况就不那么简单了。如果你继续将每个订阅者连接到每个发布者,避免动态发现的成本会越来越高。
Figure 13 - Pub-Sub Network with a Proxy 图 13 - 带代理的发布-订阅网络
There are quite a few answers to this, but the very simplest answer is to add an intermediary; that is, a static point in the network to which all other nodes connect. In classic messaging, this is the job of the message broker. ZeroMQ doesn’t come with a message broker as such, but it lets us build intermediaries quite easily. 对此有很多答案,但最简单的答案是添加一个中介;也就是说,在网络中设置一个静态点,所有其他节点都连接到该点。在传统消息传递中,这就是消息代理的职责。ZeroMQ 本身不带消息代理,但它让我们能够非常容易地构建中介。
You might wonder, if all networks eventually get large enough to need intermediaries, why don’t we simply have a message broker in place for all applications? For beginners, it’s a fair compromise. Just always use a star topology, forget about performance, and things will usually work. However, message brokers are greedy things; in their role as central intermediaries, they become too complex, too stateful, and eventually a problem. 你可能会想,如果所有网络最终都变得足够大以至于需要中介,为什么我们不直接为所有应用都设置一个消息代理?对于初学者来说,这是一种合理的折中方案。只要始终使用星型拓扑,忽略性能,事情通常都能正常运行。然而,消息代理是贪婪的存在;作为中央中介,它们变得过于复杂、过于有状态,最终成为一个问题。
It’s better to think of intermediaries as simple stateless message switches. A good analogy is an HTTP proxy; it’s there, but doesn’t have any special role. Adding a pub-sub proxy solves the dynamic discovery problem in our example. We set the proxy in the “middle” of the network. The proxy opens an XSUB socket, an XPUB socket, and binds each to well-known IP addresses and ports. Then, all other processes connect to the proxy, instead of to each other. It becomes trivial to add more subscribers or publishers. 最好将中介视为简单的无状态消息交换机。一个很好的类比是 HTTP 代理;它存在,但没有任何特殊角色。在我们的示例中,添加一个发布-订阅代理解决了动态发现问题。我们将代理设置在网络的“中间”。代理打开一个 XSUB 套接字和一个 XPUB 套接字,并将它们分别绑定到众所周知的 IP 地址和端口。然后,所有其他进程连接到代理,而不是彼此连接。这样,添加更多的订阅者或发布者变得非常简单。
Figure 14 - Extended Pub-Sub 图 14 - 扩展的发布-订阅模型
We need XPUB and XSUB sockets because ZeroMQ does subscription forwarding from subscribers to publishers. XSUB and XPUB are exactly like SUB and PUB except they expose subscriptions as special messages. The proxy has to forward these subscription messages from subscriber side to publisher side, by reading them from the XPUB socket and writing them to the XSUB socket. This is the main use case for XSUB and XPUB. 我们需要 XPUB 和 XSUB 套接字,因为 ZeroMQ 会将订阅信息从订阅者转发到发布者。XSUB 和 XPUB 与 SUB 和 PUB 完全相同,只是它们将订阅信息作为特殊消息暴露出来。代理必须将这些订阅消息从订阅者端转发到发布者端,通过从 XPUB 套接字读取并写入到 XSUB 套接字。这是 XSUB 和 XPUB 的主要使用场景。
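As a rough sketch of such a proxy (the endpoints are illustrative, and zmq_proxy(), which we meet at the end of this chapter, bundles up exactly this forwarding loop):

//  Pub-sub proxy: publishers connect to the XSUB side,
//  subscribers connect to the XPUB side
void *context = zmq_ctx_new ();
void *frontend = zmq_socket (context, ZMQ_XSUB);
zmq_bind (frontend, "tcp://*:5557");        //  Publishers connect here
void *backend = zmq_socket (context, ZMQ_XPUB);
zmq_bind (backend, "tcp://*:5558");         //  Subscribers connect here
//  Shuttle data in one direction and subscription messages in the other
zmq_proxy (frontend, backend, NULL);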
Shared Queue (DEALER and ROUTER sockets)
共享队列(DEALER 和 ROUTER 套接字)
In the Hello World client/server application, we have one client that talks to one service. However, in real cases we usually need to allow multiple services as well as multiple clients. This lets us scale up the power of the service (many threads or processes or nodes rather than just one). The only constraint is that services must be stateless, all state being in the request or in some shared storage such as a database. 在 Hello World 客户端/服务器应用程序中,我们有一个客户端与一个服务进行通信。然而,在实际情况下,我们通常需要允许多个服务以及多个客户端。这使我们能够扩展服务的能力(多个线程、进程或节点,而不仅仅是一个)。唯一的限制是服务必须是无状态的,所有状态都应包含在请求中或存储在某些共享存储中,例如数据库。
Figure 15 - Request Distribution 图 15 - 请求分发
There are two ways to connect multiple clients to multiple servers. The brute force way is to connect each client socket to multiple service endpoints. One client socket can connect to multiple service sockets, and the REQ socket will then distribute requests among these services. Let’s say you connect a client socket to three service endpoints; A, B, and C. The client makes requests R1, R2, R3, R4. R1 and R4 go to service A, R2 goes to B, and R3 goes to service C. 有两种方法可以将多个客户端连接到多个服务器。粗暴的方法是将每个客户端套接字连接到多个服务端点。一个客户端套接字可以连接到多个服务套接字,REQ 套接字随后会在这些服务之间分配请求。假设你将一个客户端套接字连接到三个服务端点:A、B 和 C。客户端发出请求 R1、R2、R3、R4。R1 和 R4 发送到服务 A,R2 发送到 B,R3 发送到服务 C。
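In code, the brute-force approach is nothing more than several connects on one REQ socket; a minimal sketch with placeholder endpoints:

//  One REQ socket connected to three services: ZeroMQ load-balances
//  each request across the connected peers
void *context = zmq_ctx_new ();
void *client = zmq_socket (context, ZMQ_REQ);
zmq_connect (client, "tcp://serviceA:5555");
zmq_connect (client, "tcp://serviceB:5555");
zmq_connect (client, "tcp://serviceC:5555");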
This design lets you add more clients cheaply. You can also add more services. Each client will distribute its requests to the services. But each client has to know the service topology. If you have 100 clients and then you decide to add three more services, you need to reconfigure and restart 100 clients in order for the clients to know about the three new services. 这种设计让你可以低成本地添加更多客户端。你也可以添加更多服务。每个客户端会将请求分发给各个服务。但每个客户端必须知道服务的拓扑结构。如果你有 100 个客户端,然后决定再添加三个服务,你需要重新配置并重启这 100 个客户端,以便客户端能够识别这三个新服务。
That’s clearly not the kind of thing we want to be doing at 3 a.m. when our supercomputing cluster has run out of resources and we desperately need to add a couple of hundred of new service nodes. Too many static pieces are like liquid concrete: knowledge is distributed and the more static pieces you have, the more effort it is to change the topology. What we want is something sitting in between clients and services that centralizes all knowledge of the topology. Ideally, we should be able to add and remove services or clients at any time without touching any other part of the topology. 这显然不是我们想在凌晨 3 点做的事情——当我们的超级计算集群资源耗尽,急需添加几百个新的服务节点时。过多的静态组件就像液态混凝土:知识是分散的,静态组件越多,改变拓扑结构的工作量就越大。我们需要的是一种介于客户端和服务端之间的东西,能够集中管理所有拓扑知识。理想情况下,我们应该能够随时添加或移除服务或客户端,而无需触及拓扑结构的其他部分。
So we’ll write a little message queuing broker that gives us this flexibility. The broker binds to two endpoints, a frontend for clients and a backend for services. It then uses zmq_poll() to monitor these two sockets for activity and when it has some, it shuttles messages between its two sockets. It doesn’t actually manage any queues explicitly–ZeroMQ does that automatically on each socket. 所以我们将编写一个小型消息队列代理,为我们提供这种灵活性。代理绑定到两个端点,一个用于客户端的前端,另一个用于服务的后端。然后它使用 zmq_poll() 来监控这两个套接字的活动,当有活动时,它在两个套接字之间传递消息。它实际上并不显式管理任何队列——ZeroMQ 会自动在每个套接字上处理这些队列。
When you use REQ to talk to REP, you get a strictly synchronous request-reply dialog. The client sends a request. The service reads the request and sends a reply. The client then reads the reply. If either the client or the service try to do anything else (e.g., sending two requests in a row without waiting for a response), they will get an error. 当你使用 REQ 与 REP 通信时,会得到一个严格同步的请求-响应对话。客户端发送请求,服务端读取请求并发送响应。然后客户端读取响应。如果客户端或服务端尝试执行其他操作(例如,连续发送两个请求而不等待响应),将会收到错误。
But our broker has to be nonblocking. Obviously, we can use zmq_poll() to wait for activity on either socket, but we can’t use REP and REQ. 但我们的代理必须是非阻塞的。显然,我们可以使用 zmq_poll() 来等待任一套接字上的活动,但我们不能使用 REP 和 REQ。
Figure 16 - Extended Request-Reply 图 16 - 扩展请求-响应
Luckily, there are two sockets called DEALER and ROUTER that let you do nonblocking request-response. You’ll see in
Chapter 3 - Advanced Request-Reply Patterns how DEALER and ROUTER sockets let you build all kinds of asynchronous request-reply flows. For now, we’re just going to see how DEALER and ROUTER let us extend REQ-REP across an intermediary, that is, our little broker. 幸运的是,有两种名为 DEALER 和 ROUTER 的套接字可以让你实现非阻塞的请求-响应。你将在第 3 章——高级请求-响应模式中看到 DEALER 和 ROUTER 套接字如何让你构建各种异步请求-响应流程。现在,我们只会了解 DEALER 和 ROUTER 如何让我们将 REQ-REP 扩展到中间件,也就是我们的小型代理。
In this simple extended request-reply pattern, REQ talks to ROUTER and DEALER talks to REP. In between the DEALER and ROUTER, we have to have code (like our broker) that pulls messages off the one socket and shoves them onto the other. 在这个简单的扩展请求-响应模式中,REQ 与 ROUTER 通信,DEALER 与 REP 通信。在 DEALER 和 ROUTER 之间,我们必须有代码(如我们的代理)将消息从一个套接字拉取并推送到另一个套接字。
The request-reply broker binds to two endpoints, one for clients to connect to (the frontend socket) and one for workers to connect to (the backend). To test this broker, you will want to change your workers so they connect to the backend socket. Here is a client that shows what I mean: 请求-响应代理绑定到两个端点,一个供客户端连接(前端套接字),一个供工作线程连接(后端套接字)。要测试此代理,您需要更改工作线程,使其连接到后端套接字。下面是一个展示我意思的客户端示例:
rrclient: Request-reply client in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void RRClient(string[] args)
{
//
// Hello World client
// Connects REQ socket to tcp://127.0.0.1:5559
// Sends "Hello" to server, expects "World" back
//
// Author: metadings
//
// Socket to talk to server
using (var context = new ZContext())
using (var requester = new ZSocket(context, ZSocketType.REQ))
{
requester.Connect("tcp://127.0.0.1:5559");
for (int n = 0; n < 10; ++n)
{
requester.Send(new ZFrame("Hello"));
using (ZFrame reply = requester.ReceiveFrame())
{
Console.WriteLine("Hello {0}!", reply.ReadString());
}
}
}
}
}
}
rrclient: Request-reply client in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Hello World client in Common Lisp
;;; Connects REQ socket to tcp://localhost:5555
;;; Sends "Hello" to server, expects "World" back
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.rrclient
  (:nicknames #:rrclient)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.rrclient)

(defun main ()
(zmq:with-context (context1)
;; Socket to talk to server
(zmq:with-socket (requestercontextzmq:req)
(zmq:connectrequester"tcp://localhost:5559")
(dotimes (request-nbr10)
(let ((request (make-instance'zmq:msg:data"Hello")))
(zmq:sendrequesterrequest))
(let ((response (make-instance'zmq:msg)))
(zmq:recvrequesterresponse)
(message"Received reply ~D: [~A]~%"request-nbr (zmq:msg-data-as-stringresponse))))))
(cleanup))
rrclient: Request-reply client in Delphi
program rrclient;
//
// Hello World client
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
requester: TZMQSocket;
i: Integer;
s: Utf8String;
begin
context := TZMQContext.Create;
// Socket to talk to server
requester := Context.Socket( stReq );
requester.connect( 'tcp://localhost:5559' );
for i := 0 to 9 do
begin
requester.send( 'Hello' );
requester.recv( s );
Writeln( Format( 'Received reply %d [%s]',[i, s] ) );
end;
requester.Free;
context.Free;
end.
rrclient: Request-reply client in Erlang
#! /usr/bin/env escript
%%
%% Hello World client
%% Connects REQ socket to tcp://localhost:5559
%% Sends "Hello" to server, expects "World" back
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to talk to server
{ok, Requester} = erlzmq:socket(Context, req),
ok = erlzmq:connect(Requester, "tcp://localhost:5559"),
lists:foreach(
fun(Num) ->
erlzmq:send(Requester, <<"Hello">>),
{ok, Reply} = erlzmq:recv(Requester),
io:format("Received reply ~b [~s]~n", [Num, Reply])
end, lists:seq(1, 10)),
ok = erlzmq:close(Requester),
ok = erlzmq:term(Context).
rrclient: Request-reply client in Elixir
defmodule Rrclient do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:31
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, requester} = :erlzmq.socket(context, :req)
# :ok = :erlzmq.connect(requester, 'tcp://*:5559')
:ok = :erlzmq.connect(requester, 'tcp://localhost:5559')
:lists.foreach(fn num ->
:erlzmq.send(requester, "Hello")
{:ok, reply} = :erlzmq.recv(requester)
:io.format('Received reply ~b [~s]~n', [num, reply])
end, :lists.seq(1, 10))
:ok = :erlzmq.close(requester)
:ok = :erlzmq.term(context)
end
end
Rrclient.main()
rrclient: Request-reply client in F#
(*
Hello World client
Connects REQ socket to tcp://localhost:5559
Sends "Hello" to server, expects "World" back
*)
#r @"bin/fszmq.dll"
open fszmq
open fszmq.Context
open fszmq.Socket
#load "zhelpers.fs"
let main () =
use context = new Context(1)
// socket to talk to server
use requester = req context
"tcp://localhost:5559" |> connect requester
for request_nbr in 0 .. 9 do
"Hello" |> s_send requester
let message = s_recv requester
printfn "Received reply %d [%s]" request_nbr message
EXIT_SUCCESS
main ()
rrclient: Request-reply client in Go
// Hello World client
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Socket to talk to clients
requester, _ := context.NewSocket(zmq.REQ)
defer requester.Close()
requester.Connect("tcp://localhost:5559")
for i := 0; i < 10; i++ {
requester.Send([]byte("Hello"), 0)
reply, _ := requester.Recv(0)
fmt.Printf("Received reply %d [%s]\n", i, reply)
}
}
rrclient: Request-reply client in Haskell
{-# LANGUAGE OverloadedStrings #-}
-- |
-- Request/Reply Hello World with broker (p.50)
-- Binds REQ socket to tcp://localhost:5559
-- Sends "Hello" to server, expects "World" back
--
-- Use with `rrbroker.hs` and `rrworker.hs`
-- You need to start the broker first !
module Main where

import System.ZMQ4.Monadic
import Control.Monad (forM_)
import Data.ByteString.Char8 (unpack)
import Text.Printf

main :: IO ()
main =
runZMQ $ do
requester <- socket Req
connect requester "tcp://localhost:5559"
forM_ [1..10] $ \i ->do
send requester []"Hello"
msg <- receive requester
liftIO $ printf "Received reply %d %s\n" (i ::Int) (unpack msg)
rrclient: Request-reply client in Haxe
package ;
import neko.Lib;
import haxe.io.Bytes;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
/**
* Hello World Client
* Connects REQ socket to tcp://localhost:5559
* Sends "Hello" to server, expects "World" back
*
* See: http://zguide.zeromq.org/page:all#A-Request-Reply-Broker
*
* Use with RrServer and RrBroker
 */
class RrClient
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** RrClient (see: http://zguide.zeromq.org/page:all#A-Request-Reply-Broker)");
var requester:ZMQSocket = context.socket(ZMQ_REQ);
requester.connect ("tcp://localhost:5559");
Lib.println ("Launch and connect client.");
// Do 10 requests, waiting each time for a response
for (i in 0...10) {
var requestString = "Hello ";
// Send the message
requester.sendMsg(Bytes.ofString(requestString));
// Wait for the reply
var msg:Bytes = requester.recvMsg();
Lib.println("Received reply " + i + ": [" + msg.toString() + "]");
}
// Shut down socket and context
requester.close();
context.term();
}
}
rrclient: Request-reply client in Java
package guide;

import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
* Hello World client
* Connects REQ socket to tcp://localhost:5559
* Sends "Hello" to server, expects "World" back
 */
public class rrclient
{
public static void main(String[] args)
{
try (ZContext context = new ZContext()) {
// Socket to talk to server
Socket requester = context.createSocket(SocketType.REQ);
requester.connect("tcp://localhost:5559");
System.out.println("launch and connect client.");
for (int request_nbr = 0; request_nbr < 10; request_nbr++) {
requester.send("Hello", 0);
String reply = requester.recvStr(0);
System.out.println(
"Received reply " + request_nbr + " [" + reply + "]"
);
}
}
}
}
rrclient: Request-reply client in Lua
--
--  Hello World client
--  Connects REQ socket to tcp://localhost:5559
--  Sends "Hello" to server, expects "World" back
--
--  Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
local context = zmq.init(1)
-- Socket to talk to server
local requester = context:socket(zmq.REQ)
requester:connect("tcp://localhost:5559")
for n = 0, 9 do
requester:send("Hello")
local msg = requester:recv()
printf ("Received reply %d [%s]\n", n, msg)
end
requester:close()
context:term()
rrclient: Request-reply client in Node.js
// Hello World client in Node.js
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
var zmq = require('zeromq')
, requester = zmq.socket('req');
requester.connect('tcp://localhost:5559');
var replyNbr = 0;
requester.on('message', function(msg) {
console.log('got reply', replyNbr, msg.toString());
replyNbr += 1;
});
for (var i = 0; i < 10; ++i) {
requester.send("Hello");
}
rrworker: Request-reply worker in C rrworker:C 语言实现的请求-应答工作线程
// Hello World worker
// Connects REP socket to tcp://localhost:5560
// Expects "Hello" from client, replies with "World"
#include"zhelpers.h"#include<unistd.h>intmain (void)
{
void *context = zmq_ctx_new ();
// Socket to talk to clients
void *responder = zmq_socket (context, ZMQ_REP);
zmq_connect (responder, "tcp://localhost:5560");
while (1) {
// Wait for next request from client
char *string = s_recv (responder);
printf ("Received request: [%s]\n", string);
free (string);
// Do some 'work'
sleep (1);
// Send reply back to client
s_send (responder, "World");
}
// We never get here, but clean up anyhow
zmq_close (responder);
zmq_ctx_destroy (context);
return 0;
}
rrworker: Request-reply worker in C++ rrworker:C++ 中的请求-应答工作线程
//
// Request-reply service in C++
// Connects REP socket to tcp://localhost:5560
// Expects "Hello" from client, replies with "World"
//
#include <zmq.hpp>
#include <chrono>
#include <thread>

int main(int argc, char* argv[])
{
zmq::context_t context{1};
zmq::socket_t responder{context, zmq::socket_type::rep};
responder.connect("tcp://localhost:5560");
while (true) {
// Wait for next request from client
zmq::message_t request_msg;
auto recv_result = responder.recv(request_msg, zmq::recv_flags::none);
std::string string = request_msg.to_string();
std::cout << "Received request: " << string << std::endl;
// Do some 'work'
std::this_thread::sleep_for(std::chrono::seconds(1));
// Send reply back to client
zmq::message_t reply_msg{std::string{"World"}};
responder.send(reply_msg, zmq::send_flags::none);
}
}
rrworker: Request-reply worker in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void RRWorker(string[] args)
{
//
// Hello World worker
// Connects REP socket to tcp://127.0.0.1:5560
// Expects "Hello" from client, replies with "World"
//
// Author: metadings
//
if (args == null || args.Length < 2)
{
Console.WriteLine();
Console.WriteLine("Usage: ./{0} RRWorker [Name] [Endpoint]", AppDomain.CurrentDomain.FriendlyName);
Console.WriteLine();
Console.WriteLine(" Name Your Name");
Console.WriteLine(" Endpoint Where RRWorker should connect to.");
Console.WriteLine(" Default is tcp://127.0.0.1:5560");
Console.WriteLine();
if (args.Length < 1) {
args = new string[] { "World", "tcp://127.0.0.1:5560" };
} else {
args = new string[] { args[0], "tcp://127.0.0.1:5560" };
}
}
string name = args[0];
string endpoint = args[1];
// Socket to talk to clients
using (var context = new ZContext())
using (var responder = new ZSocket(context, ZSocketType.REP))
{
responder.Connect(endpoint);
while (true)
{
// Wait for next request from client
using (ZFrame request = responder.ReceiveFrame())
{
Console.Write("{0} ", request.ReadString());
// Do some 'work'
Thread.Sleep(1);
// Send reply back to client
Console.WriteLine("{0}... ", name);
responder.Send(new ZFrame(name));
}
}
}
}
}
}
rrworker: Request-reply worker in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Hello World server in Common Lisp
;;; Binds REP socket to tcp://*:5555
;;; Expects "Hello" from client, replies with "World"
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.rrserver
  (:nicknames #:rrserver)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.rrserver)

(defun main ()
(zmq:with-context (context1)
;; Socket to talk to clients
(zmq:with-socket (respondercontextzmq:rep)
(zmq:connectresponder"tcp://localhost:5560")
(loop
(let ((request (make-instance'zmq:msg)))
;; Wait for next request from client
(zmq:recvresponderrequest)
(message"Received request: [~A]~%"
(zmq:msg-data-as-stringrequest))
;; Do some 'work'
(sleep1)
;; Send reply back to client
(let ((reply (make-instance'zmq:msg:data"World")))
(zmq:sendresponderreply))))))
(cleanup))
rrworker: Request-reply worker in Delphi
program rrserver;
//
// Hello World server
// Connects REP socket to tcp://*:5560
// Expects "Hello" from client, replies with "World"
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
responder: TZMQSocket;
s: Utf8String;
begin
context := TZMQContext.Create;
// Socket to talk to clients
responder := Context.Socket( stRep );
responder.connect( 'tcp://localhost:5560' );
while True do
begin
// Wait for next request from client
responder.recv( s );
Writeln( Format( 'Received request: [%s]', [ s ] ) );
// Do some 'work'
sleep( 1 );
// Send reply back to client
responder.send( 'World' );
end;
// We never get here but clean up anyhow
responder.Free;
context.Free;
end.
rrworker: Request-reply worker in Erlang
#! /usr/bin/env escript
%%
%% Hello World server
%% Connects REP socket to tcp://*:5560
%% Expects "Hello" from client, replies with "World"
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to talk to clients
{ok, Responder} = erlzmq:socket(Context, rep),
ok = erlzmq:connect(Responder, "tcp://localhost:5560"),
loop(Responder),
%% We never get here but clean up anyhow
ok = erlzmq:close(Responder),
ok = erlzmq:term(Context).
loop(Socket) ->
%% Wait for next request from client
{ok, Req} = erlzmq:recv(Socket),
io:format("Received request: [~s]~n", [Req]),
%% Do some 'work'
timer:sleep(1000),
%% Send reply back to client
ok = erlzmq:send(Socket, <<"World">>),
loop(Socket).
rrworker: Request-reply worker in Elixir
defmodule Rrworker do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:32
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, responder} = :erlzmq.socket(context, :rep)
# :ok = :erlzmq.connect(responder, 'tcp://*:5560')
:ok = :erlzmq.connect(responder, 'tcp://localhost:5560')
loop(responder)
:ok = :erlzmq.close(responder)
:ok = :erlzmq.term(context)
end
def loop(socket) do
{:ok, req} = :erlzmq.recv(socket)
:io.format('Received request: [~s]~n', [req])
:timer.sleep(1000)
:ok = :erlzmq.send(socket, "World")
loop(socket)
end
end
Rrworker.main()
rrworker: Request-reply worker in F#
(*
Hello World server
Connects REP socket to tcp://*:5560
Expects "Hello" from client, replies with "World"
*)
#r @"bin/fszmq.dll"
open fszmq
open fszmq.Context
open fszmq.Socket
#load "zhelpers.fs"
let main () =
use context = new Context(1)
// socket to talk to clients
use responder = rep context
"tcp://localhost:5560" |> connect responder
while true do
// wait for next request from client
let message = s_recv responder
printfn "Received request: [%s]" message
// do some 'work'
sleep 1
// send reply back to client
"World" |> s_send responder
// we never get here but clean up anyhow
EXIT_SUCCESS
main ()
rrworker: Request-reply worker in Go
// Hello World server
// Connects REP socket to tcp://*:5560
// Expects "Hello" from client, replies with "World"
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq""time"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Socket to talk to clients
responder, _ := context.NewSocket(zmq.REP)
defer responder.Close()
responder.Connect("tcp://localhost:5560")
for {
// Wait for next request from client
request, _ := responder.Recv(0)
fmt.Printf("Received request: [%s]\n", request)
// Do some 'work'
time.Sleep(1 * time.Second)
// Send reply back to client
responder.Send([]byte("World"), 0)
}
}
rrworker: Request-reply worker in Haskell
{-# LANGUAGE OverloadedStrings #-}
-- |
-- A worker that simulates some work with a timeout
-- And send back "World"
-- Connect REP socket to tcp://*:5560
-- Expects "Hello" from client, replies with "World"
module Main where

import System.ZMQ4.Monadic
import Control.Monad (forever)
import Data.ByteString.Char8 (unpack)
import Control.Concurrent (threadDelay)
import Text.Printf

main :: IO ()
main =
runZMQ $ do
responder <- socket Rep
connect responder "tcp://localhost:5560"
forever $ do
receive responder >>= liftIO . printf "Received request: [%s]\n" . unpack
-- Simulate doing some 'work' for 1 second
liftIO $ threadDelay (1 * 1000 * 1000)
send responder []"World"
rrworker: Request-reply worker in Haxe
package ;
import haxe.io.Bytes;
import haxe.Stack;
import neko.Lib;
import neko.Sys;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQException;
import org.zeromq.ZMQSocket;
/**
* Hello World server in Haxe
* Binds REP to tcp://*:5560
* Expects "Hello" from client, replies with "World"
* Use with RrClient.hx and RrBroker.hx
*
 */
class RrServer
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** RrServer (see: http://zguide.zeromq.org/page:all#A-Request-Reply-Broker)");
// Socket to talk to clients
var responder:ZMQSocket = context.socket(ZMQ_REP);
responder.connect("tcp://localhost:5560");
Lib.println("Launch and connect server.");
ZMQ.catchSignals();
while (true) {
try {
// Wait for next request from client
var request:Bytes = responder.recvMsg();
trace ("Received request:" + request.toString());
// Do some work
Sys.sleep(1);
// Send reply back to client
responder.sendMsg(Bytes.ofString("World"));
} catch (e:ZMQException) {
if (ZMQ.isInterrupted()) {
break;
}
// Handle other errors
trace("ZMQException #:" + e.errNo + ", str:" + e.str());
trace (Stack.toString(Stack.exceptionStack()));
}
}
responder.close();
context.term();
}
}
rrworker: Request-reply worker in Java
package guide;

import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
// Hello World worker
// Connects REP socket to tcp://*:5560
// Expects "Hello" from client, replies with "World"
public class rrworker
{
public static void main(String[] args) throws Exception
{
try (ZContext context = new ZContext()) {
// Socket to talk to server
Socket responder = context.createSocket(SocketType.REP);
responder.connect("tcp://localhost:5560");
while (!Thread.currentThread().isInterrupted()) {
// Wait for next request from client
String string = responder.recvStr(0);
System.out.printf("Received request: [%s]\n", string);
// Do some 'work'
Thread.sleep(1000);
// Send reply back to client
responder.send("World");
}
}
}
}
rrworker: Request-reply worker in Lua
--
--  Hello World server
--  Connects REP socket to tcp://*:5560
--  Expects "Hello" from client, replies with "World"
--
--  Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
local context = zmq.init(1)
-- Socket to talk to clients
local responder = context:socket(zmq.REP)
responder:connect("tcp://localhost:5560")
while true do
    -- Wait for next request from client
    local msg = responder:recv()
printf ("Received request: [%s]\n", msg)
-- Do some 'work'
s_sleep (1000)
-- Send reply back to client
responder:send("World")
end
-- We never get here but clean up anyhow
responder:close()
context:term()
rrworker: Request-reply worker in Node.js
// Hello World server in Node.js
// Connects REP socket to tcp://*:5560
// Expects "Hello" from client, replies with "World"
var zmq = require('zeromq')
, responder = zmq.socket('rep');
responder.connect('tcp://localhost:5560');
responder.on('message', function(msg) {
console.log('received request:', msg.toString());
setTimeout(function() {
responder.send("World");
}, 1000);
});
# Hello world worker in Perl# Connects REP socket to tcp://localhost:5560# Expects "Hello from client, replies with "World"usestrict;
usewarnings;
usev5.10;
useZMQ::FFI;
useZMQ::FFI::Constantsqw(ZMQ_REP);
my$context = ZMQ::FFI->new();
# Socket to talk to clientsmy$responder = $context->socket(ZMQ_REP);
$responder->connect('tcp://localhost:5560');
while (1) {
# Wait for next request from clientmy$string = $responder->recv();
say "Received request: [$string]";
# Do some 'work'sleep1;
# Send reply back to client$responder->send("World");
}
rrworker: Request-reply worker in PHP
<?php
/*
 * Hello World server
 * Connects REP socket to tcp://*:5560
 * Expects "Hello" from client, replies with "World"
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */
$context = new ZMQContext();
// Socket to talk to clients
$responder = new ZMQSocket($context, ZMQ::SOCKET_REP);
$responder->connect("tcp://localhost:5560");
while (true) {
// Wait for next request from client
$string = $responder->recv();
printf ("Received request: [%s]%s", $string, PHP_EOL);
// Do some 'work'
sleep(1);
// Send reply back to client
$responder->send("World");
}
rrworker: Request-reply worker in Python
#
# Request-reply service in Python
# Connects REP socket to tcp://localhost:5560
# Expects "Hello" from client, replies with "World"
#
import zmq
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.connect("tcp://localhost:5560")
while True:
    message = socket.recv()
    print(f"Received request: {message}")
    socket.send(b"World")
rrbroker: Request-reply broker in C rrbroker:C 语言实现的请求-应答代理
// Simple request-reply broker
#include"zhelpers.h"intmain (void)
{
// Prepare our context and sockets
void *context = zmq_ctx_new ();
void *frontend = zmq_socket (context, ZMQ_ROUTER);
void *backend = zmq_socket (context, ZMQ_DEALER);
zmq_bind (frontend, "tcp://*:5559");
zmq_bind (backend, "tcp://*:5560");
// Initialize poll set
zmq_pollitem_t items [] = {
{ frontend, 0, ZMQ_POLLIN, 0 },
{ backend, 0, ZMQ_POLLIN, 0 }
};
// Switch messages between sockets
while (1) {
zmq_msg_t message;
zmq_poll (items, 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
while (1) {
// Process all parts of the message
zmq_msg_init (&message);
zmq_msg_recv (&message, frontend, 0);
int more = zmq_msg_more (&message);
zmq_msg_send (&message, backend, more? ZMQ_SNDMORE: 0);
zmq_msg_close (&message);
if (!more)
break; // Last message part
}
}
if (items [1].revents & ZMQ_POLLIN) {
while (1) {
// Process all parts of the message
zmq_msg_init (&message);
zmq_msg_recv (&message, backend, 0);
int more = zmq_msg_more (&message);
zmq_msg_send (&message, frontend, more? ZMQ_SNDMORE: 0);
zmq_msg_close (&message);
if (!more)
break; // Last message part
}
}
}
// We never get here, but clean up anyhow
zmq_close (frontend);
zmq_close (backend);
zmq_ctx_destroy (context);
return 0;
}
rrbroker: Request-reply broker in C++ rrbroker:C++中的请求-响应代理
//
// Simple request-reply broker in C++
//
#include"zhelpers.hpp"intmain (int argc, char *argv[])
{
// Prepare our context and sockets
zmq::context_t context(1);
zmq::socket_t frontend (context, ZMQ_ROUTER);
zmq::socket_t backend (context, ZMQ_DEALER);
frontend.bind("tcp://*:5559");
backend.bind("tcp://*:5560");
// Initialize poll set
zmq::pollitem_t items [] = {
{ frontend, 0, ZMQ_POLLIN, 0 },
{ backend, 0, ZMQ_POLLIN, 0 }
};
// Switch messages between sockets
while (1) {
zmq::message_t message;
int more; // Multipart detection
zmq::poll (&items [0], 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
while (1) {
// Process all parts of the message
frontend.recv(&message);
// frontend.recv(message, zmq::recv_flags::none); // new syntax
size_t more_size = sizeof (more);
frontend.getsockopt(ZMQ_RCVMORE, &more, &more_size);
backend.send(message, more? ZMQ_SNDMORE: 0);
// more = frontend.get(zmq::sockopt::rcvmore); // new syntax
// backend.send(message, more? zmq::send_flags::sndmore : zmq::send_flags::none);
if (!more)
break; // Last message part
}
}
if (items [1].revents & ZMQ_POLLIN) {
while (1) {
// Process all parts of the message
backend.recv(&message);
size_t more_size = sizeof (more);
backend.getsockopt(ZMQ_RCVMORE, &more, &more_size);
frontend.send(message, more? ZMQ_SNDMORE: 0);
// more = backend.get(zmq::sockopt::rcvmore); // new syntax
//frontend.send(message, more? zmq::send_flags::sndmore : zmq::send_flags::none);
if (!more)
break; // Last message part
}
}
}
return 0;
}
rrbroker: Request-reply broker in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void RRBroker(string[] args)
{
//
// Simple request-reply broker
//
// Author: metadings
//
// Prepare our context and sockets
using (var ctx = new ZContext())
using (var frontend = new ZSocket(ctx, ZSocketType.ROUTER))
using (var backend = new ZSocket(ctx, ZSocketType.DEALER))
{
frontend.Bind("tcp://*:5559");
backend.Bind("tcp://*:5560");
// Initialize poll set
var poll = ZPollItem.CreateReceiver();
// Switch messages between sockets
ZError error;
ZMessage message;
while (true)
{
if (frontend.PollIn(poll, out message, out error, TimeSpan.FromMilliseconds(64)))
{
// Process all parts of the message
Console_WriteZMessage("frontend", 2, message);
backend.Send(message);
}
else
{
if (error == ZError.ETERM)
return; // Interrupted
if (error != ZError.EAGAIN)
throw new ZException(error);
}
if (backend.PollIn(poll, out message, out error, TimeSpan.FromMilliseconds(64)))
{
// Process all parts of the message
Console_WriteZMessage(" backend", 2, message);
frontend.Send(message);
}
else
{
if (error == ZError.ETERM)
return; // Interrupted
if (error != ZError.EAGAIN)
throw new ZException(error);
}
}
}
}
}
}
rrbroker: Request-reply broker in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Simple request-reply broker in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.rrbroker
  (:nicknames #:rrbroker)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.rrbroker)

(defun main ()
;; Prepare our context and sockets
(zmq:with-context (context1)
(zmq:with-socket (frontendcontextzmq:router)
(zmq:with-socket (backendcontextzmq:dealer)
(zmq:bindfrontend"tcp://*:5559")
(zmq:bindbackend"tcp://*:5560")
;; Initialize poll set
(zmq:with-polls ((items . ((frontend . zmq:pollin)
(backend . zmq:pollin))))
;; Switch messages between sockets
(loop
(let ((revents (zmq:pollitems)))
(when (= (firstrevents) zmq:pollin)
(loop;; Process all parts of the message
(let ((message (make-instance'zmq:msg)))
(zmq:recvfrontendmessage)
(if (not (zerop (zmq:getsockoptfrontendzmq:rcvmore)))
(zmq:sendbackendmessagezmq:sndmore)
(progn
(zmq:sendbackendmessage0)
;; Last message part
(return))))))
(when (= (secondrevents) zmq:pollin)
(loop;; Process all parts of the message
(let ((message (make-instance'zmq:msg)))
(zmq:recvbackendmessage)
(if (not (zerop (zmq:getsockoptbackendzmq:rcvmore)))
(zmq:sendfrontendmessagezmq:sndmore)
(progn
(zmq:sendfrontendmessage0)
;; Last message part
(return))))))))))))
(cleanup))
rrbroker: Request-reply broker in Delphi
program rrbroker;
//
// Simple request-reply broker
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
frontend,
backend: TZMQSocket;
poller: TZMQPoller;
msg: TZMQFrame;
more: Boolean;
begin
// Prepare our context and sockets
context := TZMQContext.Create;
frontend := Context.Socket( stRouter );
backend := Context.Socket( stDealer );
frontend.bind( 'tcp://*:5559' );
backend.bind( 'tcp://*:5560' );
// Initialize poll set
poller := TZMQPoller.Create( true );
poller.register( frontend, [pePollIn] );
poller.register( backend, [pePollIn] );
// Switch messages between sockets
while True do
begin
poller.poll;
more := true;
if pePollIn in poller.PollItem[0].revents then
while more do
begin
// Process all parts of the message
msg := TZMQFrame.Create;
frontend.recv( msg );
more := frontend.rcvMore;
if more then
backend.send( msg, [sfSndMore] )
else
backend.send( msg, [] );
end;
if pePollIn in poller.PollItem[1].revents then
while more do
begin
// Process all parts of the message
msg := TZMQFrame.Create;
backend.recv( msg );
more := backend.rcvMore;
if more then
frontend.send( msg, [sfSndMore] )
else
frontend.send( msg, [] );
end;
end;
// We never get here but clean up anyhow
poller.Free;
frontend.Free;
backend.Free;
context.Free;
end.
rrbroker: Request-reply broker in Erlang
#! /usr/bin/env escript
%%
%% Simple request-reply broker
%%
main(_) ->
%% Prepare our context and sockets
{ok, Context} = erlzmq:context(),
{ok, Frontend} = erlzmq:socket(Context, [router, {active, true}]),
{ok, Backend} = erlzmq:socket(Context, [dealer, {active, true}]),
ok = erlzmq:bind(Frontend, "tcp://*:5559"),
ok = erlzmq:bind(Backend, "tcp://*:5560"),
%% Switch messages between sockets
loop(Frontend, Backend),
%% We never get here but clean up anyhow
ok = erlzmq:close(Frontend),
ok = erlzmq:close(Backend),
ok = erlzmq:term(Context).
loop(Frontend, Backend) ->
receive
{zmq, Frontend, Msg, Flags} ->
caseproplists:get_bool(rcvmore, Flags) of
true ->
erlzmq:send(Backend, Msg, [sndmore]);
false ->
erlzmq:send(Backend, Msg)
end;
{zmq, Backend, Msg, Flags} ->
caseproplists:get_bool(rcvmore, Flags) of
true ->
erlzmq:send(Frontend, Msg, [sndmore]);
false ->
erlzmq:send(Frontend, Msg)
endend,
loop(Frontend, Backend).
rrbroker: Request-reply broker in Elixir
defmodule Rrbroker do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:31
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, frontend} = :erlzmq.socket(context, [:router, {:active, true}])
{:ok, backend} = :erlzmq.socket(context, [:dealer, {:active, true}])
:ok = :erlzmq.bind(frontend, 'tcp://*:5559')
:ok = :erlzmq.bind(backend, 'tcp://*:5560')
loop(frontend, backend)
:ok = :erlzmq.close(frontend)
:ok = :erlzmq.close(backend)
:ok = :erlzmq.term(context)
end
def loop(frontend, backend) do
receive do
{:zmq, ^frontend, msg, flags} ->
case(:proplists.get_bool(:rcvmore, flags)) do
true ->
:erlzmq.send(backend, msg, [:sndmore])
false ->
:erlzmq.send(backend, msg)
end
{:zmq, ^backend, msg, flags} ->
case(:proplists.get_bool(:rcvmore, flags)) do
true ->
:erlzmq.send(frontend, msg, [:sndmore])
false ->
:erlzmq.send(frontend, msg)
end
end
loop(frontend, backend)
end
end
Rrbroker.main()
rrbroker: Request-reply broker in F#
(*
Simple request-reply broker
*)
#r @"bin/fszmq.dll"
open fszmq
open fszmq.Context
open fszmq.Polling
open fszmq.Socket
#load "zhelpers.fs"
let main () =
// prepare our context and sockets
use context = new Context(1)
use frontend = route context
use backend = deal context
"tcp://*:5559" |> bind frontend
"tcp://*:5560" |> bind backend
// initialize poll set
let items = [Poll(ZMQ.POLLIN,frontend,fun s -> s >|< backend )
Poll(ZMQ.POLLIN,backend ,fun s -> s >|< frontend)]
//NOTE: the poll item callbacks above use the transfer operator (>|<).
// fs-zmq defines this operator as a convenience for transferring
// all parts of a multi-part message from one socket to another.
// for a lengthier, but more obvious alternative (which more
// closely matches the C version of the guide), see wuproxy.fsx
// switch messages between sockets
while true do items |> poll -1L |> ignore
// we never get here but clean up anyhow
EXIT_SUCCESS
main ()
rrbroker: Request-reply broker in Lua
--
--  Simple request-reply broker
--
--  Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zmq.poller"
require"zhelpers"
-- Prepare our context and sockets
local context = zmq.init(1)
local frontend = context:socket(zmq.ROUTER)
local backend = context:socket(zmq.DEALER)
frontend:bind("tcp://*:5559")
backend:bind("tcp://*:5560")
-- Switch messages between sockets
local poller = zmq.poller(2)
poller:add(frontend, zmq.POLLIN, function()
    while true do
        -- Process all parts of the message
        local msg = frontend:recv()
        if (frontend:getopt(zmq.RCVMORE) == 1) then
            backend:send(msg, zmq.SNDMORE)
        else
            backend:send(msg, 0)
            break  -- Last message part
        end
    end
end)
poller:add(backend, zmq.POLLIN, function()
    while true do
        -- Process all parts of the message
        local msg = backend:recv()
        if (backend:getopt(zmq.RCVMORE) == 1) then
            frontend:send(msg, zmq.SNDMORE)
        else
            frontend:send(msg, 0)
            break  -- Last message part
        end
    end
end)
-- start poller's event loop
poller:start()
-- We never get here but clean up anyhow
frontend:close()
backend:close()
context:term()
rrbroker: Request-reply broker in Node.js
// Simple request-reply broker in Node.js
var zmq = require('zeromq')
, frontend = zmq.socket('router')
, backend = zmq.socket('dealer');
frontend.bindSync('tcp://*:5559');
backend.bindSync('tcp://*:5560');
frontend.on('message', function() {
// Note that separate message parts come as function arguments.
var args = Array.apply(null, arguments);
// Pass array of strings/buffers to send multipart messages.
backend.send(args);
});
backend.on('message', function() {
var args = Array.apply(null, arguments);
frontend.send(args);
});
Using a request-reply broker makes your client/server architectures easier to scale because clients don’t see workers, and workers don’t see clients. The only static node is the broker in the middle. 使用请求-响应代理使您的客户端/服务器架构更易于扩展,因为客户端看不到工作者,工作者也看不到客户端。唯一的静态节点是中间的代理。
You may wonder how a response is routed back to the right client. The ROUTER socket wraps each request in an envelope that records which client it came from before passing it to the DEALER; the reply travels back with that same envelope, and the ROUTER uses it to deliver the response to the originating client. 你可能会想知道响应是如何路由回正确的客户端的。Router 使用信封(envelope)来携带包含客户端信息的消息发送给 Dealer,Dealer 的响应也会包含信封,这个信封将用于将响应映射回客户端。
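As a sketch of what that envelope looks like (the identity value shown is invented; ZeroMQ assigns one per client connection):

//  Frames the broker's ROUTER socket receives for one request:
//  Frame 1: client identity (e.g. 0x006B8B4567) - added by ROUTER on receive
//  Frame 2: empty delimiter frame               - added by the REQ socket
//  Frame 3: "Hello"                             - the request body
//  The broker forwards all frames to the DEALER unchanged; the reply comes
//  back with the same identity frame, which the ROUTER strips off and uses
//  to pick the connection of the originating client.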
It turns out that the core loop in the previous section’s rrbroker is very useful, and reusable. It lets us build pub-sub forwarders and shared queues and other little intermediaries with very little effort. ZeroMQ wraps this up in a single method, zmq_proxy(): 事实证明,上一节中 rrbroker 的核心循环非常有用且可重用。它让我们能够轻松构建发布-订阅转发器、共享队列以及其他小型中间件。ZeroMQ 将其封装在一个名为 zmq_proxy() 的方法中:
zmq_proxy (frontend, backend, capture);
The two (or three sockets, if we want to capture data) must be properly connected, bound, and configured. When we call the zmq_proxy method, it’s exactly like starting the main loop of rrbroker. Let’s rewrite the request-reply broker to call zmq_proxy, and re-badge this as an expensive-sounding “message queue” (people have charged houses for code that did less): 这两个(或者如果我们想捕获数据则是三个)套接字必须正确连接、绑定和配置。当我们调用 zmq_proxy 方法时,就像启动 rrbroker 的主循环一样。让我们重写请求-响应代理以调用 zmq_proxy ,并将其重新标记为听起来很昂贵的“消息队列”(有人为功能更少的代码收取了高价):
msgqueue: Message queue broker in C msgqueue:用 C 语言编写的消息队列代理
// Simple message queuing broker
// Same as request-reply broker but using shared queue proxy
#include"zhelpers.h"intmain (void)
{
void *context = zmq_ctx_new ();
// Socket facing clients
void *frontend = zmq_socket (context, ZMQ_ROUTER);
int rc = zmq_bind (frontend, "tcp://*:5559");
assert (rc == 0);
// Socket facing services
void *backend = zmq_socket (context, ZMQ_DEALER);
rc = zmq_bind (backend, "tcp://*:5560");
assert (rc == 0);
// Start the proxy
zmq_proxy (frontend, backend, NULL);
// We never get here...
zmq_close (frontend);
zmq_close (backend);
zmq_ctx_destroy (context);
return 0;
}
msgqueue: Message queue broker in C++ msgqueue:C++中的消息队列代理
//
// Simple message queuing broker in C++
// Same as request-reply broker but using QUEUE device
//
#include"zhelpers.hpp"intmain (int argc, char *argv[])
{
zmq::context_t context(1);
// Socket facing clients
zmq::socket_t frontend (context, ZMQ_ROUTER);
frontend.bind("tcp://*:5559");
// Socket facing services
zmq::socket_t backend (context, ZMQ_DEALER);
backend.bind("tcp://*:5560");
// Start the proxy
zmq::proxy(static_cast<void*>(frontend),
static_cast<void*>(backend),
nullptr);
return 0;
}
msgqueue: Message queue broker in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void MsgQueue(string[] args)
{
//
// Simple message queuing broker
// Same as request-reply broker but using QUEUE device
//
// Author: metadings
//
// Socket facing clients and
// Socket facing services
using (var context = new ZContext())
using (var frontend = new ZSocket(context, ZSocketType.ROUTER))
using (var backend = new ZSocket(context, ZSocketType.DEALER))
{
frontend.Bind("tcp://*:5559");
backend.Bind("tcp://*:5560");
// Start the proxy
ZContext.Proxy(frontend, backend);
}
}
}
}
msgqueue: Message queue broker in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Simple message queuing broker in Common Lisp
;;; Same as request-reply broker but using QUEUE device
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;

(defpackage #:zguide.msgqueue
  (:nicknames #:msgqueue)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.msgqueue)

(defun main ()
  (zmq:with-context (context 1)
    ;; Socket facing clients
    (zmq:with-socket (frontend context zmq:router)
      (zmq:bind frontend "tcp://*:5559")
      ;; Socket facing services
      (zmq:with-socket (backend context zmq:dealer)
        (zmq:bind backend "tcp://*:5560")
        ;; Start built-in device
        (zmq:device zmq:queue frontend backend))))
  (cleanup))
msgqueue: Message queue broker in Delphi
program msgqueue;
//
// Simple message queuing broker
// Same as request-reply broker but using shared queue proxy
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
frontend,
backend: TZMQSocket;
begin
context := TZMQContext.Create;
// Socket facing clients
frontend := Context.Socket( stRouter );
frontend.bind( 'tcp://*:5559' );
// Socket facing services
backend := Context.Socket( stDealer );
backend.bind( 'tcp://*:5560' );
// Start the proxy
ZMQProxy( frontend, backend, nil );
// We never get here...
frontend.Free;
backend.Free;
context.Free;
end.
msgqueue: Message queue broker in Erlang
#!/usr/bin/env escript
%%
%% Simple message queuing broker
%% Same as request-reply broker but using QUEUE device
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket facing clients
{ok, Frontend} = erlzmq:socket(Context, [router, {active, true}]),
ok = erlzmq:bind(Frontend, "tcp://*:5559"),
%% Socket facing services
{ok, Backend} = erlzmq:socket(Context, [dealer, {active, true}]),
ok = erlzmq:bind(Backend, "tcp://*:5560"),
%% Start built-in device
erlzmq_device:queue(Frontend, Backend),
%% We never get here...
ok = erlzmq:close(Frontend),
ok = erlzmq:close(Backend),
ok = erlzmq:term(Context).
msgqueue: Message queue broker in Elixir
defmodule Msgqueue do
  @moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:26
"""
def main(_) do
{:ok, context} = :erlzmq.context()
{:ok, frontend} = :erlzmq.socket(context, [:router, {:active, true}])
:ok = :erlzmq.bind(frontend, 'tcp://*:5559')
{:ok, backend} = :erlzmq.socket(context, [:dealer, {:active, true}])
:ok = :erlzmq.bind(backend, 'tcp://*:5560')
:erlzmq_device.queue(frontend, backend)
:ok = :erlzmq.close(frontend)
:ok = :erlzmq.close(backend)
:ok = :erlzmq.term(context)
end
end
msgqueue: Message queue broker in F#
(*
Simple message queuing broker
Same as request-reply broker but using QUEUE device
*)
#r @"bin/fszmq.dll"
#r @"bin/fszmq.devices.dll"
open fszmq
open fszmq.Context
open fszmq.Socket
#load "zhelpers.fs"
let main () =
use context = new Context(1)
// socket facing clients
use frontend = route context
"tcp://*:5559" |> bind frontend
// socket facing services
use backend = deal context
"tcp://*:5560" |> bind backend
// start built-in device
(frontend,backend) |> Devices.queue |> ignore
// we never get here...
EXIT_SUCCESS
main ()
If you’re like most ZeroMQ users, at this stage your mind is starting to think, “What kind of evil stuff can I do if I plug random socket types into the proxy?” The short answer is: try it and work out what is happening. In practice, you would usually stick to ROUTER/DEALER, XSUB/XPUB, or PULL/PUSH. 如果你和大多数 ZeroMQ 用户一样,这个阶段你的脑海中可能会开始思考:“如果我把随机的套接字类型插入代理,会发生什么邪恶的事情?”简短的回答是:试试看,弄清楚发生了什么。实际上,你通常会坚持使用 ROUTER/DEALER、XSUB/XPUB 或 PULL/PUSH。
A frequent request from ZeroMQ users is, “How do I connect my ZeroMQ network with technology X?” where X is some other networking or messaging technology. ZeroMQ 用户经常提出的一个请求是:“我如何将我的 ZeroMQ 网络与技术 X 连接?”其中 X 是其他某种网络或消息传递技术。
The simple answer is to build a bridge. A bridge is a small application that speaks one protocol at one socket, and converts to/from a second protocol at another socket. A protocol interpreter, if you like. A common bridging problem in ZeroMQ is to bridge two transports or networks. 简单的答案是构建一个桥接程序。桥接程序是一个小型应用程序,在一个套接字上使用一种协议通信,并在另一个套接字上转换为另一种协议。可以把它看作是协议解释器。ZeroMQ 中一个常见的桥接问题是桥接两种传输或网络。
As an example, we’re going to write a little proxy that sits in between a publisher and a set of subscribers, bridging two networks. The frontend socket (SUB) faces the internal network where the weather server is sitting, and the backend (PUB) faces subscribers on the external network. It subscribes to the weather service on the frontend socket, and republishes its data on the backend socket. 作为示例,我们将编写一个小型代理,位于发布者和一组订阅者之间,桥接两个网络。前端套接字(SUB)面向内部网络,天气服务器就在该网络中,后端套接字(PUB)面向外部网络的订阅者。它在前端套接字上订阅天气服务,并在后端套接字上重新发布其数据。
// Weather proxy device
#include"zhelpers.h"intmain (void)
{
void *context = zmq_ctx_new ();
// This is where the weather server sits
void *frontend = zmq_socket (context, ZMQ_XSUB);
zmq_connect (frontend, "tcp://192.168.55.210:5556");
// This is our public endpoint for subscribers
void *backend = zmq_socket (context, ZMQ_XPUB);
zmq_bind (backend, "tcp://10.1.1.0:8100");
// Run the proxy until the user interrupts us
zmq_proxy (frontend, backend, NULL);
zmq_close (frontend);
zmq_close (backend);
zmq_ctx_destroy (context);
return 0;
}
wuproxy: Weather update proxy in C++ wuproxy:用 C++编写的天气更新代理
//
// Weather proxy device C++
//
#include"zhelpers.hpp"intmain (int argc, char *argv[])
{
zmq::context_t context(1);
// This is where the weather server sits
zmq::socket_t frontend(context, ZMQ_SUB);
frontend.connect("tcp://192.168.55.210:5556");
// This is our public endpoint for subscribers
zmq::socket_t backend (context, ZMQ_PUB);
backend.bind("tcp://10.1.1.0:8100");
// Subscribe on everything
frontend.set(zmq::sockopt::subscribe, "");
// Shunt messages out to our own subscribers
while (1) {
while (1) {
zmq::message_t message;
int more;
size_t more_size = sizeof (more);
// Process all parts of the message
frontend.recv(&message);
frontend.getsockopt( ZMQ_RCVMORE, &more, &more_size);
backend.send(message, more? ZMQ_SNDMORE: 0);
if (!more)
break; // Last message part
}
}
return 0;
}
wuproxy: Weather update proxy in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.NetworkInformation;
using System.Net.Sockets;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void WUProxy(string[] args)
{
//
// Weather proxy device
//
// Author: metadings
//
using (var context = new ZContext())
using (var frontend = new ZSocket(context, ZSocketType.XSUB))
using (var backend = new ZSocket(context, ZSocketType.XPUB))
{
// Frontend is where the weather server sits
string localhost = "tcp://127.0.0.1:5556";
Console.WriteLine("I: Connecting to {0}", localhost);
frontend.Connect(localhost);
// Backend is our public endpoint for subscribers
foreach (IPAddress address in WUProxy_GetPublicIPs())
{
var tcpAddress = string.Format("tcp://{0}:8100", address);
Console.WriteLine("I: Binding on {0}", tcpAddress);
backend.Bind(tcpAddress);
var epgmAddress = string.Format("epgm://{0};239.192.1.1:8100", address);
Console.WriteLine("I: Binding on {0}", epgmAddress);
backend.Bind(epgmAddress);
}
using (var subscription = ZFrame.Create(1))
{
subscription.Write(new byte[] { 0x1 }, 0, 1);
backend.Send(subscription);
}
// Run the proxy until the user interrupts us
ZContext.Proxy(frontend, backend);
}
}
static IEnumerable<IPAddress> WUProxy_GetPublicIPs()
{
var list = new List<IPAddress>();
NetworkInterface[] ifaces = NetworkInterface.GetAllNetworkInterfaces();
foreach (NetworkInterface iface in ifaces)
{
if (iface.NetworkInterfaceType == NetworkInterfaceType.Loopback)
continue;
if (iface.OperationalStatus != OperationalStatus.Up)
continue;
var props = iface.GetIPProperties();
var addresses = props.UnicastAddresses;
foreach (UnicastIPAddressInformation address in addresses)
{
if (address.Address.AddressFamily == AddressFamily.InterNetwork)
list.Add(address.Address);
// if (address.Address.AddressFamily == AddressFamily.InterNetworkV6)
// list.Add(address.Address);
}
}
return list;
}
}
}
wuproxy: Weather update proxy in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Weather proxy device in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;

(defpackage #:zguide.wuproxy
  (:nicknames #:wuproxy)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.wuproxy)

(defun main ()
  (zmq:with-context (context 1)
    ;; This is where the weather server sits
    (zmq:with-socket (frontend context zmq:sub)
      (zmq:connect frontend "tcp://192.168.55.210:5556")
      ;; This is our public endpoint for subscribers
      (zmq:with-socket (backend context zmq:pub)
        (zmq:bind backend "tcp://10.1.1.0:8100")
        ;; Subscribe on everything
        (zmq:setsockopt frontend zmq:subscribe "")
        ;; Shunt messages out to our own subscribers
        (loop
          (loop ;; Process all parts of the message
            (let ((message (make-instance 'zmq:msg)))
              (zmq:recv frontend message)
              (if (not (zerop (zmq:getsockopt frontend zmq:rcvmore)))
                  (zmq:send backend message zmq:sndmore)
                  (progn
                    (zmq:send backend message 0)
                    ;; Last message part
                    (return)))))))))
  (cleanup))
wuproxy: Weather update proxy in Delphi
program wuproxy;
//
// Weather proxy device
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
frontend,
backend: TZMQSocket;
begin
context := TZMQContext.Create;
// This is where the weather server sits
frontend := Context.Socket( stXSub );
frontend.connect( 'tcp://192.168.55.210:5556' );
// This is our public endpoint for subscribers
backend := Context.Socket( stXPub );
backend.bind( 'tcp://10.1.1.0:8100' );
// Run the proxy until the user interrupts us
ZMQProxy( frontend, backend, nil );
frontend.Free;
backend.Free;
context.Free;
end.
wuproxy: Weather update proxy in Erlang
#! /usr/bin/env escript
%%
%% Weather proxy device
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% This is where the weather server sits
{ok, Frontend} = erlzmq:socket(Context, sub),
ok = erlzmq:connect(Frontend, "tcp://localhost:5556"),
%% This is our public endpoint for subscribers
{ok, Backend} = erlzmq:socket(Context, pub),
ok = erlzmq:bind(Backend, "tcp://*:8100"),
%% Subscribe on everything
ok = erlzmq:setsockopt(Frontend, subscribe, <<>>),
%% Shunt messages out to our own subscribers
loop(Frontend, Backend),
%% We don't actually get here but if we did, we'd shut down neatly
ok = erlzmq:close(Frontend),
ok = erlzmq:close(Backend),
ok = erlzmq:term(Context).
loop(Frontend, Backend) ->
{ok, Msg} = erlzmq:recv(Frontend),
case erlzmq:getsockopt(Frontend, rcvmore) of
{ok, true} -> erlzmq:send(Backend, Msg, [sndmore]);
{ok, false} -> erlzmq:send(Backend, Msg)
end,
loop(Frontend, Backend).
wuproxy: Weather update proxy in Elixir
defmodule Wuproxy do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:39
"""
def main(_) do
{:ok, context} = :erlzmq.context()
{:ok, frontend} = :erlzmq.socket(context, :sub)
:ok = :erlzmq.connect(frontend, 'tcp://localhost:5556')
{:ok, backend} = :erlzmq.socket(context, :pub)
:ok = :erlzmq.bind(backend, 'tcp://*:8100')
:ok = :erlzmq.setsockopt(frontend, :subscribe, <<>>)
loop(frontend, backend)
:ok = :erlzmq.close(frontend)
:ok = :erlzmq.close(backend)
:ok = :erlzmq.term(context)
end
def loop(frontend, backend) do
{:ok, msg} = :erlzmq.recv(frontend)
case(:erlzmq.getsockopt(frontend, :rcvmore)) do
{:ok, true} ->
:erlzmq.send(backend, msg, [:sndmore])
{:ok, false} ->
:erlzmq.send(backend, msg)
{:ok, 0} ->
:erlzmq.send(backend, msg)
end
loop(frontend, backend)
end
end
Wuproxy.main(:ok)
wuproxy: Weather update proxy in F#
(*
Weather proxy device
*)
#r @"bin/fszmq.dll"
open fszmq
open fszmq.Context
open fszmq.Socket
#load "zhelpers.fs"
let main () =
use context = new Context(1)
// this is where the weather server sits
use frontend = context |> sub
connect frontend "tcp://localhost:5556"
// this is our public endpoint for subscribers
use backend = context |> pub
bind backend "tcp://*:8100"
// subscribe on everything
subscribe frontend [""B]
// shunt messages out to our own subscribers
while true do
let more = ref true
while !more do
// process all parts of the message
let message = frontend |> recv
more := frontend |> recvMore
if !more then sendMore backend message |> ignore
else send backend message
//NOTE: fs-zmq contains other idioms (eg: sendAll,recvAll,transfer)
// which allow for more concise (and possibly more efficient)
// implementations of the previous loop...
// but this example translates most directly to its C cousin
// for a very concise alternative, see rrbroker.fsx
// we don't actually get here but if we did, we'd shut down neatly
EXIT_SUCCESS
main ()
// Weather proxy device
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
zmq "github.com/alecthomas/gozmq"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// This is where the weather server sits
frontend, _ := context.NewSocket(zmq.SUB)
defer frontend.Close()
frontend.Connect("tcp://localhost:5556")
// This is our public endpoint for subscribers
backend, _ := context.NewSocket(zmq.PUB)
defer backend.Close()
backend.Bind("tcp://*:8100")
// Subscribe on everything
frontend.SetSubscribe("")
// Shunt messages out to our own subscribers
for {
message, _ := frontend.Recv(0)
backend.Send(message, 0)
}
}
wuproxy: Weather update proxy in Haskell
-- Weather proxy device
module Main where
import System.ZMQ4.Monadic

main :: IO ()
main = runZMQ $ do
    frontend <- socket XSub                       -- This is where the weather service sits
    connect frontend "tcp://192.168.55.210:5556"
    backend <- socket XPub                        -- This is our public endpoint for subscribers
    bind backend "tcp://10.1.1.0:8100"
    proxy frontend backend Nothing                -- Run the proxy until the user interrupts us
wuproxy: Weather update proxy in Haxe
package ;
import haxe.io.Bytes;
import haxe.Stack;
import neko.Lib;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
import org.zeromq.ZMQException;
/**
* Weather proxy device.
*
* See: http://zguide.zeromq.org/page:all#A-Publish-Subscribe-Proxy-Server
*
* Use with WUClient and WUServer
 */
class WUProxy
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** WUProxy (see: http://zguide.zeromq.org/page:all#A-Publish-Subscribe-Proxy-Server)");
// This is where the weather service sits
var frontend:ZMQSocket = context.socket(ZMQ_SUB);
frontend.connect("tcp://localhost:5556");
// This is our public endpoint for subscribers
var backend:ZMQSocket = context.socket(ZMQ_PUB);
backend.bind("tcp://10.1.1.0:8100");
// Subscribe on everything
frontend.setsockopt(ZMQ_SUBSCRIBE, Bytes.ofString(""));
var more = false;
var msgBytes:Bytes;
ZMQ.catchSignals();
var stopped = false;
while (!stopped) {
try {
msgBytes = frontend.recvMsg();
more = frontend.hasReceiveMore();
// proxy it
backend.sendMsg(msgBytes, { if (more) SNDMORE else null; } );
if (!more) {
stopped = true;
}
} catch (e:ZMQException) {
if (ZMQ.isInterrupted()) {
stopped = true;
} else {
// Handle other errors
trace("ZMQException #:" + e.errNo + ", str:" + e.str());
trace (Stack.toString(Stack.exceptionStack()));
}
}
}
frontend.close();
backend.close();
context.term();
}
}
wuproxy: Weather update proxy in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
 * Weather proxy device.
 */
public class wuproxy
{
public static void main(String[] args)
{
// Prepare our context and sockets
try (ZContext context = new ZContext()) {
// This is where the weather server sits
Socket frontend = context.createSocket(SocketType.SUB);
frontend.connect("tcp://192.168.55.210:5556");
// This is our public endpoint for subscribers
Socket backend = context.createSocket(SocketType.PUB);
backend.bind("tcp://10.1.1.0:8100");
// Subscribe on everything
frontend.subscribe(ZMQ.SUBSCRIPTION_ALL);
// Run the proxy until the user interrupts us
ZMQ.proxy(frontend, backend, null);
}
}
}
--
--  Weather proxy device
--
--  Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"

local context = zmq.init(1)
-- This is where the weather server sits
local frontend = context:socket(zmq.SUB)
frontend:connect(arg[1] or "tcp://192.168.55.210:5556")
-- This is our public endpoint for subscribers
local backend = context:socket(zmq.PUB)
backend:bind(arg[2] or "tcp://10.1.1.0:8100")
-- Subscribe on everything
frontend:setopt(zmq.SUBSCRIBE, "")
-- Shunt messages out to our own subscribers
while true do
    while true do
        -- Process all parts of the message
        local message = frontend:recv()
        if frontend:getopt(zmq.RCVMORE) == 1 then
            backend:send(message, zmq.SNDMORE)
        else
            backend:send(message)
            break -- Last message part
        end
    end
end
-- We don't actually get here but if we did, we'd shut down neatly
frontend:close()
backend:close()
context:term()
wuproxy: Weather update proxy in Node.js
// Weather proxy device in Node.js
var zmq = require('zeromq')
, frontend = zmq.socket('sub')
, backend = zmq.socket('pub');
backend.bindSync("tcp://10.1.1.0:8100");
frontend.subscribe('');
frontend.connect("tcp://192.168.55.210:5556");
frontend.on('message', function() {
// all parts of a message come as function arguments
var args = Array.apply(null, arguments);
backend.send(args);
});
# Weather proxy device in Perl
use strict;
use warnings;
use v5.10;

use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_XSUB ZMQ_XPUB);

my $context = ZMQ::FFI->new();

# This is where the weather server sits
my $frontend = $context->socket(ZMQ_XSUB);
$frontend->connect('tcp://192.168.55.210:5556');

# This is our public endpoint for subscribers
my $backend = $context->socket(ZMQ_XPUB);
$backend->bind('tcp://10.1.1.0:8100');

# Run the proxy until the user interrupts us
$context->proxy($frontend, $backend);
wuproxy: Weather update proxy in PHP
<?php
/*
 * Weather proxy device
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */
$context = new ZMQContext();
// This is where the weather server sits
$frontend = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$frontend->connect("tcp://192.168.55.210:5556");
// This is our public endpoint for subscribers
$backend = new ZMQSocket($context, ZMQ::SOCKET_PUB);
$backend->bind("tcp://10.1.1.0:8100");
// Subscribe on everything
$frontend->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "");
// Shunt messages out to our own subscribers
while (true) {
while (true) {
// Process all parts of the message
$message = $frontend->recv();
$more = $frontend->getSockOpt(ZMQ::SOCKOPT_RCVMORE);
$backend->send($message, $more ? ZMQ::MODE_SNDMORE : 0);
if (!$more) {
break; // Last message part
}
}
}
wuproxy: Weather update proxy in Python
# Weather proxy device
#
# Author: Lev Givon <lev(at)columbia(dot)edu>

import zmq

context = zmq.Context()

# This is where the weather server sits
frontend = context.socket(zmq.SUB)
frontend.connect("tcp://192.168.55.210:5556")

# This is our public endpoint for subscribers
backend = context.socket(zmq.PUB)
backend.bind("tcp://10.1.1.0:8100")

# Subscribe on everything
frontend.setsockopt(zmq.SUBSCRIBE, b'')

# Shunt messages out to our own subscribers
while True:
    # Process all parts of the message
    message = frontend.recv_multipart()
    backend.send_multipart(message)
It looks very similar to the earlier proxy example, but the key part is that the frontend and backend sockets are on two different networks. We can use this model for example to connect a multicast network (pgm transport) to a tcp publisher. 它看起来与之前的代理示例非常相似,但关键部分是前端和后端套接字位于两个不同的网络上。我们可以使用此模型,例如将多播网络( pgm 传输)连接到 tcp 发布者。
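As a rough sketch of that idea (not one of the Guide's examples; the interface name and multicast group below are placeholders, and it assumes libzmq was built with PGM support), the same proxy pattern bridges a multicast frontend onto a plain TCP backend:
// Hypothetical bridge: republish an epgm multicast feed over tcp
#include <zmq.h>

int main (void)
{
    void *context = zmq_ctx_new ();
    //  Receive from an (e)pgm multicast group on the internal network
    void *frontend = zmq_socket (context, ZMQ_XSUB);
    zmq_connect (frontend, "epgm://eth0;239.192.1.1:5556");
    //  Republish over plain TCP for external subscribers
    void *backend = zmq_socket (context, ZMQ_XPUB);
    zmq_bind (backend, "tcp://*:8100");
    zmq_proxy (frontend, backend, NULL);
    zmq_close (frontend);
    zmq_close (backend);
    zmq_ctx_destroy (context);
    return 0;
}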
ZeroMQ’s error handling philosophy is a mix of fail-fast and resilience. Processes, we believe, should be as vulnerable as possible to internal errors, and as robust as possible against external attacks and errors. To give an analogy, a living cell will self-destruct if it detects a single internal error, yet it will resist attack from the outside by all means possible. ZeroMQ 的错误处理理念是快速失败与弹性的结合。我们认为,进程应尽可能对内部错误保持脆弱,同时对外部攻击和错误保持尽可能的强健。打个比方,活细胞如果检测到单一的内部错误会自我毁灭,但它会尽一切可能抵抗外部攻击。
Assertions, which pepper the ZeroMQ code, are absolutely vital to robust code; they just have to be on the right side of the cellular wall. And there should be such a wall. If it is unclear whether a fault is internal or external, that is a design flaw to be fixed. In C/C++, assertions stop the application immediately with an error. In other languages, you may get exceptions or halts. 断言在 ZeroMQ 代码中随处可见,对健壮代码至关重要;它们必须位于细胞壁的正确一侧。而且必须有这样一道墙。如果无法明确故障是内部还是外部的,那就是设计缺陷,需要修正。在 C/C++ 中,断言会立即以错误停止应用程序。在其他语言中,可能会抛出异常或停止执行。
When ZeroMQ detects an external fault it returns an error to the calling code. In some rare cases, it drops messages silently if there is no obvious strategy for recovering from the error. 当 ZeroMQ 检测到外部故障时,会向调用代码返回错误。在某些罕见情况下,如果没有明显的错误恢复策略,它会静默丢弃消息。
In most of the C examples we’ve seen so far there’s been no error handling. Real code should do error handling on every single ZeroMQ call. If you’re using a language binding other than C, the binding may handle errors for you. In C, you do need to do this yourself. There are some simple rules, starting with POSIX conventions: 到目前为止,我们看到的大多数 C 语言示例中都没有错误处理。实际代码应对每一个 ZeroMQ 调用进行错误处理。如果你使用的是除 C 以外的语言绑定,绑定可能会为你处理错误。在 C 语言中,你需要自己处理错误。有一些简单的规则,首先遵循 POSIX 约定:
Methods that create objects return NULL if they fail. 创建对象的方法在失败时返回 NULL。
Methods that process data may return the number of bytes processed, or -1 on an error or failure. 处理数据的方法可能返回处理的字节数,或者在错误或失败时返回 -1。
Other methods return 0 on success and -1 on an error or failure. 其他方法在成功时返回 0,错误或失败时返回 -1。
There are two main exceptional conditions that you should handle as nonfatal: 有两种主要的异常情况应作为非致命错误处理:
When your code receives a message with the ZMQ_DONTWAIT option and there is no waiting data, ZeroMQ will return -1 and set errno to EAGAIN. 当您的代码使用 ZMQ_DONTWAIT 选项接收消息且没有等待数据时,ZeroMQ 将返回 -1 并将 errno 设置为 EAGAIN 。
When one thread calls zmq_ctx_destroy(), and other threads are still doing blocking work, the zmq_ctx_destroy() call closes the context and all blocking calls exit with -1, and errno set to ETERM. 当一个线程调用 zmq_ctx_destroy() ,而其他线程仍在进行阻塞操作时, zmq_ctx_destroy() 调用会关闭上下文,所有阻塞调用以 -1 退出,并且 errno 被设置为 ETERM 。
In C/C++, asserts can be removed entirely in optimized code, so don’t make the mistake of wrapping the whole ZeroMQ call in an assert(). It looks neat; then the optimizer removes all the asserts and the calls you want to make, and your application breaks in impressive ways. 在 C/C++ 中,断言在优化代码中可以被完全移除,所以不要犯将整个 ZeroMQ 调用包裹在 assert() 中的错误。看起来很整洁;但优化器会移除所有断言和你想要执行的调用,导致你的应用程序以令人印象深刻的方式崩溃。
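Here is a minimal C sketch of these rules in practice (a fragment, not a full program): make the ZeroMQ call first, assert only on its result, and treat EAGAIN and ETERM as the nonfatal cases described above.
//  Sketch only: call first, then check; EAGAIN and ETERM are nonfatal
void *context = zmq_ctx_new ();
assert (context);                   //  Object creation returns NULL on failure
void *socket = zmq_socket (context, ZMQ_PULL);
assert (socket);
int rc = zmq_bind (socket, "tcp://*:5555");
assert (rc == 0);                   //  Never wrap the call itself in assert()

zmq_msg_t msg;
zmq_msg_init (&msg);
if (zmq_msg_recv (&msg, socket, ZMQ_DONTWAIT) == -1) {
    if (errno == EAGAIN)
        ;   //  No message waiting right now; try again later
    else
    if (errno == ETERM)
        ;   //  Context was terminated; clean up and exit
    else
        assert (0);                 //  Anything else is an internal error
}
zmq_msg_close (&msg);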
Let’s see how to shut down a process cleanly. We’ll take the parallel pipeline example from the previous section. If we’ve started a whole lot of workers in the background, we now want to kill them when the batch is finished. Let’s do this by sending a kill message to the workers. The best place to do this is the sink because it really knows when the batch is done. 让我们看看如何干净地关闭一个进程。我们将使用上一节的并行流水线示例。如果我们在后台启动了大量的工作线程,现在想在批处理完成时终止它们。我们通过向工作线程发送一个终止消息来实现这一点。最合适的地方是 sink,因为它真正知道批处理何时完成。
How do we connect the sink to the workers? The PUSH/PULL sockets are one-way only. We could switch to another socket type, or we could mix multiple socket flows. Let’s try the latter: using a pub-sub model to send kill messages to the workers: 我们如何将汇聚器连接到工作线程?PUSH/PULL 套接字是单向的。我们可以切换到另一种套接字类型,或者混合使用多种套接字流。让我们尝试后者:使用发布-订阅模型向工作线程发送终止消息:
The sink creates a PUB socket on a new endpoint. 汇聚端在一个新的端点上创建一个 PUB 套接字。
Workers connect their input socket to this endpoint. 工作线程将它们的输入套接字连接到此端点。
When the sink detects the end of the batch, it sends a kill to its PUB socket. 当汇聚端检测到批处理结束时,它会向其 PUB 套接字发送一个终止信号。
When a worker detects this kill message, it exits. 当工作线程检测到此终止消息时,它会退出。
It doesn’t take much new code in the sink: 汇聚端不需要太多新代码:
void *controller = zmq_socket (context, ZMQ_PUB);
zmq_bind (controller, "tcp://*:5559");
...
// Send kill signal to workers
s_send (controller, "KILL");
Here is the worker process, which manages two sockets (a PULL socket getting tasks, and a SUB socket getting control commands), using the zmq_poll() technique we saw earlier: 这是工作进程,它管理两个套接字(一个用于接收任务的 PULL 套接字和一个用于接收控制命令的 SUB 套接字),使用了我们之前看到的 zmq_poll() 技术:
taskwork2: Parallel task worker with kill signaling in Ada
taskwork2: Parallel task worker with kill signaling in C
// Task worker - design 2
// Adds pub-sub flow to receive and respond to kill signal
#include"zhelpers.h"intmain (void)
{
// Socket to receive messages on
void *context = zmq_ctx_new ();
void *receiver = zmq_socket (context, ZMQ_PULL);
zmq_connect (receiver, "tcp://localhost:5557");
// Socket to send messages to
void *sender = zmq_socket (context, ZMQ_PUSH);
zmq_connect (sender, "tcp://localhost:5558");
// Socket for control input
void *controller = zmq_socket (context, ZMQ_SUB);
zmq_connect (controller, "tcp://localhost:5559");
zmq_setsockopt (controller, ZMQ_SUBSCRIBE, "", 0);
// Process messages from either socket
while (1) {
zmq_pollitem_t items [] = {
{ receiver, 0, ZMQ_POLLIN, 0 },
{ controller, 0, ZMQ_POLLIN, 0 }
};
zmq_poll (items, 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
char *string = s_recv (receiver);
printf ("%s.", string); // Show progress
fflush (stdout);
s_sleep (atoi (string)); // Do the work
free (string);
s_send (sender, ""); // Send results to sink
}
// Any waiting controller command acts as 'KILL'
if (items [1].revents & ZMQ_POLLIN)
break; // Exit loop
}
zmq_close (receiver);
zmq_close (sender);
zmq_close (controller);
zmq_ctx_destroy (context);
return 0;
}
taskwork2: Parallel task worker with kill signaling in C++ taskwork2:带有终止信号的并行任务工作者(C++)
//
// Task worker in C++ - design 2
// Adds pub-sub flow to receive and respond to kill signal
//
#include"zhelpers.hpp"#include<string>intmain (int argc, char *argv[])
{
zmq::context_t context(1);
// Socket to receive messages on
zmq::socket_t receiver(context, ZMQ_PULL);
receiver.connect("tcp://localhost:5557");
// Socket to send messages to
zmq::socket_t sender(context, ZMQ_PUSH);
sender.connect("tcp://localhost:5558");
// Socket for control input
zmq::socket_t controller (context, ZMQ_SUB);
controller.connect("tcp://localhost:5559");
controller.set(zmq::sockopt::subscribe, "");
// Process messages from receiver and controller
zmq::pollitem_t items [] = {
{ receiver, 0, ZMQ_POLLIN, 0 },
{ controller, 0, ZMQ_POLLIN, 0 }
};
// Process messages from both sockets
while (1) {
zmq::message_t message;
zmq::poll (&items [0], 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
receiver.recv(&message);
// Process task
int workload; // Workload in msecs
std::string sdata(static_cast<char*>(message.data()), message.size());
std::istringstream iss(sdata);
iss >> workload;
// Do the work
s_sleep(workload);
// Send results to sink
message.rebuild();
sender.send(message);
// Simple progress indicator for the viewer
std::cout << "." << std::flush;
}
// Any waiting controller command acts as 'KILL'
if (items [1].revents & ZMQ_POLLIN) {
std::cout << std::endl;
break; // Exit loop
}
}
// Finished
return 0;
}
taskwork2: Parallel task worker with kill signaling in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void TaskWork2(string[] args)
{
//
// Task worker - design 2
// Adds pub-sub flow to receive and respond to kill signal
//
// Author: metadings
//
// Socket to receive messages on,
// Socket to send messages to and
// Socket for control input
using (var context = new ZContext())
using (var receiver = new ZSocket(context, ZSocketType.PULL))
using (var sender = new ZSocket(context, ZSocketType.PUSH))
using (var controller = new ZSocket(context, ZSocketType.SUB))
{
receiver.Connect("tcp://127.0.0.1:5557");
sender.Connect("tcp://127.0.0.1:5558");
controller.Connect("tcp://127.0.0.1:5559");
controller.SubscribeAll();
var poll = ZPollItem.CreateReceiver();
ZError error;
ZMessage message;
while (true)
{
// Process messages from either socket
if (receiver.PollIn(poll, out message, out error, TimeSpan.FromMilliseconds(64)))
{
int workload = message[0].ReadInt32();
Console.WriteLine("{0}.", workload); // Show progress
Thread.Sleep(workload); // Do the work
sender.Send(new byte[0], 0, 0); // Send results to sink
}
// Any waiting controller command acts as 'KILL'
if (controller.PollIn(poll, out message, out error, TimeSpan.FromMilliseconds(64)))
{
break; // Exit loop
}
}
}
}
}
}
taskwork2: Parallel task worker with kill signaling in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Task worker - design 2 in Common Lisp
;;; Connects PULL socket to tcp://localhost:5557
;;; Collects workloads from ventilator via that socket
;;; Connects PUSH socket to tcp://localhost:5558
;;; Sends results to sink via that socket
;;; Adds pub-sub flow to receive and respond to kill signal
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;

(defpackage #:zguide.taskwork2
  (:nicknames #:taskwork2)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.taskwork2)

(defun main ()
  (zmq:with-context (context 1)
    ;; Socket to receive messages on
    (zmq:with-socket (receiver context zmq:pull)
      (zmq:connect receiver "tcp://localhost:5557")
      ;; Socket to send messages to
      (zmq:with-socket (sender context zmq:push)
        (zmq:connect sender "tcp://localhost:5558")
        ;; Socket for control input
        (zmq:with-socket (controller context zmq:sub)
          (zmq:connect controller "tcp://localhost:5559")
          (zmq:setsockopt controller zmq:subscribe "")
          ;; Process messages from receiver and controller
          (zmq:with-polls ((items . ((receiver . zmq:pollin)
                                     (controller . zmq:pollin))))
            (loop
              (let ((revents (zmq:poll items)))
                (when (= (first revents) zmq:pollin)
                  (let ((pull-msg (make-instance 'zmq:msg)))
                    (zmq:recv receiver pull-msg)
                    ;; Process task
                    (let* ((string (zmq:msg-data-as-string pull-msg))
                           (delay (* (parse-integer string) 1000)))
                      ;; Simple progress indicator for the viewer
                      (message "~A." string)
                      ;; Do the work
                      (isys:usleep delay)
                      ;; Send results to sink
                      (let ((push-msg (make-instance 'zmq:msg :data "")))
                        (zmq:send sender push-msg)))))
                (when (= (second revents) zmq:pollin)
                  ;; Any waiting controller command acts as 'KILL'
                  (return)))))))))
  (cleanup))
taskwork2: Parallel task worker with kill signaling in Delphi
program taskwork2;
//
// Task worker - design 2
// Adds pub-sub flow to receive and respond to kill signal
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
receiver,
sender,
controller: TZMQSocket;
frame: TZMQFrame;
poller: TZMQPoller;
begin
context := TZMQContext.Create;
// Socket to receive messages on
receiver := Context.Socket( stPull );
receiver.connect( 'tcp://localhost:5557' );
// Socket to send messages to
sender := Context.Socket( stPush );
sender.connect( 'tcp://localhost:5558' );
// Socket for control input
controller := Context.Socket( stSub );
controller.connect( 'tcp://localhost:5559' );
controller.subscribe('');
// Process messages from receiver and controller
poller := TZMQPoller.Create( true );
poller.register( receiver, [pePollIn] );
poller.register( controller, [pePollIn] );
// Process messages from both sockets
while true do
begin
poller.poll;
if pePollIn in poller.PollItem[0].revents then
begin
frame := TZMQFrame.create;
receiver.recv( frame );
// Do the work
sleep( StrToInt( frame.asUtf8String ) );
frame.Free;
// Send results to sink
sender.send('');
// Simple progress indicator for the viewer
writeln('.');
end;
// Any waiting controller command acts as 'KILL'
if pePollIn in poller.PollItem[1].revents then
break; // Exit loop
end;
receiver.Free;
sender.Free;
controller.Free;
poller.Free;
context.Free;
end.
taskwork2: Parallel task worker with kill signaling in Erlang
#! /usr/bin/env escript
%%
%% Task worker - design 2
%% Adds pub-sub flow to receive and respond to kill signal
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to receive messages on
{ok, Receiver} = erlzmq:socket(Context, [pull, {active, true}]),
ok = erlzmq:connect(Receiver, "tcp://localhost:5557"),
%% Socket to send messages to
{ok, Sender} = erlzmq:socket(Context, push),
ok = erlzmq:connect(Sender, "tcp://localhost:5558"),
%% Socket for control input
{ok, Controller} = erlzmq:socket(Context, [sub, {active, true}]),
ok = erlzmq:connect(Controller, "tcp://localhost:5559"),
ok = erlzmq:setsockopt(Controller, subscribe, <<>>),
%% Process messages from receiver and controller
process_messages(Receiver, Controller, Sender),
%% Finished
ok = erlzmq:close(Receiver),
ok = erlzmq:close(Sender),
ok = erlzmq:close(Controller),
ok = erlzmq:term(Context).
process_messages(Receiver, Controller, Sender) ->
receive
{zmq, Receiver, Msg, _Flags} ->
%% Do the work
timer:sleep(list_to_integer(binary_to_list(Msg))),
%% Send results to sink
ok = erlzmq:send(Sender, Msg),
%% Simple progress indicator for the viewer
io:format("."),
process_messages(Receiver, Controller, Sender);
{zmq, Controller, _Msg, _Flags} ->
%% Any waiting controller command acts as 'KILL'
ok
end.
taskwork2: Parallel task worker with kill signaling in Elixir
taskwork2: Parallel task worker with kill signaling in F#
(*
Task worker - design 2
Adds pub-sub flow to receive and respond to kill signal
*)
#r @"bin/fszmq.dll"
open fszmq
#load "zhelpers.fs"
open Context
open Socket
open Polling
let main () =
use context = new Context(1)
// Socket to receive messages on
use receiver = context |> pull
connect receiver "tcp://localhost:5557"
// Socket to send messages to
use sender = context |> push
connect sender "tcp://localhost:5558"
// Socket for control input
use controller = context |> sub
connect controller "tcp://localhost:5559"
subscribe controller [ ""B ]
// Process messages from receiver and controller
let doLoop = ref true
let items =
[ Poll(ZMQ.POLLIN,receiver,
fun s -> let msg = s |> recv |> decode
// Do the work
sleep (int msg)
// Send results to sink
s_send sender ""
// Simple progress indicator for the viewer
fflush()
printf "%s." msg)
Poll(ZMQ.POLLIN,controller,
fun _ -> // Any waiting controller command acts as 'KILL')
doLoop := false) ]
// Process messages from both sockets
while !doLoop do (poll -1L items) |> ignore
// Finished
EXIT_SUCCESS
main ()
taskwork2: Parallel task worker with kill signaling in Felix
taskwork2: Parallel task worker with kill signaling in Lua
--
--  Task worker - design 2
--  Adds pub-sub flow to receive and respond to kill signal
--
--  Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zmq.poller"
require"zhelpers"

local context = zmq.init(1)
-- Socket to receive messages on
local receiver = context:socket(zmq.PULL)
receiver:connect("tcp://localhost:5557")
-- Socket to send messages to
local sender = context:socket(zmq.PUSH)
sender:connect("tcp://localhost:5558")
-- Socket for control input
local controller = context:socket(zmq.SUB)
controller:connect("tcp://localhost:5559")
controller:setopt(zmq.SUBSCRIBE, "", 0)
-- Process messages from receiver and controller
local poller = zmq.poller(2)
poller:add(receiver, zmq.POLLIN, function()
    local msg = receiver:recv()
    -- Do the work
    s_sleep(tonumber(msg))
    -- Send results to sink
    sender:send("")
    -- Simple progress indicator for the viewer
    io.write(".")
    io.stdout:flush()
end)
poller:add(controller, zmq.POLLIN, function()
    poller:stop() -- Exit loop
end)
-- start poller's event loop
poller:start()
-- Finished
receiver:close()
sender:close()
controller:close()
context:term()
taskwork2: Parallel task worker with kill signaling in Node.js
// Task worker in Node.js
// Connects PULL socket to tcp://localhost:5557
// Collects workloads from ventilator via that socket
// Connects PUSH socket to tcp://localhost:5558
// Sends results to sink via that socket
var zmq = require('zeromq')
, receiver = zmq.socket('pull')
, sender = zmq.socket('push')
, controller = zmq.socket('sub');
receiver.on('message', function(buf) {
var msec = parseInt(buf.toString(), 10);
// simple progress indicator for the viewer
process.stdout.write(buf.toString() + ".");
// do the work
// not a great node sample for zeromq,
// node receives messages while timers run.
setTimeout(function() {
sender.send("");
}, msec);
});
controller.on('message', function() {
// received KILL signal
receiver.close();
sender.close();
controller.close();
process.exit();
});
receiver.connect('tcp://localhost:5557');
sender.connect('tcp://localhost:5558');
controller.subscribe('');
controller.connect('tcp://localhost:5559');
taskwork2: Parallel task worker with kill signaling in Objective-C
/* taskwork2.m: PULLs workload from tcp://localhost:5557
* PUSHes results to tcp://localhost:5558
* SUBs to tcp://localhost:5559 to receive kill signal (*** NEW ***)
 */
#import <Foundation/Foundation.h>
#import "ZMQObjC.h"
#define NSEC_PER_MSEC (1000000)
intmain(void)
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
ZMQContext *ctx = [[[ZMQContext alloc] initWithIOThreads:1U] autorelease];
/* (jws/2011-02-05)!!!: Do NOT terminate the endpoint with a final slash.
* If you connect to @"tcp://localhost:5557/", you will get
* Assertion failed: rc == 0 (zmq_connecter.cpp:46)
* instead of a connected socket. Binding works fine, though. */
ZMQSocket *pull = [ctx socketWithType:ZMQ_PULL];
[pull connectToEndpoint:@"tcp://localhost:5557"];
ZMQSocket *push = [ctx socketWithType:ZMQ_PUSH];
[push connectToEndpoint:@"tcp://localhost:5558"];
ZMQSocket *control = [ctx socketWithType:ZMQ_SUB];
[control setData:nil forOption:ZMQ_SUBSCRIBE];
[control connectToEndpoint:@"tcp://localhost:5559"];
/* Process tasks forever, multiplexing between |pull| and |control|. */
enum {POLL_PULL, POLL_CONTROL};
zmq_pollitem_t items[2];
[pull getPollItem:&items[POLL_PULL] forEvents:ZMQ_POLLIN];
[control getPollItem:&items[POLL_CONTROL] forEvents:ZMQ_POLLIN];
size_t itemCount = sizeof(items)/sizeof(*items);
struct timespec t;
NSData *emptyData = [NSData data];
bool shouldExit = false;
while (!shouldExit) {
NSAutoreleasePool *p = [[NSAutoreleasePool alloc] init];
[ZMQContext pollWithItems:items count:itemCount
timeoutAfterUsec:ZMQPollTimeoutNever];
if (items[POLL_PULL].revents & ZMQ_POLLIN) {
NSData *d = [pull receiveDataWithFlags:0];
NSString *s = [NSString stringWithUTF8String:[d bytes]];
t.tv_sec = 0;
t.tv_nsec = [s integerValue] * NSEC_PER_MSEC;
printf("%d.", [s intValue]);
fflush(stdout);
/* Do work, then report finished. */
(void)nanosleep(&t, NULL);
[push sendData:emptyData withFlags:0];
}
/* Any inbound data on |control| signals us to die. */
if (items[POLL_CONTROL].revents & ZMQ_POLLIN) {
/* Do NOT just break here: |p| must be drained first. */
shouldExit = true;
}
[p drain];
}
[ctx closeSockets];
[pool drain];
return EXIT_SUCCESS;
}
taskwork2: Parallel task worker with kill signaling in ooc
taskwork2: Parallel task worker with kill signaling in Perl
# Task worker - design 2 in Perl
# Adds pub-sub flow to receive and respond to kill signal
use strict;
use warnings;
use v5.10;

$| = 1; # autoflush stdout after each print

use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PULL ZMQ_PUSH ZMQ_SUB);
use Time::HiRes qw(usleep);

use AnyEvent;
use EV;

# Socket to receive messages on
my $context = ZMQ::FFI->new();
my $receiver = $context->socket(ZMQ_PULL);
$receiver->connect('tcp://localhost:5557');

# Socket to send messages to
my $sender = $context->socket(ZMQ_PUSH);
$sender->connect('tcp://localhost:5558');

# Socket for control input
my $controller = $context->socket(ZMQ_SUB);
$controller->connect('tcp://localhost:5559');
$controller->subscribe('');

# Process messages from either socket
my $receiver_poller = AE::io $receiver->get_fd, 0, sub {
    while ($receiver->has_pollin) {
        my $string = $receiver->recv();
        print "$string.";     # Show progress
        usleep $string*1000;  # Do the work
        $sender->send('');    # Send results to sink
    }
};

# Any controller command acts as 'KILL'
my $controller_poller = AE::io $controller->get_fd, 0, sub {
    if ($controller->has_pollin) {
        EV::break; # Exit loop
    }
};

EV::run;
taskwork2: Parallel task worker with kill signaling in PHP
<?php
/*
 * Task worker - design 2
 * Adds pub-sub flow to receive and respond to kill signal
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */
$context = new ZMQContext();
// Socket to receive messages on
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PULL);
$receiver->connect("tcp://localhost:5557");
// Socket to send messages to
$sender = new ZMQSocket($context, ZMQ::SOCKET_PUSH);
$sender->connect("tcp://localhost:5558");
// Socket for control input
$controller = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$controller->connect("tcp://localhost:5559");
$controller->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "");
// Process messages from receiver and controller
$poll = new ZMQPoll();
$poll->add($receiver, ZMQ::POLL_IN);
$poll->add($controller, ZMQ::POLL_IN);
$readable = $writeable = array();
// Process messages from both sockets
while (true) {
$events = $poll->poll($readable, $writeable);
if ($events > 0) {
foreach ($readable as $socket) {
if ($socket === $receiver) {
$message = $socket->recv();
// Simple progress indicator for the viewer
echo $message, PHP_EOL;
// Do the work
usleep($message * 1000);
// Send results to sink
$sender->send("");
}
// Any waiting controller command acts as 'KILL'
elseif ($socket === $controller) {
exit();
}
}
}
}
taskwork2: Parallel task worker with kill signaling in Python
# encoding: utf-8
#
# Task worker - design 2
# Adds pub-sub flow to receive and respond to kill signal
#
# Author: Jeremy Avnet (brainsik) <spork(dash)zmq(at)theory(dot)org>
#

import sys
import time
import zmq

context = zmq.Context()

# Socket to receive messages on
receiver = context.socket(zmq.PULL)
receiver.connect("tcp://localhost:5557")

# Socket to send messages to
sender = context.socket(zmq.PUSH)
sender.connect("tcp://localhost:5558")

# Socket for control input
controller = context.socket(zmq.SUB)
controller.connect("tcp://localhost:5559")
controller.setsockopt(zmq.SUBSCRIBE, b"")

# Process messages from receiver and controller
poller = zmq.Poller()
poller.register(receiver, zmq.POLLIN)
poller.register(controller, zmq.POLLIN)

# Process messages from both sockets
while True:
    socks = dict(poller.poll())

    if socks.get(receiver) == zmq.POLLIN:
        message = receiver.recv_string()
        # Process task
        workload = int(message)  # Workload in msecs
        # Do the work
        time.sleep(workload / 1000.0)
        # Send results to sink
        sender.send_string(message)
        # Simple progress indicator for the viewer
        sys.stdout.write(".")
        sys.stdout.flush()

    # Any waiting controller command acts as 'KILL'
    if socks.get(controller) == zmq.POLLIN:
        break

# Finished
receiver.close()
sender.close()
controller.close()
context.term()
taskwork2: Parallel task worker with kill signaling in Q
Here is the modified sink application. When it’s finished collecting results, it broadcasts a kill message to all workers: 这是修改后的 sink 应用程序。当它完成收集结果后,会向所有工作线程广播一个终止消息:
tasksink2: Parallel task sink with kill signaling in Ada
tasksink2: Parallel task sink with kill signaling in C
// Task sink - design 2
// Adds pub-sub flow to send kill signal to workers
#include"zhelpers.h"intmain (void)
{
// Socket to receive messages on
void *context = zmq_ctx_new ();
void *receiver = zmq_socket (context, ZMQ_PULL);
zmq_bind (receiver, "tcp://*:5558");
// Socket for worker control
void *controller = zmq_socket (context, ZMQ_PUB);
zmq_bind (controller, "tcp://*:5559");
// Wait for start of batch
char *string = s_recv (receiver);
free (string);
// Start our clock now
int64_t start_time = s_clock ();
// Process 100 confirmations
int task_nbr;
for (task_nbr = 0; task_nbr < 100; task_nbr++) {
char *string = s_recv (receiver);
free (string);
if (task_nbr % 10 == 0)
printf (":");
else
printf (".");
fflush (stdout);
}
printf ("Total elapsed time: %d msec\n",
(int) (s_clock () - start_time));
// Send kill signal to workers
s_send (controller, "KILL");
zmq_close (receiver);
zmq_close (controller);
zmq_ctx_destroy (context);
return 0;
}
tasksink2: Parallel task sink with kill signaling in C++ tasksink2:带有终止信号的并行任务接收器(C++)
//
// Task sink in C++ - design 2
// Adds pub-sub flow to send kill signal to workers
//
#include"zhelpers.hpp"intmain (int argc, char *argv[])
{
zmq::context_t context(1);
// Socket to receive messages on
zmq::socket_t receiver (context, ZMQ_PULL);
receiver.bind("tcp://*:5558");
// Socket for worker control
zmq::socket_t controller (context, ZMQ_PUB);
controller.bind("tcp://*:5559");
// Wait for start of batch
s_recv (receiver);
// Start our clock now
struct timeval tstart;
gettimeofday (&tstart, NULL);
// Process 100 confirmations
int task_nbr;
for (task_nbr = 0; task_nbr < 100; task_nbr++) {
s_recv (receiver);
if (task_nbr % 10 == 0)
std::cout << ":" ;
else
std::cout << "." ;
}
// Calculate and report duration of batch
struct timeval tend, tdiff;
gettimeofday (&tend, NULL);
if (tend.tv_usec < tstart.tv_usec) {
tdiff.tv_sec = tend.tv_sec - tstart.tv_sec - 1;
tdiff.tv_usec = 1000000 + tend.tv_usec - tstart.tv_usec;
}
else {
tdiff.tv_sec = tend.tv_sec - tstart.tv_sec;
tdiff.tv_usec = tend.tv_usec - tstart.tv_usec;
}
int total_msec = tdiff.tv_sec * 1000 + tdiff.tv_usec / 1000;
std::cout << "\nTotal elapsed time: " << total_msec
<< " msec\n" << std::endl;
// Send kill signal to workers
s_send (controller, std::string("KILL"));
// Finished
sleep (1); // Give 0MQ time to deliver
return 0;
}
tasksink2: Parallel task sink with kill signaling in C#
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void TaskSink2(string[] args)
{
//
// Task sink - design 2
// Adds pub-sub flow to send kill signal to workers
//
// Author: metadings
//
// Socket to receive messages on and
// Socket for worker control
using (var context = new ZContext())
using (var receiver = new ZSocket(context, ZSocketType.PULL))
using (var controller = new ZSocket(context, ZSocketType.PUB))
{
receiver.Bind("tcp://*:5558");
controller.Bind("tcp://*:5559");
// Wait for start of batch
receiver.ReceiveFrame();
// Start our clock now
var stopwatch = new Stopwatch();
stopwatch.Start();
// Process 100 confirmations
for (int i = 0; i < 100; ++i)
{
receiver.ReceiveFrame();
if ((i / 10) * 10 == i)
Console.Write(":");
else
Console.Write(".");
}
stopwatch.Stop();
Console.WriteLine("Total elapsed time: {0} ms", stopwatch.ElapsedMilliseconds);
// Send kill signal to workers
controller.Send(new ZFrame("KILL"));
}
}
}
}
tasksink2: Parallel task sink with kill signaling in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Task sink - design 2 in Common Lisp
;;; Binds PULL socket to tcp://localhost:5558
;;; Collects results from workers via that socket
;;; Adds pub-sub flow to send kill signal to workers
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;

(defpackage #:zguide.tasksink2
  (:nicknames #:tasksink2)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.tasksink2)

(defun main ()
  (zmq:with-context (context 1)
    ;; Socket to receive messages on
    (zmq:with-socket (receiver context zmq:pull)
      (zmq:bind receiver "tcp://*:5558")
      ;; Socket for worker control
      (zmq:with-socket (controller context zmq:pub)
        (zmq:bind controller "tcp://*:5559")
        ;; Wait for start of batch
        (let ((msg (make-instance 'zmq:msg)))
          (zmq:recv receiver msg))
        ;; Start our clock now
        (let ((elapsed-time
                (with-stopwatch
                  (dotimes (task-nbr 100)
                    (let ((msg (make-instance 'zmq:msg)))
                      (zmq:recv receiver msg)
                      (let ((string (zmq:msg-data-as-string msg)))
                        (declare (ignore string))
                        (if (= 1 (denominator (/ task-nbr 10)))
                            (message ":")
                            (message "."))))))))
          ;; Calculate and report duration of batch
          (message "Total elapsed time: ~F msec~%" (/ elapsed-time 1000.0)))
        ;; Send kill signal to workers
        (let ((kill (make-instance 'zmq:msg :data "KILL")))
          (zmq:send controller kill))
        ;; Give 0MQ time to deliver
        (sleep 1))))
  (cleanup))
tasksink2: Parallel task sink with kill signaling in Delphi
program tasksink2;
//
// Task sink - design 2
// Adds pub-sub flow to send kill signal to workers
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, Windows
, zmqapi
;
const
task_count = 100;
var
context: TZMQContext;
receiver,
controller: TZMQSocket;
s: Utf8String;
task_nbr: Integer;
fFrequency,
fstart,
fStop : Int64;
begin
// Prepare our context and socket
context := TZMQContext.Create;
receiver := Context.Socket( stPull );
receiver.bind( 'tcp://*:5558' );
// Socket for worker control
controller := Context.Socket( stPub );
controller.bind( 'tcp://*:5559' );
// Wait for start of batch
receiver.recv( s );
// Start our clock now
QueryPerformanceFrequency( fFrequency );
QueryPerformanceCounter( fStart );
// Process 100 confirmations
for task_nbr := 0 to task_count - 1 do
begin
receiver.recv( s );
if ((task_nbr / 10) * 10 = task_nbr) then
Write( ':' )
else
Write( '.' );
end;
// Calculate and report duration of batch
QueryPerformanceCounter( fStop );
Writeln( Format( 'Total elapsed time: %d msec', [
((MSecsPerSec * (fStop - fStart)) div fFrequency) ]) );
controller.send( 'KILL' );
// Finished
sleep(1000); // Give 0MQ time to deliver
receiver.Free;
controller.Free;
context.Free;
end.
tasksink2: Parallel task sink with kill signaling in Erlang
#! /usr/bin/env escript
%%
%% Task sink - design 2
%% Adds pub-sub flow to send kill signal to workers
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to receive messages on
{ok, Receiver} = erlzmq:socket(Context, pull),
ok = erlzmq:bind(Receiver, "tcp://*:5558"),
%% Socket for worker control
{ok, Controller} = erlzmq:socket(Context, pub),
ok = erlzmq:bind(Controller, "tcp://*:5559"),
%% Wait for start of batch
{ok, _} = erlzmq:recv(Receiver),
%% Start our clock now
Start = now(),
%% Process 100 confirmations
process_confirmations(Receiver, 100),
io:format("Total elapsed time: ~b msec~n",
[timer:now_diff(now(), Start) div 1000]),
%% Send kill signal to workers
ok = erlzmq:send(Controller, <<"KILL">>),
%% Finished
ok = erlzmq:close(Controller),
ok = erlzmq:close(Receiver),
ok = erlzmq:term(Context, 1000).
process_confirmations(_Receiver, 0) -> ok;
process_confirmations(Receiver, N) when N > 0 ->
    {ok, _} = erlzmq:recv(Receiver),
    case (N - 1) rem 10 of
        0 -> io:format(":");
        _ -> io:format(".")
    end,
process_confirmations(Receiver, N - 1).
tasksink2: Parallel task sink with kill signaling in Elixir
defmodule Tasksink2 do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:35
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, receiver} = :erlzmq.socket(context, :pull)
:ok = :erlzmq.bind(receiver, 'tcp://*:5558')
{:ok, controller} = :erlzmq.socket(context, :pub)
:ok = :erlzmq.bind(controller, 'tcp://*:5559')
{:ok, _} = :erlzmq.recv(receiver)
start = :erlang.now()
process_confirmations(receiver, 100)
:io.format('Total elapsed time: ~b msec~n', [div(:timer.now_diff(:erlang.now(), start), 1000)])
:ok = :erlzmq.send(controller, "KILL")
:ok = :erlzmq.close(controller)
:ok = :erlzmq.close(receiver)
:ok = :erlzmq.term(context, 1000)
end
def process_confirmations(_receiver, 0) do
:ok
end
def process_confirmations(receiver, n) when n > 0 do
{:ok, _} = :erlzmq.recv(receiver)
case rem(n - 1, 10) do
0 ->
:io.format(':')
_ ->
:io.format('.')
end
process_confirmations(receiver, n - 1)
end
end
Tasksink2.main
tasksink2: Parallel task sink with kill signaling in F#
(*
Task sink - design 2
Adds pub-sub flow to send kill signal to workers
*)
#r @"bin/fszmq.dll"
open fszmq
#load "zhelpers.fs"
open Context
open Socket
open Polling
let main () =
// Prepare our context and socket
use context = new Context(1)
use receiver = context |> pull
bind receiver "tcp://*:5558"
// Socket for worker control
use controller = context |> pub
bind controller "tcp://*:5559"
// Wait for start of batch
s_recv receiver |> ignore
// Start our clock now
let watch = s_clock_start()
// Process 100 confirmations
for task_nbr in 0 .. 99 do
s_recv receiver |> ignore
printf (if (task_nbr / 10) * 10 = task_nbr then ":" else ".")
fflush()
// Calculate and report duration of batch
printfn "\nTotal elapsed time: %d msec" (s_clock_stop watch)
// Send kill signal to workers
s_send controller "KILL"
// Finished
sleep 1 // Give 0MQ time to deliver
EXIT_SUCCESS
main ()
tasksink2: Parallel task sink with kill signaling in Felix
Realistic applications need to shut down cleanly when interrupted with Ctrl-C or another signal such as SIGTERM. By default, these simply kill the process, meaning messages won’t be flushed, files won’t be closed cleanly, and so on. 实际应用需要在按下 Ctrl-C 或其他信号(如 SIGTERM )中断时能够干净地关闭。默认情况下,这些信号会直接终止进程,导致消息无法刷新,文件无法正常关闭,等等。
Here is how we handle a signal in various languages: 以下是在各种语言中处理信号的方法:
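For C, a minimal sketch of the same idea (in the spirit of the s_catch_signals() helper from interrupt.c described below, not that file verbatim) looks like this: the handler only sets a flag, and a blocking call that the signal interrupts returns -1 with errno set to EINTR, so the loop can exit and clean up.
//  Sketch: trap SIGINT/SIGTERM, then leave the loop cleanly
#include <signal.h>
#include <stdio.h>
#include <errno.h>
#include <zmq.h>

static volatile sig_atomic_t s_interrupted = 0;
static void s_signal_handler (int signal_value)
{
    s_interrupted = 1;
}

int main (void)
{
    struct sigaction action;
    action.sa_handler = s_signal_handler;
    action.sa_flags = 0;
    sigemptyset (&action.sa_mask);
    sigaction (SIGINT, &action, NULL);
    sigaction (SIGTERM, &action, NULL);

    void *context = zmq_ctx_new ();
    void *socket = zmq_socket (context, ZMQ_REP);
    zmq_bind (socket, "tcp://*:5555");

    while (!s_interrupted) {
        char buffer [256];
        if (zmq_recv (socket, buffer, 256, 0) == -1) {
            if (errno == EINTR)
                break;              //  Interrupted by Ctrl-C or SIGTERM
            continue;               //  Some other error; skip the reply
        }
        zmq_send (socket, "World", 5, 0);
    }
    printf ("W: interrupt received, killing server...\n");
    zmq_close (socket);
    zmq_ctx_destroy (context);
    return 0;
}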
program interrupt;
//
// Shows how to handle Ctrl-C
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
socket: TZMQSocket;
frame: TZMQFrame;
begin
context := TZMQContext.Create;
socket := Context.Socket( stRep );
socket.bind( 'tcp://*:5555' );
while not context.Terminated do
begin
frame := TZMQFrame.Create;
try
socket.recv( frame );
except
on e: Exception do
Writeln( 'Exception, ' + e.Message );
end;
FreeAndNil( frame );
if socket.context.Terminated then
begin
Writeln( 'W: interrupt received, killing server...');
break;
end;
end;
socket.Free;
context.Free;
end.
interrupt: Handling Ctrl-C cleanly in Erlang
#! /usr/bin/env escript
%%
%% Illustrates the equivalent in Erlang to signal handling for shutdown
%%
%% Erlang applications don't use system signals for shutdown (they can't
%% without some sort of custom native extension). Instead they rely on an
%% explicit shutdown routine, either per process (as illustrated here) or
%% system wide (e.g. init:stop() and OTP application shutdown).
%%
main(_) ->
%% Start a process that manages its own ZeroMQ startup and shutdown
Server = start_server(),
%% Run for a while
timer:sleep(5000),
%% Send the process a shutdown message - this could be triggered any number
%% of ways (e.g. handling `terminate` in an OTP compliant process)
Server ! {shutdown, self()},
%% Wait for notification that the process has exited cleanly
receive
{ok, Server} -> ok
end.
start_server() ->
%% Start the server in a separate Erlang process
spawn(
fun() ->
%% The process manages its own ZeroMQ context
{ok, Context} = erlzmq:context(),
{ok, Socket} = erlzmq:socket(Context, [rep, {active, true}]),
ok = erlzmq:bind(Socket, "tcp://*:5555"),
io:format("Server started on port 5555~n"),
loop(Context, Socket)
end).
loop(Context, Socket) ->
receive
{zmq, Socket, Msg, _Flags} ->
erlzmq:send(Socket, <<"You said: ", Msg/binary>>),
timer:sleep(1000),
loop(Context, Socket);
{shutdown, From} ->
io:format("Stopping server... "),
ok = erlzmq:close(Socket),
ok = erlzmq:term(Context),
io:format("done~n"),
From ! {ok, self()}
end.
interrupt: Handling Ctrl-C cleanly in Elixir
defmodule Interrupt do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:25
"""
def main() do
server = start_server()
:timer.sleep(5000)
send(server, {:shutdown, self()})
receive do
{:ok, ^server} ->
:ok
end
end
def start_server() do
:erlang.spawn(fn ->
{:ok, context} = :erlzmq.context()
{:ok, socket} = :erlzmq.socket(context, [:rep, {:active, true}])
:ok = :erlzmq.bind(socket, 'tcp://*:5555')
:io.format('Server started on port 5555~n')
loop(context, socket)
end)
end
def loop(context, socket) do
receive do
{:zmq, ^socket, msg, _flags} ->
:erlzmq.send(socket, <<"You said: ", msg::binary>>)
:timer.sleep(1000)
loop(context, socket)
{:shutdown, from} ->
:io.format('Stopping server... ')
:ok = :erlzmq.close(socket)
:ok = :erlzmq.term(context)
:io.format('done~n')
send(from, {:ok, self()})
end
end
end
Interrupt.main
--
--  Shows how to handle Ctrl-C
--
--  Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"

local context = zmq.init(1)
local server = context:socket(zmq.REP)
server:bind("tcp://*:5555")

s_catch_signals ()
while true do
    -- Blocking read will exit on a signal
    local request = server:recv()
    if (s_interrupted) then
        printf ("W: interrupt received, killing server...\n")
        break
    end
    server:send("World")
end
server:close()
context:term()
interrupt: Handling Ctrl-C cleanly in Node.js
// Show how to handle Ctrl+C in Node.js
var zmq = require('zeromq')
, socket = zmq.createSocket('rep');
socket.on('message', function(buf) {
// echo request back
socket.send(buf);
});
process.on('SIGINT', function() {
socket.close();
process.exit();
});
socket.bindSync('tcp://*:5555');
# Shows how to handle Ctrl-C (SIGINT) and SIGTERM in Perl
use strict;
use warnings;
use v5.10;

use Errno qw(EINTR);
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_REP);

my $interrupted;
$SIG{INT}  = sub { $interrupted = 1; };
$SIG{TERM} = sub { $interrupted = 1; };

my $context = ZMQ::FFI->new();
my $socket = $context->socket(ZMQ_REP);
$socket->bind('tcp://*:5558');
$socket->die_on_error(0);

while (!$interrupted) {
    $socket->recv();

    if ($socket->last_errno != EINTR) {
        die $socket->last_strerror;
    }
}

warn "interrupt received, killing server...";
interrupt: Handling Ctrl-C cleanly in PHP
<?php
/*
 * Interrupt in PHP
 * Shows how to handle CTRL+C
 * @author Nicolas Van Eenaeme <nicolas(at)poison(dot)be>
 */
declare(ticks=1); // PHP internal, make signal handling work
if (!function_exists('pcntl_signal'))
{
printf("Error, you need to enable the pcntl extension in your php binary, see http://www.php.net/manual/en/pcntl.installation.php for more info%s", PHP_EOL);
exit(1);
}
$running = true;
function signalHandler($signo)
{
global $running;
$running = false;
printf("Warning: interrupt received, killing server...%s", PHP_EOL);
}
pcntl_signal(SIGINT, 'signalHandler');
$context = new ZMQContext();
// Socket to talk to clients
$responder = new ZMQSocket($context, ZMQ::SOCKET_REP);
$responder->bind("tcp://*:5558");
while ($running)
{
// Wait for next request from client
try
{
$string = $responder->recv(); // The recv call will throw an ZMQSocketException when interrupted
// PHP Fatal error: Uncaught exception 'ZMQSocketException' with message 'Failed to receive message: Interrupted system call' in interrupt.php:35
}
catch (ZMQSocketException $e)
{
if ($e->getCode() == 4) // 4 == EINTR, interrupted system call (Ctrl+C will interrupt the blocking call as well)
{
usleep(1); // Don't just continue, otherwise the ticks function won't be processed, and the signal will be ignored, try it!
continue; // Ignore it, if our signal handler caught the interrupt as well, the $running flag will be set to false, so we'll break out
}
throw $e; // It's another exception, don't hide it from the user
}
printf("Received request: [%s]%s", $string, PHP_EOL);
// Do some 'work'
sleep(1);
// Send reply back to client
$responder->send("World");
}
// Do here all the cleanup that needs to be done
printf("Program ended cleanly%s", PHP_EOL);
interrupt: Handling Ctrl-C cleanly in Python
#
#   Shows how to handle Ctrl-C
#
import signal
import time
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5558")

# SIGINT will normally raise a KeyboardInterrupt, just like any other Python call
try:
    socket.recv()
except KeyboardInterrupt:
    print("W: interrupt received, stopping...")
finally:
    # clean up
    socket.close()
    context.term()
The program provides s_catch_signals(), which traps Ctrl-C (SIGINT) and SIGTERM. When either of these signals arrives, the s_catch_signals() handler sets the global variable s_interrupted. Thanks to your signal handler, your application will not die automatically. Instead, you have a chance to clean up and exit gracefully. You now have to check explicitly for an interrupt and handle it properly. Do this by calling s_catch_signals() (copy this from interrupt.c) at the start of your main code. This sets up the signal handling. The interrupt will affect ZeroMQ calls as follows: 该程序提供了 s_catch_signals() ,用于捕获 Ctrl-C( SIGINT )和 SIGTERM 。当任一信号到达时, s_catch_signals() 处理程序会设置全局变量 s_interrupted 。多亏了您的信号处理程序,您的应用程序不会自动终止。相反,您有机会进行清理并优雅地退出。您现在必须显式检查中断并正确处理它。通过在主代码开始处调用 s_catch_signals() (从 interrupt.c 复制)来实现。这会设置信号处理。中断将对 ZeroMQ 调用产生以下影响:
If your code is blocking in a blocking call (sending a message, receiving a message, or polling), then when a signal arrives, the call will return with EINTR. 如果您的代码在阻塞调用(发送消息、接收消息或轮询)中阻塞,那么当信号到达时,该调用将返回 EINTR 。
Wrappers like s_recv() return NULL if they are interrupted. 像 s_recv() 这样的封装在被中断时会返回 NULL。
So check for an EINTR return code, a NULL return, and/or s_interrupted. 所以检查返回码是否为 EINTR ,返回值是否为 NULL,和/或是否为 s_interrupted 。
Here is a typical code fragment: 下面是一个典型的代码片段:
s_catch_signals ();
client = zmq_socket (...);
while (!s_interrupted) {
char *message = s_recv (client);
if (!message)
break; // Ctrl-C used
}
zmq_close (client);
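The s_catch_signals() helper referred to above is only a few lines of standard C. As a minimal sketch (modeled on the guide's interrupt.c, using sigaction() without SA_RESTART so that blocking calls return with EINTR rather than being restarted):

#include <signal.h>

//  Global flag set by the handler and checked in the main loop
static int s_interrupted = 0;
static void s_signal_handler (int signal_value)
{
    s_interrupted = 1;
}

static void s_catch_signals (void)
{
    struct sigaction action;
    action.sa_handler = s_signal_handler;
    action.sa_flags = 0;        //  No SA_RESTART, so blocking calls return EINTR
    sigemptyset (&action.sa_mask);
    sigaction (SIGINT, &action, NULL);
    sigaction (SIGTERM, &action, NULL);
}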
If you call s_catch_signals() and don’t test for interrupts, then your application will become immune to Ctrl-C and SIGTERM, which may be useful, but is usually not. 如果调用 s_catch_signals() 而不检测中断,那么您的应用程序将对 Ctrl-C 和 SIGTERM 免疫,这可能有用,但通常不是。
Any long-running application has to manage memory correctly, or eventually it’ll use up all available memory and crash. If you use a language that handles this automatically for you, congratulations. If you program in C or C++ or any other language where you’re responsible for memory management, here’s a short tutorial on using valgrind, which among other things will report on any leaks your programs have. 任何长时间运行的应用程序都必须正确管理内存,否则最终会耗尽所有可用内存并崩溃。如果你使用的语言能自动处理内存管理,恭喜你。如果你使用 C、C++ 或任何其他需要自己负责内存管理的语言编程,这里有一个关于使用 valgrind 的简短教程,valgrind 可以报告程序中的内存泄漏等问题。
To install valgrind, e.g., on Ubuntu or Debian, issue this command: 要在 Ubuntu 或 Debian 上安装 valgrind,请执行以下命令:
sudo apt-get install valgrind
By default, ZeroMQ will cause valgrind to complain a lot. To remove these warnings, create a file called vg.supp that contains this: 默认情况下,ZeroMQ 会导致 valgrind 报告大量警告。要消除这些警告,请创建一个名为 vg.supp 的文件,内容如下:
Fix your applications to exit cleanly after Ctrl-C. For any application that exits by itself, that’s not needed, but for long-running applications, this is essential, otherwise valgrind will complain about all currently allocated memory. 修正您的应用程序,使其在按下 Ctrl-C 后能够干净地退出。对于任何自动退出的应用程序,这不是必需的,但对于长时间运行的应用程序来说,这是必不可少的,否则 valgrind 会抱怨所有当前分配的内存。
Build your application with -DDEBUG if it’s not your default setting. That ensures valgrind can tell you exactly where memory is being leaked. 如果这不是您的默认设置,请使用 -DDEBUG 构建您的应用程序。这样可以确保 valgrind 精确地告诉您内存泄漏的位置。
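With those steps done, running your program under valgrind is a one-liner. The usual invocation looks something like this (the program name is just a placeholder):

valgrind --tool=memcheck --leak-check=full --suppressions=vg.supp ./yourprogram

If your application shuts down cleanly, the report should end with valgrind's "no leaks are possible" summary.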
ZeroMQ is perhaps the nicest way ever to write multithreaded (MT) applications. Whereas ZeroMQ sockets require some readjustment if you are used to traditional sockets, ZeroMQ multithreading will take everything you know about writing MT applications, throw it into a heap in the garden, pour gasoline over it, and set it alight. It’s a rare book that deserves burning, but most books on concurrent programming do. ZeroMQ 可能是迄今为止编写多线程(MT)应用程序最优雅的方式。虽然如果你习惯了传统套接字,ZeroMQ 套接字需要一些调整,但 ZeroMQ 的多线程会将你对编写多线程应用程序的所有认知,统统扔进花园里的堆里,浇上汽油,然后点燃。很少有书值得焚烧,但大多数关于并发编程的书都值得。
To make utterly perfect MT programs (and I mean that literally), we don’t need mutexes, locks, or any other form of inter-thread communication except messages sent across ZeroMQ sockets. 为了编写绝对完美的多线程程序(我是字面意思),我们不需要互斥锁、锁机制或任何其他形式的线程间通信,唯一需要的就是通过 ZeroMQ 套接字发送的消息。
By “perfect MT programs”, I mean code that’s easy to write and understand, that works with the same design approach in any programming language, and on any operating system, and that scales across any number of CPUs with zero wait states and no point of diminishing returns. 所谓“完美的多线程程序”,是指那些易于编写和理解的代码,能够在任何编程语言和操作系统中采用相同的设计方法运行,并且能够在任意数量的 CPU 上扩展,且无等待状态且不存在收益递减点。
If you’ve spent years learning tricks to make your MT code work at all, let alone rapidly, with locks and semaphores and critical sections, you will be disgusted when you realize it was all for nothing. If there’s one lesson we’ve learned from 30+ years of concurrent programming, it is: just don’t share state. It’s like two drunkards trying to share a beer. It doesn’t matter if they’re good buddies. Sooner or later, they’re going to get into a fight. And the more drunkards you add to the table, the more they fight each other over the beer. The tragic majority of MT applications look like drunken bar fights. 如果你花了多年时间学习各种技巧,让你的多线程代码能够运行,甚至运行得很快,使用锁、信号量和临界区,那么当你意识到这一切都是徒劳时,你会感到非常失望。我们从 30 多年的并发编程中学到的一个教训是:千万不要共享状态。这就像两个醉汉试图共享一瓶啤酒。无论他们多么要好,迟早都会吵架。而且你桌上醉汉越多,他们为了那瓶啤酒争吵得越激烈。绝大多数多线程应用看起来就像醉酒的酒吧斗殴。
The list of weird problems that you need to fight as you write classic shared-state MT code would be hilarious if it didn’t translate directly into stress and risk, as code that seems to work suddenly fails under pressure. A large firm with world-beating experience in buggy code released its list of “11 Likely Problems In Your Multithreaded Code”, which covers forgotten synchronization, incorrect granularity, read and write tearing, lock-free reordering, lock convoys, two-step dance, and priority inversion. 当你编写经典的共享状态多线程代码时,需要应对的一系列奇怪问题本来会让人发笑,但由于这些问题直接转化为压力和风险——看似正常工作的代码在压力下突然失败——情况就不那么好笑了。一家在处理有缺陷代码方面拥有世界领先经验的大公司发布了他们的“多线程代码中 11 个可能出现的问题”清单,涵盖了忘记同步、粒度不正确、读写撕裂、无锁重排序、锁队列、两步舞以及优先级反转等问题。
Yeah, we counted seven problems, not eleven. That’s not the point though. The point is, do you really want that code running the power grid or stock market to start getting two-step lock convoys at 3 p.m. on a busy Thursday? Who cares what the terms actually mean? This is not what turned us on to programming, fighting ever more complex side effects with ever more complex hacks. 是的,我们数了七个问题,不是十一。重点不是这个。重点是,你真的希望那些控制电网或股市的代码,在一个繁忙的星期四下午三点开始出现两步锁定连锁反应吗?谁在乎这些术语实际上是什么意思?这并不是我们热衷于编程的原因,我们并不是为了用越来越复杂的黑客手段来对抗越来越复杂的副作用。
Some widely used models, despite being the basis for entire industries, are fundamentally broken, and shared state concurrency is one of them. Code that wants to scale without limit does it like the Internet does, by sending messages and sharing nothing except a common contempt for broken programming models. 一些被广泛使用的模型,尽管是整个行业的基础,实际上是根本有缺陷的,共享状态并发就是其中之一。想要无限扩展的代码,就像互联网那样,通过发送消息来实现,除了对破碎编程模型的共同蔑视外,不共享任何东西。
You should follow some rules to write happy multithreaded code with ZeroMQ: 编写使用 ZeroMQ 的高效多线程代码时,应遵循以下规则:
Isolate data privately within its thread and never share data in multiple threads. The only exception to this is ZeroMQ contexts, which are threadsafe. 在其线程内私有隔离数据,绝不在多个线程间共享数据。唯一的例外是 ZeroMQ 上下文,它们是线程安全的。
Stay away from the classic concurrency mechanisms such as mutexes, critical sections, semaphores, etc. These are an anti-pattern in ZeroMQ applications. 远离经典的并发机制,如互斥锁、临界区、信号量等。这些在 ZeroMQ 应用中是反模式。
Create one ZeroMQ context at the start of your process, and pass that to all threads that you want to connect via inproc sockets. 在进程开始时创建一个 ZeroMQ 上下文,并将其传递给所有希望通过 inproc 套接字连接的线程。
Use attached threads to create structure within your application, and connect these to their parent threads using PAIR sockets over inproc. The pattern is: bind parent socket, then create child thread which connects its socket. 使用附属线程在应用程序内创建结构,并使用 PAIR 套接字通过 inproc 将它们连接到其父线程。模式是:绑定父套接字,然后创建连接其套接字的子线程。
Use detached threads to simulate independent tasks, with their own contexts. Connect these over tcp. Later you can move these to stand-alone processes without changing the code significantly. 使用分离线程模拟独立任务,拥有自己的上下文。通过 tcp 连接它们。以后你可以将这些线程迁移到独立进程,而无需显著更改代码。
All interaction between threads happens as ZeroMQ messages, which you can define more or less formally. 线程之间的所有交互都以 ZeroMQ 消息的形式进行,您可以更正式或更随意地定义这些消息。
Don’t share ZeroMQ sockets between threads. ZeroMQ sockets are not threadsafe. Technically it’s possible to migrate a socket from one thread to another but it demands skill. The only place where it’s remotely sane to share sockets between threads are in language bindings that need to do magic like garbage collection on sockets. 不要在多个线程之间共享 ZeroMQ 套接字。ZeroMQ 套接字不是线程安全的。从技术上讲,可以将套接字从一个线程迁移到另一个线程,但这需要一定的技巧。唯一可能在多个线程之间共享套接字的合理场景是那些需要对套接字进行垃圾回收等魔法操作的语言绑定。
If you need to start more than one proxy in an application, for example, you will want to run each in their own thread. It is easy to make the error of creating the proxy frontend and backend sockets in one thread, and then passing the sockets to the proxy in another thread. This may appear to work at first but will fail randomly in real use. Remember: Do not use or close sockets except in the thread that created them. 如果你需要在一个应用程序中启动多个代理,例如,你会希望让每个代理运行在自己的线程中。很容易犯的一个错误是,在一个线程中创建代理的前端和后端套接字,然后将这些套接字传递给另一个线程中的代理。这在开始时可能看起来可行,但在实际使用中会随机失败。请记住:不要在创建套接字的线程之外使用或关闭套接字。
If you follow these rules, you can quite easily build elegant multithreaded applications, and later split off threads into separate processes as you need to. Application logic can sit in threads, processes, or nodes: whatever your scale needs. 如果遵循这些规则,你可以轻松构建优雅的多线程应用程序,之后根据需要将线程拆分为独立的进程。应用逻辑可以运行在线程、进程或节点中:完全取决于你的规模需求。
ZeroMQ uses native OS threads rather than virtual “green” threads. The advantage is that you don’t need to learn any new threading API, and that ZeroMQ threads map cleanly to your operating system. You can use standard tools like Intel’s ThreadChecker to see what your application is doing. The disadvantages are that native threading APIs are not always portable, and that if you have a huge number of threads (in the thousands), some operating systems will get stressed. ZeroMQ 使用本地操作系统线程,而非虚拟的“绿色”线程。其优点是您无需学习任何新的线程 API,且 ZeroMQ 线程能够与操作系统一一对应。您可以使用诸如 Intel 的 ThreadChecker 等标准工具来查看应用程序的运行情况。缺点是本地线程 API 并不总是具有可移植性,且如果线程数量庞大(达到数千个),某些操作系统可能会出现压力。
Let’s see how this works in practice. We’ll turn our old Hello World server into something more capable. The original server ran in a single thread. If the work per request is low, that’s fine: one ØMQ thread can run at full speed on a CPU core, with no waits, doing an awful lot of work. But realistic servers have to do nontrivial work per request. A single core may not be enough when 10,000 clients hit the server all at once. So a realistic server will start multiple worker threads. It then accepts requests as fast as it can and distributes these to its worker threads. The worker threads grind through the work and eventually send their replies back. 让我们看看这在实际中的运作方式。我们将把旧的 Hello World 服务器改造成更强大的版本。原始服务器运行在单线程中。如果每个请求的工作量很小,那没问题:一个 ØMQ 线程可以在一个 CPU 核心上全速运行,无需等待,完成大量工作。但现实中的服务器每个请求都需要做非平凡的工作。当有 10,000 个客户端同时访问服务器时,单个核心可能不够用。因此,现实中的服务器会启动多个工作线程。它会尽可能快地接受请求,并将这些请求分发给工作线程。工作线程处理完任务后,最终将回复发送回去。
You can, of course, do all this using a proxy broker and external worker processes, but often it’s easier to start one process that gobbles up sixteen cores than sixteen processes, each gobbling up one core. Further, running workers as threads will cut out a network hop, latency, and network traffic. 当然,你也可以使用代理代理和外部工作进程来完成所有这些,但通常启动一个占用十六个核心的进程比启动十六个各占用一个核心的进程更简单。此外,将工作线程作为线程运行可以减少一次网络跳转、延迟和网络流量。
The MT version of the Hello World service basically collapses the broker and workers into a single process: Hello World 服务的 MT 版本基本上将代理和工作线程合并到一个进程中:
mtserver: Multithreaded service in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void MTServer(string[] args)
{
//
// Multithreaded Hello World server
//
// Author: metadings
//
// Socket to talk to clients and
// Socket to talk to workers
using (var ctx = new ZContext())
using (var clients = new ZSocket(ctx, ZSocketType.ROUTER))
using (var workers = new ZSocket(ctx, ZSocketType.DEALER))
{
clients.Bind("tcp://*:5555");
workers.Bind("inproc://workers");
// Launch pool of worker threads
for (int i = 0; i < 5; ++i)
{
new Thread(() => MTServer_Worker(ctx)).Start();
}
// Connect work threads to client threads via a queue proxy
ZContext.Proxy(clients, workers);
}
}
static void MTServer_Worker(ZContext ctx)
{
// Socket to talk to dispatcher
using (var server = new ZSocket(ctx, ZSocketType.REP))
{
server.Connect("inproc://workers");
while (true)
{
using (ZFrame frame = server.ReceiveFrame())
{
Console.Write("Received: {0}", frame.ReadString());
// Do some 'work'
Thread.Sleep(1);
// Send reply back to client
string replyText = "World";
Console.WriteLine(", Sending: {0}", replyText);
server.Send(new ZFrame(replyText));
}
}
}
}
}
}
mtserver: Multithreaded service in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Multithreaded Hello World server in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage#:zguide.mtserver
(:nicknames#:mtserver)
(:use#:cl#:zhelpers)
(:export#:main))
(in-package:zguide.mtserver)
(defunworker-routine (context)
;; Socket to talk to dispatcher
(zmq:with-socket (receivercontextzmq:rep)
(zmq:connectreceiver"inproc://workers")
(loop
(let ((request (make-instance'zmq:msg)))
(zmq:recvreceiverrequest)
(message"Received request: [~A]~%" (zmq:msg-data-as-stringrequest))
;; Do some 'work'
(sleep1)
;; Send reply back to client
(let ((reply (make-instance'zmq:msg:data"World")))
(zmq:sendreceiverreply))))))
(defunmain ()
;; Prepare our context and socket
(zmq:with-context (context1)
;; Socket to talk to clients
(zmq:with-socket (clientscontextzmq:router)
(zmq:bindclients"tcp://*:5555")
;; Socket to talk to workers
(zmq:with-socket (workerscontextzmq:dealer)
(zmq:bindworkers"inproc://workers")
;; Launch pool of worker threads
(dotimes (i5)
(bt:make-thread (lambda () (worker-routinecontext))
:name (formatnil"worker-~D"i)))
;; Connect work threads to client threads via a queue
(zmq:devicezmq:queueclientsworkers))))
(cleanup))
mtserver: Multithreaded service in Delphi
program mtserver;
//
// Multithreaded Hello World server
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
procedure worker_routine( lcontext: TZMQContext );
var
receiver: TZMQSocket;
s: Utf8String;
begin
// Socket to talk to dispatcher
receiver := lContext.Socket( stRep );
receiver.connect( 'inproc://workers' );
while True do
begin
receiver.recv( s );
Writeln( Format( 'Received request: [%s]', [s] ) );
// Do some 'work'
sleep (1000);
// Send reply back to client
receiver.send( 'World' );
end;
receiver.Free;
end;
var
context: TZMQContext;
clients,
workers: TZMQSocket;
i: Integer;
tid: Cardinal;
begin
context := TZMQContext.Create;
// Socket to talk to clients
clients := Context.Socket( stRouter );
clients.bind( 'tcp://*:5555' );
// Socket to talk to workers
workers := Context.Socket( stDealer );
workers.bind( 'inproc://workers' );
// Launch pool of worker threads
for i := 0 to 4 do
BeginThread( nil, 0, @worker_routine, context, 0, tid );
// Connect work threads to client threads via a queue
ZMQProxy( clients, workers, nil );
// We never get here but clean up anyhow
clients.Free;
workers.Free;
context.Free;
end.
mtserver: Multithreaded service in Erlang
#!/usr/bin/env escript
%%
%% Multiprocess Hello World server (analogous to C threads example)
%%
worker_routine(Context) ->
%% Socket to talk to dispatcher
{ok, Receiver} = erlzmq:socket(Context, rep),
ok = erlzmq:connect(Receiver, "inproc://workers"),
worker_loop(Receiver),
ok = erlzmq:close(Receiver).
worker_loop(Receiver) ->
{ok, Msg} = erlzmq:recv(Receiver),
io:format("Received ~s [~p]~n", [Msg, self()]),
%% Do some work
timer:sleep(1000),
erlzmq:send(Receiver, <<"World">>),
worker_loop(Receiver).
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to talk to clients
{ok, Clients} = erlzmq:socket(Context, [router, {active, true}]),
ok = erlzmq:bind(Clients, "tcp://*:5555"),
%% Socket to talk to workers
{ok, Workers} = erlzmq:socket(Context, [dealer, {active, true}]),
ok = erlzmq:bind(Workers, "inproc://workers"),
%% Start worker processes
start_workers(Context, 5),
%% Connect work threads to client threads via a queue
erlzmq_device:queue(Clients, Workers),
%% We never get here but cleanup anyhow
ok = erlzmq:close(Clients),
ok = erlzmq:close(Workers),
ok = erlzmq:term(Context).
start_workers(_Context, 0) -> ok;
start_workers(Context, N) when N > 0 ->
spawn(fun() -> worker_routine(Context) end),
start_workers(Context, N - 1).
mtserver: Multithreaded service in Elixir
defmodule Mtserver do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:29
"""
def worker_routine(context) do
{:ok, receiver} = :erlzmq.socket(context, :rep)
:ok = :erlzmq.connect(receiver, 'inproc://workers')
worker_loop(receiver)
:ok = :erlzmq.close(receiver)
end
def worker_loop(receiver) do
{:ok, msg} = :erlzmq.recv(receiver)
:io.format('Received ~s [~p]~n', [msg, self()])
:timer.sleep(1000)
:erlzmq.send(receiver, "World")
worker_loop(receiver)
end
def main() do
{:ok, context} = :erlzmq.context()
{:ok, clients} = :erlzmq.socket(context, [:router, {:active, true}])
:ok = :erlzmq.bind(clients, 'tcp://*:5555')
{:ok, workers} = :erlzmq.socket(context, [:dealer, {:active, true}])
:ok = :erlzmq.bind(workers, 'inproc://workers')
start_workers(context, 5)
:erlzmq_device.queue(clients, workers)
:ok = :erlzmq.close(clients)
:ok = :erlzmq.close(workers)
:ok = :erlzmq.term(context)
end
def start_workers(_context, 0) do
:ok
end
def start_workers(context, n) when n > 0 do
:erlang.spawn(fn -> worker_routine(context) end)
start_workers(context, n - 1)
end
end
Mtserver.main
mtserver: Multithreaded service in F#
(*
Multithreaded Hello World server
*)
#r @"bin/fszmq.dll"
#r @"bin/fszmq.devices.dll"
open fszmq
open fszmq.Context
open fszmq.Devices
open fszmq.Socket
#load "zhelpers.fs"
open System.Threading
let worker_routine (o:obj) =
// socket to talk to dispatcher
use receiver = (o :?> Context) |> rep
"inproc://workers" |> connect receiver
while true do
let message = s_recv receiver
printfn "Received request: [%s]" message
// do some 'work'
sleep 1
"World" |> s_send receiver
let main () =
use context = new Context(1)
// socket to talk to clients
use clients = route context
"tcp://*:5555" |> bind clients
// socket to talk to workers
use workers = deal context
"inproc://workers" |> bind workers
// launch pool of worker threads
for thread_nbr in 0 .. 4 do
let t = Thread(ParameterizedThreadStart worker_routine)
t.Start(context)
// connect work threads to client threads via a queue
(clients,workers) |> queue |> ignore
// we never get here but clean up anyhow
EXIT_SUCCESS
main ()
mtserver: Multithreaded service in Go
// Multithreaded Hello World server.
// Uses Goroutines. We could also use channels (a native form of
// inproc), but I stuck to the example.
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
"time"
)
func main() {
// Launch pool of worker threads
for i := 0; i != 5; i = i + 1 {
go worker()
}
// Prepare our context and sockets
context, _ := zmq.NewContext()
defer context.Close()
// Socket to talk to clients
clients, _ := context.NewSocket(zmq.ROUTER)
defer clients.Close()
clients.Bind("tcp://*:5555")
// Socket to talk to workers
workers, _ := context.NewSocket(zmq.DEALER)
defer workers.Close()
workers.Bind("ipc://workers.ipc")
// connect work threads to client threads via a queue
zmq.Device(zmq.QUEUE, clients, workers)
}
func worker() {
context, _ := zmq.NewContext()
defer context.Close()
// Socket to talk to dispatcher
receiver, _ := context.NewSocket(zmq.REP)
defer receiver.Close()
receiver.Connect("ipc://workers.ipc")
for true {
received, _ := receiver.Recv(0)
fmt.Printf("Received request [%s]\n", received)
// Do some 'work'
time.Sleep(time.Second)
// Send reply back to client
receiver.Send([]byte("World"), 0)
}
}
mtserver: Multithreaded service in Haskell
{-# LANGUAGE OverloadedStrings #-}
-- |
-- Multithreaded Hello World server (p.65)
-- (Client) REQ >-> ROUTER (Proxy) DEALER >-> REP ([Worker])
-- The client is provided by `hwclient.hs`
-- Compile with -threaded
module Main where

import System.ZMQ4.Monadic
import Control.Monad (forever, replicateM_)
import Data.ByteString.Char8 (unpack)
import Control.Concurrent (threadDelay)
import Text.Printf

main :: IO ()
main =
    runZMQ $ do
        -- Server frontend socket to talk to clients
        server <- socket Router
        bind server "tcp://*:5555"
        -- Socket to talk to workers
        workers <- socket Dealer
        bind workers "inproc://workers" -- using inproc (inter-thread) we expect to share the same context
        replicateM_ 5 (async worker)
        -- Connect work threads to client threads via a queue
        proxy server workers Nothing

worker :: ZMQ z ()
worker = do
    receiver <- socket Rep
    connect receiver "inproc://workers"
    forever $ do
        receive receiver >>= liftIO . printf "Received request:%s\n" . unpack
        -- Simulate doing some 'work' for 1 second
        liftIO $ threadDelay (1 * 1000 * 1000)
        send receiver [] "World"
mtserver: Multithreaded service in Haxe
package ;
import haxe.io.Bytes;
import haxe.Stack;
import neko.Lib;
import neko.Sys;
#if !php
import neko.vm.Thread;
#end
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQPoller;
import org.zeromq.ZMQSocket;
import org.zeromq.ZMQException;
/**
* Multithreaded Hello World Server
*
* See: http://zguide.zeromq.org/page:all#Multithreading-with-MQ
* Use with HelloWorldClient.hx
*
 */
class MTServer
{
static function worker() {
var context:ZMQContext = ZMQContext.instance();
// Socket to talk to dispatchervar responder:ZMQSocket = context.socket(ZMQ_REP);
#if (neko || cpp)
responder.connect("inproc://workers");
#elseif php
responder.connect("ipc://workers.ipc");
#end
ZMQ.catchSignals();
while (true) {
try {
// Wait for next request from clientvar request:Bytes = responder.recvMsg();
trace ("Received request:" + request.toString());
// Do some work
Sys.sleep(1);
// Send reply back to client
responder.sendMsg(Bytes.ofString("World"));
} catch (e:ZMQException) {
if (ZMQ.isInterrupted()) {
break;
}
trace (e.toString());
}
}
responder.close();
return null;
}
/**
* Implements a request/reply QUEUE broker device
* Returns if poll is interrupted
* @param ctx
* @param frontend
* @param backend
 */
static function queueDevice(ctx:ZMQContext, frontend:ZMQSocket, backend:ZMQSocket) {
// Initialise pollsetvar poller:ZMQPoller = ctx.poller();
poller.registerSocket(frontend, ZMQ.ZMQ_POLLIN());
poller.registerSocket(backend, ZMQ.ZMQ_POLLIN());
ZMQ.catchSignals();
while (true) {
try {
poller.poll();
if (poller.pollin(1)) {
var more:Bool = true;
while (more) {
// Receive messagevar msg = frontend.recvMsg();
more = frontend.hasReceiveMore();
// Broker it
backend.sendMsg(msg, { if (more) SNDMORE elsenull; } );
}
}
if (poller.pollin(2)) {
var more:Bool = true;
while (more) {
// Receive messagevar msg = backend.recvMsg();
more = backend.hasReceiveMore();
// Broker it
frontend.sendMsg(msg, { if (more) SNDMORE elsenull; } );
}
}
} catch (e:ZMQException) {
if (ZMQ.isInterrupted()) {
break;
}
// Handle other errors
trace("ZMQException #:" + e.errNo + ", str:" + e.str());
trace (Stack.toString(Stack.exceptionStack()));
}
}
}
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println ("** MTServer (see: http://zguide.zeromq.org/page:all#Multithreading-with-MQ)");
// Socket to talk to clientsvar clients:ZMQSocket = context.socket(ZMQ_ROUTER);
clients.bind ("tcp://*:5556");
// Socket to talk to workersvar workers:ZMQSocket = context.socket(ZMQ_DEALER);
#if (neko || cpp)
workers.bind ("inproc://workers");
// Launch worker thread poolvar workerThreads:List<Thread> = new List<Thread>();
for (thread_nbr in0 ... 5) {
workerThreads.add(Thread.create(worker));
}
#elseif php
workers.bind ("ipc://workers.ipc");
// Launch pool of worker processes, due to php's lack of thread support// See: https://github.com/imatix/zguide/blob/master/examples/PHP/mtserver.phpfor (thread_nbr in0 ... 5) {
untyped __php__('
$pid = pcntl_fork();
if ($pid == 0) {
// Running in child process
worker();
exit();
}');
}
#end// Invoke request / reply broker (aka QUEUE device) to connect clients to workers
queueDevice(context, clients, workers);
// Close up shop
clients.close();
workers.close();
context.term();
}
}
mtserver: Multithreaded service in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
 * Multithreaded Hello World server
 */
public class mtserver
{
private static class Worker extends Thread
{
private ZContext context;
private Worker(ZContext context)
{
this.context = context;
}
@Override
public void run()
{
ZMQ.Socket socket = context.createSocket(SocketType.REP);
socket.connect("inproc://workers");
while (true) {
// Wait for next request from client (C string)
String request = socket.recvStr(0);
System.out.println(Thread.currentThread().getName() + " Received request: [" + request + "]");
// Do some 'work'
try {
Thread.sleep(1000);
}
catch (InterruptedException e) {
}
// Send reply back to client (C string)
socket.send("world", 0);
}
}
}
public static void main(String[] args)
{
try (ZContext context = new ZContext()) {
Socket clients = context.createSocket(SocketType.ROUTER);
clients.bind("tcp://*:5555");
Socket workers = context.createSocket(SocketType.DEALER);
workers.bind("inproc://workers");
for (int thread_nbr = 0; thread_nbr < 5; thread_nbr++) {
Thread worker = new Worker(context);
worker.start();
}
// Connect work threads to client threads via a queue
ZMQ.proxy(clients, workers, null);
}
}
}
mtserver: Multithreaded service in Lua
--
--  Multithreaded Hello World server
--
--  Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zmq.threads"
require"zhelpers"

local worker_code = [[
local id = ...
local zmq = require"zmq"
require"zhelpers"
local threads = require"zmq.threads"
local context = threads.get_parent_ctx()
-- Socket to talk to dispatcher
local receiver = context:socket(zmq.REP)
assert(receiver:connect("inproc://workers"))
while true do
local msg = receiver:recv()
printf ("Received request: [%s]\n", msg)
-- Do some 'work'
s_sleep (1000)
-- Send reply back to client
receiver:send("World")
end
receiver:close()
return nil
]]
s_version_assert (2, 1)
local context = zmq.init(1)
-- Socket to talk to clients
local clients = context:socket(zmq.ROUTER)
clients:bind("tcp://*:5555")
-- Socket to talk to workers
local workers = context:socket(zmq.DEALER)
workers:bind("inproc://workers")
-- Launch pool of worker threads
local worker_pool = {}
for n=1,5 do
    worker_pool[n] = zmq.threads.runstring(context, worker_code, n)
    worker_pool[n]:start()
end
-- Connect work threads to client threads via a queue
print("start queue device.")
zmq.device(zmq.QUEUE, clients, workers)
-- We never get here but clean up anyhow
clients:close()
workers:close()
context:term()
mtserver: Multithreaded service in Perl
# Multithreaded Hello World server in Perl
use strict;
use warnings;
use v5.10;

use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_REP ZMQ_ROUTER ZMQ_DEALER);
use threads;

sub worker_routine {
    my ($context) = @_;

    # Socket to talk to dispatcher
    my $receiver = $context->socket(ZMQ_REP);
    $receiver->connect('inproc://workers');

    while (1) {
        my $string = $receiver->recv();
        say "Received request: [$string]";

        # Do some 'work'
        sleep 1;

        # Send reply back to client
        $receiver->send('World');
    }
}

my $context = ZMQ::FFI->new();

# Socket to talk to clients
my $clients = $context->socket(ZMQ_ROUTER);
$clients->bind('tcp://*:5555');

# Socket to talk to workers
my $workers = $context->socket(ZMQ_DEALER);
$workers->bind('inproc://workers');

# Launch pool of worker threads
for (1..5) {
    threads->create('worker_routine', $context);
}

# Connect work threads to client threads via a queue proxy
$context->proxy($clients, $workers);

# We never get here
mtserver: Multithreaded service in PHP
<?php
/*
 * Multithreaded Hello World server. Uses processes due
 * to PHP's lack of threads!
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */
function worker_routine()
{
$context = new ZMQContext();
// Socket to talk to dispatcher
$receiver = new ZMQSocket($context, ZMQ::SOCKET_REP);
$receiver->connect("ipc://workers.ipc");
while (true) {
$string = $receiver->recv();
printf ("Received request: [%s]%s", $string, PHP_EOL);
// Do some 'work'
sleep(1);
// Send reply back to client
$receiver->send("World");
}
}
// Launch pool of worker threads
for ($thread_nbr = 0; $thread_nbr != 5; $thread_nbr++) {
$pid = pcntl_fork();
if ($pid == 0) {
worker_routine();
exit();
}
}
// Prepare our context and sockets
$context = new ZMQContext();
// Socket to talk to clients
$clients = new ZMQSocket($context, ZMQ::SOCKET_ROUTER);
$clients->bind("tcp://*:5555");
// Socket to talk to workers
$workers = new ZMQSocket($context, ZMQ::SOCKET_DEALER);
$workers->bind("ipc://workers.ipc");
// Connect work threads to client threads via a queue
$device = new ZMQDevice($clients, $workers);
$device->run ();
mtserver: Multithreaded service in Python
"""
Multithreaded Hello World server
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""importtimeimportthreadingimportzmqdefworker_routine(worker_url: str,
context: zmq.Context = None):
"""Worker routine"""
context = context or zmq.Context.instance()
# Socket to talk to dispatcher
socket = context.socket(zmq.REP)
socket.connect(worker_url)
while True:
string = socket.recv()
print(f"Received request: [ {string} ]")
# Do some 'work'
time.sleep(1)
# Send reply back to client
socket.send(b"World")
defmain():
"""Server routine"""
url_worker = "inproc://workers"
url_client = "tcp://*:5555"# Prepare our context and sockets
context = zmq.Context.instance()
# Socket to talk to clients
clients = context.socket(zmq.ROUTER)
clients.bind(url_client)
# Socket to talk to workers
workers = context.socket(zmq.DEALER)
workers.bind(url_worker)
# Launch pool of worker threads
for i in range(5):
thread = threading.Thread(target=worker_routine, args=(url_worker,))
thread.daemon = True
thread.start()
zmq.proxy(clients, workers)
# We never get here but clean up anyhow
clients.close()
workers.close()
context.term()
if __name__ == "__main__":
main()
mtserver: Multithreaded service in Q
// Multithreaded Hello World server
\l qzmq.q
worker_routine:{[args; ctx; pipe]
// Socket to talk to dispatcher
receiver:zsocket.new[ctx; zmq.REP];
zsocket.connect[receiver; `inproc://workers];
while[1b;
s:zstr.recv[receiver];
// Do some 'work'
zclock.sleep 1;
// Send reply back to client
zstr.send[receiver; "World"]];
zsocket.destroy[ctx; receiver]}
ctx:zctx.new[]
// Socket to talk to clients
clients:zsocket.new[ctx; zmq.ROUTER]
clientsport:zsocket.bind[clients; `$"tcp://*:5555"]
// Socket to talk to workers
workers:zsocket.new[ctx; zmq.DEALER]
workersport:zsocket.bind[workers; `inproc://workers]
// Launch pool of worker threads
do[5; zthread.fork[ctx; `worker_routine; 0]]
// Connect work threads to client threads via a queue
rc:libzmq.device[zmq.QUEUE; clients; workers]
if[rc<>-1; '`fail]
// We never get here but clean up anyhow
zsocket.destroy[ctx; clients]
zsocket.destroy[ctx; workers]
zctx.destroy[ctx]
\\
All the code should be recognizable to you by now. How it works: 到现在为止,所有代码你应该都能认得。它的工作原理是:
The server starts a set of worker threads. Each worker thread creates a REP socket and then processes requests on this socket. Worker threads are just like single-threaded servers. The only differences are the transport (inproc instead of tcp), and the bind-connect direction. 服务器启动一组工作线程。每个工作线程创建一个 REP 套接字,然后在该套接字上处理请求。工作线程就像单线程服务器。唯一的区别是传输方式( inproc 而不是 tcp )以及绑定-连接方向。
The server creates a ROUTER socket to talk to clients and binds this to its external interface (over tcp). 服务器创建一个 ROUTER 套接字与客户端通信,并将其绑定到外部接口(通过 tcp )。
The server creates a DEALER socket to talk to the workers and binds this to its internal interface (over inproc). 服务器创建一个 DEALER 套接字与工作线程通信,并将其绑定到内部接口(通过 inproc )。
The server starts a proxy that connects the two sockets. The proxy pulls incoming requests fairly from all clients, and distributes those out to workers. It also routes replies back to their origin. 服务器启动一个代理,连接这两个套接字。代理公平地从所有客户端拉取传入请求,并将其分发给工作线程。同时,它还将回复路由回请求的源头。
Note that creating threads is not portable in most programming languages. The POSIX library is pthreads, but on Windows you have to use a different API. In our example, the pthread_create call starts up a new thread running the worker_routine function we defined. We’ll see in
Chapter 3 - Advanced Request-Reply Patterns how to wrap this in a portable API. 请注意,在大多数编程语言中创建线程并不具备可移植性。POSIX 库是 pthreads,但在 Windows 上必须使用不同的 API。在我们的示例中, pthread_create 调用启动了一个运行我们定义的 worker_routine 函数的新线程。我们将在第 3 章 - 高级请求-响应模式中看到如何将其封装在一个可移植的 API 中。
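For reference, the C version described here (the one that calls pthread_create with worker_routine) boils down to the following. This is only a compact sketch along the lines of the guide's mtserver.c, relying on the s_recv()/s_send() helpers from zhelpers.h:

#include <pthread.h>
#include "zhelpers.h"

static void *
worker_routine (void *context)
{
    //  Socket to talk to dispatcher
    void *receiver = zmq_socket (context, ZMQ_REP);
    zmq_connect (receiver, "inproc://workers");

    while (1) {
        char *string = s_recv (receiver);
        printf ("Received request: [%s]\n", string);
        free (string);
        sleep (1);                      //  Do some 'work'
        s_send (receiver, "World");     //  Send reply back to client
    }
    zmq_close (receiver);
    return NULL;
}

int main (void)
{
    void *context = zmq_ctx_new ();
    //  Socket to talk to clients
    void *clients = zmq_socket (context, ZMQ_ROUTER);
    zmq_bind (clients, "tcp://*:5555");
    //  Socket to talk to workers
    void *workers = zmq_socket (context, ZMQ_DEALER);
    zmq_bind (workers, "inproc://workers");

    //  Launch pool of worker threads, all sharing the one context
    int thread_nbr;
    for (thread_nbr = 0; thread_nbr < 5; thread_nbr++) {
        pthread_t worker;
        pthread_create (&worker, NULL, worker_routine, context);
    }
    //  Connect work threads to client threads via a queue proxy
    zmq_proxy (clients, workers, NULL);

    //  We never get here, but clean up anyhow
    zmq_close (clients);
    zmq_close (workers);
    zmq_ctx_destroy (context);
    return 0;
}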
Here the “work” is just a one-second pause. We could do anything in the workers, including talking to other nodes. This is what the MT server looks like in terms of ØMQ sockets and nodes. Note how the request-reply chain is REQ-ROUTER-queue-DEALER-REP. 这里的“工作”只是暂停一秒钟。我们可以在工作线程中执行任何操作,包括与其他节点通信。这就是 MT 服务器在 ØMQ 套接字和节点方面的样子。请注意请求-响应链是 REQ-ROUTER-queue-DEALER-REP 。
Signaling Between Threads (PAIR Sockets) 线程间信号传递(PAIR 套接字)
When you start making multithreaded applications with ZeroMQ, you’ll encounter the question of how to coordinate your threads. Though you might be tempted to insert “sleep” statements, or use multithreading techniques such as semaphores or mutexes, the only mechanism that you should use are ZeroMQ messages. Remember the story of The Drunkards and The Beer Bottle. 当你开始使用 ZeroMQ 编写多线程应用程序时,你会遇到如何协调线程的问题。虽然你可能会想插入“sleep”语句,或使用信号量、互斥锁等多线程技术,但你唯一应该使用的机制是 ZeroMQ 消息。记住《醉汉与啤酒瓶》的故事。
Let’s make three threads that signal each other when they are ready. In this example, we use PAIR sockets over the inproc transport: 让我们创建三个线程,当它们准备好时相互发送信号。在此示例中,我们使用基于 inproc 传输的 PAIR 套接字:
mtrelay: Multithreaded relay in C++ mtrelay:C++中的多线程接力
/*
author: Saad Hussain <saadnasir31@gmail.com>
*/
#include <iostream>
#include <thread>
#include <zmq.hpp>

void step1(zmq::context_t &context) {
// Connect to step2 and tell it we're ready
zmq::socket_t xmitter(context, zmq::socket_type::pair);
xmitter.connect("inproc://step2");
std::cout << "Step 1 ready, signaling step 2" << std::endl;
zmq::message_t msg("READY");
xmitter.send(msg, zmq::send_flags::none);
}
voidstep2(zmq::context_t &context) {
// Bind inproc socket before starting step1
zmq::socket_t receiver(context, zmq::socket_type::pair);
receiver.bind("inproc://step2");
std::thread thd(step1, std::ref(context));
// Wait for signal and pass it on
zmq::message_t msg;
receiver.recv(msg, zmq::recv_flags::none);
// Connect to step3 and tell it we're ready
zmq::socket_t xmitter(context, zmq::socket_type::pair);
xmitter.connect("inproc://step3");
std::cout << "Step 2 ready, signaling step 3" << std::endl;
xmitter.send(zmq::str_buffer("READY"), zmq::send_flags::none);
thd.join();
}
intmain() {
zmq::context_t context(1);
// Bind inproc socket before starting step2
zmq::socket_t receiver(context, zmq::socket_type::pair);
receiver.bind("inproc://step3");
std::thread thd(step2, std::ref(context));
// Wait for signal
zmq::message_t msg;
receiver.recv(msg, zmq::recv_flags::none);
std::cout << "Test successful!" << std::endl;
thd.join();
return0;
}
mtrelay: Multithreaded relay in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void MTRelay(string[] args)
{
//
// Multithreaded relay
//
// Author: metadings
//
// Bind inproc socket before starting step2
using (var ctx = new ZContext())
using (var receiver = new ZSocket(ctx, ZSocketType.PAIR))
{
receiver.Bind("inproc://step3");
new Thread(() => MTRelay_step2(ctx)).Start();
// Wait for signal
receiver.ReceiveFrame();
Console.WriteLine("Test successful!");
}
}
static void MTRelay_step2(ZContext ctx)
{
// Bind inproc socket before starting step1
using (var receiver = new ZSocket(ctx, ZSocketType.PAIR))
{
receiver.Bind("inproc://step2");
new Thread(() => MTRelay_step1(ctx)).Start();
// Wait for signal and pass it on
receiver.ReceiveFrame();
}
// Connect to step3 and tell it we're ready
using (var xmitter = new ZSocket(ctx, ZSocketType.PAIR))
{
xmitter.Connect("inproc://step3");
Console.WriteLine("Step 2 ready, signaling step 3");
xmitter.Send(new ZFrame("READY"));
}
}
static void MTRelay_step1(ZContext ctx)
{
// Connect to step2 and tell it we're ready
using (var xmitter = new ZSocket(ctx, ZSocketType.PAIR))
{
xmitter.Connect("inproc://step2");
Console.WriteLine("Step 1 ready, signaling step 2");
xmitter.Send(new ZFrame("READY"));
}
}
}
}
mtrelay: Multithreaded relay in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Multithreaded relay in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage#:zguide.mtrelay
(:nicknames#:mtrelay)
(:use#:cl#:zhelpers)
(:export#:main))
(in-package:zguide.mtrelay)
(defunstep1 (context)
;; Signal downstream to step 2
(zmq:with-socket (sendercontextzmq:pair)
(zmq:connectsender"inproc://step2")
(let ((msg (make-instance'zmq:msg:data"")))
(zmq:sendsendermsg))))
(defunstep2 (context)
;; Bind to inproc: endpoint, then start upstream thread
(zmq:with-socket (receivercontextzmq:pair)
(zmq:bindreceiver"inproc://step2")
(bt:make-thread (lambda () (step1context)))
;; Wait for signal
(let ((msg (make-instance'zmq:msg)))
(zmq:recvreceivermsg))
;; Signal downstream to step 3
(zmq:with-socket (sendercontextzmq:pair)
(zmq:connectsender"inproc://step3")
(let ((msg (make-instance'zmq:msg:data"")))
(zmq:sendsendermsg)))))
(defunmain ()
(zmq:with-context (context1)
;; Bind to inproc: endpoint, then start upstream thread
(zmq:with-socket (receivercontextzmq:pair)
(zmq:bindreceiver"inproc://step3")
(bt:make-thread (lambda () (step2context)))
;; Wait for signal
(let ((msg (make-instance'zmq:msg)))
(zmq:recvreceivermsg)))
(message"Test successful!~%"))
(cleanup))
mtrelay: Multithreaded relay in Delphi
program mtrelay;
//
// Multithreaded relay
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
procedure step1( lcontext: TZMQContext );
var
xmitter: TZMQSocket;
begin
// Connect to step2 and tell it we're ready
xmitter := lContext.Socket( stPair );
xmitter.connect( 'inproc://step2' );
Writeln( 'Step 1 ready, signaling step 2' );
xmitter.send( 'READY' );
xmitter.Free;
end;
procedure step2( lcontext: TZMQContext );
var
receiver,
xmitter: TZMQSocket;
s: Utf8String;
tid: Cardinal;
begin
// Bind inproc socket before starting step1
receiver := lContext.Socket( stPair );
receiver.bind( 'inproc://step2' );
BeginThread( nil, 0, @step1, lcontext, 0, tid );
// Wait for signal and pass it on
receiver.recv( s );
receiver.Free;
// Connect to step3 and tell it we're ready
xmitter := lContext.Socket( stPair );
xmitter.connect( 'inproc://step3' );
Writeln( 'Step 2 ready, signaling step 3' );
xmitter.send( 'READY' );
xmitter.Free;
end;
var
context: TZMQContext;
receiver: TZMQSocket;
tid: Cardinal;
s: Utf8String;
begin
context := TZMQContext.Create;
// Bind inproc socket before starting step2
receiver := Context.Socket( stPair );
receiver.bind( 'inproc://step3' );
BeginThread( nil, 0, @step2, context, 0, tid );
// Wait for signal
receiver.recv ( s );
receiver.Free;
Writeln( 'Test successful!' );
context.Free;
end.
mtrelay: Multithreaded relay in Erlang
#!/usr/bin/env escript
%%
%% Multithreaded relay
%%
%% This example illustrates how inproc sockets can be used to communicate
%% across "threads". Erlang of course supports this natively, but it's fun to
%% see how 0MQ lets you do this across other languages!
%%
step1(Context) ->
%% Connect to step2 and tell it we're ready
{ok, Xmitter} = erlzmq:socket(Context, pair),
ok = erlzmq:connect(Xmitter, "inproc://step2"),
io:format("Step 1 ready, signaling step 2~n"),
ok = erlzmq:send(Xmitter, <<"READY">>),
ok = erlzmq:close(Xmitter).
step2(Context) ->
%% Bind inproc socket before starting step1
{ok, Receiver} = erlzmq:socket(Context, pair),
ok = erlzmq:bind(Receiver, "inproc://step2"),
spawn(fun() -> step1(Context) end),
%% Wait for signal and pass it on
{ok, _} = erlzmq:recv(Receiver),
ok = erlzmq:close(Receiver),
%% Connect to step3 and tell it we're ready
{ok, Xmitter} = erlzmq:socket(Context, pair),
ok = erlzmq:connect(Xmitter, "inproc://step3"),
io:format("Step 2 ready, signaling step 3~n"),
ok = erlzmq:send(Xmitter, <<"READY">>),
ok = erlzmq:close(Xmitter).
main(_) ->
{ok, Context} = erlzmq:context(),
%% Bind inproc socket before starting step2
{ok, Receiver} = erlzmq:socket(Context, pair),
ok = erlzmq:bind(Receiver, "inproc://step3"),
spawn(fun() -> step2(Context) end),
%% Wait for signal
{ok, _} = erlzmq:recv(Receiver),
erlzmq:close(Receiver),
io:format("Test successful~n"),
ok = erlzmq:term(Context).
(*
Multithreaded relay
*)
#r @"bin/fszmq.dll"
open fszmq
open fszmq.Context
open fszmq.Socket
#load "zhelpers.fs"
open System.Threading
let step1 (o:obj) =
// connect to step2 and tell it we're ready
use xmitter = (o :?> Context) |> pair
"inproc://step2" |> connect xmitter
printfn "Step 1 ready, signaling step 2"
"READY" |> s_send xmitter
let step2 (o:obj) =
let context : Context = downcast o
// bind inproc socket before starting step1
use receiver = pair context
"inproc://step2" |> bind receiver
let t = Thread(ParameterizedThreadStart step1)
t.Start(o)
// wait for signal and pass it on
s_recv receiver |> ignore
// connect to step3 and tell it we're ready
use xmitter = pair context
"inproc://step3" |> connect xmitter
printfn "Step 2 ready, signaling step 3"
"READY" |> s_send xmitter
let main () =
use context = new Context(1)
// bind inproc socket before starting step2
use receiver = pair context
"inproc://step3" |> bind receiver
let t = Thread(ParameterizedThreadStart step2)
t.Start(context)
// wait for signal
s_recv receiver |> ignore
printfn "Test successful"
EXIT_SUCCESS
main ()
mtrelay: Multithreaded relay in Perl
# Multithreaded relay in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PAIR);
use threads;
sub step1 {
my ($context) = @_;
# Connect to step2 and tell it we're ready
my $xmitter = $context->socket(ZMQ_PAIR);
$xmitter->connect('inproc://step2');
say "Step 1 ready, signaling step 2";
$xmitter->send("READY");
}
sub step2 {
my ($context) = @_;
# Bind inproc socket before starting step1
my $receiver = $context->socket(ZMQ_PAIR);
$receiver->bind('inproc://step2');
threads->create('step1', $context)
->detach();
# Wait for signal and pass it on
my $string = $receiver->recv();
# Connect to step3 and tell it we're ready
my $xmitter = $context->socket(ZMQ_PAIR);
$xmitter->connect('inproc://step3');
say "Step 2 ready, signaling step 3";
$xmitter->send("READY");
}
my $context = ZMQ::FFI->new();
# Bind inproc socket before starting step2
my $receiver = $context->socket(ZMQ_PAIR);
$receiver->bind('inproc://step3');
threads->create('step2', $context)
->detach();
# Wait for signal
$receiver->recv();
say "Test successful!";
mtrelay: Multithreaded relay in PHP
<?php
/*
 * Multithreaded relay. Actually using processes due to a lack
 * of PHP threads.
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */
function step1()
{
$context = new ZMQContext();
// Signal downstream to step 2
$sender = new ZMQSocket($context, ZMQ::SOCKET_PAIR);
$sender->connect("ipc://step2.ipc");
$sender->send("");
}
function step2()
{
$pid = pcntl_fork();
if ($pid == 0) {
step1();
exit();
}
$context = new ZMQContext();
// Bind to ipc: endpoint, then start upstream thread
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PAIR);
$receiver->bind("ipc://step2.ipc");
// Wait for signal
$receiver->recv();
// Signal downstream to step 3
$sender = new ZMQSocket($context, ZMQ::SOCKET_PAIR);
$sender->connect("ipc://step3.ipc");
$sender->send("");
}
// Start upstream thread then bind to ipc: endpoint
$pid = pcntl_fork();
if ($pid == 0) {
step2();
exit();
}
$context = new ZMQContext();
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PAIR);
$receiver->bind("ipc://step3.ipc");
// Wait for signal
$receiver->recv();
echo"Test succesful!", PHP_EOL;
mtrelay: Multithreaded relay in Python
"""
Multithreaded relay
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""importthreadingimportzmqdefstep1(context: zmq.Context = None):
"""Step 1"""
context = context or zmq.Context.instance()
# Signal downstream to step 2
sender = context.socket(zmq.PAIR)
sender.connect("inproc://step2")
sender.send(b"")
defstep2(context: zmq.Context = None):
"""Step 2"""
context = context or zmq.Context.instance()
# Bind to inproc: endpoint, then start upstream thread
receiver = context.socket(zmq.PAIR)
receiver.bind("inproc://step2")
thread = threading.Thread(target=step1)
thread.start()
# Wait for signal
msg = receiver.recv()
# Signal downstream to step 3
sender = context.socket(zmq.PAIR)
sender.connect("inproc://step3")
sender.send(b"")
defmain():
""" server routine """# Prepare our context and sockets
context = zmq.Context.instance()
# Bind to inproc: endpoint, then start upstream thread
receiver = context.socket(zmq.PAIR)
receiver.bind("inproc://step3")
thread = threading.Thread(target=step2)
thread.start()
# Wait for signal
string = receiver.recv()
print("Test successful!")
receiver.close()
context.term()
if __name__ == "__main__":
main()
mtrelay: Multithreaded relay in Q
// Multithreaded relay
\l qzmq.q
step1:{[args; ctx; pipe]
// Connect to step2 and tell it we're ready
xmitter:zsocket.new[ctx; zmq.PAIR];
zsocket.connect[xmitter; `inproc://step2];
zclock.log "Step 1 ready, signaling step 2";
zstr.send[xmitter; "READY"];
zsocket.destroy[ctx; xmitter]}
step2:{[args; ctx; pipe]
// Bind inproc socket before starting step1
receiver:zsocket.new[ctx; zmq.PAIR];
port:zsocket.bind[receiver; `inproc://step2];
pipe:zthread.fork[ctx; `step1; 0N];
// Wait for signal and pass it on
zclock.log s:zstr.recv[receiver];
// Connect to step3 and tell it we're ready
xmitter:zsocket.new[ctx; zmq.PAIR];
zsocket.connect[xmitter; `inproc://step3];
zclock.log "Step 2 ready, signaling step 3";
zstr.send[xmitter; "READY"];
zsocket.destroy[ctx; xmitter]}
ctx:zctx.new[]
// Bind inproc socket before starting step2
receiver:zsocket.new[ctx; zmq.PAIR]
port:zsocket.bind[receiver; `inproc://step3]
pipe:zthread.fork[ctx; `step2; 0N]
// Wait for signal
zclock.log s:zstr.recv[receiver]
zclock.log "Test successful!"
zctx.destroy[ctx]
\\
This is a classic pattern for multithreading with ZeroMQ: 这是使用 ZeroMQ 进行多线程的经典模式:
Two threads communicate over inproc, using a shared context. 两个线程通过 inproc 进行通信,使用共享上下文。
The parent thread creates one socket, binds it to an inproc endpoint, and then starts the child thread, passing the context to it. 父线程创建一个套接字,将其绑定到一个 inproc 端点,然后启动子线程,将上下文传递给它。
The child thread creates the second socket, connects it to that inproc endpoint, and then signals to the parent thread that it’s ready. 子线程创建第二个套接字,连接到该 inproc 端点,然后向父线程发出准备就绪的信号。
Note that multithreading code using this pattern is not scalable out to processes. If you use inproc and socket pairs, you are building a tightly-bound application, i.e., one where your threads are structurally interdependent. Do this when low latency is really vital. The other design pattern is a loosely bound application, where threads have their own context and communicate over ipc or tcp. You can easily break loosely bound threads into separate processes. 请注意,使用此模式的多线程代码无法扩展到进程。如果您使用 inproc 和套接字对,您正在构建一个紧密绑定的应用程序,即线程在结构上相互依赖的应用程序。当低延迟非常关键时,请采用此方法。另一种设计模式是松散绑定的应用程序,线程拥有自己的上下文,并通过 ipc 或 tcp 进行通信。您可以轻松地将松散绑定的线程拆分为独立的进程。
This is the first time we’ve shown an example using PAIR sockets. Why use PAIR? Other socket combinations might seem to work, but they all have side effects that could interfere with signaling: 这是我们第一次展示使用 PAIR 套接字的示例。为什么使用 PAIR?其他套接字组合看似可行,但它们都有可能干扰信号传递的副作用:
You can use PUSH for the sender and PULL for the receiver. This looks simple and will work, but remember that PUSH will distribute messages to all available receivers. If you by accident start two receivers (e.g., you already have one running and you start a second), you’ll “lose” half of your signals. PAIR has the advantage of refusing more than one connection; the pair is exclusive. 你可以使用 PUSH 作为发送端,PULL 作为接收端。这看起来简单且可行,但请记住,PUSH 会将消息分发给所有可用的接收端。如果你不小心启动了两个接收端(例如,你已经有一个在运行,然后又启动了第二个),你将“丢失”一半的信号。PAIR 的优点是拒绝多个连接;该对等连接是独占的。
You can use DEALER for the sender and ROUTER for the receiver. ROUTER, however, wraps your message in an “envelope”, meaning your zero-size signal turns into a multipart message. If you don’t care about the data and treat anything as a valid signal, and if you don’t read more than once from the socket, that won’t matter. If, however, you decide to send real data, you will suddenly find ROUTER providing you with “wrong” messages. DEALER also distributes outgoing messages, giving the same risk as PUSH. 你可以使用 DEALER 作为发送端,ROUTER 作为接收端。然而,ROUTER 会将你的消息包装在一个“信封”中,这意味着你的零长度信号会变成一个多部分消息。如果你不关心数据,并且将任何内容都视为有效信号,且不会从套接字读取多次,那这不会有影响。但如果你决定发送真实数据,你会突然发现 ROUTER 会给你提供“错误”的消息。DEALER 也会分发外发消息,存在与 PUSH 相同的风险。
You can use PUB for the sender and SUB for the receiver. This will correctly deliver your messages exactly as you sent them and PUB does not distribute as PUSH or DEALER do. However, you need to configure the subscriber with an empty subscription, which is annoying. 你可以使用 PUB 作为发送端,SUB 作为接收端。这将准确地按你发送的方式传递消息,且 PUB 不像 PUSH 或 DEALER 那样进行分发。然而,你需要为订阅者配置一个空订阅,这比较麻烦。
For these reasons, PAIR makes the best choice for coordination between pairs of threads. 基于这些原因,PAIR 是线程对之间协调的最佳选择。
When you want to coordinate a set of nodes on a network, PAIR sockets won’t work well any more. This is one of the few areas where the strategies for threads and nodes are different. Principally, nodes come and go whereas threads are usually static. PAIR sockets do not automatically reconnect if the remote node goes away and comes back. 当你想协调网络上的一组节点时,PAIR 套接字将不再适用。这是线程和节点策略不同的少数几个领域之一。主要区别在于,节点会动态加入和离开,而线程通常是静态的。如果远程节点断开后再重新连接,PAIR 套接字不会自动重连。
The second significant difference between threads and nodes is that you typically have a fixed number of threads but a more variable number of nodes. Let’s take one of our earlier scenarios (the weather server and clients) and use node coordination to ensure that subscribers don’t lose data when starting up. 线程和节点之间的第二个显著区别是,线程的数量通常是固定的,而节点的数量则更为可变。让我们以之前的一个场景(天气服务器和客户端)为例,使用节点协调来确保订阅者在启动时不会丢失数据。
This is how the application will work: 应用程序的工作流程如下:
The publisher knows in advance how many subscribers it expects. This is just a magic number it gets from somewhere. 发布者事先知道它期望有多少订阅者。这只是它从某处获得的一个魔数。
The publisher starts up and waits for all subscribers to connect. This is the node coordination part. Each subscriber subscribes and then tells the publisher it’s ready via another socket. 发布者启动并等待所有订阅者连接。这是节点协调部分。每个订阅者订阅后,通过另一个套接字告诉发布者它已准备好。
When the publisher has all subscribers connected, it starts to publish data. 当发布者所有订阅者都连接后,它开始发布数据。
In this case, we’ll use a REQ-REP socket flow to synchronize subscribers and publisher. Here is the publisher: 在这种情况下,我们将使用 REQ-REP 套接字流程来同步订阅者和发布者。以下是发布者:
syncpub: Synchronized publisher in C
// Synchronized publisher
#include "zhelpers.h"

#define SUBSCRIBERS_EXPECTED 10 // We wait for 10 subscribers
intmain (void)
{
void *context = zmq_ctx_new ();
// Socket to talk to clients
void *publisher = zmq_socket (context, ZMQ_PUB);
int sndhwm = 1100000;
zmq_setsockopt (publisher, ZMQ_SNDHWM, &sndhwm, sizeof (int));
zmq_bind (publisher, "tcp://*:5561");
// Socket to receive signals
void *syncservice = zmq_socket (context, ZMQ_REP);
zmq_bind (syncservice, "tcp://*:5562");
// Get synchronization from subscribers
printf ("Waiting for subscribers\n");
int subscribers = 0;
while (subscribers < SUBSCRIBERS_EXPECTED) {
// - wait for synchronization request
char *string = s_recv (syncservice);
free (string);
// - send synchronization reply
s_send (syncservice, "");
subscribers++;
}
// Now broadcast exactly 1M updates followed by END
printf ("Broadcasting messages\n");
int update_nbr;
for (update_nbr = 0; update_nbr < 1000000; update_nbr++)
s_send (publisher, "Rhubarb");
s_send (publisher, "END");
zmq_close (publisher);
zmq_close (syncservice);
zmq_ctx_destroy (context);
return0;
}
syncpub: Synchronized publisher in C++ syncpub:C++中的同步发布者
//
// Synchronized publisher in C++
//
#include"zhelpers.hpp"// We wait for 10 subscribers
#define SUBSCRIBERS_EXPECTED 10
intmain () {
zmq::context_t context(1);
// Socket to talk to clients
zmq::socket_t publisher (context, ZMQ_PUB);
int sndhwm = 0;
publisher.setsockopt (ZMQ_SNDHWM, &sndhwm, sizeof (sndhwm));
publisher.bind("tcp://*:5561");
// Socket to receive signals
zmq::socket_t syncservice (context, ZMQ_REP);
syncservice.bind("tcp://*:5562");
// Get synchronization from subscribers
int subscribers = 0;
while (subscribers < SUBSCRIBERS_EXPECTED) {
// - wait for synchronization request
s_recv (syncservice);
// - send synchronization reply
s_send (syncservice, std::string(""));
subscribers++;
}
// Now broadcast exactly 1M updates followed by END
int update_nbr;
for (update_nbr = 0; update_nbr < 1000000; update_nbr++) {
s_send (publisher, std::string("Rhubarb"));
}
s_send (publisher, std::string("END"));
sleep (1); // Give 0MQ time to flush output
return0;
}
syncpub: Synchronized publisher in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
const int SyncPub_SubscribersExpected = 3; // We wait for 3 subscribers
public static void SyncPub(string[] args)
{
//
// Synchronized publisher
//
// Author: metadings
//
// Socket to talk to clients and
// Socket to receive signals
using (var context = new ZContext())
using (var publisher = new ZSocket(context, ZSocketType.PUB))
using (var syncservice = new ZSocket(context, ZSocketType.REP))
{
publisher.SendHighWatermark = 1100000;
publisher.Bind("tcp://*:5561");
syncservice.Bind("tcp://*:5562");
// Get synchronization from subscribers
int subscribers = SyncPub_SubscribersExpected;
do
{
Console.WriteLine("Waiting for {0} subscriber" + (subscribers > 1 ? "s" : string.Empty) + "...", subscribers);
// - wait for synchronization request
syncservice.ReceiveFrame();
// - send synchronization reply
syncservice.Send(new ZFrame());
}
while (--subscribers > 0);
// Now broadcast exactly 20 updates followed by END
Console.WriteLine("Broadcasting messages:");
for (int i = 0; i < 20; ++i)
{
Console.WriteLine("Sending {0}...", i);
publisher.Send(new ZFrame(i));
}
publisher.Send(new ZFrame("END"));
}
}
}
}
syncpub: Synchronized publisher in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Synchronized publisher in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.syncpub
  (:nicknames #:syncpub)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.syncpub)
;; We wait for 10 subscribers
(defparameter *expected-subscribers* 10)
(defun main ()
  (zmq:with-context (context 1)
    ;; Socket to talk to clients
    (zmq:with-socket (publisher context zmq:pub)
      (zmq:bind publisher "tcp://*:5561")
      ;; Socket to receive signals
      (zmq:with-socket (syncservice context zmq:rep)
        (zmq:bind syncservice "tcp://*:5562")
        ;; Get synchronization from subscribers
        (loop :repeat *expected-subscribers* :do
          ;; - wait for synchronization request
          (let ((msg (make-instance 'zmq:msg)))
            (zmq:recv syncservice msg))
          ;; - send synchronization reply
          (let ((msg (make-instance 'zmq:msg :data "")))
            (zmq:send syncservice msg)))
        ;; Now broadcast exactly 1M updates followed by END
        (loop :repeat 1000000 :do
          (let ((msg (make-instance 'zmq:msg :data "Rhubarb")))
            (zmq:send publisher msg)))
        (let ((msg (make-instance 'zmq:msg :data "END")))
          (zmq:send publisher msg))))
    ;; Give 0MQ/2.0.x time to flush output
    (sleep 1))
  (cleanup))
syncpub: Synchronized publisher in Delphi
program syncpub;
//
// Synchronized publisher
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqApi
;
// We wait for 10 subscribers
const
SUBSCRIBERS_EXPECTED = 2;
var
context: TZMQContext;
publisher,
syncservice: TZMQSocket;
subscribers: Integer;
str: Utf8String;
i: Integer;
begin
context := TZMQContext.create;
// Socket to talk to clients
publisher := Context.Socket( stPub );
publisher.setSndHWM( 1000001 );
publisher.bind( 'tcp://*:5561' );
// Socket to receive signals
syncservice := Context.Socket( stRep );
syncservice.bind( 'tcp://*:5562' );
// Get synchronization from subscribers
Writeln( 'Waiting for subscribers' );
subscribers := 0;
while ( subscribers < SUBSCRIBERS_EXPECTED ) do
begin
// - wait for synchronization request
syncservice.recv( str );
// - send synchronization reply
syncservice.send( '' );
Inc( subscribers );
end;
// Now broadcast exactly 1M updates followed by END
Writeln( 'Broadcasting messages' );
for i := 0 to 1000000 - 1 do
publisher.send( 'Rhubarb' );
publisher.send( 'END' );
publisher.Free;
syncservice.Free;
context.Free;
end.
syncpub: Synchronized publisher in Erlang
#! /usr/bin/env escript
%%
%% Synchronized publisher
%%
%% We wait for 10 subscribers
-define(SUBSCRIBERS_EXPECTED, 10).
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to talk to clients
{ok, Publisher} = erlzmq:socket(Context, pub),
ok = erlzmq:bind(Publisher, "tcp://*:5561"),
%% Socket to receive signals
{ok, Syncservice} = erlzmq:socket(Context, rep),
ok = erlzmq:bind(Syncservice, "tcp://*:5562"),
%% Get synchronization from subscribers
io:format("Waiting for subscribers~n"),
sync_subscribers(Syncservice, ?SUBSCRIBERS_EXPECTED),
%% Now broadcast exactly 1M updates followed by END
io:format("Broadcasting messages~n"),
broadcast(Publisher, 1000000),
ok = erlzmq:send(Publisher, <<"END">>),
ok = erlzmq:close(Publisher),
ok = erlzmq:close(Syncservice),
ok = erlzmq:term(Context).
sync_subscribers(_Syncservice, 0) -> ok;
sync_subscribers(Syncservice, N) when N > 0 ->
%% Wait for synchronization request
{ok, _} = erlzmq:recv(Syncservice),
%% Send synchronization reply
ok = erlzmq:send(Syncservice, <<>>),
sync_subscribers(Syncservice, N - 1).
broadcast(_Publisher, 0) -> ok;
broadcast(Publisher, N) when N > 0 ->
ok = erlzmq:send(Publisher, <<"Rhubarb">>),
broadcast(Publisher, N - 1).
syncpub: Synchronized publisher in Elixir
defmodule Syncpub do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:34
"""
defmacrop erlconst_SUBSCRIBERS_EXPECTED() do
quote do
2
end
end
def main() do
{:ok, context} = :erlzmq.context()
{:ok, publisher} = :erlzmq.socket(context, :pub)
:ok = :erlzmq.bind(publisher, 'tcp://*:5561')
{:ok, syncservice} = :erlzmq.socket(context, :rep)
:ok = :erlzmq.bind(syncservice, 'tcp://*:5562')
:io.format('Waiting for subscribers. Please start 2 subscribers.~n')
sync_subscribers(syncservice, erlconst_SUBSCRIBERS_EXPECTED())
:io.format('Broadcasting messages~n')
broadcast(publisher, 1000000)
:ok = :erlzmq.send(publisher, "END")
:ok = :erlzmq.close(publisher)
:ok = :erlzmq.close(syncservice)
:ok = :erlzmq.term(context)
end
def sync_subscribers(_syncservice, 0) do
:ok
end
def sync_subscribers(syncservice, n) when n > 0 do
{:ok, _} = :erlzmq.recv(syncservice)
:ok = :erlzmq.send(syncservice, <<>>)
sync_subscribers(syncservice, n - 1)
end
def broadcast(_publisher, 0) do
:ok
end
def broadcast(publisher, n) when n > 0 do
:ok = :erlzmq.send(publisher, "Rhubarb")
broadcast(publisher, n - 1)
end
end
Syncpub.main
syncpub: Synchronized publisher in F#
(*
Synchronized publisher
*)
#r @"bin/fszmq.dll"
open fszmq
open fszmq.Context
open fszmq.Socket
#load "zhelpers.fs"
// we wait for 10 subscribers
let [<Literal>] SUBSCRIBERS_EXPECTED = 10
let main () =
use context = new Context(1)
// socket to talk to clients
use publisher = pub context
"tcp://*:5561" |> bind publisher
// socket to receive signals
use syncservice = rep context
"tcp://*:5562" |> bind syncservice
// get synchronization from subscribers
printfn "Waiting for subscribers"
let subscribers = ref 0
while !subscribers < SUBSCRIBERS_EXPECTED do
// - wait for synchronization request
syncservice |> s_recv |> ignore
// - send synchronization reply
"" |> s_send syncservice
incr subscribers
// now broadcast exactly 1M updates followed by END
printfn "Broadcasting messages"
for update_nbr in 0 .. 999999 do "Rhubarb" |> s_send publisher
"END" |> s_send publisher
EXIT_SUCCESS
main ()
# Synchronized publisher in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PUB ZMQ_REP ZMQ_SNDHWM);
my $SUBSCRIBERS_EXPECTED = 10; # We wait for 10 subscribers
my $context = ZMQ::FFI->new();
# Socket to talk to clients
my $publisher = $context->socket(ZMQ_PUB);
$publisher->set(ZMQ_SNDHWM, 'int', 0);
$publisher->set_linger(-1);
$publisher->bind('tcp://*:5561');
# Socket to receive signals
my $syncservice = $context->socket(ZMQ_REP);
$syncservice->bind('tcp://*:5562');
# Get synchronization from subscribers
say "Waiting for subscribers";
for my $subscribers (1..$SUBSCRIBERS_EXPECTED) {
    # wait for synchronization request
    $syncservice->recv();
    # send synchronization reply
    $syncservice->send('');
say "+1 subscriber ($subscribers/$SUBSCRIBERS_EXPECTED)";
}
# Now broadcast exactly 1M updates followed by END
say "Broadcasting messages";
for (1..1_000_000) {
$publisher->send("Rhubarb");
}
$publisher->send("END");
say "Done";
syncpub: Synchronized publisher in PHP
<?php
/*
 * Synchronized publisher
 *
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */

// We wait for 10 subscribers
define("SUBSCRIBERS_EXPECTED", 10);
$context = new ZMQContext();
// Socket to talk to clients
$publisher = new ZMQSocket($context, ZMQ::SOCKET_PUB);
$publisher->bind("tcp://*:5561");
// Socket to receive signals
$syncservice = new ZMQSocket($context, ZMQ::SOCKET_REP);
$syncservice->bind("tcp://*:5562");
// Get synchronization from subscribers
$subscribers = 0;
while ($subscribers < SUBSCRIBERS_EXPECTED) {
// - wait for synchronization request
$string = $syncservice->recv();
// - send synchronization reply
$syncservice->send("");
$subscribers++;
}
// Now broadcast exactly 1M updates followed by END
for ($update_nbr = 0; $update_nbr < 1000000; $update_nbr++) {
$publisher->send("Rhubarb");
}
$publisher->send("END");
sleep (1); // Give 0MQ/2.0.x time to flush output
syncpub: Synchronized publisher in Python
#
# Synchronized publisher
#
import zmq

# We wait for 10 subscribers
SUBSCRIBERS_EXPECTED = 10

def main():
    context = zmq.Context()

    # Socket to talk to clients
    publisher = context.socket(zmq.PUB)
    # set SNDHWM, so we don't drop messages for slow subscribers
    publisher.sndhwm = 1100000
    publisher.bind("tcp://*:5561")

    # Socket to receive signals
    syncservice = context.socket(zmq.REP)
    syncservice.bind("tcp://*:5562")

    # Get synchronization from subscribers
    subscribers = 0
    while subscribers < SUBSCRIBERS_EXPECTED:
        # wait for synchronization request
        msg = syncservice.recv()
        # send synchronization reply
        syncservice.send(b'')
        subscribers += 1
        print(f"+1 subscriber ({subscribers}/{SUBSCRIBERS_EXPECTED})")

    # Now broadcast exactly 1M updates followed by END
    for i in range(1000000):
        publisher.send(b"Rhubarb")

    publisher.send(b"END")

if __name__ == "__main__":
    main()
// Synchronized subscriber
#include"zhelpers.h"#include<unistd.h>intmain (void)
{
void *context = zmq_ctx_new ();
// First, connect our subscriber socket
void *subscriber = zmq_socket (context, ZMQ_SUB);
zmq_connect (subscriber, "tcp://localhost:5561");
zmq_setsockopt (subscriber, ZMQ_SUBSCRIBE, "", 0);
// 0MQ is so fast, we need to wait a while...
sleep (1);
// Second, synchronize with publisher
void *syncclient = zmq_socket (context, ZMQ_REQ);
zmq_connect (syncclient, "tcp://localhost:5562");
// - send a synchronization request
s_send (syncclient, "");
// - wait for synchronization reply
char *string = s_recv (syncclient);
free (string);
// Third, get our updates and report how many we got
int update_nbr = 0;
while (1) {
char *string = s_recv (subscriber);
if (strcmp (string, "END") == 0) {
free (string);
break;
}
free (string);
update_nbr++;
}
printf ("Received %d updates\n", update_nbr);
zmq_close (subscriber);
zmq_close (syncclient);
zmq_ctx_destroy (context);
return 0;
}
syncsub: Synchronized subscriber in C++ syncsub:用 C++ 编写的同步订阅者
//
// Synchronized subscriber in C++
//
#include"zhelpers.hpp"intmain (int argc, char *argv[])
{
zmq::context_t context(1);
// First, connect our subscriber socket
zmq::socket_t subscriber (context, ZMQ_SUB);
subscriber.connect("tcp://localhost:5561");
subscriber.set(zmq::sockopt::subscribe, "");
// Second, synchronize with publisher
zmq::socket_t syncclient (context, ZMQ_REQ);
syncclient.connect("tcp://localhost:5562");
// - send a synchronization request
s_send (syncclient, std::string(""));
// - wait for synchronization reply
s_recv (syncclient);
// Third, get our updates and report how many we got
int update_nbr = 0;
while (1) {
if (s_recv (subscriber).compare("END") == 0) {
break;
}
update_nbr++;
}
std::cout << "Received " << update_nbr << " updates" << std::endl;
return 0;
}
syncsub: Synchronized subscriber in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void SyncSub(string[] args)
{
//
// Synchronized subscriber
//
// Author: metadings
//
using (var context = new ZContext())
using (var subscriber = new ZSocket(context, ZSocketType.SUB))
using (var syncclient = new ZSocket(context, ZSocketType.REQ))
{
// First, connect our subscriber socket
subscriber.Connect("tcp://127.0.0.1:5561");
subscriber.SubscribeAll();
// 0MQ is so fast, we need to wait a while…
Thread.Sleep(1);
// Second, synchronize with publisher
syncclient.Connect("tcp://127.0.0.1:5562");
// - send a synchronization request
syncclient.Send(new ZFrame());
// - wait for synchronization reply
syncclient.ReceiveFrame();
// Third, get our updates and report how many we got
int i = 0;
while (true)
{
using (ZFrame frame = subscriber.ReceiveFrame())
{
string text = frame.ReadString();
if (text == "END")
{
break;
}
frame.Position = 0;
Console.WriteLine("Receiving {0}...", frame.ReadInt32());
++i;
}
}
Console.WriteLine("Received {0} updates.", i);
}
}
}
}
syncsub: Synchronized subscriber in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Synchronized subscriber in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.syncsub
  (:nicknames #:syncsub)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.syncsub)
(defun main ()
  (zmq:with-context (context 1)
    ;; First, connect our subscriber socket
    (zmq:with-socket (subscriber context zmq:sub)
      (zmq:connect subscriber "tcp://localhost:5561")
      (zmq:setsockopt subscriber zmq:subscribe "")
      ;; Second, synchronize with publisher
      (zmq:with-socket (syncclient context zmq:req)
        (zmq:connect syncclient "tcp://localhost:5562")
        ;; - send a synchronization request
        (let ((msg (make-instance 'zmq:msg :data "")))
          (zmq:send syncclient msg))
        ;; - wait for synchronization reply
        (let ((msg (make-instance 'zmq:msg)))
          (zmq:recv syncclient msg))
        ;; Third, get our updates and report how many we got
        (let ((updates 0))
          (loop
            (let ((msg (make-instance 'zmq:msg)))
              (zmq:recv subscriber msg)
              (when (string= "END" (zmq:msg-data-as-string msg))
                (return))
              (incf updates)))
          (message "Received ~D updates~%" updates)))))
  (cleanup))
syncsub: Synchronized subscriber in Delphi
program syncsub;
//
// Synchronized subscriber
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqApi
;
var
context: TZMQContext;
subscriber,
syncclient: TZMQSocket;
str: Utf8String;
i: Integer;
begin
context := TZMQContext.Create;
// First, connect our subscriber socket
subscriber := Context.Socket( stSub );
subscriber.RcvHWM := 1000001;
subscriber.connect( 'tcp://localhost:5561' );
subscriber.Subscribe( '' );
// 0MQ is so fast, we need to wait a while...
sleep (1000);
// Second, synchronize with publisher
syncclient := Context.Socket( stReq );
syncclient.connect( 'tcp://localhost:5562' );
// - send a synchronization request
syncclient.send( '' );
// - wait for synchronization reply
syncclient.recv( str );
// Third, get our updates and report how many we got
i := 0;
while True do
begin
subscriber.recv( str );
if str = 'END' then
break;
inc( i );
end;
Writeln( Format( 'Received %d updates', [i] ) );
subscriber.Free;
syncclient.Free;
context.Free;
end.
syncsub: Synchronized subscriber in Erlang
#! /usr/bin/env escript
%%
%% Synchronized subscriber
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% First, connect our subscriber socket
{ok, Subscriber} = erlzmq:socket(Context, sub),
ok = erlzmq:connect(Subscriber, "tcp://localhost:5561"),
ok = erlzmq:setsockopt(Subscriber, subscribe, <<>>),
%% Second, synchronize with publisher
{ok, Syncclient} = erlzmq:socket(Context, req),
ok = erlzmq:connect(Syncclient, "tcp://localhost:5562"),
%% - send a synchronization request
ok = erlzmq:send(Syncclient, <<>>),
%% - wait for synchronization reply
{ok, <<>>} = erlzmq:recv(Syncclient),
%% Third, get our updates and report how many we got
Updates = acc_updates(Subscriber, 0),
io:format("Received ~b updates~n", [Updates]),
ok = erlzmq:close(Subscriber),
ok = erlzmq:close(Syncclient),
ok = erlzmq:term(Context).
acc_updates(Subscriber, N) ->
case erlzmq:recv(Subscriber) of
{ok, <<"END">>} -> N;
{ok, _} -> acc_updates(Subscriber, N + 1)
end.
syncsub: Synchronized subscriber in Elixir
defmodule Syncsub do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:34
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, subscriber} = :erlzmq.socket(context, :sub)
:ok = :erlzmq.connect(subscriber, 'tcp://localhost:5561')
:ok = :erlzmq.setsockopt(subscriber, :subscribe, <<>>)
{:ok, syncclient} = :erlzmq.socket(context, :req)
:ok = :erlzmq.connect(syncclient, 'tcp://localhost:5562')
:ok = :erlzmq.send(syncclient, <<>>)
{:ok, <<>>} = :erlzmq.recv(syncclient)
updates = acc_updates(subscriber, 0)
:io.format('Received ~b updates~n', [updates])
:ok = :erlzmq.close(subscriber)
:ok = :erlzmq.close(syncclient)
:ok = :erlzmq.term(context)
end
def acc_updates(subscriber, n) do
case(:erlzmq.recv(subscriber)) do
{:ok, "END"} ->
n
{:ok, _} ->
acc_updates(subscriber, n + 1)
end
end
end
Syncsub.main
syncsub: Synchronized subscriber in F#
(*
Synchronized subscriber
*)
#r @"bin/fszmq.dll"
open fszmq
open fszmq.Context
open fszmq.Socket
#load "zhelpers.fs"
let main () =
use context = new Context(1)
// first, connect our subscriber socket
use subscriber = sub context
"tcp://localhost:5561" |> connect subscriber
[ ""B ] |> subscribe subscriber
// 0MQ is so fast, we need to wait a while...
sleep 1
// second, synchronize with publisher
use syncclient = req context
"tcp://localhost:5562" |> connect syncclient
// - send a synchronization request
"" |> s_send syncclient
// - wait for synchronization reply
syncclient |> s_recv |> ignore
// third, get our updates and report how many we got
let rec loop count =
let message = s_recv subscriber
if message <> "END"
then loop (count + 1)
else count
let update_nbr = loop 0
printfn "Received %d updates" update_nbr
EXIT_SUCCESS
main ()
// Synchronized subscriber
//
// Author: Aleksandar Janicijevic
// Requires: http://github.com/alecthomas/gozmq
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq""time"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
subscriber, _ := context.NewSocket(zmq.SUB)
defer subscriber.Close()
subscriber.Connect("tcp://localhost:5561")
subscriber.SetSubscribe("")
// 0MQ is so fast, we need to wait a while...
time.Sleep(time.Second)
// Second, synchronize with publisher
syncclient, _ := context.NewSocket(zmq.REQ)
defer syncclient.Close()
syncclient.Connect("tcp://localhost:5562")
// - send a synchronization request
fmt.Println("Send synchronization request")
syncclient.Send([]byte(""), 0)
fmt.Println("Wait for synchronization reply")
// - wait for synchronization reply
syncclient.Recv(0)
fmt.Println("Get updates")
// Third, get our updates and report how many we got
update_nbr := 0
for {
reply, _ := subscriber.Recv(0)
if string(reply) == "END" {
break
}
update_nbr++
}
fmt.Printf("Received %d updates\n", update_nbr)
}
syncsub: Synchronized subscriber in Haskell
{-# LANGUAGE OverloadedStrings #-}
-- Synchronized subscriber
module Main where

import Control.Concurrent
import Data.Function
import System.ZMQ4.Monadic
import Text.Printf

main :: IO ()
main = runZMQ $ do
    -- First, connect our subscriber socket
    subscriber <- socket Sub
    connect subscriber "tcp://localhost:5561"
    subscribe subscriber ""
    -- 0MQ is so fast, we need to wait a while...
    liftIO $ threadDelay 1000000
    -- Second, synchronize with the publisher
    syncclient <- socket Req
    connect syncclient "tcp://localhost:5562"
    -- Send a synchronization request
    send syncclient [] ""
    -- Wait for a synchronization reply
    receive syncclient
    let -- go :: (Int -> ZMQ z Int) -> Int -> ZMQ z Int
        go loop = \n -> do
            string <- receive subscriber
            if string == "END"
                then return n
                else loop (n+1)
    -- Third, get our updates and report how many we got
    update_nbr <- fix go (0 :: Int)
    liftIO $ printf "Received %d updates\n" update_nbr
syncsub: Synchronized subscriber in Haxe
package ;
import neko.Lib;
import haxe.io.Bytes;
import neko.Sys;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
/**
* Synchronised subscriber
*
* See: http://zguide.zeromq.org/page:all#Node-Coordination
*
* Use with SyncPub.hx
 */
class SyncSub
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** SyncSub (see: http://zguide.zeromq.org/page:all#Node-Coordination)");
// First connect our subscriber socket
var subscriber:ZMQSocket = context.socket(ZMQ_SUB);
subscriber.connect("tcp://127.0.0.1:5561");
subscriber.setsockopt(ZMQ_SUBSCRIBE, Bytes.ofString(""));
// 0MQ is so fast, we need to wait a little while
Sys.sleep(1.0);
// Second, synchronise with publisher
var syncClient:ZMQSocket = context.socket(ZMQ_REQ);
syncClient.connect("tcp://127.0.0.1:5562");
// Send a synchronisation request
syncClient.sendMsg(Bytes.ofString(""));
// Wait for a synchronisation reply
var msgBytes:Bytes = syncClient.recvMsg();
// Third, get our updates and report how many we got
var update_nbr = 0;
while (true) {
msgBytes = subscriber.recvMsg();
if (msgBytes.toString() == "END") {
break;
}
msgBytes = null;
update_nbr++;
}
Lib.println("Received " + update_nbr + " updates\n");
subscriber.close();
syncClient.close();
context.term();
}
}
syncsub: Synchronized subscriber in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
 * Synchronized subscriber.
 */
public class syncsub
{
public static void main(String[] args)
{
try (ZContext context = new ZContext()) {
// First, connect our subscriber socket
Socket subscriber = context.createSocket(SocketType.SUB);
subscriber.connect("tcp://localhost:5561");
subscriber.subscribe(ZMQ.SUBSCRIPTION_ALL);
// Second, synchronize with publisher
Socket syncclient = context.createSocket(SocketType.REQ);
syncclient.connect("tcp://localhost:5562");
// - send a synchronization request
syncclient.send(ZMQ.MESSAGE_SEPARATOR, 0);
// - wait for synchronization reply
syncclient.recv(0);
// Third, get our updates and report how many we got
int update_nbr = 0;
while (true) {
String string = subscriber.recvStr(0);
if (string.equals("END")) {
break;
}
update_nbr++;
}
System.out.println("Received " + update_nbr + " updates.");
}
}
}
# Synchronized subscriber in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_SUB ZMQ_REQ ZMQ_RCVHWM);
my $context = ZMQ::FFI->new();
# First, connect our subscriber socket
my $subscriber = $context->socket(ZMQ_SUB);
$subscriber->set(ZMQ_RCVHWM, 'int', 0);
$subscriber->connect('tcp://localhost:5561');
$subscriber->subscribe('');
# 0MQ is so fast, we need to wait a while...
sleep 3;
# Second, synchronize with publisher
my $syncclient = $context->socket(ZMQ_REQ);
$syncclient->connect('tcp://localhost:5562');
# send a synchronization request
$syncclient->send('');
# wait for synchronization reply
$syncclient->recv();
# Third, get our updates and report how many we got
my $update_nbr = 0;
while (1) {
    last if $subscriber->recv() eq "END";
    $update_nbr++;
}
say "Received $update_nbr updates";
syncsub: Synchronized subscriber in PHP
<?php
/*
 * Synchronized subscriber
 *
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */

$context = new ZMQContext();
// First, connect our subscriber socket
$subscriber = $context->getSocket(ZMQ::SOCKET_SUB);
$subscriber->connect("tcp://localhost:5561");
$subscriber->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "");
// Second, synchronize with publisher
$syncclient = $context->getSocket(ZMQ::SOCKET_REQ);
$syncclient->connect("tcp://localhost:5562");
// - send a synchronization request
$syncclient->send("");
// - wait for synchronization reply
$string = $syncclient->recv();
// Third, get our updates and report how many we got
$update_nbr = 0;
while (true) {
$string = $subscriber->recv();
if ($string == "END") {
break;
}
$update_nbr++;
}
printf ("Received %d updates %s", $update_nbr, PHP_EOL);
syncsub: Synchronized subscriber in Python
#
# Synchronized subscriber
#
import time
import zmq

def main():
    context = zmq.Context()

    # First, connect our subscriber socket
    subscriber = context.socket(zmq.SUB)
    subscriber.connect("tcp://localhost:5561")
    subscriber.setsockopt(zmq.SUBSCRIBE, b'')

    time.sleep(1)

    # Second, synchronize with publisher
    syncclient = context.socket(zmq.REQ)
    syncclient.connect("tcp://localhost:5562")

    # send a synchronization request
    syncclient.send(b'')

    # wait for synchronization reply
    syncclient.recv()

    # Third, get our updates and report how many we got
    nbr = 0
    while True:
        msg = subscriber.recv()
        if msg == b"END":
            break
        nbr += 1

    print(f"Received {nbr} updates")

if __name__ == "__main__":
    main()
This Bash shell script will start ten subscribers and then the publisher: 这个 Bash shell 脚本将启动十个订阅者,然后启动发布者:
echo "Starting subscribers..."
for ((a=0; a<10; a++)); do
syncsub &
done
echo "Starting publisher..."
syncpub
Which gives us this satisfying output: 这给了我们这个令人满意的输出:
Starting subscribers...
Starting publisher...
Received 1000000 updates
Received 1000000 updates
...
Received 1000000 updates
Received 1000000 updates
We can’t assume that the SUB connect will be finished by the time the REQ/REP dialog is complete. There are no guarantees that outbound connects will finish in any order whatsoever, if you’re using any transport except inproc. So, the example does a brute-force sleep of one second between subscribing and sending the REQ/REP synchronization. 我们不能假设在 REQ/REP 对话完成时,SUB 的连接已经完成。除非使用 inproc 传输,否则无法保证出站连接会以任何顺序完成。因此,示例在订阅和发送 REQ/REP 同步之间强制暂停了一秒钟。
A more robust model could be: the publisher opens its PUB socket and starts sending "Hello" messages (not data). Subscribers connect their SUB socket and, when they receive a Hello message, tell the publisher via a REQ/REP socket pair. 一个更健壮的模型是:发布者打开 PUB 套接字并开始发送"Hello"消息(而不是数据)。订阅者连接 SUB 套接字,当收到 Hello 消息时,通过 REQ/REP 套接字对告知发布者。
When the publisher has had all the necessary confirmations, it starts to send real data. 当发布者获得所有必要的确认后,它开始发送真实数据。
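Here is a minimal sketch of that more robust handshake, in C and using the zhelpers functions from the examples above. It is our own illustration rather than one of the official examples: the publisher keeps broadcasting "Hello" while polling its REP socket for confirmations, and each subscriber confirms only once it has actually received a Hello.
// Publisher side of the Hello handshake (sketch only)
void *publisher = zmq_socket (context, ZMQ_PUB);
zmq_bind (publisher, "tcp://*:5561");
void *syncservice = zmq_socket (context, ZMQ_REP);
zmq_bind (syncservice, "tcp://*:5562");
int subscribers = 0;
while (subscribers < SUBSCRIBERS_EXPECTED) {
    // Keep broadcasting Hello so late joiners have something to receive
    s_send (publisher, "Hello");
    // Poll briefly for subscriber confirmations on the REP socket
    zmq_pollitem_t items [] = { { syncservice, 0, ZMQ_POLLIN, 0 } };
    zmq_poll (items, 1, 100);
    if (items [0].revents & ZMQ_POLLIN) {
        free (s_recv (syncservice));    // read the sync request
        s_send (syncservice, "");       // confirm it
        subscribers++;
    }
}
// All subscribers confirmed; start sending real data here

// Subscriber side: confirm readiness only after a Hello arrives
void *subscriber = zmq_socket (context, ZMQ_SUB);
zmq_connect (subscriber, "tcp://localhost:5561");
zmq_setsockopt (subscriber, ZMQ_SUBSCRIBE, "", 0);
free (s_recv (subscriber));             // blocks until the connection is live
void *syncclient = zmq_socket (context, ZMQ_REQ);
zmq_connect (syncclient, "tcp://localhost:5562");
s_send (syncclient, "");                // tell the publisher we're ready
free (s_recv (syncclient));
// When reading updates later, skip any remaining "Hello" messages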
ZeroMQ’s message API lets you send and receive messages directly from and to application buffers without copying data. We call this zero-copy, and it can improve performance in some applications. ZeroMQ 的消息 API 允许你直接从应用程序缓冲区发送和接收消息,而无需复制数据。我们称之为零拷贝,这在某些应用中可以提升性能。
You should think about using zero-copy in the specific case where you are sending large blocks of memory (thousands of bytes), at a high frequency. For short messages, or for lower message rates, using zero-copy will make your code messier and more complex with no measurable benefit. Like all optimizations, use this when you know it helps, and measure before and after. 你应该在发送大块内存(数千字节)且频率较高的特定情况下考虑使用零拷贝。对于短消息或较低的消息速率,使用零拷贝会使代码变得更混乱、更复杂,而没有明显的好处。像所有优化一样,只有在确定它有帮助时才使用,并且要在使用前后进行测量。
To do zero-copy, you use zmq_msg_init_data() to create a message that refers to a block of data already allocated with malloc() or some other allocator, and then you pass that to zmq_msg_send(). When you create the message, you also pass a function that ZeroMQ will call to free the block of data, when it has finished sending the message. This is the simplest example, assuming buffer is a block of 1,000 bytes allocated on the heap: 要实现零拷贝,你使用 zmq_msg_init_data() 创建一个消息,该消息引用已经通过 malloc() 或其他分配器分配的数据块,然后将其传递给 zmq_msg_send() 。创建消息时,你还需要传递一个函数,ZeroMQ 会在发送完消息后调用该函数来释放数据块。以下是最简单的示例,假设 buffer 是堆上分配的 1000 字节数据块:
void my_free (void *data, void *hint) {
free (data);
}
// Send message from buffer, which we allocate and ZeroMQ will free for us
zmq_msg_t message;
zmq_msg_init_data (&message, buffer, 1000, my_free, NULL);
zmq_msg_send (&message, socket, 0);
Note that you don’t call zmq_msg_close() after sending a message–libzmq will do this automatically when it’s actually done sending the message. 请注意,发送消息后无需调用 zmq_msg_close() —— libzmq 会在消息实际发送完成时自动执行此操作。
There is no way to do zero-copy on receive: ZeroMQ delivers you a buffer that you can store as long as you wish, but it will not write data directly into application buffers. 接收时无法实现零拷贝:ZeroMQ 会交付一个缓冲区,你可以根据需要保存它,但它不会将数据直接写入应用程序缓冲区。
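You can, however, work directly on the zmq_msg_t that libzmq hands you instead of copying its contents into your own buffer first. A minimal sketch (assuming socket has data waiting):
zmq_msg_t message;
zmq_msg_init (&message);
zmq_msg_recv (&message, socket, 0);
// Read the data in place; the pointer stays valid until we close the message
size_t size = zmq_msg_size (&message);
char *data = (char *) zmq_msg_data (&message);
// ... process data [0 .. size - 1] directly ...
zmq_msg_close (&message);   // releases libzmq's buffer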
On writing, ZeroMQ’s multipart messages work nicely together with zero-copy. In traditional messaging, you need to marshal different buffers together into one buffer that you can send. That means copying data. With ZeroMQ, you can send multiple buffers coming from different sources as individual message frames. Send each field as a length-delimited frame. To the application, it looks like a series of send and receive calls. But internally, the multiple parts get written to the network and read back with single system calls, so it’s very efficient. 在写入时,ZeroMQ 的多部分消息与零拷贝配合得很好。在传统消息传递中,你需要将不同的缓冲区合并成一个缓冲区进行发送,这意味着数据拷贝。而使用 ZeroMQ,你可以将来自不同来源的多个缓冲区作为独立的消息帧发送。将每个字段作为一个长度限定的帧发送。对应用程序来说,这看起来像是一系列的发送和接收调用。但在内部,多个部分通过单个系统调用写入网络并读取回来,因此效率非常高。
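As a sketch of how that looks with zero-copy, assuming header_buf and body_buf are two separately malloc'd buffers of header_size and body_size bytes, and my_free is the callback defined above:
zmq_msg_t frame1, frame2;
zmq_msg_init_data (&frame1, header_buf, header_size, my_free, NULL);
zmq_msg_init_data (&frame2, body_buf, body_size, my_free, NULL);
zmq_msg_send (&frame1, socket, ZMQ_SNDMORE);   // more frames follow
zmq_msg_send (&frame2, socket, 0);             // last frame of this message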
In the pub-sub pattern, we can split the key into a separate message frame that we call an envelope. If you want to use pub-sub envelopes, make them yourself. It’s optional, and in previous pub-sub examples we didn’t do this. Using a pub-sub envelope is a little more work for simple cases, but it’s cleaner especially for real cases, where the key and the data are naturally separate things. 在发布-订阅模式中,我们可以将键拆分成一个单独的消息帧,称为信封。如果你想使用发布-订阅信封,可以自己实现。这是可选的,在之前的发布-订阅示例中我们没有这样做。对于简单情况,使用发布-订阅信封稍微麻烦一些,但对于实际情况来说更清晰,因为键和数据本质上是两个独立的东西。
Figure 23 - Pub-Sub Envelope with Separate Key 图 23 - 带有独立键的发布-订阅信封
Subscriptions do a prefix match. That is, they look for “all messages starting with XYZ”. The obvious question is: how to delimit keys from data so that the prefix match doesn’t accidentally match data. The best answer is to use an envelope because the match won’t cross a frame boundary. Here is a minimalist example of how pub-sub envelopes look in code. This publisher sends messages of two types, A and B. 订阅执行前缀匹配。也就是说,它们查找“所有以 XYZ 开头的消息”。显而易见的问题是:如何区分键和数据,以防前缀匹配意外匹配到数据。最好的答案是使用信封,因为匹配不会跨越帧边界。下面是一个极简示例,展示了发布-订阅信封在代码中的样子。该发布者发送两种类型的消息,A 和 B。
// Pubsub envelope publisher
// Note that the zhelpers.h file also provides s_sendmore
#include"zhelpers.h"#include<unistd.h>intmain (void)
{
// Prepare our context and publisher
void *context = zmq_ctx_new ();
void *publisher = zmq_socket (context, ZMQ_PUB);
zmq_bind (publisher, "tcp://*:5563");
while (1) {
// Write two messages, each with an envelope and content
s_sendmore (publisher, "A");
s_send (publisher, "We don't want to see this");
s_sendmore (publisher, "B");
s_send (publisher, "We would like to see this");
sleep (1);
}
// We never get here, but clean up anyhow
zmq_close (publisher);
zmq_ctx_destroy (context);
return 0;
}
psenvpub: Pub-Sub envelope publisher in C++ psenvpub:C++ 中的发布-订阅信封发布者
//
// Pubsub envelope publisher
// Note that the zhelpers.h file also provides s_sendmore
//
#include"zhelpers.hpp"intmain () {
// Prepare our context and publisher
zmq::context_t context(1);
zmq::socket_t publisher(context, ZMQ_PUB);
publisher.bind("tcp://*:5563");
while (1) {
// Write two messages, each with an envelope and content
s_sendmore (publisher, std::string("A"));
s_send (publisher, std::string("We don't want to see this"));
s_sendmore (publisher, std::string("B"));
s_send (publisher, std::string("We would like to see this"));
sleep (1);
}
return 0;
}
psenvpub: Pub-Sub envelope publisher in C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using ZeroMQ;
namespace Examples
{
static partial class Program
{
public static void PSEnvPub(string[] args)
{
//
// Pubsub envelope publisher
//
// Author: metadings
//
// Prepare our context and publisher
using (var context = new ZContext())
using (var publisher = new ZSocket(context, ZSocketType.PUB))
{
publisher.Linger = TimeSpan.Zero;
publisher.Bind("tcp://*:5563");
int published = 0;
while (true)
{
// Write two messages, each with an envelope and content
using (var message = new ZMessage())
{
published++;
message.Add(new ZFrame(string.Format("A {0}", published)));
message.Add(new ZFrame(string.Format(" We don't like to see this.")));
Thread.Sleep(1000);
Console_WriteZMessage("Publishing ", message);
publisher.Send(message);
}
using (var message = new ZMessage())
{
published++;
message.Add(new ZFrame(string.Format("B {0}", published)));
message.Add(new ZFrame(string.Format(" We do like to see this.")));
Thread.Sleep(1000);
Console_WriteZMessage("Publishing ", message);
publisher.Send(message);
}
}
}
}
}
}
psenvpub: Pub-Sub envelope publisher in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Pubsub envelope publisher in Common Lisp
;;; Note that the zhelpers package also provides send-text and send-more-text
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.psenvpub
  (:nicknames #:psenvpub)
  (:use #:cl #:zhelpers)
  (:export #:main))
(in-package :zguide.psenvpub)
(defun main ()
  ;; Prepare our context and publisher
  (zmq:with-context (context 1)
    (zmq:with-socket (publisher context zmq:pub)
      (zmq:bind publisher "tcp://*:5563")
      (loop
        ;; Write two messages, each with an envelope and content
        (send-more-text publisher "A")
        (send-text publisher "We don't want to see this")
        (send-more-text publisher "B")
        (send-text publisher "We would like to see this")
        (sleep 1))))
  (cleanup))
psenvpub: Pub-Sub envelope publisher in Delphi
program psenvpub;
//
// Pubsub envelope publisher
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
publisher: TZMQSocket;
begin
// Prepare our context and publisher
context := TZMQContext.Create;
publisher := context.Socket( stPub );
publisher.bind( 'tcp://*:5563' );
while true do
begin
// Write two messages, each with an envelope and content
publisher.send( ['A', 'We don''t want to see this'] );
publisher.send( ['B', 'We would like to see this'] );
sleep(1000);
end;
publisher.Free;
context.Free;
end.
psenvpub: Pub-Sub envelope publisher in Erlang
#! /usr/bin/env escript
%%
%% Pubsub envelope publisher
%%
main(_) ->
%% Prepare our context and publisher
{ok, Context} = erlzmq:context(),
{ok, Publisher} = erlzmq:socket(Context, pub),
ok = erlzmq:bind(Publisher, "tcp://*:5563"),
loop(Publisher),
%% We never get here but clean up anyhow
ok = erlzmq:close(Publisher),
ok = erlzmq:term(Context).
loop(Publisher) ->
%% Write two messages, each with an envelope and content
ok = erlzmq:send(Publisher, <<"A">>, [sndmore]),
ok = erlzmq:send(Publisher, <<"We don't want to see this">>),
ok = erlzmq:send(Publisher, <<"B">>, [sndmore]),
ok = erlzmq:send(Publisher, <<"We would like to see this">>),
timer:sleep(1000),
loop(Publisher).
psenvpub: Pub-Sub envelope publisher in Elixir
defmodule Psenvpub do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:29
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, publisher} = :erlzmq.socket(context, :pub)
:ok = :erlzmq.bind(publisher, 'tcp://*:5563')
loop(publisher)
:ok = :erlzmq.close(publisher)
:ok = :erlzmq.term(context)
end
def loop(publisher) do
:ok = :erlzmq.send(publisher, "A", [:sndmore])
:ok = :erlzmq.send(publisher, "We don't want to see this")
:ok = :erlzmq.send(publisher, "B", [:sndmore])
:ok = :erlzmq.send(publisher, "We would like to see this")
:timer.sleep(1000)
loop(publisher)
end
end
Psenvpub.main
psenvpub: Pub-Sub envelope publisher in F#
(*
Pubsub envelope publisher
Note that the zhelpers.fs file also provides s_sendmore
*)
#r @"bin/fszmq.dll"
open fszmq
open fszmq.Context
open fszmq.Socket
#load "zhelpers.fs"
let main () =
// prepare our context and publisher
use context = new Context(1)
use publisher = pub context
"tcp://*:5563" |> bind publisher
while true do
// write two messages, each with an envelope and content
"A" |> s_sendmore publisher
"We don't want to see this" |> s_send publisher
"B" |> s_sendmore publisher
"We would like to see this" |> s_send publisher
sleep 1
// we never get here but clean up anyhow
EXIT_SUCCESS
main ()
//
// Pubsub envelope publisher
//
package main
import (
zmq "github.com/alecthomas/gozmq""time"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
publisher, _ := context.NewSocket(zmq.PUB)
defer publisher.Close()
publisher.Bind("tcp://*:5563")
for {
publisher.SendMultipart([][]byte{[]byte("A"), []byte("We don't want to see this")}, 0)
publisher.SendMultipart([][]byte{[]byte("B"), []byte("We would like to see this")}, 0)
time.Sleep(time.Second)
}
}
psenvpub: Pub-Sub envelope publisher in Haskell
{-# LANGUAGE OverloadedLists #-}
{-# LANGUAGE OverloadedStrings #-}
-- Pubsub envelope publisher
module Main where

import Control.Concurrent
import Control.Monad
import System.ZMQ4.Monadic

main :: IO ()
main = runZMQ $ do
    -- Prepare our publisher
    publisher <- socket Pub
    bind publisher "tcp://*:5563"
    forever $ do
        -- Write two messages, each with an envelope and content
        sendMulti publisher ["A", "We don't want to see this"]
        sendMulti publisher ["B", "We would like to see this"]
        liftIO $ threadDelay 1000000
psenvpub: Pub-Sub envelope publisher in Haxe
package ;
import haxe.io.Bytes;
import neko.Lib;
import neko.Sys;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQException;
import org.zeromq.ZMQSocket;
/**
* Pubsub envelope publisher
*
* See: http://zguide.zeromq.org/page:all#Pub-sub-Message-Envelopes
*
* Use with PSEnvSub
 */
class PSEnvPub
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** PSEnvPub (see: http://zguide.zeromq.org/page:all#Pub-sub-Message-Envelopes)");
var publisher:ZMQSocket = context.socket(ZMQ_PUB);
publisher.bind("tcp://*:5563");
ZMQ.catchSignals();
while (true) {
publisher.sendMsg(Bytes.ofString("A"), SNDMORE);
publisher.sendMsg(Bytes.ofString("We don't want to see this"));
publisher.sendMsg(Bytes.ofString("B"), SNDMORE);
publisher.sendMsg(Bytes.ofString("We would like to see this"));
Sys.sleep(1.0);
}
// We never get here but clean up anyhow
publisher.close();
context.term();
}
}
psenvpub: Pub-Sub envelope publisher in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
 * Pubsub envelope publisher
 */
public class psenvpub
{
public static void main(String[] args) throws Exception
{
// Prepare our context and publisher
try (ZContext context = new ZContext()) {
Socket publisher = context.createSocket(SocketType.PUB);
publisher.bind("tcp://*:5563");
while (!Thread.currentThread().isInterrupted()) {
// Write two messages, each with an envelope and content
publisher.sendMore("A");
publisher.send("We don't want to see this");
publisher.sendMore("B");
publisher.send("We would like to see this");
}
}
}
}
--
--  Pubsub envelope publisher
--  Note that the zhelpers.h file also provides s_sendmore
--
--  Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"

-- Prepare our context and publisher
local context = zmq.init(1)
local publisher = context:socket(zmq.PUB)
publisher:bind("tcp://*:5563")

while true do
    -- Write two messages, each with an envelope and content
    publisher:send("A", zmq.SNDMORE)
    publisher:send("We don't want to see this")
    publisher:send("B", zmq.SNDMORE)
    publisher:send("We would like to see this")
    s_sleep (1000)
end
-- We never get here but clean up anyhow
publisher:close()
context:term()
psenvpub: Pub-Sub envelope publisher in Node.js
var zmq = require('zeromq')
var publisher = zmq.socket('pub')
publisher.bind('tcp://*:5563', function(err) {
if(err)
console.log(err)
else
console.log('Listening on 5563...')
})
setInterval(function() {
//if you pass an array, send() uses SENDMORE flag automatically
publisher.send(["A", "We do not want to see this"]);
//if you want, you can set it explicitly
publisher.send("B", zmq.ZMQ_SNDMORE);
publisher.send("We would like to see this");
},1000);
# Pubsub envelope publisher in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PUB);
# Prepare our context and publisher
my $context = ZMQ::FFI->new();
my $publisher = $context->socket(ZMQ_PUB);
$publisher->bind('tcp://*:5563');
while (1) {
    # Write two messages, each with an envelope and content
    $publisher->send_multipart(["A", "We don't want to see this"]);
    $publisher->send_multipart(["B", "We would like to see this"]);
    sleep 1;
}
# We never get here
psenvpub: Pub-Sub envelope publisher in PHP
<?php
/*
 * Pubsub envelope publisher
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */

// Prepare our context and publisher
$context = new ZMQContext();
$publisher = new ZMQSocket($context, ZMQ::SOCKET_PUB);
$publisher->bind("tcp://*:5563");
while (true) {
// Write two messages, each with an envelope and content
$publisher->send("A", ZMQ::MODE_SNDMORE);
$publisher->send("We don't want to see this");
$publisher->send("B", ZMQ::MODE_SNDMORE);
$publisher->send("We would like to see this");
sleep (1);
}
// We never get here
psenvpub: Pub-Sub envelope publisher in Python
"""
Pubsub envelope publisher
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""importtimeimportzmqdefmain():
"""main method"""# Prepare our context and publisher
context = zmq.Context()
publisher = context.socket(zmq.PUB)
publisher.bind("tcp://*:5563")
while True:
# Write two messages, each with an envelope and content
publisher.send_multipart([b"A", b"We don't want to see this"])
publisher.send_multipart([b"B", b"We would like to see this"])
time.sleep(1)
# We never get here but clean up anyhow
publisher.close()
context.term()
if __name__ == "__main__":
main()
#! /usr/bin/env escript
%%
%% Pubsub envelope subscriber
%%
main(_) ->
%% Prepare our context and subscriber
{ok, Context} = erlzmq:context(),
{ok, Subscriber} = erlzmq:socket(Context, sub),
ok = erlzmq:connect(Subscriber, "tcp://localhost:5563"),
ok = erlzmq:setsockopt(Subscriber, subscribe, <<"B">>),
loop(Subscriber),
%% We never get here but clean up anyhow
ok = erlzmq:close(Subscriber),
ok = erlzmq:term(Context).
loop(Subscriber) ->
%% Read envelope with address
{ok, Address} = erlzmq:recv(Subscriber),
%% Read message contents
{ok, Contents} = erlzmq:recv(Subscriber),
io:format("[~s] ~s~n", [Address, Contents]),
loop(Subscriber).
psenvsub: Pub-Sub envelope subscriber in Elixir
defmodule Psenvsub do
@moduledoc"""
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:30
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, subscriber} = :erlzmq.socket(context, :sub)
:ok = :erlzmq.connect(subscriber, 'tcp://localhost:5563')
:ok = :erlzmq.setsockopt(subscriber, :subscribe, "B")
loop(subscriber)
:ok = :erlzmq.close(subscriber)
:ok = :erlzmq.term(context)
end
def loop(subscriber) do
{:ok, address} = :erlzmq.recv(subscriber)
{:ok, contents} = :erlzmq.recv(subscriber)
:io.format('[~s] ~s~n', [address, contents])
loop(subscriber)
end
end
Psenvsub.main
psenvsub: Pub-Sub envelope subscriber in F#
(*
Pubsub envelope subscriber
*)
#r @"bin/fszmq.dll"
open fszmq
open fszmq.Context
open fszmq.Socket
#load "zhelpers.fs"
let main () =
// prepare our context and publisher
use context = new Context(1)
use subscriber = sub context
"tcp://localhost:5563" |> connect subscriber
[ "B"B ] |> subscribe subscriber
while true do
// read envelope with address
let address = s_recv subscriber
// read message contents
let contents = s_recv subscriber
printfn "[%s] %s" address contents
// we never get here but clean up anyhow
EXIT_SUCCESS
main ()
# Pubsub envelope subscriber in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_SUB);
# Prepare our context and subscriber
my $context = ZMQ::FFI->new();
my $subscriber = $context->socket(ZMQ_SUB);
$subscriber->connect('tcp://localhost:5563');
$subscriber->subscribe('B');
while (1) {
    # Read envelope with address
    my ($address, $contents) = $subscriber->recv_multipart();
    say "[$address] $contents";
}
# We never get here
psenvsub: Pub-Sub envelope subscriber in PHP
<?php
/*
 * Pubsub envelope subscriber
 * @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
 */

// Prepare our context and subscriber
$context = new ZMQContext();
$subscriber = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$subscriber->connect("tcp://localhost:5563");
$subscriber->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "B");
while (true) {
// Read envelope with address
$address = $subscriber->recv();
// Read message contents
$contents = $subscriber->recv();
printf ("[%s] %s%s", $address, $contents, PHP_EOL);
}
// We never get here
psenvsub: Pub-Sub envelope subscriber in Python
"""
Pubsub envelope subscriber
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""importzmqdefmain():
""" main method """# Prepare our context and publisher
context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://localhost:5563")
subscriber.setsockopt(zmq.SUBSCRIBE, b"B")
while True:
# Read envelope with address
[address, contents] = subscriber.recv_multipart()
print(f"[{address}] {contents}")
# We never get here but clean up anyhow
subscriber.close()
context.term()
if __name__ == "__main__":
main()
When you run the two programs, the subscriber should show you this: 当你运行这两个程序时,订阅者应该会显示如下内容:
[B] We would like to see this
[B] We would like to see this
[B] We would like to see this
...
This example shows that the subscription filter rejects or accepts the entire multipart message (key plus data). You won’t get part of a multipart message, ever. If you subscribe to multiple publishers and you want to know their address so that you can send them data via another socket (and this is a typical use case), create a three-part message. 这个例子表明订阅过滤器会拒绝或接受整个多部分消息(键加数据)。你永远不会只收到多部分消息的一部分。如果你订阅多个发布者,并且想知道它们的地址以便通过另一个套接字向它们发送数据(这是一个典型用例),请创建一个三部分消息。
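For example, a publisher could send a three-frame message of key, its own reply address, and the data; the subscription filter still only inspects the first frame. A sketch, using the zhelpers functions (the address shown is purely illustrative):
// Publisher: [subscription key][publisher address][data]
s_sendmore (publisher, "status");
s_sendmore (publisher, "tcp://192.168.55.210:5556");
s_send (publisher, "the actual update");

// Subscriber: all three frames arrive together, or not at all
char *key = s_recv (subscriber);
char *address = s_recv (subscriber);
char *data = s_recv (subscriber);
// ... use address to reach this publisher via another socket ...
free (key); free (address); free (data);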
When you can send messages rapidly from process to process, you soon discover that memory is a precious resource, and one that can be trivially filled up. A few seconds of delay somewhere in a process can turn into a backlog that blows up a server unless you understand the problem and take precautions. 当你能够快速地从一个进程向另一个进程发送消息时,你很快会发现内存是一种宝贵的资源,而且很容易被填满。某个进程中几秒钟的延迟可能会变成积压,导致服务器崩溃,除非你理解问题并采取预防措施。
The problem is this: imagine you have process A sending messages at high frequency to process B, which is processing them. Suddenly B gets very busy (garbage collection, CPU overload, whatever), and can’t process the messages for a short period. It could be a few seconds for some heavy garbage collection, or it could be much longer, if there’s a more serious problem. What happens to the messages that process A is still trying to send frantically? Some will sit in B’s network buffers. Some will sit on the Ethernet wire itself. Some will sit in A’s network buffers. And the rest will accumulate in A’s memory, as rapidly as the application behind A sends them. If you don’t take some precaution, A can easily run out of memory and crash. 问题是这样的:假设你有进程 A 以高频率向进程 B 发送消息,B 正在处理这些消息。突然 B 变得非常忙碌(垃圾回收、CPU 过载等原因),短时间内无法处理消息。可能是几秒钟的重度垃圾回收,也可能更长时间,如果出现更严重的问题。此时,进程 A 仍在疯狂地尝试发送消息,这些消息会发生什么?部分消息会停留在 B 的网络缓冲区,部分会停留在以太网线路上,部分会停留在 A 的网络缓冲区,其余的则会随着 A 后端应用发送的速度迅速积累在 A 的内存中。如果不采取任何预防措施,A 很容易耗尽内存并崩溃。
It is a consistent, classic problem with message brokers. What makes it hurt more is that it’s B’s fault, superficially, and B is typically a user-written application which A has no control over. 这是消息代理中一个一贯且经典的问题。更让人头疼的是,表面上看是 B 的错,而 B 通常是用户编写的应用程序,A 无法控制它。
What are the answers? One is to pass the problem upstream. A is getting the messages from somewhere else. So tell that process, “Stop!” And so on. This is called flow control. It sounds plausible, but what if you’re sending out a Twitter feed? Do you tell the whole world to stop tweeting while B gets its act together? 解决方案是什么?其中一个是将问题传递给上游。A 是从别处接收消息的,所以告诉那个过程,“停!”等等。这就是所谓的流量控制。听起来合理,但如果你正在发送 Twitter 推文呢?你会告诉全世界在 B 调整好之前停止发推吗?
Flow control works in some cases, but not in others. The transport layer can’t tell the application layer to “stop” any more than a subway system can tell a large business, “please keep your staff at work for another half an hour. I’m too busy”. The answer for messaging is to set limits on the size of buffers, and then when we reach those limits, to take some sensible action. In some cases (not for a subway system, though), the answer is to throw away messages. In others, the best strategy is to wait. 流量控制在某些情况下有效,但在其他情况下无效。传输层不能像地铁系统不能告诉大型企业“请让员工多工作半小时,我太忙了”一样,告诉应用层“停”。消息传递的解决方案是设置缓冲区大小限制,当达到限制时采取合理的措施。在某些情况下(地铁系统除外),答案是丢弃消息。在其他情况下,最佳策略是等待。
ZeroMQ uses the concept of HWM (high-water mark) to define the capacity of its internal pipes. Each connection out of a socket or into a socket has its own pipe, and HWM for sending, and/or receiving, depending on the socket type. Some sockets (PUB, PUSH) only have send buffers. Some (SUB, PULL, REQ, REP) only have receive buffers. Some (DEALER, ROUTER, PAIR) have both send and receive buffers. ZeroMQ 使用 HWM(高水位标记)的概念来定义其内部管道的容量。每个从套接字发出的连接或进入套接字的连接都有自己的管道,并且根据套接字类型具有发送和/或接收的 HWM。一些套接字(PUB、PUSH)只有发送缓冲区。一些(SUB、PULL、REQ、REP)只有接收缓冲区。一些(DEALER、ROUTER、PAIR)则同时具有发送和接收缓冲区。
In ZeroMQ v2.x, the HWM was infinite by default. This was easy but also typically fatal for high-volume publishers. In ZeroMQ v3.x, it’s set to 1,000 by default, which is more sensible. If you’re still using ZeroMQ v2.x, you should always set a HWM on your sockets, be it 1,000 to match ZeroMQ v3.x or another figure that takes into account your message sizes and expected subscriber performance. 在 ZeroMQ v2.x 中,HWM 默认是无限的。这虽然简单,但对于高流量的发布者来说通常是致命的。在 ZeroMQ v3.x 中,默认设置为 1000,这更合理。如果你仍在使用 ZeroMQ v2.x,应该始终为你的套接字设置 HWM,无论是设置为 1000 以匹配 ZeroMQ v3.x,还是根据你的消息大小和预期的订阅者性能设置其他数值。
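For instance, in C with ZeroMQ v3.x the send and receive limits are set separately (a sketch; socket stands for whatever socket you have created):
int hwm = 1000;
zmq_setsockopt (socket, ZMQ_SNDHWM, &hwm, sizeof (hwm));   // limit the outgoing pipes
zmq_setsockopt (socket, ZMQ_RCVHWM, &hwm, sizeof (hwm));   // limit the incoming pipes
// (ZeroMQ v2.x had a single uint64_t option, ZMQ_HWM, covering both directions.)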
When your socket reaches its HWM, it will either block or drop data depending on the socket type. PUB and ROUTER sockets will drop data if they reach their HWM, while other socket types will block. Over the inproc transport, the sender and receiver share the same buffers, so the real HWM is the sum of the HWM set by both sides. 当你的套接字达到其 HWM 时,会根据套接字类型阻塞或丢弃数据。PUB 和 ROUTER 套接字在达到 HWM 时会丢弃数据,而其他套接字类型则会阻塞。在 inproc 传输上,发送方和接收方共享相同的缓冲区,因此实际的 HWM 是双方设置的 HWM 之和。
Lastly, the HWMs are not exact; while you may get up to 1,000 messages by default, the real buffer size may be much lower (as little as half), due to the way libzmq implements its queues. 最后,HWM 不是精确的;虽然默认情况下您可能会收到多达 1,000 条消息,但由于 libzmq 实现其队列的方式,实际缓冲区大小可能会低得多(可能只有一半)。
As you build applications with ZeroMQ, you will come across this problem more than once: losing messages that you expect to receive. We have put together a diagram that walks through the most common causes for this. 在使用 ZeroMQ 构建应用程序时,您会多次遇到这样的问题:丢失了预期接收的消息。我们整理了一张图表,详细说明了最常见的原因。
Here’s a summary of what the graphic says: 以下是图示内容的总结:
On SUB sockets, set a subscription using zmq_setsockopt() with ZMQ_SUBSCRIBE, or you won’t get messages. Because you subscribe to messages by prefix, if you subscribe to "" (an empty subscription), you will get everything. 在 SUB 套接字上,使用 zmq_setsockopt() 和 ZMQ_SUBSCRIBE 设置订阅,否则你将收不到消息。因为你是通过前缀订阅消息的,如果你订阅了""(空订阅),你将收到所有消息。
If you start the SUB socket (i.e., establish a connection to a PUB socket) after the PUB socket has started sending out data, you will lose whatever it published before the connection was made. If this is a problem, set up your architecture so the SUB socket starts first, then the PUB socket starts publishing. 如果你在 PUB 套接字开始发送数据之后才启动 SUB 套接字(即建立与 PUB 套接字的连接),你将丢失连接建立之前发布的所有消息。如果这是个问题,请将架构设置为先启动 SUB 套接字,然后再启动 PUB 套接字进行发布。
Even if you synchronize a SUB and PUB socket, you may still lose messages. It’s due to the fact that internal queues aren’t created until a connection is actually created. If you can switch the bind/connect direction so the SUB socket binds, and the PUB socket connects, you may find it works more as you’d expect. 即使你同步了 SUB 和 PUB 套接字,你仍然可能丢失消息。这是因为内部队列直到连接真正建立后才会创建。如果你能调整绑定/连接方向,让 SUB 套接字绑定,PUB 套接字连接,你可能会发现它的行为更符合预期。
If you’re using REP and REQ sockets, and you’re not sticking to the synchronous send/recv/send/recv order, ZeroMQ will report errors, which you might ignore. Then, it would look like you’re losing messages. If you use REQ or REP, stick to the send/recv order, and always, in real code, check for errors on ZeroMQ calls. 如果你使用 REP 和 REQ 套接字,并且没有遵循同步的 send/recv/send/recv 顺序,ZeroMQ 会报告错误,而你可能会忽略它们。这样看起来就像你丢失了消息。如果使用 REQ 或 REP,请坚持 send/recv 顺序,并且在实际代码中始终检查 ZeroMQ 调用的错误。
If you’re using PUSH sockets, you’ll find that the first PULL socket to connect will grab an unfair share of messages. The accurate rotation of messages only happens when all PULL sockets are successfully connected, which can take some milliseconds. As an alternative to PUSH/PULL, for lower data rates, consider using ROUTER/DEALER and the load balancing pattern. 如果你使用的是 PUSH 套接字,你会发现第一个连接的 PULL 套接字会获得不公平的消息份额。只有当所有 PULL 套接字都成功连接后,消息的准确轮转才会发生,这可能需要几毫秒。作为 PUSH/PULL 的替代方案,对于较低的数据速率,可以考虑使用 ROUTER/DEALER 和负载均衡模式。
If you’re sharing sockets across threads, don’t. It will lead to random weirdness, and crashes. 如果你在多个线程间共享套接字,千万别这么做。这会导致随机的异常行为和崩溃。
If you’re using inproc, make sure both sockets are in the same context. Otherwise the connecting side will in fact fail. Also, bind first, then connect. inproc is not a disconnected transport like tcp. 如果你使用的是 inproc ,确保两个套接字在同一个上下文中。否则,连接方实际上会失败。另外,先绑定,再连接。 inproc 不是像 tcp 那样的断开传输。
If you’re using ROUTER sockets, it’s remarkably easy to lose messages by accident, by sending malformed identity frames (or forgetting to send an identity frame). In general setting the ZMQ_ROUTER_MANDATORY option on ROUTER sockets is a good idea, but do also check the return code on every send call (see the sketch after this list). 如果你使用的是 ROUTER 套接字,通过发送格式错误的身份帧(或忘记发送身份帧)意外丢失消息是非常容易的。通常在 ROUTER 套接字上设置 ZMQ_ROUTER_MANDATORY 选项是个好主意,但也要检查每次发送调用的返回码(见本列表后的示例)。
Lastly, if you really can’t figure out what’s going wrong, make a minimal test case that reproduces the problem, and ask for help from the ZeroMQ community. 最后,如果你实在无法弄清楚问题出在哪里,制作一个能重现问题的最小测试用例,并向 ZeroMQ 社区寻求帮助。
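As a sketch of the ROUTER advice above (our own fragment, not one of the chapter's examples): enable ZMQ_ROUTER_MANDATORY and check each send, so a bad or unknown identity shows up as an error instead of a silent drop. Here identity and identity_size stand for a peer identity you obtained earlier.
int mandatory = 1;
zmq_setsockopt (router, ZMQ_ROUTER_MANDATORY, &mandatory, sizeof (mandatory));
// The first frame must be the identity of a connected peer
if (zmq_send (router, identity, identity_size, ZMQ_SNDMORE) == -1
||  zmq_send (router, "payload", 7, 0) == -1)
    printf ("send failed: %s\n", zmq_strerror (errno));   // e.g. EHOSTUNREACH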