libevent Source Code Analysis

I have recently been teaching myself the libevent event-driven library, working from the libevent 2.2 sources and Zhang Liang's 《Libevent源码深度剖析》; reference: http://blog.csdn.net/sparkliang/article/details/4957667. I will not rehash libevent's selling points here. Like libiop, redis and others, libevent is built on an event-callback mechanism known as the Reactor pattern. In the ordinary flow, an application calls an interface directly to trigger some functionality; under the Reactor pattern we instead register those interfaces, together with a host pointer (the object that will call them), on the Reactor, and at the right moment the Reactor uses the host pointer to invoke the registered callbacks.

Part 1: Reactor basics

The Reactor pattern is one of the essential techniques for writing high-performance network servers. Its advantages:
1) Fast response: the server is never blocked on a single synchronous event, even though the Reactor itself is still synchronous;
2) Relatively simple programming: it largely sidesteps complex multithreading and synchronization issues, and avoids the cost of thread/process switches;
3) Scalability: CPU resources can be exploited simply by adding more Reactor instances;
4) Reusability: the Reactor framework itself is independent of the concrete event-handling logic, so it is highly reusable.
The Reactor pattern framework:
https://cdn.llfc.club/reactorlib.png
<!--more-->
1) Handle: a handle; on Linux a file descriptor, on Windows a socket or HANDLE.

2) EventDemultiplexer: the event demultiplexing mechanism, backed by an OS-provided I/O multiplexing facility such as select or epoll. The program first registers the handles it cares about with the EventDemultiplexer; when an awaited event arrives, the EventDemultiplexer notifies the program, which then runs the callback registered earlier to handle the message. In libevent this is still select, poll, epoll and friends, but libevent wraps them in the eventop structure so that they are all driven through one uniform interface, hiding the underlying OS mechanism from callers.
3) Reactor: the reactor.
The Reactor is the event-management interface. Internally it uses the event demultiplexer to register and unregister events, and it runs the event loop: when an event enters the "ready" state, it invokes the callback registered for that event. In libevent this role is played by the event_base structure. A typical Reactor declaration:

    class Reactor
    {
    public:
        int register_handler(Event_Handler *pHandler, int event);
        int remove_handler(Event_Handler *pHandler, int event);
        void handle_events(timeval *ptv);
        // ...
    };

4) Event Handler: the event handler.
An event handler exposes a set of interfaces, one per event type, for the Reactor to call when the corresponding event occurs. It is usually bound to a valid handle. In libevent this is the event structure. Below are two typical Event Handler class declarations; each has its pros and cons.

    class Event_Handler
    {
    public:
        virtual void handle_read() = 0;
        virtual void handle_write() = 0;
        virtual void handle_timeout() = 0;
        virtual void handle_close() = 0;
        virtual HANDLE get_handle() = 0;
        // ...
    };

    class Event_Handler
    {
    public:
        // events may be read/write/timeout/close, etc.
        virtual void handle_events(int events) = 0;
        virtual HANDLE get_handle() = 0;
        // ...
    };

Part 2: Using the API provided by libevent

1) First initialize the libevent library and keep the returned pointer: struct event_base *base = event_init();
This step essentially creates a Reactor instance; once libevent is initialized, events can be registered.

2) Set the event's properties and callback

Call void event_set(struct event *ev, int fd, short event, void (*cb)(int, short, void *), void *arg);

The parameters:

ev: pointer to the event object to initialize
fd: the "handle" the event is bound to; for a signal event it is the signal number being watched
event: the event types of interest on that fd; may be EV_READ, EV_WRITE, EV_SIGNAL
cb: a function pointer invoked when the event fires on fd. Its three parameters, passed in by the event_base, are the watched fd, the type of event that fired (read/write/signal), and the void *arg callback argument; in order, they are exactly the fd, event and arg that were given to event_set;

arg: the argument handed to the cb function pointer

Since timer events need no fd, and a timer is scheduled from the timeout supplied at event_add() time, the event argument need not be set for them.
This step essentially initializes an event handler; in libevent the event type is stored in the event structure.
Note: libevent does not manage the set of event objects; the application must manage them itself.

3) Attach the event to an event_base
event_base_set(base, &ev);
This step specifies which event_base instance the event will be registered on.

4) Add the event to the event queue
event_add(&ev, timeout);
With the basic information filled in, a single call to event_add() completes registration; timeout is the timer value.

This step is the equivalent of calling Reactor::register_handler() to register the event.

5) Enter the event loop, waiting for ready events and dispatching them
event_base_dispatch(base);

Let's look at a sample that ships with libevent:

    int
    main(int argc, char **argv)
    {
        struct event evfifo;
    #ifdef WIN32
        HANDLE socket;
        /* Open a file. */
        socket = CreateFileA("test.txt",    /* open file */
                GENERIC_READ,               /* open for reading */
                0,                          /* do not share */
                NULL,                       /* no security */
                OPEN_EXISTING,              /* existing file only */
                FILE_ATTRIBUTE_NORMAL,      /* normal file */
                NULL);                      /* no attr. template */

        if (socket == INVALID_HANDLE_VALUE)
            return 1;
    #else
        struct stat st;
        const char *fifo = "event.fifo";
        int socket;

        if (lstat(fifo, &st) == 0) {
            if ((st.st_mode & S_IFMT) == S_IFREG) {
                errno = EEXIST;
                perror("lstat");
                exit(1);
            }
        }

        unlink(fifo);
        if (mkfifo(fifo, 0600) == -1) {
            perror("mkfifo");
            exit(1);
        }

        /* Linux pipes are broken, we need O_RDWR instead of O_RDONLY */
    #ifdef __linux
        socket = open(fifo, O_RDWR | O_NONBLOCK, 0);
    #else
        socket = open(fifo, O_RDONLY | O_NONBLOCK, 0);
    #endif
        if (socket == -1) {
            perror("open");
            exit(1);
        }

        fprintf(stderr, "Write data to %s\n", fifo);
    #endif
        /* Initialize the event library */
        event_init();

        /* Initialize one event */
    #ifdef WIN32
        event_set(&evfifo, (evutil_socket_t)socket, EV_READ, fifo_read, &evfifo);
    #else
        event_set(&evfifo, socket, EV_READ, fifo_read, &evfifo);
    #endif

        /* Add it to the active events, without a timeout */
        event_add(&evfifo, NULL);

        event_dispatch();
    #ifdef WIN32
        CloseHandle(socket);
    #endif
        return (0);
    }

main() calls event_init() to initialize an event_base; event_set() then installs the callback on the event and registers interest in read events; event_add() inserts the event into the event queue with the timeout set to NULL; finally event_dispatch() starts polling and dispatching events. fifo_read is a callback with exactly the cb signature described earlier:

    static void
    fifo_read(evutil_socket_t fd, short event, void *arg)
    {
        char buf[255];
        int len;
        struct event *ev = arg;
    #ifdef WIN32
        DWORD dwBytesRead;
    #endif

        /* Reschedule this event */
        event_add(ev, NULL);

        fprintf(stderr, "fifo_read called with fd: %d, event: %d, arg: %p\n",
            (int)fd, event, arg);
    #ifdef WIN32
        len = ReadFile((HANDLE)fd, buf, sizeof(buf) - 1, &dwBytesRead, NULL);

        /* Check for end of file. */
        if (len && dwBytesRead == 0) {
            fprintf(stderr, "End Of File");
            event_del(ev);
            return;
        }

        buf[dwBytesRead] = '\0';
    #else
        len = read(fd, buf, sizeof(buf) - 1);

        if (len == -1) {
            perror("read");
            return;
        } else if (len == 0) {
            fprintf(stderr, "Connection closed\n");
            return;
        }

        buf[len] = '\0';
    #endif
        fprintf(stdout, "Read: %s\n", buf);
    }

Part 3: File organization, as categorized in 《Libevent源码深度剖析》

1) Public header, essentially event.h: event macro definitions, interface function declarations, and the declaration of the core event structure;

2) Internal headers
xxx-internal.h: internal data structures and functions, hidden from the outside for information hiding;
3) The libevent framework
event.c: implementation of the overall event framework;
4) Wrappers for the OS I/O multiplexing mechanisms
epoll.c: wrapper for epoll;
select.c: wrapper for select;
devpoll.c: wrapper for dev/poll;
kqueue.c: wrapper for kqueue;
5) Timer event management
min-heap.h: in essence a min-heap keyed on time;
6) Signal management
signal.c: handling of signal events;
7) Utility functions
evutil.h and evutil.c: helper functions, including socket-pair creation and time operations such as add, subtract and compare;
8) Logging
log.h and log.c: the logging functions;
9) Buffer management
evbuffer.c and buffer.c: libevent's buffer wrappers;
10) Basic data structures
the two sources under compat\sys: queue.h implements libevent's basic data structures, including lists, doubly linked lists and queues; _libevent_time.h holds structure definitions, functions and macros for time operations;
11) Practical network libraries

http and evdns: an http server and an asynchronous dns lookup library built on libevent;

Part 4: The event structure

Now let's focus on the event structure, the core of libevent:

    struct event {
        TAILQ_ENTRY(event) ev_active_next;
        TAILQ_ENTRY(event) ev_next;
        /* for managing timeouts */
        union {
            TAILQ_ENTRY(event) ev_next_with_common_timeout;
            int min_heap_idx;
        } ev_timeout_pos;
        evutil_socket_t ev_fd;

        struct event_base *ev_base;

        union {
            /* used for io events */
            struct {
                TAILQ_ENTRY(event) ev_io_next;
                struct timeval ev_timeout;
            } ev_io;

            /* used by signal events */
            struct {
                TAILQ_ENTRY(event) ev_signal_next;
                short ev_ncalls;
                /* Allows deletes in callback */
                short *ev_pncalls;
            } ev_signal;
        } _ev;

        short ev_events;
        short ev_res;       /* result passed to event callback */
        short ev_flags;
        ev_uint8_t ev_pri;  /* smaller numbers are higher priority */
        ev_uint8_t ev_closure;
        struct timeval ev_timeout;

        /* allows us to adopt for different types of events */
        void (*ev_callback)(evutil_socket_t, short, void *arg);
        void *ev_arg;
    };

ev_active_next: links this event into the active (ready) queue; when a watched event becomes ready, its event is placed on the active queue, and this field records its position there.

ev_next: links this event into the list of all registered events, recording its position in that list.

ev_timeout_pos: used for timeout management

ev_fd: the socket descriptor the event is bound to

ev_events: the event types this event watches; one of three kinds:
I/O events: EV_WRITE and EV_READ
timer events: EV_TIMEOUT
signals: EV_SIGNAL
plus the auxiliary option EV_PERSIST, which marks a persistent event

libevent defines them as:

    #define EV_TIMEOUT 0x01
    #define EV_READ    0x02
    #define EV_WRITE   0x04
    #define EV_SIGNAL  0x08
    #define EV_PERSIST 0x10 /* Persistent event */

ev_res: records the event type(s) currently active on this event

ev_flags: the field libevent uses to track an event's state; possible values are

    #define EVLIST_TIMEOUT  0x01 // event is in the timeout min-heap
    #define EVLIST_INSERTED 0x02 // event is in the registered-event list
    #define EVLIST_SIGNAL   0x04 // apparently unused
    #define EVLIST_ACTIVE   0x08 // event is in the active list
    #define EVLIST_INTERNAL 0x10 // internal-use flag
    #define EVLIST_INIT     0x80 // event has been initialized

ev_pri: the event's priority

ev_timeout: the timeout setting

ev_callback: the callback bound to this event, with the same type as cb above

ev_arg: the argument passed to the callback

ev_ncalls: the number of times ev_callback is invoked when the event fires, usually 1
ev_pncalls: a pointer, usually pointing at ev_ncalls or NULL

Part 5: How libevent manages and uses events

The usage flow was covered above: set the event's properties and callback, then add it to the event queue. The API for setting the properties and callback is:

    void
    event_set(struct event *ev, evutil_socket_t fd, short events,
        void (*callback)(evutil_socket_t, short, void *), void *arg)
    {
        int r;
        r = event_assign(ev, current_base, fd, events, callback, arg);
        EVUTIL_ASSERT(r == 0);
    }

ev: the event pointer

fd: the socket fd the event will watch

events: the event types of interest (read/write I/O, signals, timer events, etc.)

callback: the callback invoked when the event becomes ready

arg: the argument passed to the callback when it is invoked

Internally the function delegates to event_assign:

    int
    event_assign(struct event *ev, struct event_base *base, evutil_socket_t fd, short events, void (*callback)(evutil_socket_t, short, void *), void *arg)
    {
        if (!base)
            base = current_base;

        _event_debug_assert_not_added(ev);

        ev->ev_base = base;

        ev->ev_callback = callback;
        ev->ev_arg = arg;
        ev->ev_fd = fd;
        ev->ev_events = events;
        ev->ev_res = 0;
        ev->ev_flags = EVLIST_INIT;
        ev->ev_ncalls = 0;
        ev->ev_pncalls = NULL;

        if (events & EV_SIGNAL) {
            if ((events & (EV_READ|EV_WRITE)) != 0) {
                event_warnx("%s: EV_SIGNAL is not compatible with "
                    "EV_READ or EV_WRITE", __func__);
                return -1;
            }
            /* for a signal event, mark the closure as a signal closure */
            ev->ev_closure = EV_CLOSURE_SIGNAL;
        } else {
            if (events & EV_PERSIST) {
                evutil_timerclear(&ev->ev_io_timeout);
                /* persistent event */
                ev->ev_closure = EV_CLOSURE_PERSIST;
            } else {
                ev->ev_closure = EV_CLOSURE_NONE;
            }
        }

        min_heap_elem_init(ev);

        if (base != NULL) {
            /* by default, we put new events into the middle priority */
            ev->ev_pri = base->nactivequeues / 2;
        }

        _event_debug_note_setup(ev);

        return 0;
    }

Once the event's properties and callback are set, the event must be attached to a specific event_base, since several event_bases may exist. Call:

    int
    event_base_set(struct event_base *base, struct event *ev)
    {
        /* Only innocent events may be assigned to a different base */
        if (ev->ev_flags != EVLIST_INIT)
            return (-1);

        _event_debug_assert_is_setup(ev);

        ev->ev_base = base;
        ev->ev_pri = base->nactivequeues/2;

        return (0);
    }

This function records which base the event belongs to and resets its priority. There is also a dedicated priority-setting function:

int event_priority_set(struct event *ev, int pri)
It sets the priority of event ev; the one caveat is that it must not be called while ev is in the ready (active) state, in which case it returns -1.

Part 6: The event_base structure

The event_base structure looks like this:

    struct event_base {
        /** Function pointers and other data to describe this event_base's
         * backend. */
        const struct eventop *evsel;
        /** Pointer to backend-specific data. */
        void *evbase;
        /** List of changes to tell backend about at next dispatch. Only used
         * by the O(1) backends. */
        struct event_changelist changelist;
        /** Function pointers used to describe the backend that this event_base
         * uses for signals */
        const struct eventop *evsigsel;
        /** Data to implement the common signal handler code. */
        struct evsig_info sig;
        /** Number of virtual events */
        int virtual_event_count;
        /** Number of total events added to this event_base */
        int event_count;
        /** Number of total events active in this event_base */
        int event_count_active;
        /** Set if we should terminate the loop once we're done processing
         * events. */
        int event_gotterm;
        /** Set if we should terminate the loop immediately */
        int event_break;
        /** Set if we should start a new instance of the loop immediately. */
        int event_continue;
        /** The currently running priority of events */
        int event_running_priority;
        /** Set if we're running the event_base_loop function, to prevent
         * reentrant invocation. */
        int running_loop;

        /* Active event management. */
        /** An array of nactivequeues queues for active events (ones that
         * have triggered, and whose callbacks need to be called). Low
         * priority numbers are more important, and stall higher ones.
         */
        struct event_list *activequeues;
        /** The length of the activequeues array */
        int nactivequeues;

        /* common timeout logic */
        /** An array of common_timeout_list* for all of the common timeout
         * values we know. */
        struct common_timeout_list **common_timeout_queues;
        /** The number of entries used in common_timeout_queues */
        int n_common_timeouts;
        /** The total size of common_timeout_queues. */
        int n_common_timeouts_allocated;
        /** List of deferred_cb that are active. We run these after the active
         * events. */
        struct deferred_cb_queue defer_queue;
        /** Mapping from file descriptors to enabled (added) events */
        struct event_io_map io;
        /** Mapping from signal numbers to enabled (added) events. */
        struct event_signal_map sigmap;
        /** All events that have been enabled (added) in this event_base */
        struct event_list eventqueue;
        /** Stored timeval; used to detect when time is running backwards. */
        struct timeval event_tv;
        /** Priority queue of events with timeouts. */
        struct min_heap timeheap;
        /** Stored timeval: used to avoid calling gettimeofday/clock_gettime
         * too often. */
        struct timeval tv_cache;
    #if defined(_EVENT_HAVE_CLOCK_GETTIME) && defined(CLOCK_MONOTONIC)
        /** Difference between internal time (maybe from clock_gettime) and
         * gettimeofday. */
        struct timeval tv_clock_diff;
        /** Second in which we last updated tv_clock_diff, in monotonic time. */
        time_t last_updated_clock_diff;
    #endif
    #ifndef _EVENT_DISABLE_THREAD_SUPPORT
        /* threading support */
        /** The thread currently running the event_loop for this base */
        unsigned long th_owner_id;
        /** A lock to prevent conflicting accesses to this event_base */
        void *th_base_lock;
        /** The event whose callback is executing right now */
        struct event *current_event;
        /** A condition that gets signalled when we're done processing an
         * event with waiters on it. */
        void *current_event_cond;
        /** Number of threads blocking on current_event_cond. */
        int current_event_waiters;
    #endif
    #ifdef WIN32
        /** IOCP support structure, if IOCP is enabled. */
        struct event_iocp_port *iocp;
    #endif
        /** Flags that this base was configured with */
        enum event_base_config_flag flags;

        /* Notify main thread to wake up break, etc. */
        /** True if the base already has a pending notify, and we don't need
         * to add any more. */
        int is_notify_pending;
        /** A socketpair used by some th_notify functions to wake up the main
         * thread. */
        evutil_socket_t th_notify_fd[2];
        /** An event used by some th_notify functions to wake up the main
         * thread. */
        struct event th_notify;
        /** A function used to wake up the main thread from another thread. */
        int (*th_notify_fn)(struct event_base *base);
    };

evsel: a pointer to an eventop, which wraps the same set of operations for every backend model

evbase: the backend-specific state allocated by that model, stored in the event_base

evbase and evsel are used as a pair; the eventop structure is:

    struct eventop {
        /** The name of this backend. */
        const char *name;
        /** Function to set up an event_base to use this backend. It should
         * create a new structure holding whatever information is needed to
         * run the backend, and return it. The returned pointer will get
         * stored by event_init into the event_base.evbase field. On failure,
         * this function should return NULL. */
        void *(*init)(struct event_base *);
        /** Enable reading/writing on a given fd or signal. 'events' will be
         * the events that we're trying to enable: one or more of EV_READ,
         * EV_WRITE, EV_SIGNAL, and EV_ET. 'old' will be those events that
         * were enabled on this fd previously. 'fdinfo' will be a structure
         * associated with the fd by the evmap; its size is defined by the
         * fdinfo field below. It will be set to 0 the first time the fd is
         * added. The function should return 0 on success and -1 on error.
         */
        int (*add)(struct event_base *, evutil_socket_t fd, short old, short events, void *fdinfo);
        /** As "add", except 'events' contains the events we mean to disable. */
        int (*del)(struct event_base *, evutil_socket_t fd, short old, short events, void *fdinfo);
        /** Function to implement the core of an event loop. It must see which
            added events are ready, and cause event_active to be called for each
            active event (usually via event_io_active or such). It should
            return 0 on success and -1 on error.
         */
        int (*dispatch)(struct event_base *, struct timeval *);
        /** Function to clean up and free our data from the event_base. */
        void (*dealloc)(struct event_base *);
        /** Flag: set if we need to reinitialize the event base after we fork.
         */
        int need_reinit;
        /** Bit-array of supported event_method_features that this backend can
         * provide. */
        enum event_method_feature features;
        /** Length of the extra information we should record for each fd that
            has one or more active events. This information is recorded
            as part of the evmap entry for each fd, and passed as an argument
            to the add and del functions above.
         */
        size_t fdinfo_len;
    };

eventop wraps the init, add, del and dispatch callbacks of the different backends (epoll, select, and so on).

changelist: the list of changes to report to the backend

evsigsel: the function pointers the event_base uses for handling signal events

event_count: the total number of events in the event_base

event_count_active: the number of active events in the event_base

event_gotterm: when set, the loop terminates once the current batch of events has been processed

event_break: terminate the loop immediately

event_continue: start a new iteration of the loop immediately

event_running_priority: the priority of the events currently being run

activequeues: the queues of active events; queues with smaller priority numbers are triggered first

struct event_io_map io: maps descriptors to their I/O events

struct event_signal_map sigmap: maps signal numbers to their events

eventqueue: the queue of all events added to this event_base

event_tv: a stored time, used for internal bookkeeping

timeheap: the min-heap managing the timeout event queue

I am not yet familiar with the remaining fields; I will dig into them when they come up.

libevent provides the following two functions to create an event_base instance:

    struct event_base *
    event_init(void)
    {
        struct event_base *base = event_base_new_with_config(NULL);

        if (base == NULL) {
            event_errx(1, "%s: Unable to construct event_base", __func__);
            return NULL;
        }

        current_base = base;

        return (base);
    }

    struct event_base *
    event_base_new(void)
    {
        struct event_base *base = NULL;
        struct event_config *cfg = event_config_new();
        if (cfg) {
            base = event_base_new_with_config(cfg);
            event_config_free(cfg);
        }
        return base;
    }

That covers the basic structures.

Part 7: The interfaces for adding, deleting, initializing and dispatching events

1) event_base and backend initialization

First, initialization; it actually happens inside event_base_new_with_config:

    struct event_base *
    event_base_new_with_config(const struct event_config *cfg)
    {
        ...
        base->evsel = eventops[i];
        base->evbase = base->evsel->init(base);
        ...
    }

Both event_init and event_base_new end up calling event_base_new_with_config.

base->evsel points at the table for the chosen backend, so the call dispatches to that backend's init.

2) The event registration function

    int
    event_add(struct event *ev, const struct timeval *tv)
    {
        int res;

        if (EVUTIL_FAILURE_CHECK(!ev->ev_base)) {
            event_warnx("%s: event has no event_base set.", __func__);
            return -1;
        }

        EVBASE_ACQUIRE_LOCK(ev->ev_base, th_base_lock);

        res = event_add_internal(ev, tv, 0);

        EVBASE_RELEASE_LOCK(ev->ev_base, th_base_lock);

        return (res);
    }

To guard against races between threads it takes the base lock, then calls:

    static inline int
    event_add_internal(struct event *ev, const struct timeval *tv,
        int tv_is_absolute)
    {
        struct event_base *base = ev->ev_base;
        int res = 0;
        int notify = 0;
        ...
        /*
         * For a new timer event, call the timer-heap interface to reserve
         * a slot up front: registering with the OS I/O mechanism may fail,
         * but once the heap reservation has succeeded, adding the timer
         * event is guaranteed not to fail;
         */
        if (tv != NULL && !(ev->ev_flags & EVLIST_TIMEOUT)) {
            if (min_heap_reserve(&base->timeheap,
                1 + min_heap_size(&base->timeheap)) == -1)
                return (-1);  /* ENOMEM == errno */
        }

        /* depending on the event kind (I/O, signal, timeout), file the fd
         * into the corresponding map */
        if ((ev->ev_events & (EV_READ|EV_WRITE|EV_SIGNAL)) &&
            !(ev->ev_flags & (EVLIST_INSERTED|EVLIST_ACTIVE))) {
            if (ev->ev_events & (EV_READ|EV_WRITE))
                res = evmap_io_add(base, ev->ev_fd, ev);
            else if (ev->ev_events & EV_SIGNAL)
                res = evmap_signal_add(base, (int)ev->ev_fd, ev);
            if (res != -1)
                /* insert the event into the registered-event queue */
                event_queue_insert(base, ev, EVLIST_INSERTED);
            if (res == 1) {
                /* evmap says we need to notify the main thread. */
                notify = 1;
                res = 0;
            }
        }

        /*
         * EVLIST_TIMEOUT means the event is already in the timer heap;
         * remove the old entry
         */
        if (ev->ev_flags & EVLIST_TIMEOUT) {
            /* XXX I believe this is needless. */
            if (min_heap_elt_is_top(ev))
                notify = 1;
            event_queue_remove(base, ev, EVLIST_TIMEOUT);
        }

        /* if the event is already in the ready state, remove it from the
         * active list */
        if ((ev->ev_flags & EVLIST_ACTIVE) &&
            (ev->ev_res & EV_TIMEOUT)) {
            if (ev->ev_events & EV_SIGNAL) {
                /* See if we are just active executing
                 * this event in a loop
                 */
                if (ev->ev_ncalls && ev->ev_pncalls) {
                    /* Abort loop */
                    *ev->ev_pncalls = 0;
                }
            }
            event_queue_remove(base, ev, EVLIST_ACTIVE);
        }

        gettime(base, &now);
        ... /* compute the deadline and push onto the timer min-heap */
        event_queue_insert(base, ev, EVLIST_TIMEOUT);
        return (res);
    }

evmap_io_add is where an I/O event gets bound to the particular backend (select/epoll/...):

    int
    evmap_io_add(struct event_base *base, evutil_socket_t fd, struct event *ev)
    {
        ...
        const struct eventop *evsel = base->evsel;
        struct event_io_map *io = &base->io;
        ...
        /* register the event's fd and event types with the chosen backend */
        if (evsel->add(base, ev->ev_fd,
            old, (ev->ev_events & EV_ET) | res, extra) == -1)
            return (-1);
        ...
    }

event_queue_insert() inserts an event into the appropriate list;
event_queue_remove() removes it from that list.

    void event_queue_insert(struct event_base *base, struct event *ev,
        int queue)
    {
        /* ev may already be on the active list; avoid a double insert */
        if (ev->ev_flags & queue) {
            if (queue & EVLIST_ACTIVE)
                return;
        }
        /* record the queue flag */
        ev->ev_flags |= queue;
        switch (queue) {
        /* I/O or signal event: append to the registered-event list */
        case EVLIST_INSERTED:
            TAILQ_INSERT_TAIL(&base->eventqueue, ev, ev_next);
            break;
        /* ready event: append to the active list for its priority */
        case EVLIST_ACTIVE:
            base->event_count_active++;
            TAILQ_INSERT_TAIL(&base->activequeues[ev->ev_pri], ev,
                ev_active_next);
            break;
        /* timer event: push onto the heap */
        case EVLIST_TIMEOUT:
            min_heap_push(&base->timeheap, ev);
            break;
        }
    }

3) Deleting an event

    int
    event_del(struct event *ev)
    {
        int res;

        if (EVUTIL_FAILURE_CHECK(!ev->ev_base)) {
            event_warnx("%s: event has no event_base set.", __func__);
            return -1;
        }

        EVBASE_ACQUIRE_LOCK(ev->ev_base, th_base_lock);

        res = event_del_internal(ev);

        EVBASE_RELEASE_LOCK(ev->ev_base, th_base_lock);

        return (res);
    }

which internally calls:

    static inline int
    event_del_internal(struct event *ev)
    {
        struct event_base *base;
        int res = 0, notify = 0;

        event_debug(("event_del: %p (fd "EV_SOCK_FMT"), callback %p",
            ev, EV_SOCK_ARG(ev->ev_fd), ev->ev_callback));

        /* An event without a base has not been added */
        if (ev->ev_base == NULL)
            return (-1);

        base = ev->ev_base;

        EVUTIL_ASSERT(!(ev->ev_flags & ~EVLIST_ALL));

        /* remove the event from the right queues depending on its flags */
        if (ev->ev_flags & EVLIST_TIMEOUT) {
            /* NOTE: We never need to notify the main thread because of a
             * deleted timeout event: all that could happen if we don't is
             * that the dispatch loop might wake up too early. But the
             * point of notifying the main thread _is_ to wake up the
             * dispatch loop early anyway, so we wouldn't gain anything by
             * doing it.
             */
            event_queue_remove(base, ev, EVLIST_TIMEOUT);
        }

        if (ev->ev_flags & EVLIST_ACTIVE)
            event_queue_remove(base, ev, EVLIST_ACTIVE);

        if (ev->ev_flags & EVLIST_INSERTED) {
            event_queue_remove(base, ev, EVLIST_INSERTED);
            if (ev->ev_events & (EV_READ|EV_WRITE))
                /* I/O events are removed from the io map */
                res = evmap_io_del(base, ev->ev_fd, ev);
            else
                res = evmap_signal_del(base, (int)ev->ev_fd, ev);
            if (res == 1) {
                /* evmap says we need to notify the main thread. */
                notify = 1;
                res = 0;
            }
        }

        return (res);
    }

evmap_io_del dispatches to the backend's del function (excerpted):

    int
    evmap_io_del(struct event_base *base, evutil_socket_t fd, struct event *ev)
    {
        ...
        if (evsel->del(base, ev->ev_fd, old, res, extra) == -1)
            return (-1);
        ...
    }

4) Event dispatch

    int
    event_base_loop(struct event_base *base, int flags)
    {
        const struct eventop *evsel = base->evsel;
        struct timeval tv;
        struct timeval *tv_p;
        int res, done, retval = 0;

        while (!done) {
            base->event_continue = 0;

            /* Terminate the loop if we have been asked to:
             * either event_gotterm or event_break ends it */
            if (base->event_gotterm) {
                break;
            }
            if (base->event_break) {
                break;
            }

            /* correct the system time */
            timeout_correct(base, &tv);

            /* from the smallest timeout in the timer heap, compute the
             * maximum wait time for the system I/O demultiplexer */
            tv_p = &tv;
            if (!N_ACTIVE_CALLBACKS(base) && !(flags & EVLOOP_NONBLOCK)) {
                timeout_next(base, &tv_p);
            } else {
                /*
                 * if there are active events, handle them immediately
                 * instead of waiting
                 */
                evutil_timerclear(&tv);
            }

            /* If we have no events, we just exit */
            if (!event_haveevents(base) && !N_ACTIVE_CALLBACKS(base)) {
                event_debug(("%s: no events registered.", __func__));
                retval = 1;
                goto done;
            }

            /* update last old time */
            gettime(base, &base->event_tv);

            clear_time_cache(base);

            /* dispatch: the backend moves ready events onto the active lists */
            res = evsel->dispatch(base, tv_p);

            if (res == -1) {
                event_debug(("%s: dispatch returned unsuccessfully.",
                    __func__));
                retval = -1;
                goto done;
            }

            update_time_cache(base);

            timeout_process(base);

            /* event_process_active() runs the ready events on the active
             * lists by invoking their callbacks;
             * it picks the highest-priority (smaller value = higher
             * priority) non-empty active list and processes every ready
             * event on it;
             * so low-priority ready events may not be handled promptly */
            if (N_ACTIVE_CALLBACKS(base)) {
                int n = event_process_active(base);
                if ((flags & EVLOOP_ONCE)
                    && N_ACTIVE_CALLBACKS(base) == 0
                    && n != 0)
                    done = 1;
            } else if (flags & EVLOOP_NONBLOCK)
                done = 1;
        }
        event_debug(("%s: asked to terminate loop.", __func__));

    done:
        clear_time_cache(base);
        base->running_loop = 0;

        EVBASE_RELEASE_LOCK(base, th_base_lock);

        return (retval);
    }

After evsel->dispatch places ready events onto the active queues, event_process_active handles them:

    static int
    event_process_active(struct event_base *base)
    {
        /* Caller must hold th_base_lock */
        struct event_list *activeq = NULL;
        int i, c = 0;

        for (i = 0; i < base->nactivequeues; ++i) {
            if (TAILQ_FIRST(&base->activequeues[i]) != NULL) {
                base->event_running_priority = i;
                activeq = &base->activequeues[i];
                c = event_process_active_single_queue(base, activeq);
                if (c < 0) {
                    base->event_running_priority = -1;
                    return -1;
                } else if (c > 0)
                    break; /* Processed a real event; do not
                            * consider lower-priority events */
                /* If we get here, all of the events we processed
                 * were internal. Continue. */
            }
        }

        event_process_deferred_callbacks(&base->defer_queue, &base->event_break);
        base->event_running_priority = -1;
        return c;
    }

Queues with smaller priority numbers are processed first. That covers libevent's basic API and control flow. So how does libevent wrap the different backend models?

Part 8: libevent's backend wrappers (epoll as the example)

The basic epoll unit is wrapped as:

    struct epollop {
        struct epoll_event *events;
        int nevents;
        int epfd;
    };

epfd: the epoll handle returned by epoll_create

events: the epoll_event array that receives what the epoll instance reports

nevents: the size of that epoll_event array

Before looking at the epoll wrapper's interfaces, consider the objects defined from these two structures:

    static const struct eventop epollops_changelist = {
        "epoll (with changelist)",
        epoll_init,
        event_changelist_add,
        event_changelist_del,
        epoll_dispatch,
        epoll_dealloc,
        1, /* need reinit */
        EV_FEATURE_ET|EV_FEATURE_O1,
        EVENT_CHANGELIST_FDINFO_SIZE
    };

This object bundles the epoll operations as function pointers;

event_changelist_add adds an event to the changelist when the changelist flag is set,
event_changelist_del removes it from the changelist under the same flag.

    const struct eventop epollops = {
        "epoll",
        epoll_init,
        epoll_nochangelist_add,
        epoll_nochangelist_del,
        epoll_dispatch,
        epoll_dealloc,
        1, /* need reinit */
        EV_FEATURE_ET|EV_FEATURE_O1,
        0
    };

This second object is the epoll interface used when the changelist flag is not set. Next, epoll_init:

    static void *
    epoll_init(struct event_base *base)
    {
        int epfd;
        struct epollop *epollop;

        /* Initialize the kernel queue. (The size field is ignored since
         * 2.6.8.) */
        if ((epfd = epoll_create(32000)) == -1) {
            if (errno != ENOSYS)
                event_warn("epoll_create");
            return (NULL);
        }

        evutil_make_socket_closeonexec(epfd);

        if (!(epollop = mm_calloc(1, sizeof(struct epollop)))) {
            close(epfd);
            return (NULL);
        }

        epollop->epfd = epfd;

        /* Initialize fields */
        epollop->events = mm_calloc(INITIAL_NEVENT, sizeof(struct epoll_event));
        if (epollop->events == NULL) {
            mm_free(epollop);
            close(epfd);
            return (NULL);
        }
        epollop->nevents = INITIAL_NEVENT;

        /* if EVENT_BASE_FLAG_EPOLL_USE_CHANGELIST is set, the add and
         * delete callbacks become the changelist_add / changelist_del
         * variants */
        if ((base->flags & EVENT_BASE_FLAG_EPOLL_USE_CHANGELIST) != 0 ||
            ((base->flags & EVENT_BASE_FLAG_IGNORE_ENV) == 0 &&
            evutil_getenv("EVENT_EPOLL_USE_CHANGELIST") != NULL))
            base->evsel = &epollops_changelist;

        evsig_init(base);

        return (epollop);
    }

Now the epoll event-registration interface; we only look at epoll_nochangelist_add, since epoll_changelist_add is analogous.

    static int
    epoll_nochangelist_add(struct event_base *base, evutil_socket_t fd,
        short old, short events, void *p)
    {
        struct event_change ch;
        ch.fd = fd;
        ch.old_events = old;
        ch.read_change = ch.write_change = 0;
        if (events & EV_WRITE)
            ch.write_change = EV_CHANGE_ADD |
                (events & EV_ET);
        if (events & EV_READ)
            ch.read_change = EV_CHANGE_ADD |
                (events & EV_ET);

        return epoll_apply_one_change(base, base->evbase, &ch);
    }

The actual addition happens in epoll_apply_one_change:

    static int
    epoll_apply_one_change(struct event_base *base,
        struct epollop *epollop,
        const struct event_change *ch)
    {
        struct epoll_event epev;
        int op, events = 0;
        if (1) {
            if ((ch->read_change & EV_CHANGE_ADD) ||
                (ch->write_change & EV_CHANGE_ADD)) {
                events = 0;
                op = EPOLL_CTL_ADD;
                /* register interest in read events */
                if (ch->read_change & EV_CHANGE_ADD) {
                    events |= EPOLLIN;
                } else if (ch->read_change & EV_CHANGE_DEL) {
                    ;
                } else if (ch->old_events & EV_READ) {
                    events |= EPOLLIN;
                }
                /* register interest in write events */
                if (ch->write_change & EV_CHANGE_ADD) {
                    events |= EPOLLOUT;
                } else if (ch->write_change & EV_CHANGE_DEL) {
                    ;
                } else if (ch->old_events & EV_WRITE) {
                    events |= EPOLLOUT;
                }
                /* edge-triggered mode */
                if ((ch->read_change|ch->write_change) & EV_ET)
                    events |= EPOLLET;
                /* the fd already had registered events, so switch
                 * the operation to EPOLL_CTL_MOD */
                if (ch->old_events) {
                    op = EPOLL_CTL_MOD;
                }
            } else if ((ch->read_change & EV_CHANGE_DEL) ||
                (ch->write_change & EV_CHANGE_DEL)) {
                op = EPOLL_CTL_DEL;
                /* previously registered events are being removed */
                if (ch->read_change & EV_CHANGE_DEL) {
                    if (ch->write_change & EV_CHANGE_DEL) {
                        /* both read and write are marked for deletion */
                        events = EPOLLIN|EPOLLOUT;
                    } else if (ch->old_events & EV_WRITE) {
                        events = EPOLLOUT;
                        /* only read is removed: modify to keep
                         * watching write */
                        op = EPOLL_CTL_MOD;
                    } else {
                        /* delete read interest only */
                        events = EPOLLIN;
                    }
                } else if (ch->write_change & EV_CHANGE_DEL) {
                    if (ch->old_events & EV_READ) {
                        events = EPOLLIN;
                        /* only write is removed: modify to keep
                         * watching read */
                        op = EPOLL_CTL_MOD;
                    } else {
                        /* delete write interest only */
                        events = EPOLLOUT;
                    }
                }
            }
            if (!events)
                return 0;
            memset(&epev, 0, sizeof(epev));
            epev.data.fd = ch->fd;
            epev.events = events;
            /* call epoll_ctl to apply the change to the epoll set */
            if (epoll_ctl(epollop->epfd, op, ch->fd, &epev) == -1) {
                if (op == EPOLL_CTL_MOD && errno == ENOENT) {
                    if (epoll_ctl(epollop->epfd, EPOLL_CTL_ADD, ch->fd, &epev) == -1) {
                        event_warn("Epoll MOD(%d) on %d retried as ADD; that failed too",
                            (int)epev.events, ch->fd);
                        return -1;
                    } else {
                        event_debug(("Epoll MOD(%d) on %d retried as ADD; succeeded.",
                            (int)epev.events,
                            ch->fd));
                    }
                } else if (op == EPOLL_CTL_ADD && errno == EEXIST) {
                    if (epoll_ctl(epollop->epfd, EPOLL_CTL_MOD, ch->fd, &epev) == -1) {
                        event_warn("Epoll ADD(%d) on %d retried as MOD; that failed too",
                            (int)epev.events, ch->fd);
                        return -1;
                    } else {
                        event_debug(("Epoll ADD(%d) on %d retried as MOD; succeeded.",
                            (int)epev.events,
                            ch->fd));
                    }
                } else if (op == EPOLL_CTL_DEL &&
                    (errno == ENOENT || errno == EBADF ||
                    errno == EPERM)) {
                    event_debug(("Epoll DEL(%d) on fd %d gave %s: DEL was unnecessary.",
                        (int)epev.events,
                        ch->fd,
                        strerror(errno)));
                } else {
                    event_warn("Epoll %s(%d) on fd %d failed. Old events were %d; read change was %d (%s); write change was %d (%s)",
                        epoll_op_to_string(op),
                        (int)epev.events,
                        ch->fd,
                        ch->old_events,
                        ch->read_change,
                        change_to_string(ch->read_change),
                        ch->write_change,
                        change_to_string(ch->write_change));
                    return -1;
                }
            } else {
                event_debug(("Epoll %s(%d) on fd %d okay. [old events were %d; read change was %d; write change was %d]",
                    epoll_op_to_string(op),
                    (int)epev.events,
                    (int)ch->fd,
                    ch->old_events,
                    ch->read_change,
                    ch->write_change));
            }
        }
        return 0;
    }

Event removal is similar: epoll_nochangelist_del simply marks the change as EV_CHANGE_DEL and calls epoll_apply_one_change, which resolves it internally into an EPOLL_CTL_DEL or EPOLL_CTL_MOD operation.

    static int
    epoll_nochangelist_del(struct event_base *base, evutil_socket_t fd,
        short old, short events, void *p)
    {
        struct event_change ch;
        ch.fd = fd;
        ch.old_events = old;
        ch.read_change = ch.write_change = 0;
        if (events & EV_WRITE)
            ch.write_change = EV_CHANGE_DEL;
        if (events & EV_READ)
            ch.read_change = EV_CHANGE_DEL;
        return epoll_apply_one_change(base, base->evbase, &ch);
    }

Finally, event dispatching:

    static int
    epoll_dispatch(struct event_base *base, struct timeval *tv)
    {
        struct epollop *epollop = base->evbase;
        struct epoll_event *events = epollop->events;
        int i, res;
        long timeout = -1;
        if (tv != NULL) {
            timeout = evutil_tv_to_msec(tv);
            if (timeout < 0 || timeout > MAX_EPOLL_TIMEOUT_MSEC) {
                /* Linux kernels can wait forever if the timeout is
                 * too big; see comment on MAX_EPOLL_TIMEOUT_MSEC. */
                timeout = MAX_EPOLL_TIMEOUT_MSEC;
            }
        }
        /* apply all pending changes in the changelist */
        epoll_apply_changes(base);
        event_changelist_remove_all(&base->changelist, base);
        EVBASE_RELEASE_LOCK(base, th_base_lock);
        /* epoll_wait returns the number of ready descriptors */
        res = epoll_wait(epollop->epfd, events, epollop->nevents, timeout);
        EVBASE_ACQUIRE_LOCK(base, th_base_lock);
        if (res == -1) {
            if (errno != EINTR) {
                event_warn("epoll_wait");
                return (-1);
            }
            return (0);
        }
        event_debug(("%s: epoll_wait reports %d", __func__, res));
        EVUTIL_ASSERT(res <= epollop->nevents);
        for (i = 0; i < res; i++) {
            int what = events[i].events;
            short ev = 0;
            if (what & (EPOLLHUP|EPOLLERR)) {
                ev = EV_READ | EV_WRITE;
            } else {
                if (what & EPOLLIN)
                    ev |= EV_READ;
                if (what & EPOLLOUT)
                    ev |= EV_WRITE;
            }
            if (!ev)
                continue;
            /* move the ready event into the active queue */
            evmap_io_active(base, events[i].data.fd, ev | EV_ET);
        }
        /* every slot was filled: grow the event array so more
         * events can be reported next time */
        if (res == epollop->nevents && epollop->nevents < MAX_NEVENT) {
            /* We used all of the event space this time. We should
               be ready for more events next time. */
            int new_nevents = epollop->nevents * 2;
            struct epoll_event *new_events;
            new_events = mm_realloc(epollop->events,
                new_nevents * sizeof(struct epoll_event));
            if (new_events) {
                epollop->events = new_events;
                epollop->nevents = new_nevents;
            }
        }
        return (0);
    }

That covers libevent's basic wrapper around epoll. The whole eventop object epollops is placed into the eventops[] array in event.c:

    static const struct eventop *eventops[] = {
    #ifdef _EVENT_HAVE_EVENT_PORTS
        &evportops,
    #endif
    #ifdef _EVENT_HAVE_WORKING_KQUEUE
        &kqops,
    #endif
    #ifdef _EVENT_HAVE_EPOLL
        &epollops,
    #endif
    #ifdef _EVENT_HAVE_DEVPOLL
        &devpollops,
    #endif
    #ifdef _EVENT_HAVE_POLL
        &pollops,
    #endif
    #ifdef _EVENT_HAVE_SELECT
        &selectops,
    #endif
    #ifdef WIN32
        &win32ops,
    #endif
        NULL
    };

Later, event_base_new_with_config executes base->evsel = eventops[i]; to assign the eventop object of the selected backend to the base's evsel pointer.

This concludes the analysis of libevent for now; a deeper look will follow in future updates.
