Commit graph

13 commits

Author SHA1 Message Date
Donald Sharp 12bf042c68 bgpd: Modify bgp to handle packet events in a FIFO
Current behavior of BGP is to have an event per connection.  Given
that, on startup of BGP with a high number of neighbors, you end
up with 2 * (number of peers) events being processed.  Additionally,
once BGP has selected the connection this still only comes down
to 512 events.  This number of events is swamping the event system
and in addition delaying any other work from being done in BGP at
all, because the 512 events are always going to take precedence
over everything else.  The other main events, the handling
of the metaQ (1 event), update group events (1 per update group)
and the zebra batching event, are being swamped.

Modify the BGP code to have a FIFO of connections.  As new data
comes in to read, place the connection on the end of the FIFO.
Have bgp_process_packet handle up to 100 packets spread
across the individual peers, where each peer/connection is limited
to the original quanta.  During testing I noticed that withdrawal
events at very large scale were taking up to 40 seconds to process,
so I added a check for yielding to further limit the number of packets
being processed.

This change also allows BGP to be interactive again on scale
setups during initial convergence.  Prior to this change, any vtysh
command entered would be delayed by tens of seconds in my setup
while BGP was doing other work.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-25 09:10:46 -04:00
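
A minimal sketch of the pattern described in the commit above: a FIFO of connections drained with an overall per-run packet budget, a per-connection quantum, and a time-based yield check. All names and limits here (conn, conn_fifo, RUN_BUDGET, etc.) are illustrative stand-ins, not FRR's actual code.

```c
#include <stdio.h>
#include <time.h>

#define RUN_BUDGET 100    /* packets handled per scheduled run        */
#define CONN_QUANTA 10    /* packets per connection per pass          */
#define YIELD_AFTER_MS 20 /* stop early if the run is taking too long */

struct conn {
	const char *name;
	int pending;       /* packets waiting in this connection's input */
	struct conn *next; /* next connection in the work FIFO           */
};

struct conn_fifo {
	struct conn *head, *tail;
};

static void fifo_push(struct conn_fifo *f, struct conn *c)
{
	c->next = NULL;
	if (f->tail)
		f->tail->next = c;
	else
		f->head = c;
	f->tail = c;
}

static struct conn *fifo_pop(struct conn_fifo *f)
{
	struct conn *c = f->head;

	if (c) {
		f->head = c->next;
		if (!f->head)
			f->tail = NULL;
		c->next = NULL;
	}
	return c;
}

static long ms_since(const struct timespec *start)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start->tv_sec) * 1000 +
	       (now.tv_nsec - start->tv_nsec) / 1000000;
}

/*
 * One scheduled run: handle at most RUN_BUDGET packets in total and at
 * most CONN_QUANTA per connection per pass.  Connections with leftover
 * work go back to the end of the FIFO so the next run resumes fairly,
 * and the yield check cuts short a run that takes too long.
 */
static void process_run(struct conn_fifo *work)
{
	struct timespec start;
	int budget = RUN_BUDGET;
	struct conn *c;

	clock_gettime(CLOCK_MONOTONIC, &start);

	while (budget > 0 && (c = fifo_pop(work))) {
		int quanta = CONN_QUANTA;

		while (c->pending > 0 && quanta > 0 && budget > 0) {
			/* ...parse and handle one packet here... */
			c->pending--;
			quanta--;
			budget--;
		}

		if (c->pending > 0)
			fifo_push(work, c); /* more work: requeue */
		if (ms_since(&start) > YIELD_AFTER_MS)
			break; /* yield; the rest stays queued */
	}
}

int main(void)
{
	struct conn a = { "peer-a", 35, NULL }, b = { "peer-b", 250, NULL };
	struct conn_fifo work = { NULL, NULL };

	fifo_push(&work, &a);
	fifo_push(&work, &b);
	process_run(&work);
	printf("left over: %s=%d %s=%d\n", a.name, a.pending, b.name, b.pending);
	return 0;
}
```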
Donald Sharp ccb51e8266 bgpd: Convert bgp_io.c to take struct peer_connection
bgp_io.c is clearly connection-oriented, so let's convert it
over to using `struct peer_connection`.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2023-08-18 09:29:04 -04:00
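
As a hedged illustration of what such a conversion typically looks like: functions on the I/O path take the connection instead of the peer, and reach the peer through it only when still needed. The struct shapes and the function below are simplified stand-ins, not FRR's actual definitions or the real bgp_io.c diff.

```c
#include <stdio.h>

struct peer {
	const char *host;
};

/* Simplified stand-in for FRR's struct peer_connection. */
struct peer_connection {
	struct peer *peer; /* owning peer, reached through the connection */
	int fd;            /* the socket this connection reads/writes     */
};

/*
 * Before (conceptually): void bgp_io_start_read(struct peer *peer);
 * After: keyed on the connection rather than the peer.
 */
static void bgp_io_start_read(struct peer_connection *connection)
{
	printf("reading for %s on fd %d\n", connection->peer->host,
	       connection->fd);
}

int main(void)
{
	struct peer p = { "192.0.2.1" };
	struct peer_connection conn = { &p, 42 };

	bgp_io_start_read(&conn);
	return 0;
}
```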
David Lamparter acddc0ed3c *: auto-convert to SPDX License IDs
Done with a combination of regex'ing and banging my head against a wall.

Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
2023-02-09 14:09:11 +01:00
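
For context, a conversion like this swaps the multi-paragraph license boilerplate at the top of each source file for a single machine-readable tag. An illustrative post-conversion header (not a specific FRR file, author name is a placeholder) looks like:

```c
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * BGP I/O -- threaded packet reads and writes
 * Copyright (C) 2017  Example Author
 */
```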
Quentin Young 8fa7732f5d bgpd: raise default & max r/w quanta to 64
Vectored writes are more efficient with a higher quantum.

Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2019-10-14 18:41:53 +00:00
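
The efficiency claim comes from coalescing: with a higher quantum, more queued packets go out in a single writev() call, amortizing the per-syscall cost. A minimal sketch under assumed names (out_pkt, flush_output; not bgpd's actual write path, which builds its output from the peer's output queue):

```c
#include <stddef.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Illustrative queued-packet descriptor, not bgpd's output stream type. */
struct out_pkt {
	const void *data;
	size_t len;
};

/*
 * Flush up to "quantum" queued packets with one writev().  A larger
 * quantum coalesces more packets per system call, which is where the
 * gain from raising write-quanta to 64 comes from.
 */
static ssize_t flush_output(int fd, struct out_pkt *pkts, size_t npkts,
			    size_t quantum)
{
	struct iovec iov[64]; /* sized to the new default/max quantum */
	size_t n = npkts;

	if (n > quantum)
		n = quantum;
	if (n > sizeof(iov) / sizeof(iov[0]))
		n = sizeof(iov) / sizeof(iov[0]);

	for (size_t i = 0; i < n; i++) {
		iov[i].iov_base = (void *)pkts[i].data;
		iov[i].iov_len = pkts[i].len;
	}

	return writev(fd, iov, (int)n);
}
```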
Quentin Young a715eab3ce bgpd: update pthreads to use lib changes
Use the new threading facilities provided in lib/ to streamline the
threads used in bgpd. In particular, all of the lifecycle code has been
removed from the I/O thread and replaced with the default loop. Did not
do the same to the keepalives thread as it is much smaller (doesn't need
the event system).

Also cleaned up some comments to match the style guide.

Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2018-01-24 15:30:55 -05:00
Quentin Young f09a656d24 bgpd: improve bgp thread startup characteristics
Replace atomic spinlock with condition variable.

Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2018-01-09 14:27:44 -05:00
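
Replacing a spin-wait on an atomic flag with a condition variable is a standard startup pattern. A minimal sketch using plain pthreads (not FRR's frr_pthread wrappers): the starter blocks until the I/O thread signals that it is up, instead of burning CPU in a spinlock.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool io_running;

static void *io_thread(void *arg)
{
	(void)arg;

	/* ...set up this thread's event loop here... */

	pthread_mutex_lock(&mtx);
	io_running = true; /* announce: safe to hand us work now */
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&mtx);

	/* ...run the loop... */
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, io_thread, NULL);

	/* Block until the I/O thread is running.  Done once at startup
	 * (cf. 88b24dee19 below), rather than spinning or re-checking
	 * before every call into the I/O code. */
	pthread_mutex_lock(&mtx);
	while (!io_running)
		pthread_cond_wait(&cond, &mtx);
	pthread_mutex_unlock(&mtx);

	puts("io thread is up");
	pthread_join(tid, NULL);
	return 0;
}
```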
Donald Sharp 88b24dee19 bgpd: Ensure that io thread is running after start
The BGP I/O thread must be running before other threads
can start using it.  So at startup, check once that it is
running, instead of checking before every function call into it.

Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
2018-01-06 14:09:29 -05:00
Quentin Young bb3d357d2f bgpd: remove unused extern from bgp_io.h
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2017-11-30 16:18:03 -05:00
Quentin Young 51abb4b49f bgpd: update I/O docs
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2017-11-30 16:18:01 -05:00
Quentin Young 555e09d4a2 bgpd: atomize write-quanta, add read-quanta
bgpd supports setting a write-quanta that serves as a hint on how many
packets to write per I/O cycle. Now that input is buffered, it makes
sense to add the equivalent parameter for how many packets are processed
per cycle. This is *not* how many packets are read off the wire per I/O
cycle; rather it is how many packets are processed from the input buffer
in a given cycle after having been read off the wire and sanitized.

Since these values must be used from multiple threads, they have also
been made atomic.

Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2017-11-30 16:18:00 -05:00
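
A minimal C11 sketch of the kind of atomic access described above, with hypothetical stand-in variables (FRR keeps these settings per BGP instance and uses its own atomic wrappers): the configuration side stores the quanta, and the packet-handling side loads them locklessly each cycle.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for per-instance settings. */
static _Atomic uint32_t wpkt_quanta = 64; /* packets written per I/O cycle */
static _Atomic uint32_t rpkt_quanta = 64; /* packets processed per cycle   */

/* Configuration thread: "write-quanta" / "read-quanta" commands. */
static void set_quanta(uint32_t wq, uint32_t rq)
{
	atomic_store_explicit(&wpkt_quanta, wq, memory_order_relaxed);
	atomic_store_explicit(&rpkt_quanta, rq, memory_order_relaxed);
}

/* I/O or packet-processing thread: read the current limits without a lock. */
static void one_cycle(void)
{
	uint32_t wq = atomic_load_explicit(&wpkt_quanta, memory_order_relaxed);
	uint32_t rq = atomic_load_explicit(&rpkt_quanta, memory_order_relaxed);

	printf("this cycle: write up to %u, process up to %u packets\n", wq, rq);
}

int main(void)
{
	one_cycle();
	set_quanta(32, 16);
	one_cycle();
	return 0;
}
```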
Quentin Young 42cf651ecd bgpd: style for bgp i/o
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2017-11-30 16:18:00 -05:00
Quentin Young 424ab01d0f bgpd: implement buffered reads
* Move and modify all network input related code to bgp_io.c
* Add a real input buffer to `struct peer`
* Move connection initialization to its own thread.c task instead of
  piggybacking off of bgp_read()
* Tons of little fixups

Primary changes are in bgp_packet.[ch], bgp_io.[ch], bgp_fsm.[ch].
Changes made elsewhere are almost exclusively refactoring peer->ibuf to
peer->curr since peer->ibuf is now the true FIFO packet input buffer
while peer->curr represents the packet currently being processed by the
main pthread.

Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2017-11-30 16:17:59 -05:00
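
A condensed sketch of the producer/consumer split described above, with generic names (pkt, ibuf_*, peer_sketch) rather than FRR's stream/stream_fifo API: the I/O pthread pushes whole, sanitized packets onto a per-peer input FIFO ("ibuf"), and the main thread pops one packet at a time into "curr" for processing.

```c
#include <pthread.h>
#include <stdio.h>

struct pkt {
	int type; /* e.g. a BGP message type */
	struct pkt *next;
};

struct peer_sketch {
	pthread_mutex_t io_mtx;            /* guards ibuf (shared with I/O thread) */
	struct pkt *ibuf_head, *ibuf_tail; /* FIFO filled by the I/O pthread       */
	struct pkt *curr;                  /* packet being processed (main thread) */
};

/* I/O pthread: after read()ing and sanity-checking a whole packet,
 * append it to the peer's input FIFO. */
static void ibuf_push(struct peer_sketch *p, struct pkt *new)
{
	new->next = NULL;
	pthread_mutex_lock(&p->io_mtx);
	if (p->ibuf_tail)
		p->ibuf_tail->next = new;
	else
		p->ibuf_head = new;
	p->ibuf_tail = new;
	pthread_mutex_unlock(&p->io_mtx);
	/* ...then schedule packet processing on the main thread... */
}

/* Main thread: take the next packet out of the FIFO into curr. */
static struct pkt *ibuf_pop(struct peer_sketch *p)
{
	pthread_mutex_lock(&p->io_mtx);
	p->curr = p->ibuf_head;
	if (p->curr) {
		p->ibuf_head = p->curr->next;
		if (!p->ibuf_head)
			p->ibuf_tail = NULL;
	}
	pthread_mutex_unlock(&p->io_mtx);
	return p->curr;
}

int main(void)
{
	struct peer_sketch peer = { .io_mtx = PTHREAD_MUTEX_INITIALIZER };
	struct pkt update = { .type = 2 };

	ibuf_push(&peer, &update); /* done by the I/O pthread   */
	while (ibuf_pop(&peer))    /* done by the main pthread  */
		printf("processing packet type %d\n", peer.curr->type);
	return 0;
}
```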
Quentin Young 56257a44e4 bgpd: move bgp i/o to a separate source file
After implementing threading, bgp_packet.c was serving the double purpose
of consolidating packet parsing functionality and handling actual I/O
operations. This is somewhat messy and difficult to understand. I've
thus moved all code and data structures for handling threaded packet
writes to bgp_io.[ch].

Although bgp_io.[ch] only handles writes at the moment (to keep the
noise in this commit series down), for organization purposes it's
probably best to move bgp_read() and its trappings in here as well and
restructure that code so that read()s happen in the pthread and packet
processing happens on the main thread.

Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2017-11-30 16:17:59 -05:00