Compare commits

..

451 commits

Author SHA1 Message Date
Donatas Abraitis 004c6c0260
Merge pull request #18692 from donaldsharp/event_cleanups
zebra: Save event pointer for rib sweeping
2025-04-19 03:40:27 +03:00
Donald Sharp 547894c087 zebra: Save event pointer for rib sweeping
When not doing graceful restart, the rib_sweep_route
function did not save its event on the
t_rib_sweep pointer. Prevent any
weird shenanigans by saving the pointer so that shutdown
can clean up the rib_sweep_route event.

Signed-off-by: Donald Sharp <donaldsharp72@gmail.com>
2025-04-18 17:44:39 -04:00
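The fix above is an instance of a common event-loop pattern: keep the handle of a scheduled event in a long-lived pointer so a shutdown path can find and cancel it. The following is a minimal self-contained sketch of that pattern; the `struct event`, `event_add`, and `event_cancel` names only mimic FRR's event API, they are not its real signatures.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Stand-in for an event handle; in FRR this would be "struct event"
 * managed by the event loop (illustrative, not FRR's actual API). */
struct event {
	void (*handler)(void);
};

/* Long-lived slot, analogous to the t_rib_sweep pointer in the commit. */
static struct event *t_rib_sweep;

/* Schedule: store the handle in the caller-provided slot so it can be
 * cancelled later instead of dangling. */
static void event_add(void (*handler)(void), struct event **slot)
{
	struct event *ev = malloc(sizeof(*ev));

	ev->handler = handler;
	*slot = ev; /* the point of the fix: keep the pointer */
}

/* Cancel: free the pending event and NULL the slot, as shutdown would. */
static void event_cancel(struct event **slot)
{
	if (*slot) {
		free(*slot);
		*slot = NULL;
	}
}

static void rib_sweep_route(void)
{
	/* would walk and clean the RIB */
}

static int demo(void)
{
	event_add(rib_sweep_route, &t_rib_sweep);
	assert(t_rib_sweep != NULL); /* shutdown can now find the event */
	event_cancel(&t_rib_sweep);
	return t_rib_sweep == NULL;
}
```

Without the stored pointer, shutdown has no way to cancel the pending event, which is the "weird shenanigans" the commit guards against.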
Donald Sharp 5d4c7d2ece
Merge pull request #18675 from mjstapp/fix_clang_18_warnings
lib,pimd,bgpd,bfdd: Fix clang 18 warnings
2025-04-17 18:38:21 -04:00
Donatas Abraitis 0850ae7db7
Merge pull request #18658 from y-bharath14/srib-tests-v12
tests: Resource leak in common_config.py
2025-04-17 18:18:26 +03:00
Donald Sharp 2892c20097
Merge pull request #18538 from nabahr/autorp-enabling
pimd: Only create and bind the autorp socket when really needed
2025-04-17 10:14:13 -04:00
Mark Stapp 2d42318625 bfdd, bgpd: clean up clang warnings
Clean up some clang compiler warnings.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-16 13:50:21 -04:00
Mark Stapp d378275106 pimd: clean up clang warnings
Clean up clang warnings in pimd; mostly address-of-packed
issues (removed some ugly casts too).

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-16 13:50:21 -04:00
Mark Stapp c65fdc9a49 lib: disable clang warning in parser yacc output
Disable a clang 'unused' warning in the yacc source
of command_parse.c.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-16 13:50:21 -04:00
Jafar Al-Gharaibeh 38968cc8df
Merge pull request #18669 from opensourcerouting/rfapi-misused-conditional
bgpd: fix misused rfapi conditional
2025-04-16 09:39:02 -05:00
Donald Sharp 69a59464ef
Merge pull request #18665 from y-bharath14/srib-yang-v10
yang: Corrected pyang errors in frr-pathd.yang
2025-04-16 09:41:32 -04:00
Mark Stapp 1ca756f315
Merge pull request #18497 from krishna-samy/show-metaq-counters
zebra: show command to display metaq info
2025-04-16 09:16:40 -04:00
Mark Stapp 21a32b010b
Merge pull request #18579 from krishna-samy/krishna/dplane_fpm_read
zebra: change fpm_read to batch the messages
2025-04-16 08:47:11 -04:00
Carmine Scarpitta 42d31854a1
Merge pull request #18667 from louis-6wind/fix-srv6-sid-leak
isisd: fix srv6_sid memory leak
2025-04-16 11:50:09 +00:00
David Lamparter d46909e50f bgpd: fix misused rfapi conditional
bgpd/bgpd.c:8975:5: error: "ENABLE_BGP_VNC" is not defined, evaluates to 0 [-Werror=undef]
 8975 | #if ENABLE_BGP_VNC

Fixes: FRRouting#18546
Fixes: 1629c05924 ("bgpd: rfapi: track outstanding rib and import timers, free mem at exit")
Cc: G. Paul Ziemba <paulz@labn.net>
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
2025-04-16 12:47:36 +02:00
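The quoted error is clang/gcc's `-Werror=undef`: `#if ENABLE_BGP_VNC` silently evaluates an undefined macro to 0, which `-Wundef` flags. A sketch of the distinction (the macro is deliberately left undefined here, as in the failing build; whether the actual fix switched to `#ifdef` or defined the macro is not shown in this log):

```c
#include <assert.h>

/* ENABLE_BGP_VNC is intentionally NOT defined in this sketch. */

/* "#if ENABLE_BGP_VNC" would trip -Werror=undef here, because the
 * preprocessor evaluates the undefined macro to 0. "#ifdef" (or
 * "#if defined(ENABLE_BGP_VNC)") only tests existence, so it is safe: */
static int vnc_enabled(void)
{
#ifdef ENABLE_BGP_VNC
	return 1;
#else
	return 0;
#endif
}
```

The two forms behave identically once the macro is defined to 1; they differ only in how an undefined macro is treated under `-Wundef`.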
Louis Scalbert 25c813ac38 isisd: fix srv6_sid memory leak
Seen with isis_srv6_topo1 topotest.

> ==178793==ERROR: LeakSanitizer: detected memory leaks
>
> Direct leak of 56 byte(s) in 1 object(s) allocated from:
>     #0 0x7f3f63cb4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7f3f6366f8dd in qcalloc lib/memory.c:105
>     #2 0x561b810c62b7 in isis_srv6_sid_alloc isisd/isis_srv6.c:243
>     #3 0x561b8111f944 in isis_zebra_srv6_sid_notify isisd/isis_zebra.c:1534
>     #4 0x7f3f637df9d7 in zclient_read lib/zclient.c:4845
>     #5 0x7f3f637779b2 in event_call lib/event.c:2011
>     #6 0x7f3f63642ff1 in frr_run lib/libfrr.c:1216
>     #7 0x561b81018bf2 in main isisd/isis_main.c:360
>     #8 0x7f3f63029d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

Fixes: 0af0f4616d ("isisd: Receive SRv6 SIDs notifications from zebra")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-16 11:28:01 +02:00
David Lamparter 336fe6728f
Merge pull request #18662 from mjstapp/fix_test_nb_endian 2025-04-16 10:49:13 +02:00
Y Bharath eff5a9023a yang: Corrected pyang errors in frr-pathd.yang
Corrected pyang warnings and errors in frr-pathd.yang

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-04-16 12:47:31 +05:30
Krishnasamy 7e8c18d0b0 zebra: change fpm_read to batch the messages
Change fpm_read to build a list of ctx and send the whole list to
zebra for processing, rather than sending each ctx individually

Signed-off-by: Krishnasamy <krishnasamyr@nvidia.com>
2025-04-16 07:14:55 +00:00
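The batching change above can be sketched as follows: accumulate parsed messages into a list and hand the whole list off once (or when full), instead of one call per message. This is a self-contained analog; `struct ctx` stands in for zebra's dplane context, and the names are illustrative, not zebra's API.

```c
#include <assert.h>
#include <stddef.h>

#define BATCH_MAX 8

/* Stand-in for a dplane context ("ctx" in the commit message). */
struct ctx {
	int seq;
};

struct ctx_list {
	struct ctx *items[BATCH_MAX];
	size_t count;
};

static size_t processed;

/* One handoff for the whole batch instead of one call per message. */
static void process_batch(struct ctx_list *list)
{
	processed += list->count;
	list->count = 0;
}

/* Read loop: queue each parsed message, flush when full and at the end. */
static void fpm_read_batched(struct ctx *msgs, size_t n)
{
	struct ctx_list list = { .count = 0 };

	for (size_t i = 0; i < n; i++) {
		if (list.count == BATCH_MAX)
			process_batch(&list);
		list.items[list.count++] = &msgs[i];
	}
	if (list.count)
		process_batch(&list);
}
```

The win is fewer handoffs between the reader and the processing side, at the cost of slightly delayed delivery of individual messages.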
Mark Stapp b256f2f1e9 tests: add nb test binary to .gitignore
Add a northbound unit-test binary product to .gitignore

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-15 13:16:07 -04:00
Mark Stapp da8fce3830 tests: use little-endian order for libyang api
Use the expected (little-endian) byte order for a param
to one of the libyang APIs; tests fail on LE architectures
otherwise.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-15 13:15:09 -04:00
Russ White e190163413
Merge pull request #18592 from zmw12306/bfd_set_shutdown
bfdd: Set bfd.LocalDiag when transitioning to AdminDown
2025-04-15 11:34:28 -04:00
Russ White 796e1af6e2
Merge pull request #18540 from LabNConsulting/chopps/list-entry-done
lib: nb: add list_entry_done() callback to free resources
2025-04-15 11:30:50 -04:00
Christian Hopps f5a8b8aedf
Merge pull request #18610 from lsang6WIND/yang-isisd
fix yang commands that don't have yang attr
2025-04-15 10:25:51 -05:00
Donatas Abraitis 7ef9394c33
Merge pull request #18653 from louis-6wind/fix-bgp-pbr-mem-leaks
bgpd: fix pbr memory leaks
2025-04-15 10:07:14 +03:00
Y Bharath bb6f2c2fb0 tests: Resource leak in common_config.py
Address pending changes from PR:18574

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-04-15 11:24:44 +05:30
Jafar Al-Gharaibeh 3da1473093
Merge pull request #18655 from mjstapp/fix_clang_lib_bgp
lib,bgpd: clean up clang warnings
2025-04-14 21:10:49 -05:00
Donald Sharp 417c82aadd
Merge pull request #18654 from chdxD1/v4-via-v6-nexthop
Add v4-via-v6 nexthop support to staticd
2025-04-14 19:28:48 -04:00
Jafar Al-Gharaibeh 0dc71bcfca
Merge pull request #18641 from donaldsharp/fpm_listener_storage
zebra: Add ability to dump routes received from fpm_listener
2025-04-14 15:21:13 -05:00
Mark Stapp 81b472bd79 lib,bgpd: clean up clang warnings
Clean up a couple of clang compiler warnings (this was
clang 18)

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-14 15:32:08 -04:00
Jafar Al-Gharaibeh c2ee9a360e
Merge pull request #18578 from ak503/pim6_use_source
pim6d: fix missing 'use-source' interface command
2025-04-14 14:07:26 -05:00
Christopher Dziomba 86628752a5
doc: Add v4-over-v6 next-hop to staticd docs
GATEWAY can now be v4 or v6 for v4 routes; for v6 routes it
can only be v6 (as today).

Signed-off-by: Christopher Dziomba <christopher.dziomba@telekom.de>
2025-04-14 19:34:17 +02:00
Christopher Dziomba 438360e2bd
tests: Validating staticd v4-over-v6 nexthop
Introduce do_ipv6_nexthop in the static_simple topotest. The test
configures IPv4 routes with an IPv6 nexthop and validates that the
"via inet6" nexthop is visible in the Linux kernel

Signed-off-by: Christopher Dziomba <christopher.dziomba@telekom.de>
2025-04-14 19:34:11 +02:00
Christopher Dziomba 8fc41e81f0
staticd: Add v4-via-v6 nexthop support
Routing v4 over a v6 nexthop is already well supported within zebra
(and FRR). This adds support to staticd, allowing an IPv6 nexthop to
be provided in ip route statements. For this, the commands are
extended and the address family is parsed from the parameter.

When receiving nht updates from zebra, both AFIs are checked because
prefixes could exist in both. Additionally, when the route_node is
known, the family of its prefix is used instead of the nexthop's.

Signed-off-by: Christopher Dziomba <christopher.dziomba@telekom.de>
2025-04-14 19:22:39 +02:00
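As a hedged illustration of the extended command, a v4-via-v6 static route would be configured roughly like this (the addresses are documentation examples, not taken from the commit):

```
! staticd: IPv4 prefix with an IPv6 nexthop
ip route 192.0.2.0/24 2001:db8::1
```

In the Linux kernel such a route appears as a v4 route "via inet6" (e.g. `192.0.2.0/24 via inet6 2001:db8::1`), which is what the accompanying topotest validates.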
Louis Scalbert eb8281aeb8 bgpd: fix bgp_pbr_or_filter memory leaks
Note that bgp_pbr_policyroute_add_from_zebra() and
bgp_pbr_policyroute_remove_from_zebra() are only called from
bgp_pbr_handle_entry().

>  ==966967==ERROR: LeakSanitizer: detected memory leaks
>
> Direct leak of 40 byte(s) in 1 object(s) allocated from:
>     #0 0x7fd447ab4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7fd44746f8dd in qcalloc lib/memory.c:105
>     #2 0x7fd44744401a in list_new lib/linklist.c:49
>     #3 0x560f8c094490 in bgp_pbr_handle_entry bgpd/bgp_pbr.c:2818
>     #4 0x560f8c095993 in bgp_pbr_update_entry bgpd/bgp_pbr.c:2941
>     #5 0x560f8c2164f3 in bgp_zebra_announce bgpd/bgp_zebra.c:1618
>     #6 0x560f8c0bd668 in bgp_process_main_one bgpd/bgp_route.c:3691
>     #7 0x560f8c0be7fe in process_subq_other_route bgpd/bgp_route.c:3856
>     #8 0x560f8c0bf280 in process_subq bgpd/bgp_route.c:3955
>     #9 0x560f8c0bf320 in meta_queue_process bgpd/bgp_route.c:3980
>     #10 0x7fd44759fdfc in work_queue_run lib/workqueue.c:282
>     #11 0x7fd4475779b2 in event_call lib/event.c:2011
>     #12 0x7fd447442ff1 in frr_run lib/libfrr.c:1216
>     #13 0x560f8bef0a15 in main bgpd/bgp_main.c:545
>     #14 0x7fd446e29d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
>
> Direct leak of 40 byte(s) in 1 object(s) allocated from:
>     #0 0x7fd447ab4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7fd44746f8dd in qcalloc lib/memory.c:105
>     #2 0x7fd44744401a in list_new lib/linklist.c:49
>     #3 0x560f8c09439d in bgp_pbr_handle_entry bgpd/bgp_pbr.c:2812
>     #4 0x560f8c095993 in bgp_pbr_update_entry bgpd/bgp_pbr.c:2941
>     #5 0x560f8c2164f3 in bgp_zebra_announce bgpd/bgp_zebra.c:1618
>     #6 0x560f8c0bd668 in bgp_process_main_one bgpd/bgp_route.c:3691
>     #7 0x560f8c0be7fe in process_subq_other_route bgpd/bgp_route.c:3856
>     #8 0x560f8c0bf280 in process_subq bgpd/bgp_route.c:3955
>     #9 0x560f8c0bf320 in meta_queue_process bgpd/bgp_route.c:3980
>     #10 0x7fd44759fdfc in work_queue_run lib/workqueue.c:282
>     #11 0x7fd4475779b2 in event_call lib/event.c:2011
>     #12 0x7fd447442ff1 in frr_run lib/libfrr.c:1216
>     #13 0x560f8bef0a15 in main bgpd/bgp_main.c:545
>     #14 0x7fd446e29d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
>
> Direct leak of 4 byte(s) in 1 object(s) allocated from:
>     #0 0x7fd447ab4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7fd44746f8dd in qcalloc lib/memory.c:105
>     #2 0x560f8c080cec in bgp_pbr_extract_enumerate_unary bgpd/bgp_pbr.c:362
>     #3 0x560f8c080f7e in bgp_pbr_extract_enumerate bgpd/bgp_pbr.c:400
>     #4 0x560f8c094530 in bgp_pbr_handle_entry bgpd/bgp_pbr.c:2819
>     #5 0x560f8c095993 in bgp_pbr_update_entry bgpd/bgp_pbr.c:2941
>     #6 0x560f8c2164f3 in bgp_zebra_announce bgpd/bgp_zebra.c:1618
>     #7 0x560f8c0bd668 in bgp_process_main_one bgpd/bgp_route.c:3691
>     #8 0x560f8c0be7fe in process_subq_other_route bgpd/bgp_route.c:3856
>     #9 0x560f8c0bf280 in process_subq bgpd/bgp_route.c:3955
>     #10 0x560f8c0bf320 in meta_queue_process bgpd/bgp_route.c:3980
>     #11 0x7fd44759fdfc in work_queue_run lib/workqueue.c:282
>     #12 0x7fd4475779b2 in event_call lib/event.c:2011
>     #13 0x7fd447442ff1 in frr_run lib/libfrr.c:1216
>     #14 0x560f8bef0a15 in main bgpd/bgp_main.c:545
>     #15 0x7fd446e29d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
>
> Direct leak of 4 byte(s) in 1 object(s) allocated from:
>     #0 0x7fd447ab4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7fd44746f8dd in qcalloc lib/memory.c:105
>     #2 0x560f8c080cec in bgp_pbr_extract_enumerate_unary bgpd/bgp_pbr.c:362
>     #3 0x560f8c080f7e in bgp_pbr_extract_enumerate bgpd/bgp_pbr.c:400
>     #4 0x560f8c09443d in bgp_pbr_handle_entry bgpd/bgp_pbr.c:2813
>     #5 0x560f8c095993 in bgp_pbr_update_entry bgpd/bgp_pbr.c:2941
>     #6 0x560f8c2164f3 in bgp_zebra_announce bgpd/bgp_zebra.c:1618
>     #7 0x560f8c0bd668 in bgp_process_main_one bgpd/bgp_route.c:3691
>     #8 0x560f8c0be7fe in process_subq_other_route bgpd/bgp_route.c:3856
>     #9 0x560f8c0bf280 in process_subq bgpd/bgp_route.c:3955
>     #10 0x560f8c0bf320 in meta_queue_process bgpd/bgp_route.c:3980
>     #11 0x7fd44759fdfc in work_queue_run lib/workqueue.c:282
>     #12 0x7fd4475779b2 in event_call lib/event.c:2011
>     #13 0x7fd447442ff1 in frr_run lib/libfrr.c:1216
>     #14 0x560f8bef0a15 in main bgpd/bgp_main.c:545
>     #15 0x7fd446e29d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-14 13:52:02 +02:00
Louis Scalbert 442d8bce36 bgpd: fix bgp_pbr_rule memory leak
Fix bgp_pbr_rule memory leak. Found by code review.

Fixes: 27e376d4e1 ("bgpd: an hash list of pbr iprule is created")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-14 13:09:36 +02:00
Louis Scalbert 8d9df5cf04 bgpd: fix bgp_pbr_match memory leak
> Direct leak of 1144 byte(s) in 13 object(s) allocated from:
>     #0 0x7f3eedeb4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7f3eed86f8dd in qcalloc lib/memory.c:105
>     #2 0x55b32d236faf in bgp_pbr_match_alloc_intern bgpd/bgp_pbr.c:1074
>     #3 0x7f3eed817d79 in hash_get lib/hash.c:147
>     #4 0x55b32d242d9a in bgp_pbr_policyroute_add_to_zebra_unit bgpd/bgp_pbr.c:2486
>     #5 0x55b32d244436 in bgp_pbr_policyroute_add_to_zebra bgpd/bgp_pbr.c:2672
>     #6 0x55b32d245a05 in bgp_pbr_handle_entry bgpd/bgp_pbr.c:2843
>     #7 0x55b32d246912 in bgp_pbr_update_entry bgpd/bgp_pbr.c:2939
>     #8 0x55b32d3c7472 in bgp_zebra_announce bgpd/bgp_zebra.c:1618
>     #9 0x55b32d26e5e7 in bgp_process_main_one bgpd/bgp_route.c:3691
>     #10 0x55b32d26f77d in process_subq_other_route bgpd/bgp_route.c:3856
>     #11 0x55b32d2701ff in process_subq bgpd/bgp_route.c:3955
>     #12 0x55b32d27029f in meta_queue_process bgpd/bgp_route.c:3980
>     #13 0x7f3eed99fdd8 in work_queue_run lib/workqueue.c:282
>     #14 0x7f3eed97798e in event_call lib/event.c:2011
>     #15 0x7f3eed842ff1 in frr_run lib/libfrr.c:1216
>     #16 0x55b32d0a1a15 in main bgpd/bgp_main.c:545
>     #17 0x7f3eed229d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

Fixes: d114b0d739 ("bgpd: inject policy route entry from bgp into zebra pbr entries.")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-14 13:09:35 +02:00
Louis Scalbert 520518c3ad bgpd: fix bgp_pbr_match_entry memory leak
> ==238132==ERROR: LeakSanitizer: detected memory leaks
>
> Direct leak of 160 byte(s) in 1 object(s) allocated from:
>     #0 0x7fd79f0b4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7fd79ea6f8dd in qcalloc lib/memory.c:105
>     #2 0x5586b26995f9 in bgp_pbr_match_entry_alloc_intern bgpd/bgp_pbr.c:1155
>     #3 0x7fd79ea17d79 in hash_get lib/hash.c:147
>     #4 0x5586b26a551d in bgp_pbr_policyroute_add_to_zebra_unit bgpd/bgp_pbr.c:2522
>     #5 0x5586b26a6436 in bgp_pbr_policyroute_add_to_zebra bgpd/bgp_pbr.c:2672
>     #6 0x5586b26a8089 in bgp_pbr_handle_entry bgpd/bgp_pbr.c:2876
>     #7 0x5586b26a8912 in bgp_pbr_update_entry bgpd/bgp_pbr.c:2939
>     #8 0x5586b2829472 in bgp_zebra_announce bgpd/bgp_zebra.c:1618
>     #9 0x5586b282ab4b in bgp_zebra_announce_table bgpd/bgp_zebra.c:1766
>     #10 0x5586b2824b99 in bgp_zebra_tm_connect bgpd/bgp_zebra.c:1091
>     #11 0x7fd79eb7798e in event_call lib/event.c:2011
>     #12 0x7fd79ea42ff1 in frr_run lib/libfrr.c:1216
>     #13 0x5586b2503a15 in main bgpd/bgp_main.c:545
>     #14 0x7fd79e429d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

Fixes: d114b0d739 ("bgpd: inject policy route entry from bgp into zebra pbr entries.")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-14 13:09:34 +02:00
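The three pbr leaks above share one shape: `hash_get()` with an alloc callback creates entries on demand, and every such entry needs a matching free on the cleanup path. A toy analog of that ownership rule (a linked list stands in for the hash table; none of these names are FRR code):

```c
#include <assert.h>
#include <stdlib.h>

struct entry {
	int key;
	struct entry *next;
};

static struct entry *table;
static int live_entries;

/* Like hash_get(..., alloc_intern): returns the existing entry for a
 * key, or allocates one on a miss. Allocation happens implicitly, so
 * ownership is easy to lose track of. */
static struct entry *table_get(int key)
{
	struct entry *e;

	for (e = table; e; e = e->next)
		if (e->key == key)
			return e;

	e = calloc(1, sizeof(*e));
	e->key = key;
	e->next = table;
	table = e;
	live_entries++;
	return e;
}

/* The cleanup the leak fixes add: walk the container and free every
 * implicitly-allocated entry. */
static void table_finish(void)
{
	while (table) {
		struct entry *e = table;

		table = e->next;
		free(e);
		live_entries--;
	}
}
```

LeakSanitizer (`-fsanitize=address`, as in the quoted traces) reports exactly the entries for which `table_finish()`-style cleanup never runs.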
Jafar Al-Gharaibeh ac56da1f50
Merge pull request #18649 from donaldsharp/rpki_testing_and_buf_fix
Rpki testing and bug fix
2025-04-12 22:26:19 -05:00
Donald Sharp dbff585b41 tests: Add more tests to bgp_rpki_topo1 test
Looking at the gcov results for the rpki code, I noticed
some functionality that is not covered by our test
suites. Add tests for that functionality.

Signed-off-by: Donald Sharp <donaldsharp72@gmail.com>
2025-04-12 17:00:02 -04:00
Donald Sharp dcf43ae009 bgpd: Prevent crash when issuing a show rpki connections
When attempting to check rpki status and the connection
has been turned off, check whether we are connected
before asking the rpki subsystem; otherwise we get a crash
in the rpki library.

Signed-off-by: Donald Sharp <donaldsharp72@gmail.com>
2025-04-12 16:59:56 -04:00
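The crash fix above is a guard-clause pattern: check our own connection state before calling into a library that may crash if it was never started. A minimal sketch, with stand-in names (the real code talks to rtrlib; nothing here is the actual rpki implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Our own record of whether the rpki session was ever brought up. */
static bool rpki_connected;

/* Stand-in for the library status call that crashes when the
 * subsystem was never initialized. */
static int rtr_mgr_status(void)
{
	return 1; /* pretend "connected" */
}

static int show_rpki_connections(void)
{
	if (!rpki_connected)
		return -1; /* report "not connected" instead of crashing */
	return rtr_mgr_status();
}
```

The show command then prints a "not connected" message for the -1 case rather than dereferencing uninitialized library state.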
Dmitrii Turlupov 467e1ed597 pim6d: fix missing 'use-source' interface command
Signed-off-by: Dmitrii Turlupov <turlupov@bk.ru>
2025-04-12 09:54:19 +03:00
Donald Sharp bd8ee74b49
Merge pull request #18645 from louis-6wind/fix-zebra-pbr-leak
zebra: fix pbr_iptable memory leak
2025-04-11 19:54:03 -04:00
Donatas Abraitis 85bb2155db
Merge pull request #18574 from y-bharath14/srib-tests-v10
tests: Shadowing the built-in function
2025-04-11 18:54:01 +03:00
Carmine Scarpitta c5dfb9f5e5
Merge pull request #18628 from raja-rajasekar/rajasekarr/fix_frr_reload_srv6
tools: fix reload script for SRv6 locators and formats
2025-04-11 17:07:05 +02:00
Jafar Al-Gharaibeh 953d92b3b2
Merge pull request #18640 from donaldsharp/fpm_listener_nhg_data
zebra: modify fpm_listener to display data about nhgs
2025-04-11 10:06:55 -05:00
Carmine Scarpitta 5ad0ba3ee9
Merge pull request #18597 from pguibert6WIND/end_b6_encaps_extensions
lib, staticd, isisd: add B6.ENCAPS codepoint extensions
2025-04-11 17:00:13 +02:00
Louis Scalbert 55ea74d630 zebra: clean pbr_iptable interface_name_list free
Clean up the code related to freeing pbr_iptable->interface_name_list.
This is a cosmetic change.

Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-11 15:52:42 +02:00
Louis Scalbert 92cddedffd zebra: fix pbr_iptable memory leak
We were deleting the wrong object.

> Direct leak of 40 byte(s) in 1 object(s) allocated from:
>     #0 0x7fcf718b4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7fcf7126f8dd in qcalloc lib/memory.c:105
>     #2 0x7fcf7124401a in list_new lib/linklist.c:49
>     #3 0x55771621d86d in pbr_iptable_alloc_intern zebra/zebra_pbr.c:1015
>     #4 0x7fcf71217d79 in hash_get lib/hash.c:147
>     #5 0x55771621dad3 in zebra_pbr_add_iptable zebra/zebra_pbr.c:1030
>     #6 0x55771614d00c in zread_iptable zebra/zapi_msg.c:4131
>     #7 0x55771614e586 in zserv_handle_commands zebra/zapi_msg.c:4424
>     #8 0x5577162dae2c in zserv_process_messages zebra/zserv.c:521
>     #9 0x7fcf7137798e in event_call lib/event.c:2011
>     #10 0x7fcf71242ff1 in frr_run lib/libfrr.c:1216
>     #11 0x5577160e4d6d in main zebra/main.c:540
>     #12 0x7fcf70c29d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
>
> Indirect leak of 24 byte(s) in 1 object(s) allocated from:
>     #0 0x7fcf718b4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7fcf7126f8dd in qcalloc lib/memory.c:105
>     #2 0x7fcf71244129 in listnode_new lib/linklist.c:71
>     #3 0x7fcf71244238 in listnode_add lib/linklist.c:92
>     #4 0x55771621d938 in pbr_iptable_alloc_intern zebra/zebra_pbr.c:1019
>     #5 0x7fcf71217d79 in hash_get lib/hash.c:147
>     #6 0x55771621dad3 in zebra_pbr_add_iptable zebra/zebra_pbr.c:1030
>     #7 0x55771614d00c in zread_iptable zebra/zapi_msg.c:4131
>     #8 0x55771614e586 in zserv_handle_commands zebra/zapi_msg.c:4424
>     #9 0x5577162dae2c in zserv_process_messages zebra/zserv.c:521
>     #10 0x7fcf7137798e in event_call lib/event.c:2011
>     #11 0x7fcf71242ff1 in frr_run lib/libfrr.c:1216
>     #12 0x5577160e4d6d in main zebra/main.c:540
>     #13 0x7fcf70c29d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

Fixes: f80ec7e3d6 ("zebra: handle iptable list of interfaces")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-11 15:52:30 +02:00
Louis Scalbert cd451ff4ef zebra: split up MTYPE_PBR_OBJ
Split up MTYPE_PBR_OBJ into dedicated MTYPEs to clarify memory
allocation and freeing.

Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-11 15:52:30 +02:00
Donald Sharp f163168c95
Merge pull request #18642 from louis-6wind/fix-asla-leak
isisd: fix asla memory leak
2025-04-11 09:07:27 -04:00
Donald Sharp 2c37a21743
Merge pull request #16735 from zmw12306/babel_nonzeroMBZ
babeld: Add MBZ and Reserved field checking
2025-04-11 08:42:45 -04:00
Donald Sharp f34ee05f8b
Merge pull request #18633 from y-bharath14/srib-tests-v11
tests: Fix potential issues in mcast-tester.py
2025-04-11 08:41:02 -04:00
Donald Sharp 01085bfbec
Merge pull request #18635 from opensourcerouting/support_bundle_ns
tools: Add pathspace option to generate_support_bundle
2025-04-11 08:40:34 -04:00
Louis Scalbert fe2a07aea4 isisd: fix asla memory leak
> ==713776==ERROR: LeakSanitizer: detected memory leaks
>
> Direct leak of 120 byte(s) in 1 object(s) allocated from:
>     #0 0x7fdfcbeb4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7fdfcb86f8dd in qcalloc lib/memory.c:105
>     #2 0x55ce707739b6 in isis_tlvs_find_alloc_asla isisd/isis_tlvs.c:8500
>     #3 0x55ce7072fae0 in isis_link_params_update_asla isisd/isis_te.c:191
>     #4 0x55ce70733881 in isis_link_params_update isisd/isis_te.c:499
>     #5 0x55ce70693f2a in isis_circuit_up isisd/isis_circuit.c:776
>     #6 0x55ce7069a120 in isis_csm_state_change isisd/isis_csm.c:135
>     #7 0x55ce7068dd80 in isis_circuit_enable isisd/isis_circuit.c:79
>     #8 0x55ce70699346 in isis_ifp_create isisd/isis_circuit.c:1618
>     #9 0x7fdfcb81f47f in hook_call_if_real lib/if.c:55
>     #10 0x7fdfcb82056e in if_new_via_zapi lib/if.c:188
>     #11 0x7fdfcb9d17da in zclient_interface_add lib/zclient.c:2706
>     #12 0x7fdfcb9df842 in zclient_read lib/zclient.c:4843
>     #13 0x7fdfcb97798e in event_call lib/event.c:2011
>     #14 0x7fdfcb842ff1 in frr_run lib/libfrr.c:1216
>     #15 0x55ce7067cbf2 in main isisd/isis_main.c:360
>     #16 0x7fdfcb229d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
>
> Indirect leak of 8 byte(s) in 1 object(s) allocated from:
>     #0 0x7fdfcbeb4a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7fdfcb86f8dd in qcalloc lib/memory.c:105
>     #2 0x7fdfcb79a7b7 in admin_group_init lib/admin_group.c:186
>     #3 0x55ce707739ca in isis_tlvs_find_alloc_asla isisd/isis_tlvs.c:8501
>     #4 0x55ce7072fae0 in isis_link_params_update_asla isisd/isis_te.c:191
>     #5 0x55ce70733881 in isis_link_params_update isisd/isis_te.c:499
>     #6 0x55ce70693f2a in isis_circuit_up isisd/isis_circuit.c:776
>     #7 0x55ce7069a120 in isis_csm_state_change isisd/isis_csm.c:135
>     #8 0x55ce7068dd80 in isis_circuit_enable isisd/isis_circuit.c:79
>     #9 0x55ce70699346 in isis_ifp_create isisd/isis_circuit.c:1618
>     #10 0x7fdfcb81f47f in hook_call_if_real lib/if.c:55
>     #11 0x7fdfcb82056e in if_new_via_zapi lib/if.c:188
>     #12 0x7fdfcb9d17da in zclient_interface_add lib/zclient.c:2706
>     #13 0x7fdfcb9df842 in zclient_read lib/zclient.c:4843
>     #14 0x7fdfcb97798e in event_call lib/event.c:2011
>     #15 0x7fdfcb842ff1 in frr_run lib/libfrr.c:1216
>     #16 0x55ce7067cbf2 in main isisd/isis_main.c:360
>     #17 0x7fdfcb229d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

PR: 95719
Fixes: 5749ac83a8 ("isisd: add ASLA support")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-11 09:57:18 +02:00
Philippe Guibert 27fa9ac4db lib, staticd, isisd: add B6.Encaps codepoint extensions
Add codepoint extensions for the END.B6.Encaps instruction.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-04-11 09:36:29 +02:00
Donald Sharp 6299a89371 zebra: Add ability to dump routes received from fpm_listener
The fpm_listener currently has no ability to store the list
of prefixes that it has received. Modify the code to store
the prefixes in a typesafe RB tree, and to dump
the routes out when a SIGUSR1 is received. If the operator
specifies -z <filename>, the routes are written to that file,
overwriting the last version of the file.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-04-10 20:01:39 -04:00
Donald Sharp ef580f0e80 zebra: modify fpm_listener to display data about nhgs
Currently the fpm_listener completely ignores NHGs.
Let's start dumping some data about the nexthop groups:

[2025-04-10 16:55:12.939235306] FPM message - Type: 1, Length 52
[2025-04-10 16:55:12.939254252] Nexthop Group ID: 9, Protocol: Zebra(11), Contains 1 nexthops, Family: 2, Scope: 0
[2025-04-10 16:55:12.939260564] FPM message - Type: 1, Length 52
[2025-04-10 16:55:12.939263990] Nexthop Group ID: 10, Protocol: Zebra(11), Contains 1 nexthops, Family: 2, Scope: 0
[2025-04-10 16:55:12.939268659] FPM message - Type: 1, Length 56
[2025-04-10 16:55:12.939271635] Nexthop Group ID: 8, Protocol: Zebra(11), Contains 2 nexthops, Family: 0, Scope: 0

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-04-10 17:14:38 -04:00
Donatas Abraitis dc37bff8c0
Merge pull request #18376 from pguibert6WIND/show_bgp_neighbor_counter_tx
bgpd: fix add prefix sent in 'show bgp neighbor'
2025-04-10 19:38:59 +03:00
Donatas Abraitis cf351d6d05
Merge pull request #18611 from pguibert6WIND/bgp_usid
bgpd: add usid behavior for bgp srv6 instructions
2025-04-10 19:34:44 +03:00
Jafar Al-Gharaibeh fc3e1ec15f
Merge pull request #18472 from zmw12306/Update-TLV
babeld: Add input validation for update TLV.
2025-04-10 09:59:13 -05:00
Jafar Al-Gharaibeh 2355683c72
Merge pull request #18548 from zmw12306/request_subtlv_type
babeld: fix incorrect type assignment in parse_request_subtlv
2025-04-10 09:56:14 -05:00
Rajasekar Raja ce06d35fa9 tools: fix reload script for SRv6 locators and formats
The current implementation does not handle the "no" form of
the following commands under segment-routing srv6:
 - no formats
 - no locators
 - no prefix <> under locator XYZ

Fix the handling of the segment-routing srv6 locators and formats commands:
 - Ignore the "no formats" and "no locators" commands
 - Replace "no prefix" under locator XYZ with "no locator XYZ", since prefix
   is a mandatory property of a locator

Signed-off-by: Rajasekar Raja <rajasekarr@nvidia.com>
2025-04-10 07:55:57 -07:00
Martin Winter 9acadf8d3f
tools: Add pathspace option to generate_support_bundle
Add a `-N` pathspace option to generate_support_bundle.py
to support FRR running in a non-default namespace with a prefix
on the config/socket paths.
The same pathspace is prepended to the output log files (if
specified).

Signed-off-by: Martin Winter <mwinter@opensourcerouting.org>
2025-04-10 16:47:27 +02:00
Donald Sharp 86f66afc52
Merge pull request #18586 from zmw12306/bfd_find_disc
bfdd: Fix demultiplexing to rely solely on Your Discriminator
2025-04-10 10:30:05 -04:00
Donatas Abraitis 2f0b8ff1ea
Merge pull request #18624 from louis-6wind/remove-afi2family
bgpd: remove useless calls to afi2family
2025-04-10 17:14:08 +03:00
Mark Stapp 0e8e0c5fb9
Merge pull request #18594 from soumyar-roy/soumya/netwithdraw
bgpd: Paths not deleted received from shutdown peer
2025-04-10 10:07:32 -04:00
Y Bharath 868796cf69 tests: Fix potential issues in mcast-tester.py
Fix potential issues in mcast-tester.py

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-04-10 17:01:06 +05:30
Philippe Guibert a47a53b003 bgpd: fix add prefix sent in 'show bgp neighbor'
The 'acceptedPrefixCounter' is available in 'show bgp neighbor json', but
there is no equivalent in the non-JSON output. Add it.

> # show bgp neighbor
> [..]
>  Community attribute sent to this neighbor(all)
>  0 accepted prefixes, 1 sent prefixes

Fixes: 856ca177c4 ("Added json formating support to show-...-neighbors-... bgp commands.")

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-04-10 10:12:36 +02:00
Christian Hopps 4f9d162526 lib: suppress clang-analyzer false-positives
Signed-off-by: Christian Hopps <chopps@labn.net>
2025-04-10 04:53:08 +00:00
Christian Hopps f9c759ee4e lib: nb: add list_entry_done() callback to free resources
The existing iteration callback only allows a daemon to return a
pointer to objects that must already exist and must continue to exist
indefinitely.

To allow the daemon to return allocated iterator objects, and to lock
its container structures, we need a callback that tells the daemon when
FRR is done using the returned value, so the daemon can free it (or
unlock it, etc.).
That's what list_entry_done() is for.

Signed-off-by: Christian Hopps <chopps@labn.net>
2025-04-10 04:49:59 +00:00
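The contract the commit adds can be sketched as a get/done pair: the iterator may hand out a freshly allocated cursor object, and the new `list_entry_done()` callback is how the caller returns it so the daemon can free (or unlock) it. This is an illustrative analog, not the northbound API's real signatures:

```c
#include <stdlib.h>

/* Cursors handed out to the caller but not yet released. */
static int outstanding;

/* Iterator step: may return daemon-allocated state rather than a
 * pointer into a structure that must live forever. */
static void *list_entry_get_next(void)
{
	outstanding++;
	return calloc(1, 64); /* daemon-owned iterator object */
}

/* The new callback: tells the daemon the caller is done with the
 * returned value, so it can be freed (or a lock released, a refcount
 * dropped, etc.). */
static void list_entry_done(void *entry)
{
	free(entry);
	outstanding--;
}
```

Without the done() half of the pair, the daemon could only ever return pointers to objects it promises never to free, which is exactly the limitation described above.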
Donatas Abraitis f28394313f
Merge pull request #18625 from donaldsharp/bgp_table_init_reverse
bgpd: On shutdown free up table for static routes
2025-04-10 00:56:06 +03:00
Soumya Roy 93458dcf7a test: Test for bgp route delete
This fix adds tests to verify that routes/paths are
deleted properly when the advertising neighbor is shut down

Signed-off-by: Soumya Roy <souroy@nvidia.com>
2025-04-09 20:46:43 +00:00
Mark Stapp 67278980eb
Merge pull request #18627 from donaldsharp/irdp_shadow
zebra: Fix shadow warning in irdp_packet.c
2025-04-09 14:17:36 -04:00
Donald Sharp 64a6a2e175 zebra: Fix shadow warning in irdp_packet.c
My compiler is complaining about irdp_sock
being a shadow variable.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-04-09 12:01:30 -04:00
Soumya Roy d2bec7a691 bgpd: Paths, received from shutdown peer, not deleted
Issue:
In a scaled setup (where the number of nets > BGP_CLEARING_BATCH_MAX_DESTS
for walk_batch_table_helper), when a peer is shut down, some of
the paths received from that peer are not deleted.

Fix:
In clear_batch_rib_helper, once walk_batch_table_helper returns
after BGP_CLEARING_BATCH_MAX_DESTS is reached, we only break out of the
inner loop of the afi/safi for loops. During the walk of the next
afi/safi, that 'ret' state is overwritten with new state, and the
resume context is overwritten as well. This loses the start point for
the next walk, so some nets are skipped forever and are never marked
for deletion. To fix this, return immediately from the current run.
The resume state is then stored correctly, and the next walk starts
from there.

Testing:
32 ecmp paths were received from the shutdown peer
Before fix:
show bgp ipv6 2052:52:1:167::/64
BGP routing table entry for 2052:52:1:167::/64, version 495
Paths: (246 available, best #127, table default)
  Not advertised to any peer

<snip>
  4200165500 4200165002
    2021:21:51:101::2(spine-5) from spine-5(2021:21:51:101::2) (6.0.0.17)
    (fe80::202:ff:fe00:55) (prefer-global)
      Origin incomplete, valid, external, multipath
      Last update: Fri Apr  4 17:25:05 2025
  4200165500 4200165002
    2021:21:11:116::2(spine-1) from spine-1(2021:21:11:116::2) (0.0.0.0)
    (fe80::202:ff:fe00:3d) (prefer-global)<<<<path not deleted
      Origin incomplete, valid, external
      Last update: Fri Apr  4 17:25:05 2025
  4200165500 4200165002
    2021:21:11:115::2(spine-1) from spine-1(2021:21:11:115::2) (0.0.0.0)
    (fe80::202:ff:fe00:3d) (prefer-global)<<<<path not deleted
      Origin incomplete, valid, external
      Last update: Fri Apr  4 17:25:05 2025
<snip>

 32 paths are supposed to be withdrawn:
root@leaf-1:mgmt:# vtysh -c "show bgp ipv6 2052:52:1:167::/64" | grep "prefer-global" | wc -l
256
root@leaf-1:mgmt# vtysh -c "show bgp ipv6 2052:52:1:167::/64" | grep "prefer-global" | wc -l
246<<should be 224, but showing 246, which is wrong
After fix:
 32 paths are supposed to be withdrawn:
root@leaf-1:mgmt:# vtysh -c "show bgp ipv6 2052:52:1:167::/64" | grep "prefer-global" | wc -l
256
root@leaf-1:mgmt:# vtysh -c "show bgp ipv6 2052:52:1:167::/64" | grep "prefer-global" | wc -l
224<<<shows correctly

Signed-off-by: Soumya Roy <souroy@nvidia.com>
2025-04-09 14:32:23 +00:00
Mark Stapp 2aa6e786a2
Merge pull request #18601 from LabNConsulting/chopps/mgmtd-candidate-overwrite
mgmtd: remove bogus "hedge" code which corrupted active candidate DS
2025-04-09 09:51:47 -04:00
Donald Sharp b2d8d9b37a bgpd: On shutdown free up table for static routes
Indirect leak of 56 byte(s) in 1 object(s) allocated from:
    0 0x7fdaf6cb83b7 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:77
    1 0x7fdaf683a480 in qcalloc lib/memory.c:106
    2 0x7fdaf68dd706 in route_table_init_with_delegate lib/table.c:38
    3 0x5649b22c05b0 in bgp_table_init bgpd/bgp_table.c:139
    4 0x5649b2273da0 in bgp_static_set bgpd/bgp_route.c:7779
    5 0x5649b21eba58 in vpnv4_network bgpd/bgp_mplsvpn.c:3244
    6 0x7fdaf67b6d61 in cmd_execute_command_real lib/command.c:1003
    7 0x7fdaf67b7080 in cmd_execute_command lib/command.c:1062
    8 0x7fdaf67b75ac in cmd_execute lib/command.c:1228
    9 0x7fdaf68ffb20 in vty_command lib/vty.c:626
    10 0x7fdaf6900073 in vty_execute lib/vty.c:1389
    11 0x7fdaf6903e24 in vtysh_read lib/vty.c:2408
    12 0x7fdaf68f0222 in event_call lib/event.c:2019
    13 0x7fdaf681b3c6 in frr_run lib/libfrr.c:1247
    14 0x5649b211c903 in main bgpd/bgp_main.c:565
    15 0x7fdaf630c249 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58

Table was being created but never deleted.  Let's delete it.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-04-09 09:28:31 -04:00
Louis Scalbert a0a0749568 bgpd: remove useless calls to afi2family
Remove useless calls to afi2family(). str2prefix() always sets the
prefix family.

Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-09 13:07:56 +02:00
Christian Hopps 1f9c880297
Merge pull request #18604 from y-bharath14/srib-yang-v9
yang: Pyang errors in frr-bfdd.yang
2025-04-09 06:24:22 -04:00
Christian Hopps 59d2368b0f mgmtd: normalize argument order to copy(dst, src)
A code audit during RCA showed that having 2 different argument orders
for the related datastore copying functions was unnecessary and super
confusing.

Fix this code-maintenance/comprehension mistake and move the newer mgmtd
copy routines to use the same arg order as the pre-existing underlying
northbound copy functions (i.e., use `copy(dst, src)`).

Signed-off-by: Christian Hopps <chopps@labn.net>
2025-04-09 10:14:58 +00:00
David Lamparter 8418e57791
Merge pull request #17915 from mjstapp/compile_wshadow 2025-04-09 09:59:06 +02:00
Jafar Al-Gharaibeh 1d426d9961
Merge pull request #18614 from donaldsharp/bgp_memory_fixes_vrf_different_asn
bgpd: On shutdown free up memory leak found by topotest
2025-04-08 14:31:15 -05:00
Mark Stapp 27ba9956a1 lib,ripd: resolve clang SA warnings
Looks like there were a couple of SA warnings lurking; fix
them.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp e2515f7a4f tests: clean up variable-shadow warnings
Clean up -Wshadow warnings in unit-tests

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp edb330686d tools,pceplib,ospfclient: clean up variable-shadow warnings
Clean up -Wshadow warnings in these components

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 2998eeb0a5 pbrd,staticd,vrrpd: clean up variable-shadow warnings
Clean up -Wshadow warnings in three daemons

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 14b74d50ec sharpd: clean up variable-shadowing compiler warnings
Clean up -Wshadow in sharpd

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 660cbf5651 bgpd: clean up variable-shadowing compiler warnings
Clean up -Wshadow warnings in bgp.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp aece400f10 pimd: clean up variable-shadow warnings
Clean up -Wshadow warnings in pimd

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 7c98a27f3e zebra: clean up -Wshadow compiler warnings
Clean up variable-shadowing compiler warnings.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp a2acd59afd ripng: clean up -Wshadow compiler warnings
Clean up -Wshadow compiler warnings.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 536944dc53 lib,ripd: clean up -Wshadow compiler warnings
Clean up compiler warnings; convert a linklist macro
to an inline to resolve one; clean up a side-effect in isisd.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 9362c9b370 nhrpd: clean up -Wshadow compiler warnings
Clean up compiler warnings.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 06614a803b pathd: clean up variable-shadow warnings
Clean up various variable-shadow warnings from -Wshadow

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp bf6e7c1da5 vtysh: clean up variable-shadow warnings
Clean up various variable-shadowing warnings from -Wshadow

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 7e1ed59d74 mgmtd: clean up -Wshadow warnings
Clean up various variable-shadow warnings in mgmtd.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 1378ebf640 ldpd: clean up warnings from -Wshadow
Clean up various variable-shadow warnings in ldpd.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp e49a2f9a53 eigrpd: clean up variable-shadow warnings
Clean up various warnings from -Wshadow in eigrp.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 2d32ad6aa9 bfdd: clean up -Wshadow warnings
Clean up various variable-shadow warnings in bfdd.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp e636398908 babeld: clean up -Wshadow warnings
Clean up various "shadow" warnings in babeld.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 5be982966c ospf6: clean up -Wshadow warnings
Clean up various "shadow" warnings.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp 0561943a78 ospfd: clean up -Wshadow warnings
Clean up various "shadow" warnings.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:27 -04:00
Mark Stapp e88f0a4778 isisd: clean up -Wshadow warnings
Clean up various "shadow" warnings.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:26 -04:00
Mark Stapp 028872bf40 lib: fix -Wshadow warnings in the lib modules
Fix various "shadow" warnings in lib.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:26 -04:00
David Lamparter d683c4d8de lib: don't shadow _once in frr_with_mutex
The `_once` loop variable will result in a `-Wshadow` warning when that
is turned on.  Use `__COUNTER__` to give these variables distinct names,
like is already done with `_mtx_`.

(and because I touched it, clang-format wants it reformatted... ohwell.)

Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
2025-04-08 14:41:26 -04:00
Mark Stapp 05446a2961 configure: add -Wshadow option
Start exposing variable-shadowing warnings in all builds.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-04-08 14:41:26 -04:00
Jafar Al-Gharaibeh 4281d2b2c7
Merge pull request #18583 from zmw12306/source_port
babeld: check valid babel port
2025-04-08 12:28:58 -05:00
Jafar Al-Gharaibeh c7a45b1beb
Merge pull request #18598 from zmw12306/nhrp_nexthop
nhrpd: Add Hop Count Validation Before Forwarding in nhrp_peer_recv()
2025-04-08 11:28:57 -05:00
Donald Sharp b18c309015 bgpd: On shutdown free up memory leak found by topotest
This commit fixes two types of problems:

a) Memory was not cleaned up when an instance is
hidden, thus causing it never to be freed on shutdown.

b) In some instances bgp_create is called 2 times
for some code paths.  We are double allocating memory
and dropping the first allocation on the second.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-04-08 11:47:50 -04:00
Jafar Al-Gharaibeh 3510904f1d
Merge pull request #18526 from donaldsharp/pim_leakage
pimd: Fix memory leak on shutdown
2025-04-08 10:19:33 -05:00
Russ White 4d18be4faf
Merge pull request #18602 from LabNConsulting/chopps/doc-diagram
doc: add a diagram for config datastore cleanup on file reads
2025-04-08 11:12:05 -04:00
Loïc Sang e65d12f4e5 pimd: add YANG attr to YANG cmd
These commands use the northbound API; add the YANG attr to them. This
allows them to be used with a pending commit, else the validation will
fail as they are detected as non-YANG commands.

Signed-off-by: Loïc Sang <loic.sang@6wind.com>
2025-04-08 17:05:52 +02:00
Loïc Sang de5eaf322b pathd: add YANG attr to YANG cmd
These commands use the northbound API; add the YANG attr to them. This
allows them to be used with a pending commit, else the validation will
fail as they are detected as non-YANG commands.

Signed-off-by: Loïc Sang <loic.sang@6wind.com>
2025-04-08 16:58:37 +02:00
Russ White e9e9b2ba7e
Merge pull request #18585 from zmw12306/babel-knownae
babel: fix incorrect check in known_ae()
2025-04-08 10:55:24 -04:00
Russ White 9f59a2d05c
Merge pull request #18584 from zmw12306/babel_get_myid
babeld: Add a check to prevent all-ones case
2025-04-08 10:54:50 -04:00
Russ White 53a8868331
Merge pull request #18582 from zmw12306/route_lost
babeld: Fix starvation handling on route loss per RFC 8966 §3.8.2.1
2025-04-08 10:52:39 -04:00
Russ White 59b62ca788
Merge pull request #18581 from zmw12306/request_forward
babeld: Request forwarding does not prioritize feasible routes
2025-04-08 10:52:08 -04:00
Russ White 28d66ef7f1
Merge pull request #18547 from zmw12306/Hop-Count
babeld: Hop Count must not be 0.
2025-04-08 10:31:01 -04:00
Philippe Guibert 2ced6d233f bgpd: add usid behavior for bgp srv6 instructions
Until now, BGP srv6 usid instructions were not really used. Add the
support for this.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-04-08 16:16:06 +02:00
Loïc Sang 731e438b22 isisd: add YANG attr to YANG cmd
These commands use the northbound API; add the YANG attr to them. This
allows them to be used with a pending commit, else the validation will
fail as they are detected as non-YANG commands.

Signed-off-by: Loïc Sang <loic.sang@6wind.com>
2025-04-08 15:24:41 +02:00
Carmine Scarpitta 46a526568f
Merge pull request #18580 from raja-rajasekar/rajasekarr/check_sid_loc_block_beforehand
staticd: Avoid requesting SRv6 sid from zebra when loc and sid block dont match
2025-04-08 14:53:26 +02:00
Y Bharath 8c2e01e245 yang: Pyang errors in frr-bfdd.yang
Corrected pyang errors in frr-bfdd.yang

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-04-08 14:27:28 +05:30
Christian Hopps 3f6be74025 doc: add a diagram for config datastore cleanup on file reads
Signed-off-by: Christian Hopps <chopps@labn.net>
2025-04-08 07:04:19 +00:00
Christian Hopps b12b4c28b4 mgmtd: remove bogus "hedge" code which corrupted active candidate DS
Say you have 2 mgmtd frontend sessions (2 vtysh's): the first one is
long-running and is actively changing the global candidate datastore
(DS); the second one starts and exits. This code would then copy running
back over the candidate, blowing away any changes made by the first
session.

(the long running session could technically be any user)

Instead we need to trust the various cleanup code that already exists.
For example, in commit_cfg_reply, on success candidate is copied to
running, and on failure *for an implicit commit* running is copied back
to candidate, clearing the change. That leaves the non-implicit
configuration changes: in the transactional case we actually want
candidate to keep its changes, and in the other case of a pending
commit during a file read, the code restores candidate (if needed) on
exit from "config terminal", with this call stack:

 vty_config_node_exit()
   nb_cli_pending_commit_check()
     nb_cli_classic_commit()
       nb_candidate_commit_prepare() [fail] -> copy running -> candidate
       nb_candidate_commit_apply() -> copy candidate -> running

fixes #18541

Signed-off-by: Christian Hopps <chopps@labn.net>
2025-04-08 05:39:18 +00:00
zmw12306 7c87716482 nhrpd: Add Hop Count Validation Before Forwarding in nhrp_peer_recv()
According to [RFC 2332, Section 5.1], if an NHS receives a packet that it would normally forward and the hop count is zero, it must send an error indication back to the source and drop the packet.

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-04-07 16:13:45 -04:00
Rajasekar Raja dbd9fed0b3 staticd: Avoid requesting SRv6 sid from zebra when loc and sid block dont match
Currently, when the locator block and SID block differ, staticd would
still go ahead and request zebra to allocate the SID, which zebra does
if there is at least one match (from any locator).

Only when staticd tries to install the route does it see that the
locator block and SID block are different and avoid installing the
route.

Fix:
Check if the locator block and sid block match before even requesting
Zebra to allocate one.

Signed-off-by: Rajasekar Raja <rajasekarr@nvidia.com>
2025-04-07 10:34:07 -07:00
Donatas Abraitis 5e092d0e25
Merge pull request #18558 from spoignant-proton/master
bgpd: flowspec: remove sizelimit check applied to the wrong length field (issue 18557)
2025-04-07 03:27:02 +03:00
zmw12306 b70f973834 bfdd: Set bfd.LocalDiag when transitioning to AdminDown
RFC 5880 6.8.16, need to set LocalDiag when transitioning to AdminDown state.

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-04-06 16:13:32 -04:00
zmw12306 b2ab620121 bfdd: Fix demultiplexing to rely solely on Your Discriminator as per RFC 5880.
According to RFC 5880 Section 6.3, once the remote peer reflects back the local discriminator, the receiver MUST demultiplex subsequent BFD packets based solely on the Your Discriminator field. The source IP or interface MUST NOT be used in demultiplexing once the session is established.

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-04-05 19:02:31 -04:00
zmw12306 16a0458dbc babel: fix incorrect check in known_ae()
The known_ae() function accepts AE values up to 4, but the RFC only defines AE values 0-3.

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-04-05 15:24:06 -04:00
zmw12306 e43dbc8fe1 babeld: Add a check to prevent all-ones case
A router-id MUST NOT consist of either all binary zeroes (0000000000000000 hexadecimal) or all binary ones (FFFFFFFFFFFFFFFF hexadecimal).

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-04-05 15:21:27 -04:00
zmw12306 6f88868f32 babeld: check valid babel port
Add checking for port == 6696.

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-04-05 15:14:12 -04:00
zmw12306 8a8c43c891 babeld: Fix starvation handling on route loss per RFC 8966 §3.8.2.1
When all feasible routes to a destination are lost, but unexpired unfeasible routes exist, the node MUST send a seqno request to prevent starvation.

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-04-05 14:26:32 -04:00
zmw12306 49f6e9a385 babeld: Request forwarding does not prioritize feasible routes
Modify route selection to check feasibility first, then fall back to non-feasible routes as per SHOULD requirement.

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-04-05 14:00:41 -04:00
Mark Stapp 259ffe1dfe
Merge pull request #18562 from opensourcerouting/fix/bfd_down_if_established
bgpd: Treat the peer as not active due to BFD down only if established
2025-04-04 12:28:18 -04:00
Stephane Poignant 2cee5567bc
bgpd: flowspec: remove sizelimit check applied to the wrong length field (issue 18557)
Section 4.1 of RFC 8955 defines how the length field of flowspec NLRIs is encoded.
The method used implies a maximum length of 4095 for a single flowspec NLRI.
However, in bgp_flowspec.c, we check the length attribute of the bgp_nlri structure against this maximum value, when that attribute actually is the *total* length of all NLRIs included in the considered MP_REACH_NLRI path attribute.
Due to this confusion, frr would reject valid announces that contain many flowspec NLRIs, when their cumulative length exceeds 4095, and close the session.
The proposed change removes that check entirely: there is no need to check the length field of each individual NLRI, because the encoding makes it impossible to express a length greater than 4095.

Signed-off-by: Stephane Poignant <stephane.poignant@proton.ch>
2025-04-04 13:29:02 +02:00
Y Bharath 920ef44023 tests: Shadowing the built-in function
Shadowing the built-in function

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-04-04 12:25:18 +05:30
Donatas Abraitis 03c5ada020
Merge pull request #18567 from nabahr/proxy_init_disable
pimd: Initialize gm proxy to false
2025-04-04 02:10:35 +03:00
Mark Stapp bee5b36bbb
Merge pull request #18572 from opensourcerouting/fix/syntax_error_bgp_gr_notification
tests: Fix typo when configuring delayopen timer
2025-04-03 10:32:05 -04:00
Mark Stapp e0a97e5b85
Merge pull request #18546 from LabNConsulting/ziemba/250330-rfapi-mem-cleanup
bgpd: rfapi: track outstanding rib and import timers, free mem at exit
2025-04-03 09:01:35 -04:00
Russ White ab67e5544e
Merge pull request #18396 from pguibert6WIND/srv6l3vpn_to_bgp_vrf_redistribute
Add BGP redistribution in SRv6 BGP
2025-04-03 08:25:32 -04:00
Donatas Abraitis 55d88ee3de tests: Fix typo when configuring delayopen timer
`"` was accidentally added, and random test failures were happening.

Fixes: a4f61b78dd ("tests: Check if routes are marked as stale and retained with N-bit for GR")

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-04-03 14:01:20 +03:00
Donatas Abraitis 112663772a
Merge pull request #18553 from y-bharath14/srib-tests-v9
tests: Resource leaks in test_all_protocol_startup
2025-04-03 11:08:37 +03:00
Donatas Abraitis b977a0541c
Merge pull request #18564 from routingrocks/rvaratharaj/bug_fix_bgp
bgpd: Skip EVPN MAC processing for non-EVPN peers
2025-04-03 09:46:34 +03:00
Nathan Bahr 153d9ea3b9 pimd: Initialize gm proxy to false
Signed-off-by: Nathan Bahr <nbahr@atcorp.com>
2025-04-02 21:07:41 +00:00
Jafar Al-Gharaibeh 994fdeeb22
Merge pull request #18525 from donaldsharp/eigrp_coverity_newly_found
eigrpd: Fix possible use after free in nbr deletion
2025-04-02 14:13:37 -05:00
Jafar Al-Gharaibeh d9a00d25d9
Merge pull request #18561 from opensourcerouting/fix/ipv6_duplicate_check
lib: Return duplicate ipv6 prefix-list entry test
2025-04-02 14:08:12 -05:00
Rajesh Varatharaj 35129c88b4 bgpd: Skip EVPN MAC processing for non-EVPN peers
Issue:
"Processing EVPN MAC interface change on peer" log message is printed
even when the peer didnt have EVPN address family.

Fix:
Process only if the peer is in EVPN address family

Ticket: #17890
Signed-off-by: Rajesh Varatharaj <rvaratharaj@nvidia.com>
2025-04-02 11:48:42 -07:00
Donatas Abraitis da4a7b0356 bgpd: Treat the peer as not active due to BFD down only if established
If we have `neighbor X bfd` and BFD status is DOWN and/or ADMIN_DOWN, and BGP
session is not yet established, we never allow the session to establish.

Let's fix this regression that was in 10.2.

Fixes: 1fb48f5 ("bgpd: Do not start BGP session if BFD profile is in shutdown state")

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-04-02 17:24:09 +03:00
Russ White d10b08e4e8
Merge pull request #18097 from louis-6wind/attrhash_cmp
bgpd: optimize attrhash_cmp calls
2025-04-02 08:25:35 -04:00
Donatas Abraitis 24ae7cd30a lib: Return duplicate ipv6 prefix-list entry test
Fixes: 8384d41144 ("lib: Return duplicate prefix-list entry test")

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-04-02 11:09:59 +03:00
Russ White 90b004cd46
Merge pull request #18543 from y-bharath14/srib-yang-v8
yang: Corrected pyang errors in frr-zebra.yang
2025-04-01 17:30:30 -04:00
Louis Scalbert cbf27be5d9 bgpd: optimize attrhash_cmp calls
Only call attrhash_cmp when necessary.

Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-04-01 16:52:58 +02:00
Russ White 9f8027b8a4
Merge pull request #18524 from donaldsharp/eigrp_limit_asn_vrf
yang: Limit eigrp to just 1 instance per vrf
2025-04-01 10:22:21 -04:00
Russ White ec1fcc1799
Merge pull request #18470 from zmw12306/NH_Init
babeld: Add next hop initialization
2025-04-01 10:13:11 -04:00
Russ White c312917988
Merge pull request #18450 from donaldsharp/bgp_packet_reads
Bgp packet reads conversion to a FIFO
2025-04-01 10:12:37 -04:00
Krishnasamy 751ae76648 zebra: show command to display metaq info
Display the below info from the metaq and its sub-queues:
1. Current queue size
2. Max/Highwater size
3. Total number of events received so far

r1# sh zebra metaq
MetaQ Summary
Current Size    : 0
Max Size        : 9
Total           : 20
 |------------------------------------------------------------------|
 | SubQ                             | Current  | Max Size  | Total  |
 |----------------------------------+----------+-----------+--------|
 | NHG Objects                      | 0        | 0         | 0      |
 |----------------------------------+----------+-----------+--------|
 | EVPN/VxLan Objects               | 0        | 0         | 0      |
 |----------------------------------+----------+-----------+--------|
 | Early Route Processing           | 0        | 8         | 11     |
 |----------------------------------+----------+-----------+--------|
 | Early Label Handling             | 0        | 0         | 0      |
 |----------------------------------+----------+-----------+--------|
 | Connected Routes                 | 0        | 6         | 9      |
 |----------------------------------+----------+-----------+--------|
 | Kernel Routes                    | 0        | 0         | 0      |
 |----------------------------------+----------+-----------+--------|
 | Static Routes                    | 0        | 0         | 0      |
 |----------------------------------+----------+-----------+--------|
 | RIP/OSPF/ISIS/EIGRP/NHRP Routes  | 0        | 0         | 0      |
 |----------------------------------+----------+-----------+--------|
 | BGP Routes                       | 0        | 0         | 0      |
 |----------------------------------+----------+-----------+--------|
 | Other Routes                     | 0        | 0         | 0      |
 |----------------------------------+----------+-----------+--------|
 | Graceful Restart                 | 0        | 0         | 0      |
 |------------------------------------------------------------------|

Signed-off-by: Krishnasamy <krishnasamyr@nvidia.com>
2025-04-01 09:32:46 +00:00
Y Bharath 09081f8563 tests: Resource leaks in test_all_protocol_startup
Fix resource leaks in test_all_protocol_startup.py

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-04-01 12:06:15 +05:30
G. Paul Ziemba 1629c05924 bgpd: rfapi: track outstanding rib and import timers, free mem at exit
While here, also make "VPN SAFI clear" test wait for clear result
    (tests/topotests/bgp_rfapi_basic_sanity{,_config2})

    Original RFAPI code relied on the frr timer system to remember
    various allocations that were supposed to be freed at future times
    rather than manage a parallel database. However, if bgpd is terminated
    before the times expire, those pending allocations are marked as
    memory leaks, even though they wouldn't be leaks under normal operation.

    This change adds some hash tables to track these outstanding
    allocations that are associated with pending timers, and uses
    those tables to free the allocations when bgpd exits.

Signed-off-by: G. Paul Ziemba <paulz@labn.net>
2025-03-31 08:45:33 -07:00
Donatas Abraitis f33dcf3fa0
Merge pull request #18544 from donaldsharp/memory_leaks_all_over
Memory leaks all over
2025-03-31 14:50:59 +03:00
zmw12306 1571607c6b babeld: fix incorrect type assignment in parse_request_subtlv
parse_request_subtlv accesses the type using a fixed offset instead of the current position.

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-03-31 00:08:38 -04:00
zmw12306 2b2bebfa92 babeld: Hop Count must not be 0.
According to RFC 8966:
Hop Count: The maximum number of times that this TLV may be forwarded,
plus 1. This MUST NOT be 0.

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-03-31 00:01:53 -04:00
zmw12306 c2e69624ba babeld: Add input validation for update TLV.
1. If the metric is infinite and AE is 0, Plen and Omitted MUST both be 0.
2. Use INFINITY to replace 0xFFFF.
3. Ignore unknown AEs.
4. If the metric field is 0xFFFF, a retraction happens, so it is
acceptable to have no router_id when the metric is 0xFFFF while AE is
not 0.

Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-03-31 00:01:30 -04:00
Donald Sharp 354aee8932 bgpd: Free memory associated with aspath_dup
Fix this:

==3890443== 92 (48 direct, 44 indirect) bytes in 1 blocks are definitely lost in loss record 68 of 98
==3890443==    at 0x484DA83: calloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==3890443==    by 0x49737B3: qcalloc (memory.c:106)
==3890443==    by 0x3EA63B: aspath_dup (bgp_aspath.c:703)
==3890443==    by 0x2F5438: route_set_aspath_exclude (bgp_routemap.c:2604)
==3890443==    by 0x49BC52A: route_map_apply_ext (routemap.c:2708)
==3890443==    by 0x2C1069: bgp_input_modifier (bgp_route.c:1925)
==3890443==    by 0x2C9F12: bgp_update (bgp_route.c:5205)
==3890443==    by 0x2CF281: bgp_nlri_parse_ip (bgp_route.c:7271)
==3890443==    by 0x2A28C7: bgp_nlri_parse (bgp_packet.c:338)
==3890443==    by 0x2A7F5C: bgp_update_receive (bgp_packet.c:2448)
==3890443==    by 0x2ACCA6: bgp_process_packet (bgp_packet.c:4046)
==3890443==    by 0x49EB77C: event_call (event.c:2019)
==3890443==    by 0x495FAD1: frr_run (libfrr.c:1247)
==3890443==    by 0x208D6D: main (bgp_main.c:557)

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-30 17:54:34 -04:00
Donald Sharp f82682a3f9 zebra: Clean up memory associated with affinity maps
Zebra is using affinity maps but not cleaning up memory on shutdown.
BAD!

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-30 17:54:34 -04:00
Donald Sharp 1f09381f0f isisd: Tie isis into cleaning up affinity maps
Affinity maps are being leaked.  STOP

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-30 17:54:34 -04:00
Donald Sharp 2da251264d lib: Add a affinity_map_terminate() function
This function will clean up memory associated with affinity maps
on shutdown

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-30 17:54:34 -04:00
Donald Sharp fbdce3358e *: Ensure prefix lists are freed on shutdown.
Several daemons were not calling prefix_list_reset
to clean up memory on shutdown.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-30 17:00:38 -04:00
Donald Sharp c9d431d4db bgpd: On shutdown, unlock table when clearing the bgp metaQ
There are some tables not being freed upon shutdown.  This
is happening because the table is locked as dests
are put on the metaQ.  When shutdown was clearing
the MetaQ, it was not unlocking the table.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-30 14:02:16 -04:00
Donald Sharp 06480c0c81 bgpd: When shutting down do not clear self peers
Commit: e0ae285eb8

Modified the fsm state machine to attempt to not
clear routes on a peer that was not established.
The peer must not be the peer self; we never want
to clear the peer self.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-30 14:02:16 -04:00
Christian Hopps 1ef4f19009
Merge pull request #15471 from opensourcerouting/frrreload_logfile
tools: Add option to frr-reload to specify alternate logfile
2025-03-30 05:52:43 -04:00
Donald Sharp 521b58945c pimd: Fix memory leak on shutdown
The gm_join_list has a setup where it attempts to only
create the list upon need and deletes it when the list
is empty.  On interface shutdown, the function to empty
the list was called, but the list was not empty, so the
list was left behind.  Just add a bit of code to really
clean up the list in the shutdown case.

Direct leak of 40 byte(s) in 1 object(s) allocated from:
    0 0x7f84850b83b7 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:77
    1 0x7f8484c391c4 in qcalloc lib/memory.c:106
    2 0x7f8484c1ad36 in list_new lib/linklist.c:49
    3 0x55d982827252 in pim_if_gm_join_add pimd/pim_iface.c:1354
    4 0x55d982852b59 in lib_interface_gmp_address_family_join_group_create pimd/pim_nb_config.c:4499
    5 0x7f8484c6a5d3 in nb_callback_create lib/northbound.c:1512
    6 0x7f8484c6a5d3 in nb_callback_configuration lib/northbound.c:1910
    7 0x7f8484c6bb51 in nb_transaction_process lib/northbound.c:2042
    8 0x7f8484c6c164 in nb_candidate_commit_apply lib/northbound.c:1381
    9 0x7f8484c6c39f in nb_candidate_commit lib/northbound.c:1414
    10 0x7f8484c6cf1c in nb_cli_classic_commit lib/northbound_cli.c:57
    11 0x7f8484c72f67 in nb_cli_apply_changes_internal lib/northbound_cli.c:195
    12 0x7f8484c73a2e in nb_cli_apply_changes lib/northbound_cli.c:251
    13 0x55d9828bd30f in interface_ip_igmp_join_magic pimd/pim_cmd.c:5436
    14 0x55d9828bd30f in interface_ip_igmp_join pimd/pim_cmd_clippy.c:6366
    15 0x7f8484bb5cbd in cmd_execute_command_real lib/command.c:1003
    16 0x7f8484bb5fdc in cmd_execute_command lib/command.c:1062
    17 0x7f8484bb6508 in cmd_execute lib/command.c:1228
    18 0x7f8484cfb6ec in vty_command lib/vty.c:626
    19 0x7f8484cfbc3f in vty_execute lib/vty.c:1389
    20 0x7f8484cff9f0 in vtysh_read lib/vty.c:2408
    21 0x7f8484cec846 in event_call lib/event.c:1984
    22 0x7f8484c1a10a in frr_run lib/libfrr.c:1246
    23 0x55d9828fc765 in main pimd/pim_main.c:166
    24 0x7f848470c249 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-29 11:54:36 -04:00
Y Bharath 094072e948 yang: Corrected pyang errors in frr-zebra.yang
Corrected pyang warnings or errors in frr-zebra.yang

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-29 19:36:58 +05:30
Nathan Bahr 133bb5f614 pimd: Only create and bind the autorp socket when really needed
Previously, the autorp socket would get created and bound if needed
by autorp configuration.
This update limits it further to also require pim-enabled interfaces
in the vrf before the socket is created and bound.
So now the socket will automatically close if there are no
pim-enabled interfaces left, or if autorp is turned off. It will
automatically open if autorp is turned on and there are pim-enabled
interfaces in the vrf.

Signed-off-by: Nathan Bahr <nbahr@atcorp.com>
2025-03-28 16:50:09 +00:00
Donatas Abraitis 285fcb903a
Merge pull request #18532 from y-bharath14/srib-tests-v8
tests: Irrelevant code in lutil.py
2025-03-28 12:38:07 +02:00
Y Bharath f2d988bf71 tests: Irrelevant code in lutil.py
Irrelevant code in lutil.py

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-28 10:52:36 +05:30
Donald Sharp 694fb7f48f eigrpd: Fix possible use after free in nbr deletion
Coverity is complaining about use-after-frees when
clearing eigrp neighbors.  Clean the code
up to not have the problem.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-27 11:28:58 -04:00
Donatas Abraitis 2de45ca1b4
Merge pull request #18520 from y-bharath14/srib-tests-v7
tests: Fix potential issues at send_bsr_packet.py
2025-03-27 15:07:45 +02:00
Donald Sharp 749dc0c966 yang: Limit eigrp to just 1 instance per vrf
Currently EIGRP has built in yang code that expects only
1 ASN used per vrf.  Let's just limit the operator from
putting themselves in a bad position by allowing something like
this:

router eigrp 33
....
!
router eigrp 99
...
!

no router eigrp 99 would crash because of assumptions
made in the yang processing.

Let's just hard code that assumption into the EIGRP yang
at the moment such that it will not allow you to enter
a `router eigrp 99` instance at all.

This is purely a software limitation to prevent the code
from violating its current assumptions.  I do not see
much need to support this at this point in time so I
fixed the problem this way instead of having to possibly
touch a bunch of code.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-27 08:51:05 -04:00
Donatas Abraitis b8dfbbcca9
Merge pull request #18515 from donaldsharp/route_map_show_fix
lib: `show route-map` should not print (null)
2025-03-27 09:13:51 +02:00
Y Bharath 04a73bdf3a tests: Fix potential issues at send_bsr_packet.py
Fix potential issues at send_bsr_packet.py

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-27 09:30:40 +05:30
Donald Sharp d682f42d5b lib: show route-map should not print (null)
This command:
route-map FOOBAR permit 10
 set ipv6 next-hop prefer-global
 set community 5060:12345 additive
!

When you issue a `show route-map ...` command displays this:

route-map: FOOBAR Invoked: 0 (0 milliseconds total) Optimization: enabled Processed Change: false
 permit, sequence 5 Invoked 0 (0 milliseconds total)
  Match clauses:
  Set clauses:
    ipv6 next-hop prefer-global (null)
    community 5060:12345 additive
  Call clause:
  Action:
    Exit routemap

Modify the code so that it no longer displays the NULL when there
is nothing to display.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-26 14:35:13 -04:00
Russ White 565da4d471
Merge pull request #18498 from opensourcerouting/fix/keep_stale_routes_on_clear
bgpd: Retain the routes if we do a clear with N-bit set for Graceful-Restart
2025-03-26 14:02:52 -04:00
Jafar Al-Gharaibeh 75d5312f19
Merge pull request #18508 from donaldsharp/rip_snmp_test_fixup
tests: Modify simple_snmp_test to use frr.conf
2025-03-26 12:38:05 -05:00
Donald Sharp 212a5379b0
Merge pull request #18502 from opensourcerouting/fix/mpls_withdraw_label
bgpd: Set the label for MP_UNREACH_NLRI 0x800000 instead of 0x000000
2025-03-26 11:26:19 -04:00
Donald Sharp e23d2f197c tests: Modify simple_snmp_test to use frr.conf
The simple_snmp_test was not properly testing
the rip snmp code because of weirdness with mgmtd
and non-integrated configs.  Modify the whole
test to use an integrated config and, voila,
ripd is talking snmp again in the test.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-26 11:14:57 -04:00
Donald Sharp f726c135a2
Merge pull request #18500 from y-bharath14/srib-yang-v7
yang: Fixed pyang errors at frr-isisd.yang
2025-03-26 10:51:10 -04:00
Russ White 73bfe788f0
Merge pull request #18506 from donaldsharp/ripng_test_aggregate_address
tests: Add ripng aggregate address testing
2025-03-26 10:31:59 -04:00
Donald Sharp 8ca4376e01
Merge pull request #18503 from gromit1811/bugfix/ospf6_gr_leak
ospf6d: Fix LSA memory leaks related to graceful restart
2025-03-26 10:30:30 -04:00
Donatas Abraitis e69459c714 tests: Use label 0x800000 instead of 0x000000 for BMP tests
Related-to: 94e2aadf71 ("bgpd: Set the label for MP_UNREACH_NLRI 0x800000 instead of 0x000000")

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-26 14:47:44 +02:00
Mark Stapp 38fd34f363
Merge pull request #18482 from donaldsharp/eigrp_typesafe
Eigrp typesafe
2025-03-26 07:54:23 -04:00
Martin Buck b73e3ae69d tests: Fix wait times in test_ospf6_gr_topo1 topotest
Increase wait times to at least the minimum wait time accepted by
topotest.run_and_expect(). Also change poll interval to 1s, no point in
doing this more frequently.

Finally, slightly improve the topology diagram to also include area numbers.

Signed-off-by: Martin Buck <mb-tmp-tvguho.pbz@gromit.dyndns.org>
2025-03-26 10:30:35 +01:00
Martin Buck 0db0e7fbd7 ospf6d: Fix LSA memory leaks related to graceful restart
Fixes leaks reported by ospf6_gr_topo1 topotest.

Signed-off-by: Martin Buck <mb-tmp-tvguho.pbz@gromit.dyndns.org>
2025-03-26 10:30:20 +01:00
Donatas Abraitis 03d0a27418
Merge pull request #18448 from Shbinging/fix_babel_hello_interval
babeld: fix hello packets not sent with configured hello timer
2025-03-26 10:37:58 +02:00
Donatas Abraitis d19854c5ce
Merge pull request #18476 from y-bharath14/srib-tests-v6
tests: Handling potential errors gracefully
2025-03-26 10:34:23 +02:00
Donatas Abraitis 94e2aadf71 bgpd: Set the label for MP_UNREACH_NLRI 0x800000 instead of 0x000000
RFC8277 says:

The procedures in [RFC3107] for withdrawing the binding of a label
or sequence of labels to a prefix are not specified clearly and correctly.

=> How to Explicitly Withdraw the Binding of a Label to a Prefix

Suppose a BGP speaker has announced, on a given BGP session, the
   binding of a given label or sequence of labels to a given prefix.
   Suppose it now wishes to withdraw that binding.  To do so, it may
   send a BGP UPDATE message with an MP_UNREACH_NLRI attribute.  The
   NLRI field of this attribute is encoded as follows:

      0                   1                   2                   3
      0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |    Length     |        Compatibility                          |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                          Prefix                               ~
     ~                                                               |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                       Figure 4: NLRI for Withdrawal

   Upon transmission, the Compatibility field SHOULD be set to 0x800000.
   Upon reception, the value of the Compatibility field MUST be ignored.

[RFC3107] also made it possible to withdraw a binding without
   specifying the label explicitly, by setting the Compatibility field
   to 0x800000.  However, some implementations set it to 0x000000.  In
   order to ensure backwards compatibility, it is RECOMMENDED by this
   document that the Compatibility field be set to 0x800000, but it is
   REQUIRED that it be ignored upon reception.

In FRR case where a single label is used per-prefix, we should send 0x800000,
and not 0x000000.

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-26 10:30:52 +02:00
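The transmit-side encoding this commit describes can be sketched as follows (an illustrative Python model of the RFC 8277 withdrawal NLRI, not FRR's C implementation; the function name is hypothetical):

```python
def encode_withdraw_nlri(prefix: bytes, prefix_bits: int) -> bytes:
    """Encode one NLRI entry withdrawing a labeled prefix (RFC 8277,
    Figure 4).  The Length field counts the bits of the 3-byte
    Compatibility field plus the prefix bits; on transmission the
    Compatibility field SHOULD be 0x800000 (and MUST be ignored on
    reception)."""
    COMPATIBILITY = b"\x80\x00\x00"          # 0x800000, not 0x000000
    length_bits = 24 + prefix_bits           # 24 compatibility bits + prefix
    return bytes([length_bits]) + COMPATIBILITY + prefix[: (prefix_bits + 7) // 8]
```

Where a single label is bound per prefix, as in FRR, the 24 bits preceding the prefix on withdrawal are simply this fixed Compatibility value.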
Y Bharath 99b617954e yang: Fixed pyang errors at frr-isisd.yang
Fixed pyang errors at frr-isisd.yang

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-26 12:46:08 +05:30
Donatas Abraitis 42b9d985cc bgpd: Remove unused defines from bgp_label.h
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-26 08:50:06 +02:00
Donald Sharp 0a275ba161
Merge pull request #18496 from mjstapp/fix_bgp_clearing_sa
bgpd: fix SA warning in bgp clearing code
2025-03-25 18:00:02 -04:00
Donald Sharp 0c7cd73a7b tests: Add ripng aggregate address testing
Looking at gcov and noticed that ripngd does not
test any aggregate address addition/deletion
to ensure that it works.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-25 17:35:47 -04:00
Donatas Abraitis a4f61b78dd tests: Check if routes are marked as stale and retained with N-bit for GR
Related-to: b7c657d4e0 ("bgpd: Retain the routes if we do a clear with N-bit set for Graceful-Restart")

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-25 17:40:00 +02:00
Donatas Abraitis b7c657d4e0 bgpd: Retain the routes if we do a clear with N-bit set for Graceful-Restart
On receiving side we already did the job correctly, but the peer which initiates
the clear does not retain the other's routes. This commit fixes that.

Fixes: 20170775da ("bgpd: Activate Graceful-Restart when receiving CEASE/HOLDTIME notifications")

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-25 17:20:56 +02:00
Donald Sharp 83a92c926e bgpd: Delay processing MetaQ in some events
If the number of peers that are being handled on
the peer connection fifo is greater than 10, that
means we have some network event going on.  Let's
allow the packet processing to continue instead
of running the metaQ.  This has advantages because
everything else in BGP is only run after the metaQ
is run.  This includes best path processing,
installation of the route into zebra as well as
telling our peers about this change.  Why does
this matter?  It matters because, if we are receiving
the same route multiple times, we run best path processing
far fewer times, and consequently we also send the
route update out and install the route far fewer
times as well.

Prior to this patch, with 512 peers and 5k routes.
CPU time for bgpd was 3:10, zebra was 3:28.  After
the patch CPU time for bgpd was 0:55 and zebra was
0:25.

Here are the prior `show event cpu`:
Event statistics for bgpd:

Showing statistics for pthread default
--------------------------------------
                               CPU (user+system): Real (wall-clock):
Active   Runtime(ms)   Invoked Avg uSec Max uSecs Avg uSec Max uSecs  CPU_Warn Wall_Warn Starv_Warn   Type  Event
    0         20.749     33144        0       395        1       396         0         0          0    T    (bgp_generate_updgrp_packets)
    0       9936.199      1818     5465     43588     5466     43589         0         0          0     E   bgp_handle_route_announcements_to_zebra
    0          0.220        84        2        20        3        20         0         0          0    T    update_subgroup_merge_check_thread_cb
    0          0.058         2       29        43       29        43         0         0          0     E   zclient_connect
    0      17297.733       466    37119     67428    37124     67429         0         0          0   W     zclient_flush_data
    1          0.134         7       19        40       20        42         0         0          0  R      vtysh_accept
    0        151.396      1067      141      1181      142      1189         0         0          0  R      vtysh_read
    0          0.297      1030        0        14        0        14         0         0          0    T    (bgp_routeadv_timer)
    0          0.001         1        1         1        2         2         0         0          0    T    bgp_sync_label_manager
    2          9.374       544       17       261       17       262         0         0          0  R      bgp_accept
    0          0.001         1        1         1        2         2         0         0          0    T    bgp_startup_timer_expire
    0          0.012         1       12        12       13        13         0         0          0     E   frr_config_read_in
    0          0.308         1      308       308      309       309         0         0          0    T    subgroup_coalesce_timer
    0          4.027       105       38        77       39        78         0         0          0    T    (bgp_start_timer)
    0     112206.442      1818    61719     84726    61727     84736         0         0          0    TE   work_queue_run
    0          0.345         1      345       345      346       346         0         0          0    T    bgp_config_finish
    0          0.710       620        1         6        1         9         0         0          0   W     bgp_connect_check
    2         39.420      8283        4       110        5       111         0         0          0  R      zclient_read
    0          0.052         1       52        52      578       578         0         0          0    T    bgp_start_label_manager
    0          0.452        87        5        90        5        90         0         0          0    T    bgp_announce_route_timer_expired
    0        185.837      3088       60       537       92     21705         0         0          0     E   bgp_event
    0      48719.671      4346    11210     78292    11215     78317         0         0          0     E   bgp_process_packet

Showing statistics for pthread BGP I/O thread
---------------------------------------------
                               CPU (user+system): Real (wall-clock):
Active   Runtime(ms)   Invoked Avg uSec Max uSecs Avg uSec Max uSecs  CPU_Warn Wall_Warn Starv_Warn   Type  Event
    0        321.915     28597       11        86       11       265         0         0          0   W     bgp_process_writes
  515        115.586     26954        4       121        4       128         0         0          0  R      bgp_process_reads

Event statistics for zebra:

Showing statistics for pthread default
--------------------------------------
                               CPU (user+system): Real (wall-clock):
Active   Runtime(ms)   Invoked Avg uSec Max uSecs Avg uSec Max uSecs  CPU_Warn Wall_Warn Starv_Warn   Type  Event
    0          0.109         2       54        62       55        63         0         0          0    T    timer_walk_start
    1          0.550        11       50       100       50       100         0         0          0  R      vtysh_accept
    0     112848.163      4441    25410    405489    25413    410127         0         0          0     E   zserv_process_messages
    0          0.007         1        7         7        7         7         0         0          0     E   frr_config_read_in
    0          0.005         1        5         5        5         5         0         0          0    T    rib_sweep_route
    1        573.589      4789      119      1567      120      1568         0         0          0    T    wheel_timer_thread
  347         30.848        97      318      1367      318      1366         0         0          0    T    zebra_nhg_timer
    0          0.005         1        5         5        6         6         0         0          0    T    zebra_evpn_mh_startup_delay_exp_cb
    0          5.404       521       10        38       10        70         0         0          0    T    timer_walk_continue
    1          1.669         9      185       219      186       219         0         0          0  R      zserv_accept
    1          0.174        18        9        53       10        53         0         0          0  R      msg_conn_read
    0          3.028       520        5        47        6        47         0         0          0    T    if_zebra_speed_update
    0          0.324       274        1         5        1         6         0         0          0   W     msg_conn_write
    1         24.661      2124       11       359       12       359         0         0          0  R      kernel_read
    0      73683.333      2964    24859    143223    24861    143239         0         0          0    TE   work_queue_run
    1         46.649      6789        6       424        7       424         0         0          0  R      rtadv_read
    0         52.661        85      619      2087      620      2088         0         0          0  R      vtysh_read
    0         42.660        18     2370     21694     2373     21695         0         0          0     E   msg_conn_proc_msgs
    0          0.034         1       34        34       35        35         0         0          0     E   msg_client_connect_timer
    0       2786.938      2300     1211     29456     1219     29555         0         0          0     E   rib_process_dplane_results

Showing statistics for pthread Zebra dplane thread
--------------------------------------------------
                               CPU (user+system): Real (wall-clock):
Active   Runtime(ms)   Invoked Avg uSec Max uSecs Avg uSec Max uSecs  CPU_Warn Wall_Warn Starv_Warn   Type  Event
    0       4875.670    200371       24       770       24       776         0         0          0     E   dplane_thread_loop
    0          0.059         1       59        59       76        76         0         0          0     E   dplane_incoming_request
    1          9.640       722       13      4510       15      5343         0         0          0  R      dplane_incoming_read

Here are the post `show event cpu` results:

Event statistics for bgpd:

Showing statistics for pthread default
--------------------------------------
                               CPU (user+system): Real (wall-clock):
Active   Runtime(ms)   Invoked Avg uSec Max uSecs Avg uSec Max uSecs  CPU_Warn Wall_Warn Starv_Warn   Type  Event
    0      21297.497      3565     5974     57912     5981     57913         0         0          0     E   bgp_process_packet
    0        149.742      1068      140      1109      140      1110         0         0          0  R      vtysh_read
    0          0.013         1       13        13       14        14         0         0          0     E   frr_config_read_in
    0          0.459        86        5       104        5       105         0         0          0    T    bgp_announce_route_timer_expired
    0          0.139        81        1        20        2        21         0         0          0    T    update_subgroup_merge_check_thread_cb
    0        405.889    291687        1       179        1       450         0         0          0    T    (bgp_generate_updgrp_packets)
    0          0.682       618        1         6        1         9         0         0          0   W     bgp_connect_check
    0          3.888       103       37        81       38        82         0         0          0    T    (bgp_start_timer)
    0          0.074         1       74        74      458       458         0         0          0    T    bgp_start_label_manager
    0          0.000         1        0         0        1         1         0         0          0    T    bgp_sync_label_manager
    0          0.121         3       40        54      100       141         0         0          0     E   bgp_process_conn_error
    0          0.060         2       30        49       30        50         0         0          0     E   zclient_connect
    0          0.354         1      354       354      355       355         0         0          0    T    bgp_config_finish
    0          0.283         1      283       283      284       284         0         0          0    T    subgroup_coalesce_timer
    0      29365.962      1805    16269     99445    16273     99454         0         0          0    TE   work_queue_run
    0        185.532      3097       59       497       94     26107         0         0          0     E   bgp_event
    1          0.290         8       36       151       37       158         0         0          0  R      vtysh_accept
    2          9.462       548       17       320       17       322         0         0          0  R      bgp_accept
    2         40.219      8283        4       128        5       128         0         0          0  R      zclient_read
    0          0.322      1031        0         4        0         5         0         0          0    T    (bgp_routeadv_timer)
    0        356.812       637      560      3007      560      3007         0         0          0     E   bgp_handle_route_announcements_to_zebra

Showing statistics for pthread BGP I/O thread
---------------------------------------------
                               CPU (user+system): Real (wall-clock):
Active   Runtime(ms)   Invoked Avg uSec Max uSecs Avg uSec Max uSecs  CPU_Warn Wall_Warn Starv_Warn   Type  Event
  515         62.965     14335        4       103        5       181         0         0          0  R      bgp_process_reads
    0       1986.041    219813        9       213        9       315         0         0          0   W     bgp_process_writes

Event statistics for zebra:

Showing statistics for pthread default
--------------------------------------
                               CPU (user+system): Real (wall-clock):
Active   Runtime(ms)   Invoked Avg uSec Max uSecs Avg uSec Max uSecs  CPU_Warn Wall_Warn Starv_Warn   Type  Event
    0          0.006         1        6         6        7         7         0         0          0     E   frr_config_read_in
    0       3673.365      2044     1797    259281     1800    261342         0         0          0     E   zserv_process_messages
    1        651.846      8041       81      1090       82      1233         0         0          0    T    wheel_timer_thread
    0         38.184        18     2121     21345     2122     21346         0         0          0     E   msg_conn_proc_msgs
    1          0.651        12       54       112       55       112         0         0          0  R      vtysh_accept
    0          0.102         2       51        55       51        56         0         0          0    T    timer_walk_start
    0        202.721      1577      128     29172      141     29226         0         0          0     E   rib_process_dplane_results
    1         41.650      6645        6       140        6       140         0         0          0  R      rtadv_read
    1         22.518      1969       11       106       12       154         0         0          0  R      kernel_read
    0          4.265        48       88      1465       89      1466         0         0          0  R      vtysh_read
    0       6099.851       650     9384     28313     9390     28314         0         0          0    TE   work_queue_run
    0          5.104       521        9        30       10        31         0         0          0    T    timer_walk_continue
    0          3.078       520        5        53        6        55         0         0          0    T    if_zebra_speed_update
    0          0.005         1        5         5        5         5         0         0          0    T    rib_sweep_route
    0          0.034         1       34        34       35        35         0         0          0     E   msg_client_connect_timer
    1          1.641         9      182       214      183       215         0         0          0  R      zserv_accept
    0          0.358       274        1         6        2         6         0         0          0   W     msg_conn_write
    1          0.159        18        8        54        9        54         0         0          0  R      msg_conn_read

Showing statistics for pthread Zebra dplane thread
--------------------------------------------------
                               CPU (user+system): Real (wall-clock):
Active   Runtime(ms)   Invoked Avg uSec Max uSecs Avg uSec Max uSecs  CPU_Warn Wall_Warn Starv_Warn   Type  Event
    0        301.404      7280       41      1878       41      1878         0         0          0     E   dplane_thread_loop
    0          0.048         1       48        48       49        49         0         0          0     E   dplane_incoming_request
    1          9.558       727       13      4659       14      5420         0         0          0  R      dplane_incoming_read

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-25 10:47:39 -04:00
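The scheduling decision this commit describes can be modeled as a minimal sketch (illustrative Python; the threshold of 10 comes from the commit text, but the function name and its use are assumptions, not bgpd's exact code):

```python
FIFO_THRESHOLD = 10  # per the commit: >10 queued connections means a network event

def should_run_metaq(conn_fifo_len: int) -> bool:
    """During a network event (many peer connections queued for packet
    processing), keep draining packets first; run the MetaQ -- and with
    it best-path processing, zebra installation, and peer updates --
    only once the connection FIFO has drained back down."""
    return conn_fifo_len <= FIFO_THRESHOLD
```

Deferring the MetaQ while input is still arriving is what collapses repeated updates for the same route into far fewer best-path runs.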
Mark Stapp cade67dce6
Merge pull request #18494 from opensourcerouting/fix/duplicate_prefix_list
lib: Return duplicate prefix-list entry test
2025-03-25 10:43:24 -04:00
Russ White 053aeaf58b
Merge pull request #18474 from zmw12306/Hop-Count
babeld: Hop Count must not be 0.
2025-03-25 10:38:23 -04:00
Russ White 694f67c48a
Merge pull request #18369 from huchaogithup/master-dev-pr1
isisd: Fix the issue where redistributed routes do not change when th…
2025-03-25 10:18:13 -04:00
Russ White ccfdab3ddb
Merge pull request #18311 from Z-Yivon/fix-isis-hello-timer-bug
isisd:IS-IS hello packets not sent with configured hello timer
2025-03-25 10:15:42 -04:00
Philippe Guibert 171231686d topotests: remove useless frr commands of bgp_srv6l3vpn_to_bgp*
Many useless commands still persist in the
bgp_srv6l3vpn_to_bgp_vrf tests. Remove them.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-25 14:37:29 +01:00
Mark Stapp 39bb12299c bgpd: fix SA warnings in bgp clearing code
Fix a possible use-after-free in the recent bgp batch
clearing code, CID 1639091.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-03-25 09:32:14 -04:00
Donald Sharp 937a9fb3e9 zebra: Limit reading packets when MetaQ is full
Currently Zebra is just reading packets off the zapi
wire and stacking them up for processing in zebra
in the future.  When there is significant churn
in the network, the memory usage of zebra can grow
without bound because the MetaQ is not size-constrained.
This ends up showing in the number of nexthops in the
system.  Reducing the number of packets serviced, so
that the MetaQ is limited to the packets it can process,
alleviates this problem.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-25 09:10:46 -04:00
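The bounding described above can be sketched as computing a read quota from the MetaQ's remaining headroom (illustrative Python; the names and parameters are a paraphrase of the commit, not zebra's actual code):

```python
def packets_to_read(metaq_len: int, metaq_limit: int, batch_max: int) -> int:
    """Bound the number of zapi packets read off the wire by the
    headroom left in the MetaQ, so that queued-but-unprocessed work
    cannot grow without bound during network churn."""
    headroom = metaq_limit - metaq_len
    return max(0, min(batch_max, headroom))
```

When the MetaQ is full, reads stop entirely and back-pressure propagates to the zapi clients instead of into zebra's memory.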
Donald Sharp 12bf042c68 bgpd: Modify bgp to handle packet events in a FIFO
Current behavior of BGP is to have an event per connection.  Given
that, on startup of BGP with a high number of neighbors you end
up with 2 * (number of peers) events being processed.  Additionally,
once BGP has selected the connection this still comes down
to 512 events.  This number of events swamps the event system
and in addition delays any other work from being done in BGP at
all, because the 512 events always take precedence
over everything else.  The other main events are the handling
of the metaQ (1 event), update group events (1 per update group)
and the zebra batching event.  These are being swamped.

Modify the BGP code to have a FIFO of connections.  As new data
comes in to read, place the connection on the end of the FIFO.
Have the bgp_process_packet handle up to 100 packets spread
across the individual peers where each peer/connection is limited
to the original quanta.  During testing I noticed that withdrawal
events at very very large scale are taking up to 40 seconds to process
so I added a check for yielding to further limit the number of packets
being processed.

This change also allows BGP to be interactive again on scale
setups on initial convergence.  Prior to this change any vtysh
command entered would be delayed by 10's of seconds in my setup
while BGP was doing other work.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-25 09:10:46 -04:00
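The FIFO-with-quanta scheme described above can be sketched as follows (illustrative Python model; the 100-packet budget comes from the commit text, while PEER_QUANTA and the data layout are assumptions, not bgpd's actual structures):

```python
from collections import deque

PACKET_BUDGET = 100   # total packets handled per scheduling pass
PEER_QUANTA = 10      # per-connection quantum (illustrative value)

def process_packet_fifo(fifo: deque, handle) -> None:
    """Drain up to PACKET_BUDGET packets, round-robin across queued
    connections.  Each connection is served at most PEER_QUANTA
    packets, then re-queued at the tail if input is still pending,
    so no single peer can monopolize the event loop."""
    budget = PACKET_BUDGET
    while fifo and budget > 0:
        conn = fifo.popleft()
        for _ in range(min(PEER_QUANTA, budget)):
            if not conn["input"]:
                break
            handle(conn, conn["input"].popleft())
            budget -= 1
        if conn["input"]:             # still has data: back of the line
            fifo.append(conn)
```

With one FIFO event instead of one event per connection, the metaQ, update-group, and zebra-batching events get scheduled between passes instead of being starved.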
Donald Sharp f3790640d3 tests: Expand hold timer to 60 seconds for high_ecmp
The hold timer is 5/20.  At load with a very very
large number of routes, the tests are experiencing
some issues with this.  Let's just give ourselves
some headroom for the receiving of packets.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-25 09:10:14 -04:00
Donald Sharp ab6d15b42b
Merge pull request #18471 from zmw12306/NH-TLV
babeld: add check incorrect AE value for NH TLV.
2025-03-25 09:03:16 -04:00
Shbinging 3c7a635722 babeld: fix hello packets not sent with configured hello timer
Same issue occurring as previously addressed in https://github.com/FRRouting/frr/pull/9092. The root cause is: "Sending a Hello message before restarting the hello timer to avoid session flaps in case of larger hello interval configurations."

Signed-off-by: Shbinging <bingshui@smail.nju.edu.cn>
2025-03-25 20:14:34 +08:00
Donatas Abraitis 8384d41144 lib: Return duplicate prefix-list entry test
If we do e.g.:

ip prefix-list PL_LoopbackV4 permit 10.1.0.32/32
ip prefix-list PL_LoopbackV4 permit 10.1.0.32/32
ip prefix-list PL_LoopbackV4 permit 10.1.0.32/32

We end up, having duplicate records with a different sequence number only.

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-25 13:54:24 +02:00
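The duplicate condition can be modeled as a check that ignores only the sequence number (illustrative Python sketch, not lib's C implementation; a real prefix-list entry also carries ge/le and other fields, omitted here):

```python
def is_duplicate_ple(entries, prefix: str, action: str) -> bool:
    """A new prefix-list entry is a duplicate when an existing entry
    matches on everything except the sequence number, so re-entering
    the same line should not create a new record."""
    return any(e["prefix"] == prefix and e["action"] == action
               for e in entries)
```

With this test in place, repeating `ip prefix-list PL_LoopbackV4 permit 10.1.0.32/32` leaves a single entry instead of three records differing only in sequence number.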
Donatas Abraitis 45af7ea217
Merge pull request #18483 from donaldsharp/holdtime_mistake
bgpd: Fix holdtime not working properly when busy
2025-03-25 09:38:09 +02:00
Donatas Abraitis 0a405f477d
Merge pull request #18484 from mjstapp/fix_evpn_rt_cli
bgpd: fix handling of configured route-targets for l2vni, l3vni
2025-03-25 09:37:08 +02:00
Mark Stapp 2496bfecfe bgpd: fix handling of configured RTs for l2vni, l3vni
Test for existing explicit config as part of validation of
route-target configuration: allow explicit config of generic/
default AS+VNI, for example, instead of rejecting it.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-03-24 16:53:32 -04:00
Russ White 7afe25744b
Merge pull request #18447 from donaldsharp/bgp_clear_batch
Bgp clear batch
2025-03-24 16:13:49 -04:00
Philippe Guibert 56c9f1c566 bgpd: fix dereference of null pointer in bgp_nht
Assuming attr is null, a dereference can happen in the function
make_prefix(). Add the protection over attr before accessing the
variable.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 20:59:18 +01:00
zmw12306 eea39974ad babeld: add check incorrect AE value for NH TLV.
According to RFC 8966, for NH TLV, AE SHOULD be 1 (IPv4) or 3 (link-local IPv6), and MUST NOT be 0.
Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-03-24 15:55:08 -04:00
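The validation can be sketched as follows (illustrative Python; babeld itself is C, and the constant and function names here are assumptions):

```python
AE_IPV4 = 1             # RFC 8966 address encodings
AE_LINK_LOCAL_IPV6 = 3

def valid_nh_ae(ae: int) -> bool:
    """RFC 8966: for a Next Hop TLV, AE MUST NOT be 0 and SHOULD be
    1 (IPv4) or 3 (link-local IPv6); reject anything else."""
    return ae in (AE_IPV4, AE_LINK_LOCAL_IPV6)
```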
Donald Sharp 9a26a56c51 bgpd: Fix holdtime not working properly when busy
Commit:  cc9f21da22

Modified the bgp_fsm code to disallow the extension
of the hold time when the system is under extremely
heavy load.  This was an attempt to remove the return
code, but it was too aggressive and messed up this bit
of code.

Put the behavior back that was introduced in:
d0874d195d

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-24 15:55:01 -04:00
zmw12306 3b5d421207 babeld: Hop Count must not be 0.
According to RFC 8966:
Hop Count The maximum number of times that this TLV may be forwarded, plus 1. This MUST NOT be 0.
Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-03-24 15:32:18 -04:00
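The RFC 8966 rule quoted above reduces to a one-line range check (illustrative Python; the function name is hypothetical):

```python
def valid_hop_count(hop_count: int) -> bool:
    """RFC 8966: Hop Count is the maximum number of times the TLV may
    be forwarded, plus 1, carried in one octet -- so it MUST NOT be 0
    and cannot exceed 255."""
    return 1 <= hop_count <= 255
```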
Philippe Guibert 121c2ff1b0 topotests: use json exact test for bgp_srv6l3vpn_to_bgp_vrf3
Add more control on the expected outputs, by using an exact json
comparison.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 17:39:20 +01:00
Donald Sharp a1d3b2b04e eigrpd: Remove unneeded function declaration
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-24 11:36:13 -04:00
Donald Sharp ae74af996f zebra: On shutdown call appropriate finish functions
The vrf_terminate and route_map_finish functions are not being called and as such
memory was being dropped on shutdown.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-24 11:36:13 -04:00
Donald Sharp efb2aeae7b eigrpd: Cleanup memory issues on shutdown
a) EIGRP was having issues with the prefix created as part
of the topology destination.  Make this just a part of the
topology data structure instead of allocating it.

b) EIGRP was not freeing up any memory associated with
the network table.  Free it.

c) EIGRP was confusing zebra shutdown as part of the deletion
of the last eigrp data structure.  This was inappropriate;
it should be part of the `I'm just shutting down` path.

d) The QOBJ was not being properly freed, free it.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-24 11:36:13 -04:00
Donald Sharp 95e7f56eec eigrpd: Convert eigrp list to a typesafe hash
Convert the eigrp_om->eigrp list to a typesafe hash.
Allow for quicker lookup and all that jazz.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-24 11:36:13 -04:00
Donald Sharp cb34559d7f eigrpd: Convert the eiflist to a typesafe hash
The eigrp->eiflist is a linked list and should just
be a hash instead.  The full conversion to a hash
like functionality is going to wait until the connected
eigrp data structure is created.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-24 11:36:13 -04:00
Donald Sharp 8249c046d7 eigrpd: Convert the nbrs list to a typesafe hash
Convert the ei->nbrs list to a typesafe hash to
facilitate quick lookups.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-24 11:36:13 -04:00
Donald Sharp 5f9e26069e lib: expose comparison function to allow a typesafe conversion
The interface hash comparison function is needed in eigrpd.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-24 11:00:45 -04:00
Donald Sharp d736350986
Merge pull request #18473 from zmw12306/Request-TLV
babeld: Missing Validation for AE=0 and Plen!=0
2025-03-24 10:29:36 -04:00
Y Bharath 0885d2232c tests: Handling potential errors gracefully
Handling potential errors gracefully at exa-receive.py

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-24 17:54:20 +05:30
Donatas Abraitis 927e2a9c81
Merge pull request #18467 from cscarpitta/fix/fix_srv6_static_sids_crash_2
staticd: Fix a crash that occurs when modifying an SRv6 SID
2025-03-24 14:16:37 +02:00
Donatas Abraitis 073a670ed7
Merge pull request #18469 from donaldsharp/fix_update_groups
tests: high_ecmp creates 2 update groups
2025-03-24 14:08:19 +02:00
Donatas Abraitis 05544ff5c4
Merge pull request #18475 from LabNConsulting/chopps/pylint
tests: add another directory to search path for pylint
2025-03-24 14:03:17 +02:00
Philippe Guibert 03b57b45c6 topotests: fix invalidate exported vpn prefixes on srv6l3vpn vrf3 setup
When srv6 is disabled due to misconfiguration, exported VPN prefixes
are invalidated, except for the ones that have their nexthop modified
with the 'nexthop vpn export' command. The previous commit also
invalidates those vpn prefixes.

Apply the changes to the test by not considering some prefixes as
selected. Enforce the expected route count.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 09:17:01 +01:00
Philippe Guibert 99acebcdc9 bgpd: fix check validity of a VPN SRv6 route with modified nexthop
When exporting a VPN SRv6 route, the path may not be considered valid if
the nexthop is not valid. This is the case when the 'nexthop vpn export'
command is used. The below example illustrates that the VPN path to
2001:1::/64 is not selected, as the expected nexthop to find in vrf10 is
the one configured:

> # show running-config
> router bgp 1 vrf vrf10
>  address-family ipv6 unicast
>   nexthop vpn export 2001::1

> # show bgp ipv6 vpn
> [..]
> Route Distinguisher: 1:10
>      2001:1::/64      2001::1@4                0             0 65001 i
>     UN=2001::1 EC{99:99} label=16 sid=2001:db8:1:1:: sid_structure=[40,24,16,0] type=bgp, subtype=5

The analysis indicates that the 2001::1 nexthop is considered.

> 2025/03/20 21:47:53.751853 BGP: [RD1WY-YE9EC] leak_update: entry: leak-to=VRF default, p=2001:1::/64, type=10, sub_type=0
> 2025/03/20 21:47:53.751855 BGP: [VWNP2-DNMFV] Found existing bnc 2001::1/128(0)(VRF vrf10) flags 0x82 ifindex 0 #paths 2 peer 0x0, resolved prefix UNK prefix
> 2025/03/20 21:47:53.751856 BGP: [VWC2R-4REXZ] leak_update_nexthop_valid: 2001:1::/64 nexthop is not valid (in VRF vrf10)
> 2025/03/20 21:47:53.751857 BGP: [HX87B-ZXWX9] leak_update: ->VRF default: 2001:1::/64: Found route, no change

Actually, to check the nexthop validity, only the source path in the VRF
has the correct nexthop. Fix this by reusing the source path information
instead of the current one.

> 2025/03/20 22:43:51.703521 BGP: [RD1WY-YE9EC] leak_update: entry: leak-to=VRF default, p=2001:1::/64, type=10, sub_type=0
> 2025/03/20 22:43:51.703523 BGP: [VWNP2-DNMFV] Found existing bnc fe80::b812:37ff:fe13:d441/128(0)(VRF vrf10) flags 0x87 ifindex 0 #paths 2 peer 0x0, resolved prefix fe80::/64
> 2025/03/20 22:43:51.703525 BGP: [VWC2R-4REXZ] leak_update_nexthop_valid: 2001:1::/64 nexthop is valid (in VRF vrf10)
> 2025/03/20 22:43:51.703526 BGP: [HX87B-ZXWX9] leak_update: ->VRF default: 2001:1::/64: Found route, no change

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 09:17:01 +01:00
Philippe Guibert 914931ed36 bgpd: fix do not export VPN prefix when no SID available on the VRF
When detaching the locator from the main BGP instance, the used SIDs
and locators are removed from the srv6 per-afi or per-vrf contexts.
Under those conditions, it is not possible to export new
VPN updates. Invalidate the nexthop used for leaking.

Restrict this control to exported VPN prefixes, not to unicast
imported prefixes.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 09:17:01 +01:00
Philippe Guibert 7b55ca7f1c bgpd: fix do not use srv6 SID for NHT when SID is ours
The resulting VPN prefix of a BGP route from a L3VPN in an srv6 setup
is not advertised to remote devices.

> r1# show bgp ipv6 vpn
> BGP table version is 2, local router ID is 1.1.1.1, vrf id 0
> Default local pref 100, local AS 65500
> Status codes:  s suppressed, d damped, h history, u unsorted, * valid, > best, = multipath,
>                i internal, r RIB-failure, S Stale, R Removed
> Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
> Origin codes:  i - IGP, e - EGP, ? - incomplete
> RPKI validation codes: V valid, I invalid, N Not found
>
>      Network          Next Hop            Metric LocPrf Weight Path
> Route Distinguisher: 1:10
>      2011:1::/64      2001:1::2@6<             0    100      0 i
>     UN=2001:1::2 EC{99:99} label=4096 sid=2001:db8:1:1:: sid_structure=[40,24,8,0] type=bgp, subtype=5

What happens is that the SID of this BGP update is used as the nexthop.
Consequently, the prefix is not valid because the nexthop is unreachable:
obviously the locator prefix is not reachable in that L3VRF, and the
real nexthop 2001:1::2 should be used.

> r1# show bgp vrf vrf10 nexthop  detail
> Current BGP nexthop cache:
>  2001:db8:1:1:100:: invalid, #paths 1
>   Last update: Fri Mar 14 21:18:59 2025
>   Paths:
>     2/3 2011:1::/64 RD 1:10 VRF default flags 0x4000

Fix this by considering the SID of a given BGP update, only if the SID
is not ours.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 09:17:01 +01:00
Jonathan Voss 11ac6ab650 topotests: extend bgp_srv6_l3vpn_to_bgp_vrf4 test with bgp peers
This test ensures route redistribution across an srv6 VPN network
is well taken into account.

Signed-off-by: Jonathan Voss <jvoss@onvox.net>
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 09:17:01 +01:00
Philippe Guibert 92ba38bde2 topotests: bgp_srv6l3vpn_to_bgp_vrf, add redistribute BGP update in l3vpn
Add a BGP update in CE1 for redistribution. The expectation is that this
BGP update will be leaked to the L3VPN. Conversely, if the locator is
unset, the L3VPN prefix will be invalidated.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 09:17:01 +01:00
Philippe Guibert 57a03cfc54 topotests: bgp_srv6l3vpn_to_bgp_vrf, change AS values
Use experimental AS values in the test.
Add BGP peering on CEs, and use the default-originate functionality on
each PE facing CPEs.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 09:17:01 +01:00
Philippe Guibert 6aeb697e38 topotests: remove useless zebra aspath control in bgp srv6 test
The aspath value does not need to be checked. Unset the bgp capability
to send aspath information to zebra.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 09:17:01 +01:00
Philippe Guibert 1160c39093 topotests: move bgp_srv6l3vpn_to_bgp_vrf to unified configuration
Use the unified configuration for bgp_srv6l3vpn_to_bgp_vrf test.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-24 09:17:01 +01:00
Christian Hopps 3d4dee5f0c tests: add another directory to search path for pylint
Some IDEs (e.g., emacs+lsp) run pylint from the root directory, so
we need to add `tests/topotests` so that `lib` and `munet` are found
by pylint when used in imports.

Signed-off-by: Christian Hopps <chopps@labn.net>
2025-03-24 05:10:36 +00:00
zmw12306 7c48a717f0 babeld: Missing Validation for AE=0 and Plen!=0
A Request TLV with AE set to 0 and Plen not set to 0 MUST be ignored.
Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-03-23 22:37:59 -04:00
zmw12306 a031da53db babeld: Add next hop initialization
Initialize v4_nh/v6_nh from source address at the beginning of packet parsing
Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-03-23 19:02:14 -04:00
Donald Sharp 5f375cefd5 tests: high_ecmp creates 2 update groups
The high_ecmp test was creating 2 update groups, where
513 of the neighbors are in one and the remainder are in
another.  They should all be in 1 update group.
Modify the test creation such that interfaces r1-eth514
and r2-eth514 have v4 and v6 addresses.

Signed-off-by: Donald Sharp <donaldsharp72@gmail.com>
2025-03-23 17:48:02 -04:00
zmw12306 476cf0e1fc babeld: Add MBZ and Reserved field checking
Signed-off-by: zmw12306 <zmw12306@gmail.com>
2025-03-23 15:33:21 -04:00
Carmine Scarpitta 23403e01a3 tests: Add test case to verify SRv6 SID modify
This commit adds a test case that modifies a SID and verifies that the
RIB is as expected.

Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>
2025-03-23 18:47:35 +01:00
Carmine Scarpitta 6037ea350c staticd: Fix crash that occurs when modifying an SRv6 SID
When the user modifies an SRv6 SID and then removes all SIDs, staticd
crashes:

```
2025/03/23 08:37:22.691860 STATIC: lib/memory.c:74: mt_count_free(): assertion (mt->n_alloc) failed
STATIC: Received signal 6 at 1742715442 (si_addr 0x8200007cf0); aborting...
STATIC: zlog_signal+0x390                  fcc704a844b8     ffffd7450390 /usr/lib/frr/libfrr.so.0 (mapped at 0xfcc704800000)
STATIC: core_handler+0x1f8                 fcc704b79990     ffffd7450590 /usr/lib/frr/libfrr.so.0 (mapped at 0xfcc704800000)
STATIC:     ---- signal ----
STATIC: ?                                  fcc705c008f8     ffffd74507a0 linux-vdso.so.1 (mapped at 0xfcc705c00000)
STATIC: pthread_key_delete+0x1a0           fcc70458f1f0     ffffd7451a00 /lib/aarch64-linux-gnu/libc.so.6 (mapped at 0xfcc704510000)
STATIC: raise+0x1c                         fcc70454a67c     ffffd7451ad0 /lib/aarch64-linux-gnu/libc.so.6 (mapped at 0xfcc704510000)
STATIC: abort+0xe4                         fcc704537130     ffffd7451af0 /lib/aarch64-linux-gnu/libc.so.6 (mapped at 0xfcc704510000)
STATIC: _zlog_assert_failed+0x3c4          fcc704c407c8     ffffd7451c40 /usr/lib/frr/libfrr.so.0 (mapped at 0xfcc704800000)
STATIC: mt_count_free+0x12c                fcc704a93c74     ffffd7451dc0 /usr/lib/frr/libfrr.so.0 (mapped at 0xfcc704800000)
STATIC: qfree+0x28                         fcc704a93fa0     ffffd7451e70 /usr/lib/frr/libfrr.so.0 (mapped at 0xfcc704800000)
STATIC: static_srv6_sid_free+0x1c          adc1df8fa544     ffffd7451e90 /usr/lib/frr/staticd (mapped at 0xadc1df8a0000)
STATIC: delete_static_srv6_sid+0x14        adc1df8faafc     ffffd7451eb0 /usr/lib/frr/staticd (mapped at 0xadc1df8a0000)
STATIC: list_delete_all_node+0x104         fcc704a60eec     ffffd7451ed0 /usr/lib/frr/libfrr.so.0 (mapped at 0xfcc704800000)
STATIC: list_delete+0x8c                   fcc704a61054     ffffd7451f00 /usr/lib/frr/libfrr.so.0 (mapped at 0xfcc704800000)
STATIC: static_srv6_cleanup+0x20           adc1df8fabdc     ffffd7451f20 /usr/lib/frr/staticd (mapped at 0xadc1df8a0000)
STATIC: sigint+0x40                        adc1df8be544     ffffd7451f30 /usr/lib/frr/staticd (mapped at 0xadc1df8a0000)
STATIC: frr_sigevent_process+0x148         fcc704b79460     ffffd7451f40 /usr/lib/frr/libfrr.so.0 (mapped at 0xfcc704800000)
STATIC: event_fetch+0x1c4                  fcc704bc0834     ffffd7451f60 /usr/lib/frr/libfrr.so.0 (mapped at 0xfcc704800000)
STATIC: frr_run+0x650                      fcc704a5d230     ffffd7452080 /usr/lib/frr/libfrr.so.0 (mapped at 0xfcc704800000)
STATIC: main+0x1d0                         adc1df8be75c     ffffd7452270 /usr/lib/frr/staticd (mapped at 0xadc1df8a0000)
STATIC: __libc_init_first+0x7c             fcc7045373fc     ffffd74522b0 /lib/aarch64-linux-gnu/libc.so.6 (mapped at 0xfcc704510000)
STATIC: __libc_start_main+0x98             fcc7045374cc     ffffd74523c0 /lib/aarch64-linux-gnu/libc.so.6 (mapped at 0xfcc704510000)
STATIC: _start+0x30                        adc1df8be0f0     ffffd7452420 /usr/lib/frr/staticd (mapped at 0xadc1df8a0000)
```

Tracking this down, the crash occurs because every time we modify a
SID, staticd executes some callbacks to modify the SID and finally it
calls `apply_finish`, which re-adds the SID to the list `srv6_sids`.

This leads to having the same SID multiple times in the `srv6_sids`
list. When we delete all SIDs, staticd attempts to deallocate the same
SID multiple times, which leads to the crash.

This commit fixes the issue by moving the code that adds the SID to the
list from the `apply_finish` callback to the `create` callback.
This ensures that the SID is inserted into the list only once, when it
is created. For all subsequent modifications, the SID is modified but
not added to the list.

Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>
2025-03-23 18:46:45 +01:00
Donatas Abraitis 44c4743e08
Merge pull request #18378 from Tuetuopay/fix-route-map-gateway-ip
bgpd: fix `set evpn gateway-ip ipv[46]` route-map
2025-03-23 12:38:38 +02:00
Donald Sharp efa761d132
Merge pull request #18339 from y-bharath14/srib-tests-v3
tests: Corrected typo at path_attributes.py
2025-03-22 19:53:53 -04:00
Donatas Abraitis d75147e318
Merge pull request #18446 from louis-6wind/test_bfd_static_vrf
tests: add bfd_static_vrf
2025-03-22 12:21:30 +02:00
Donatas Abraitis 8876fbf8e1
Merge pull request #18452 from donaldsharp/bmp_changes
tests: Change up start order of bmp tests
2025-03-22 12:20:18 +02:00
Donald Sharp 2daa1e2074 tests: Change up start order of bmp tests
Currently the tests appear to do this:
a) Start the neighbors
b) Start the bmp server connection
c) Look for the neighbors up
d) Look for the neighbor up messages in the bmp log

This is not great from a testing perspective: even
though a) is started first, it may not complete until
after b) happens.  Even worse, if it is only partially
up (1 of the 2 peers), the dump will have the neighbor
connecting after parts of the table.  This doesn't
work well because the SEQ number is kept and compared
to make sure only new data is being looked at.

Let's modify the startup configuration to start
the bmp server first and then have a delayopen
on the bgp neighbor statements so that the bmp
peering can come up first.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-21 18:08:25 -04:00
Donald Sharp 2b9585a36c
Merge pull request #18442 from y-bharath14/srib-yang-v6
yang: Code inline with RFC 8407 rules
2025-03-21 15:08:45 -04:00
Mark Stapp 556d3c445d
Merge pull request #18359 from soumyar-roy/soumya/streamsize
zebra: zebra crash for zapi stream
2025-03-21 11:30:16 -04:00
Donatas Abraitis 797e051222
Merge pull request #17986 from dmytroshytyi-6WIND/fix-static-30-01-2025
lib: fix static analysis error
2025-03-21 12:19:50 +02:00
Donatas Abraitis 7bb2c2dabc
Merge pull request #18277 from y-bharath14/srib-tests-v2
tests: Catch specific exceptions
2025-03-21 12:13:13 +02:00
Louis Scalbert 49567328b9 tests: add bfd_static_vrf
Add bfd_static_vrf to test BFD tracking of static routes in VRF.

Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-03-21 10:23:16 +01:00
Jafar Al-Gharaibeh eab86cd206
Merge pull request #18330 from usrivastava-nvidia/master
pimd: Skip RPF check for SA message from mesh group peer
2025-03-20 16:28:05 -05:00
Russ White 37fd451997
Merge pull request #18409 from donaldsharp/typesafe_zclient
Typesafe zclient
2025-03-20 12:48:47 -04:00
usrivastava-nvidia eb4c1610cb pimd: Skip RPF check for SA messages received from the MSDP mesh group peers
Signed-off-by: Utkarsh Srivastava <usrivastava@nvidia.com>
2025-03-20 16:18:02 +00:00
usrivastava-nvidia 5934b6f402 pimd: Set the PIM_MSDP_PEERF_IN_GROUP flag for MSDP mesh group peers
Signed-off-by: Utkarsh Srivastava <usrivastava@nvidia.com>
2025-03-20 16:17:39 +00:00
Soumya Roy 860c1e4450 zebra: reduce memory usage by streams when redistributing routes
This commit undoes 8c9b007a0c.
The stream lib has been modified to expand the stream if needed.
Now, for zapi route encode, we use an expandable stream.

Signed-off-by: Soumya Roy <souroy@nvidia.com>
2025-03-20 16:13:44 +00:00
Soumya Roy 6fe9092eb3 zebra: zebra crash for zapi stream
Issue:
If static route is created with a BGP route as nexthop, which
recursively resolves over 512 ECMP v6 nexthops, zapi nexthop encode
fails, as there is not enough memory allocated for stream. This causes
assert/core dump in zebra. Right now we allocate fixed memory
of ZEBRA_MAX_PACKET_SIZ size.

Fix:
1) Dynamically calculate the required memory size for the stream
2) Try to optimize memory usage

Testing:
No crash happens anymore with the fix
r1#
r1# sharp install routes 2100:cafe:: nexthop 2001:db8::1 1000
r1#

r2# conf
r2(config)# ipv6 route 2503:feca::100/128 2100:cafe::1
r2(config)# exit
r2#

Signed-off-by: Soumya Roy <souroy@nvidia.com>
2025-03-20 16:13:44 +00:00
Soumya Roy 4de0f16a89 tests: Add staticd/ospfd/ospf6d/pimd for high ecmp
Signed-off-by: Soumya Roy <souroy@nvidia.com>
2025-03-20 16:13:44 +00:00
Soumya Roy c0c46bad15 lib: Add support for stream buffer to expand
Issue:
 Currently, during encode time, if the required memory is
 more than the available space in the stream buffer, the
 buffer can't be expanded. This fix introduces new APIs to
 support stream buffer expansion.

 Testing:
 Tested with zebra nexthop encoding with 512 nexthops, which triggers
 the new code; it works fine. Without the fix, the same trigger
 asserts.

 Signed-off-by: Soumya Roy <souroy@nvidia.com>
2025-03-20 16:13:34 +00:00
Donald Sharp 863f4b0992 bgpd: Tie in more clear events to clear code
The `clear bgp *` and the interface down events
cause a global clearing of data from the bgp rib.
Let's tie those into the clear peer code such
that we can take advantage of the reduced load
in these cases too.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-20 09:38:51 -04:00
Mark Stapp c527882012 bgpd: Allow batch clear to do partial work and continue later
Modify the batch clear code to be able to stop after processing
some of the work and to pick back up again.  This will allow
the very expensive nature of the batch clearing to be spread out
and allow bgp to continue to be responsive.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-03-20 09:33:52 -04:00
Tuetuopay 7320659f78 bgpd: fix evpn attributes being dropped on input
All assignments of the EVPN attributes (ESI and Gateway IP) are gated
behind the peer being set up for inbound soft-reconfiguration.

There are no actual reasons for this limitation, so let's perform the
EVPN attribute assignment no matter what when soft reconfiguration is
not enabled.

Fixes: 6e076ba523 ("bgpd: Fix for ain->attr corruption during path update")
Signed-off-by: Tuetuopay <tuetuopay@me.com>
2025-03-20 10:23:17 +01:00
Y Bharath 9689556728 yang: Code inline with RFC 8407 rules
Code inline with RFC 8407 rules

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-20 12:11:46 +05:30
Jafar Al-Gharaibeh 361f80a64b
Merge pull request #18325 from chdxD1/topotests/evpn-multipath-flap
topotests: Add EVPN RT5 multipath flap test
2025-03-19 23:14:38 -05:00
Jafar Al-Gharaibeh 5646cf0c13
Merge pull request #18431 from donaldsharp/fpm_listener_reject
Fpm listener reject
2025-03-19 23:00:47 -05:00
Jafar Al-Gharaibeh d851f457eb
Merge pull request #18435 from donaldsharp/fix_valgrind_found_memory_leak_in_bgp
bgpd: Fix leaked memory when showing some bgp routes
2025-03-19 22:58:37 -05:00
Jafar Al-Gharaibeh 29da863f28
Merge pull request #18432 from donaldsharp/fix_topotest_to_wait_for_zebra_connection
Fix topotest to wait for zebra connection
2025-03-19 22:55:31 -05:00
Donald Sharp 7a40da3f0a
Merge pull request #18412 from lsang6WIND/fix_bgp_delete
bgpd: fix "delete in progress" flag on default instance
2025-03-19 20:44:57 -04:00
Donald Sharp 9651b159cc bgpd: Fix leaked memory when showing some bgp routes
The two memory leaks:

==387155== 744 (48 direct, 696 indirect) bytes in 1 blocks are definitely lost in loss record 222 of 262
==387155==    at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==387155==    by 0x4C1B982: json_object_new_object (in /usr/lib/x86_64-linux-gnu/libjson-c.so.5.1.0)
==387155==    by 0x2E4146: peer_adj_routes (bgp_route.c:15245)
==387155==    by 0x2E4F1A: show_ip_bgp_instance_neighbor_advertised_route_magic (bgp_route.c:15549)
==387155==    by 0x2B982B: show_ip_bgp_instance_neighbor_advertised_route (bgp_route_clippy.c:722)
==387155==    by 0x4915E6F: cmd_execute_command_real (command.c:1003)
==387155==    by 0x4915FE8: cmd_execute_command (command.c:1062)
==387155==    by 0x4916598: cmd_execute (command.c:1228)
==387155==    by 0x49EB858: vty_command (vty.c:626)
==387155==    by 0x49ED77C: vty_execute (vty.c:1389)
==387155==    by 0x49EFFA7: vtysh_read (vty.c:2408)
==387155==    by 0x49E4156: event_call (event.c:2019)
==387155==    by 0x4958ABD: frr_run (libfrr.c:1247)
==387155==    by 0x206A68: main (bgp_main.c:557)
==387155==
==387155== 2,976 (192 direct, 2,784 indirect) bytes in 4 blocks are definitely lost in loss record 240 of 262
==387155==    at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==387155==    by 0x4C1B982: json_object_new_object (in /usr/lib/x86_64-linux-gnu/libjson-c.so.5.1.0)
==387155==    by 0x2E45CA: peer_adj_routes (bgp_route.c:15325)
==387155==    by 0x2E4F1A: show_ip_bgp_instance_neighbor_advertised_route_magic (bgp_route.c:15549)
==387155==    by 0x2B982B: show_ip_bgp_instance_neighbor_advertised_route (bgp_route_clippy.c:722)
==387155==    by 0x4915E6F: cmd_execute_command_real (command.c:1003)
==387155==    by 0x4915FE8: cmd_execute_command (command.c:1062)
==387155==    by 0x4916598: cmd_execute (command.c:1228)
==387155==    by 0x49EB858: vty_command (vty.c:626)
==387155==    by 0x49ED77C: vty_execute (vty.c:1389)
==387155==    by 0x49EFFA7: vtysh_read (vty.c:2408)
==387155==    by 0x49E4156: event_call (event.c:2019)
==387155==    by 0x4958ABD: frr_run (libfrr.c:1247)
==387155==    by 0x206A68: main (bgp_main.c:557)

For the 1st one, if the operator issues an advertised-routes command, the
json_ar variable was never being freed.

For the 2nd one, if the operator issued a command where the
output_count_per_rd is 0, we need to free the json_routes value.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-19 16:53:50 -04:00
Donald Sharp 14b5d3d191
Merge pull request #18430 from nabahr/protocol_vrf
lib: Create VRF if needed
2025-03-19 15:31:06 -04:00
Donald Sharp 4224c8f478 tests: wait_time is not defined so don't use it
During daemon startup, an error message was attempting
to use a variable `wait_time` that had not been
set up.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-19 15:23:28 -04:00
Donald Sharp df0e10076f tests: Ensure that the daemon has connected to zebra
On daemon startup, ensure that the daemon is there and
connected to zebra.  There are some exceptions:
pathd registers as srte, pim6d and pimd are the same at the
moment, and finally snmptrapd.

This should help the startup of using a unified
config in the topotests.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-19 15:20:31 -04:00
Donald Sharp 9c273fad26 zebra: Add timestamp to output
It's interesting to know the time we received the route.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-19 13:47:01 -04:00
Donald Sharp 04d6adc94b zebra: Allow fpm_listener to reject all routes
Usage of `-r -f` with fpm_listener now causes all
routes to be rejected.

r1# sharp install routes 10.0.0.0 nexthop 192.168.44.5 5
r1# show ip route
Codes: K - kernel route, C - connected, L - local, S - static,
       R - RIP, O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric, t - Table-Direct,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

IPv4 unicast VRF default:
D>o 10.0.0.0/32 [150/0] via 192.168.44.5, r1-eth0, weight 1, 00:00:02
D>o 10.0.0.1/32 [150/0] via 192.168.44.5, r1-eth0, weight 1, 00:00:02
D>o 10.0.0.2/32 [150/0] via 192.168.44.5, r1-eth0, weight 1, 00:00:02
D>o 10.0.0.3/32 [150/0] via 192.168.44.5, r1-eth0, weight 1, 00:00:02
D>o 10.0.0.4/32 [150/0] via 192.168.44.5, r1-eth0, weight 1, 00:00:02
C>* 192.168.44.0/24 is directly connected, r1-eth0, weight 1, 00:00:37
L>* 192.168.44.1/32 is directly connected, r1-eth0, weight 1, 00:00:37
r1#

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-19 13:43:47 -04:00
Donald Sharp 4d6f5c7e27 zebra: Rework the stale client list to a typesafe list
The stale client list was just a linked list, let's use
the typesafe list.

Signed-off-by: Donald Sharp <donaldsharp72@gmail.com>
2025-03-19 13:43:00 -04:00
Donald Sharp 24d293277f zebra: Convert the zrouter.client_list to a typesafe list
This list should just be a typesafe list.

Signed-off-by: Donald Sharp <donaldsharp72@gmail.com>
2025-03-19 13:27:36 -04:00
Lou Berger 5602e5fe28
Merge pull request #18426 from opensourcerouting/rpm_snmp_rpki_fix
RedHat: Fixing for PR17793 - Allow RPM build without docs and/or rpki
2025-03-19 12:34:10 -04:00
Nathan Bahr b6ae01f907 lib: Create VRF if needed
When creating a control plane protocol through NB, create the vrf
if needed instead of only looking it up and asserting if it doesn't
exist yet.
Fixes #18429.

Signed-off-by: Nathan Bahr <nbahr@atcorp.com>
2025-03-19 16:08:56 +00:00
Martin Winter 2f171ac023
redhat: Make sure zeromq is always disabled
Fix an issue where zeromq gets enabled if the build system has the libs
installed. For RPMs, we want it always based on the intended config options
(and currently zeromq is not part of the packages).

Signed-off-by: Martin Winter <mwinter@opensourcerouting.org>
2025-03-19 13:51:54 +01:00
Martin Winter 972ec6fd8c
redhat: Make docs and rpki optional for RPM package build
Adding options to disable docs and rpki during the build. By
default they are always built. RPKI sub-package will not be built
(and not available) if built without the RPKI support.

Signed-off-by: Martin Winter <mwinter@opensourcerouting.org>
2025-03-19 13:51:47 +01:00
Russ White d5b864ebee
Merge pull request #18374 from raja-rajasekar/rajasekarr/nhg_intf_flap_issue
zebra: Fix reinstalling nexthops in NHGs upon interface flaps
2025-03-19 08:10:15 -04:00
Martin Winter 1f815d555c
Revert "redhat: Add option to build pkg without docs and rpki support, allow for different system environments by including all built .so files"
This reverts commit d89f21fc06.

Reverting the original change from PR 17793. That commit breaks the RPKI
and SNMP sub-packages.

Signed-off-by: Martin Winter <mwinter@opensourcerouting.org>
2025-03-19 07:09:00 +01:00
Jafar Al-Gharaibeh b06dd2ccac
Merge pull request #18418 from donaldsharp/ripngd_memory_leaks_on_shutdown
ripngd: Access and Prefix lists are being leaked on shutdown
2025-03-18 22:00:26 -05:00
Jafar Al-Gharaibeh 7323d5c080
Merge pull request #18419 from donaldsharp/typesafe_warning
doc: Modify typesafe documentation
2025-03-18 21:59:29 -05:00
Rajasekar Raja de168795ab zebra: Fix reinstalling nexthops in NHGs upon interface flaps
Trigger:
Imagine a route utilizing an NHG with six nexthops (Intf swp1-swp6).
If interfaces swp1-swp4 flaps, the NHG remains the same but now only
references two nexthops (swp5-6) instead of all six. This behavior
occurs due to how NHGs with recursive nexthops are managed within Zebra.

In the scenario below, NHG 370 has all six nexthops installed in the
kernel. However, Zebra maintains a list of recursive NHGs that NHG 370
references i.e., Depends: (371), (372), (373) which are not directly
installed in the kernel.
- When an interface comes up, its nexthop and corresponding dependents
  are installed.
- These dependents (counterparts to 371-373) are non-recursive and
  are installed as well.
- However, when attempting to install the recursive ones in
  zebra_nhg_install_kernel(), they resolve to the already installed
  counterparts, resulting in a NO-OP.

Fix this by iterating over all dependents of the recursively resolved
NHGs and reinstalling them.

Trigger: Flap swp1 to swp4

Before Fix:
root@leaf-11:mgmt:/var/home/cumulus# ip route show | grep 6.0.0.5
6.0.0.5 nhid 370 proto bgp metric 20
ip -d next show
id 337 via 2000:1:0:1:0:f:0:9 dev swp6 scope link proto zebra
id 339 via 2000:1:0:1:0:e:0:9 dev swp5 scope link proto zebra
id 341 via 2000:1:0:1:0:8:0:8 dev swp4 scope link proto zebra
id 343 via 2000:1:0:1:0:7:0:8 dev swp3 scope link proto zebra
id 346 via 2000:1:0:1:0:1:0:7 dev swp2 scope link proto zebra
id 348 via 2000:1:0:1::7 dev swp1 scope link proto zebra
id 370 group 346/348/341/343/337/339 scope global proto zebra

After Trigger:
root@leaf-11:mgmt:/var/home/cumulus# ip route show | grep 6.0.0.5
6.0.0.5 nhid 370 proto bgp metric 20
root@leaf-11:mgmt:/var/home/cumulus# ip -d next show
id 337 via 2000:1:0:1:0:f:0:9 dev swp6 scope link proto zebra
id 339 via 2000:1:0:1:0:e:0:9 dev swp5 scope link proto zebra
id 370 group 337/339 scope global proto zebra

After Fix:
root@leaf-11:mgmt:/var/home/cumulus# ip route show | grep 6.0.0.5
6.0.0.5 nhid 432 proto bgp metric 20
ip -d next show
id 432 group 395/397/400/402/405/407 scope global proto zebra

After Trigger
root@leaf-11:mgmt:/var/home/cumulus# ip route show | grep 6.0.0.5
6.0.0.5 nhid 432 proto bgp metric 20
root@leaf-11:mgmt:/var/home/cumulus# ip -d next show
id 432 group 395/397/400/402/405/407 scope global proto zebra

Ticket :#

Signed-off-by: Rajasekar Raja <rajasekarr@nvidia.com>

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-18 12:21:42 -07:00
Donald Sharp 6940c1923b doc: Modify typesafe documentation
The typesafe documentation needs a bit of warning about
how they can cause problems on conversion.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-18 13:56:28 -04:00
Donald Sharp 95df46ab40 ripngd: Access and Prefix lists are being leaked on shutdown
ripngd:     Access List                   :      1 *         56
ripngd:     Access List Str               :      1 *          3
ripngd:     Access Filter                 :      1 *        112
ripngd:     Prefix List                   :      1 *         88
ripngd:     Prefix List Str               :      1 *          3
ripngd:     Prefix List Entry             :      1 *        136
ripngd:     Prefix List Trie Table        :      4 *       4096

This is now fixed.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-18 13:40:32 -04:00
Loïc Sang 8dc9eacb83 bgpd: fix "delete in progress" flag on default instance
Since 4d0e7a4 ("bgpd: VRF-Lite fix default BGP delete"), upon deletion
of the default instance, it is marked as hidden and the "deletion
in progress" flag is set. When the instance is restored, some routes
are not installed due to the presence of this flag.

Fixes: 4d0e7a4 ("bgpd: VRF-Lite fix default bgp delete")
Signed-off-by: Loïc Sang <loic.sang@6wind.com>
2025-03-18 17:42:34 +01:00
Russ White f3d9bd90a1
Merge pull request #18413 from Shbinging/fix_babel_wired
babeld: reset wired/wireless internal state only when wired/wireless status changed
2025-03-18 11:28:27 -04:00
Russ White 4b6e0ba1a1
Merge pull request #18349 from donaldsharp/more_yang_state
More yang state
2025-03-18 11:02:28 -04:00
Jafar Al-Gharaibeh fe809f47d6
Merge pull request #18414 from y-bharath14/srib-tests-v5
tests: Corrected input dict at pim.py
2025-03-18 09:22:51 -05:00
Russ White ad7e625c15
Merge pull request #18410 from opensourcerouting/fix/print_the_real_reason_supressed_peer
bgpd: Print the real reason why the peer is not accepted (incoming)
2025-03-18 08:46:43 -04:00
Russ White 34b8872699
Merge pull request #18407 from everoute/master
fix(vrrp): display vrrp version by default
2025-03-18 08:45:55 -04:00
Russ White 6278a8357d
Merge pull request #18364 from dmytroshytyi-6WIND/rtadv_disable
bgpd, zebra, tests: disable rtadv when bgp instance unconfiguration.
2025-03-18 08:26:21 -04:00
Russ White 1e69d08fb0
Merge pull request #18275 from opensourcerouting/fix/issue_18222_no_topotest
bgpd: Do not keep stale paths in Adj-RIB-Out if not addpath aware
2025-03-18 08:20:05 -04:00
Tuetuopay 05a74323b9 tests: add route-map evpn set gateway-ip topotest
This test does not actually look at the route since the gateway-ip is
not exposed in vtysh output. However, this ensures such a route-map does
not crash bgpd.

Signed-off-by: Tuetuopay <tuetuopay@me.com>
2025-03-18 10:27:37 +01:00
Y Bharath 066dcf6c20 tests: Corrected input dict at pim.py
Corrected input dict at pim.py

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-18 14:47:54 +05:30
Shbinging 6af2af83be babeld: set wired/wireless internal only when wired/wireless status changes
As stated in the docs, interface attributes such as noninterfering/interfering are reset when the wired/wireless status of an interface changes. If the wired/wireless status does not change, such as wired->wired, we should not reset the internal attributes.

Signed-off-by: Shbinging <bingshui@smail.nju.edu.cn>
2025-03-18 10:38:12 +08:00
Donatas Abraitis f47b2fb94a bgpd: Move stale Adj-RIB-Out paths removal to subgroup_process_announce_selected()
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-17 16:02:16 +02:00
Donatas Abraitis 4c79c560d1 tests: Check if addpath with disabled RX flag is working correctly in RS setup
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-17 16:02:16 +02:00
Donatas Abraitis eecfea9768 bgpd: Do not remove the path from Adj-Rib-Out if it's a selected route
There was a case where removing the selected (single best) route caused
the adj-rib-out entry to vanish entirely.

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-17 16:02:16 +02:00
Donatas Abraitis cc6c3d7a20 bgpd: Do not keep stale paths in Adj-RIB-Out if not addpath aware
```
munet> r1 shi vtysh -c 'show ip bgp update advertised-routes'
update group 1, subgroup 1
BGP table version is 5, local router ID is 192.168.137.1
Status codes:  s suppressed, d damped, h history, u unsorted, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Origin codes:  i - IGP, e - EGP, ? - incomplete
     Network          Next Hop            Metric LocPrf Weight Path
 *> 1.0.0.0/24       192.168.137.201                10      0 65200 65444 i
 *> 10.0.0.0/24      192.168.137.100                10      0 65100 65444 65444 i
 *> 10.65.10.0/24    192.168.137.100          0     10      0 65100 i
 *> 10.200.2.0/24    192.168.137.202          0     10      0 65200 i
```

Announce one more 10.0.0.0/24 via 65200 and we have TWO paths 10.0.0.0/24 in adj-rib-out:

```
munet> r1 shi vtysh -c 'show ip bgp update advertised-routes'
update group 1, subgroup 1
BGP table version is 6, local router ID is 192.168.137.1
Status codes:  s suppressed, d damped, h history, u unsorted, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Origin codes:  i - IGP, e - EGP, ? - incomplete
     Network          Next Hop            Metric LocPrf Weight Path
 *> 1.0.0.0/24       192.168.137.201                10      0 65200 65444 i
 *> 10.0.0.0/24      192.168.137.100                10      0 65100 65444 65444 i
 *> 10.0.0.0/24      192.168.137.201                10      0 65200 65444 i
 *> 10.65.10.0/24    192.168.137.100          0     10      0 65100 i
 *> 10.200.2.0/24    192.168.137.202          0     10      0 65200 i
```

Stop announcing 10.0.0.0/24 via 65200 and we still have TWO paths for 10.0.0.0/24...

```
munet> r1 shi vtysh -c 'show ip bgp update advertised-routes'
update group 1, subgroup 1
BGP table version is 7, local router ID is 192.168.137.1
Status codes:  s suppressed, d damped, h history, u unsorted, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Origin codes:  i - IGP, e - EGP, ? - incomplete
     Network          Next Hop            Metric LocPrf Weight Path
 *> 1.0.0.0/24       192.168.137.201                10      0 65200 65444 i
 *> 10.0.0.0/24      192.168.137.100                10      0 65100 65444 65444 i
 *> 10.0.0.0/24      192.168.137.201                10      0 65200 65444 i
 *> 10.65.10.0/24    192.168.137.100          0     10      0 65100 i
 *> 10.200.2.0/24    192.168.137.202          0     10      0 65200 i
```

Why do we need to keep old paths in adj-rib-out if we don't have e.g. AddPaths enabled?

Shouldn't it be like this? (only one 10.0.0.0/24 in adj-rib-out for this update-group, instead of multiple stale entries from previous announcements)

```
munet> r1 shi vtysh -c 'show ip bgp update advertised-routes'
update group 1, subgroup 1
BGP table version is 6, local router ID is 192.168.137.1
Status codes:  s suppressed, d damped, h history, u unsorted, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Origin codes:  i - IGP, e - EGP, ? - incomplete
     Network          Next Hop            Metric LocPrf Weight Path
 *> 1.0.0.0/24       192.168.137.201                10      0 65200 65444 i
 *> 10.0.0.0/24      192.168.137.201                10      0 65200 65444 i
 *> 10.65.10.0/24    192.168.137.100          0     10      0 65100 i
 *> 10.200.2.0/24    192.168.137.202          0     10      0 65200 i
```

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-17 16:02:16 +02:00
Donatas Abraitis ace4b8fe61 bgpd: Print the real reason why the peer is not accepted (incoming)
If the peer is suppressed due to BFD being down or an unspecified connection,
we never learn the real reason and just say "no AF activated", which is misleading.

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-03-17 14:52:42 +02:00
Tuetuopay 0b0e701597 bgpd: fix set evpn gateway-ip ipv[46] route-map
The `route_set_evpn_gateway_ip` function copies `gw_ip->ip.addr` in the
route's gateway ip. In a nutshell, this skips the `ipa_type` field,
writing the actual IP in the IP type. This later rightfully trips
asserts about unknown IP types.

The following route-map...

```
route-map test permit 10
    set evpn gateway-ip ipv4 1.1.1.1
```

...will make the following gateway IP in the route:

```
(gdb) p/x a1->evpn_overlay->gw_ip
$11 = {ipa_type = 0x1010101, ip = {addr = 0x0, addrbytes = {
      0x0 <repeats 16 times>}, _v4_addr = {s_addr = 0x0}, _v6_addr = {
      __in6_u = {__u6_addr8 = {0x0 <repeats 16 times>}, __u6_addr16 = {0x0,
          0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, __u6_addr32 = {0x0, 0x0, 0x0,
          0x0}}}}}
```

We do indeed see the IP address in the `ipa_type` field.

Fix by starting the memcpy at the root of `struct ipaddr` instead of
skipping the `ipa_type` field.

Fixes: d0a4ee6010 ("bgpd: Add "set evpn gateway-ip" clause for route-map")
Signed-off-by: Tuetuopay <tuetuopay@me.com>
2025-03-17 12:08:02 +01:00
Dmytro Shytyi 8f47d0f1b7
tests: add rtadv topotest
Verify the new rtadv "show interface json" fields
The rtadv json parameters should not be present
when bgp instance is disabled.

Signed-off-by: Dmytro Shytyi <dmytro.shytyi@6wind.com>
2025-03-17 11:33:09 +01:00
Dmytro Shytyi e6d08a89c7
zebra: add rtadv information output in vtysh json
Add to "show interface json" output multiple rtadv parameters.

if_dump_vty() calls => hook_call(zebra_if_extra_info, vty, ifp);

if_dump_vty_json() now does the same call, with an additional parameter:
hook_call(zebra_if_extra_info, vty, json_if, ifp);

Signed-off-by: Dmytro Shytyi <dmytro.shytyi@6wind.com>
2025-03-17 11:19:58 +01:00
Dmytro Shytyi 942a7c916c
bgpd: align peer_unconfigure with graceful-restart
When Graceful-Restart is configured, skip the unconfig notification,
similarly to what is done in 95098d9611
("bgpd: Do not send Deconfig/Shutdown message when restarting")

Signed-off-by: Dmytro Shytyi <dmytro.shytyi@6wind.com>
2025-03-17 11:19:58 +01:00
Philippe Guibert 496caed836
bgpd: fix radv interface disabled when bgp instance removed
If a peer uses radv for an interface, and the bgp instance is removed,
the radv service is not disabled on the interface.

Fix this by doing the same at BGP unconfiguration: as is done when a
peer is unconfigured, call the radv unregistration before deleting
the peer.

Fixes: b3a3290e23 ("bgpd: turn off RAs when numbered peers are deleted")
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Signed-off-by: Dmytro Shytyi <dmytro.shytyi@6wind.com>
2025-03-17 11:19:58 +01:00
echken aabe3c2079 fix(vrrp): display vrrp version by default
Make the VRRP version information always visible in the running
configuration output, regardless of whether it's the default value
(version 3) or not.

When using frr-reload.py to apply configuration changes, VRRP instances
were being unnecessarily reinitialized even when no actual configuration
changes were made. This occurred because:
The cli_show_vrrp function in vrrpd/vrrp_vty.c does not display the VRRP
version in the show running-config output when it's the default value
(version 3).
Configuration files often explicitly specify vrrp X version 3 even
though it's the default.
When frr-reload.py compares the explicit configuration with the running
configuration, it detects a difference and generates commands to remove
and recreate the VRRP instance.

This patch modifies the cli_show_vrrp function to unconditionally
display the VRRP version, regardless of whether it's the default value
or the show_defaults parameter is set. By making the version information
explicit in all cases, we ensure consistent configuration comparison in
frr-reload.py, preventing unnecessary VRRP reinitialization and
associated network disruptions.

Signed-off-by: echken <chengcheng.luo@smartx.com>
2025-03-17 03:46:26 +00:00
Donatas Abraitis c288e5fbaf
Merge pull request #18399 from LabNConsulting/chopps/fix-unit-tests
2 unit-test fixes
2025-03-16 15:14:55 +01:00
Donatas Abraitis f2245941d8
Merge pull request #18384 from LabNConsulting/chopps/suppress-expected-libyang-error-log
lib: suppress libyang logs during expected error result
2025-03-16 15:12:59 +01:00
Donatas Abraitis f5a74fc91c
Merge pull request #18387 from Manpreet-k0/redo_import_check_crash
bgpd: Fixed crash upon bgp network import-check command
2025-03-15 18:35:04 +01:00
Donatas Abraitis 35cc716363
Merge pull request #18394 from donaldsharp/fpm_listener_output
zebra: add ability to specify output file with fpm_listener
2025-03-15 18:32:19 +01:00
Donatas Abraitis 0647a115cc
Merge pull request #18395 from donaldsharp/bgp_stream_copy_usage_removal
bgpd: Remove unnecessary stream_new/stream_copies in bgp_open_make
2025-03-15 18:31:41 +01:00
Donatas Abraitis 7444121f79
Merge pull request #18393 from LabNConsulting/aceelindem/ospf6-area-no-config-delete-stale
ospf6d: Disable and delete OSPFv3 areas that no longer have interfaces or configuration.
2025-03-15 14:15:58 +01:00
Christian Hopps bc3f7d9c07 tests: fix wrong callback function parameters in unit-test
Signed-off-by: Christian Hopps <chopps@labn.net>
2025-03-15 04:12:31 +00:00
Christian Hopps 986501029e tests: deal with configure overridden timestamp prec in unit test
Previously if you configured a different timestamp precision then
`make check` would fail as the non-default config is generated and
fails test_cli config file comparison.

Signed-off-by: Christian Hopps <chopps@labn.net>
2025-03-15 04:11:48 +00:00
Donald Sharp c9655e2893 bgpd: Remove unnecessary stream_new/stream_copies in bgp_open_make
The call into bgp_open_capability can return that it wrote more
than BGP_OPEN_NON_EXT_OPT_LEN bytes; in that case the open
part needs to be written again with ext_opt_params set to
true to allow extended parameters to be written, thus keeping
the len < 255 bytes.  The code to do this was first creating
a new stream, copying the original stream into it, calling
bgp_open_capability(), and, if it succeeded, recopying
the tmp stream back onto the original.

Let's change this around such that we save the current spot
in the stream of where we are writing and if the change does
not work reset the pointer and try again with the correct
parameter.  This removes the stream and multiple copies and
eventual free of the temporary stream.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-14 14:50:59 -04:00
Donald Sharp f0b2bc3b4c zebra: add ability to specify output file with fpm_listener
The fpm_listener didn't have the ability to specify the output
file location at all.  Modify the code to accept this.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-14 13:24:19 -04:00
Acee Lindem 04994891fe ospf6d: Disable and delete OSPFv3 areas that no longer have interfaces or configuration.
This fix will delete an OSPFv3 area when all the interfaces and
configuration (ranges, NSSA ranges, stub area, NSSA area, filter-list,
import-list and export-list) have been removed. The change provides
a general solution to https://github.com/FRRouting/frr/issues/18324.

Signed-off-by: Acee Lindem <acee@lindem.com>
2025-03-14 16:02:28 +00:00
Jafar Al-Gharaibeh 7945af0200
Merge pull request #18360 from raja-rajasekar/rajasekarr/fix_explicit_sid_allocation
zebra: ensure proper return for failure for Sid allocation
2025-03-14 09:57:41 -05:00
Christian Hopps 4663c3ef82 lib: suppress libyang logs during expected error result.
Signed-off-by: Christian Hopps <chopps@labn.net>
2025-03-14 14:36:13 +00:00
Christian Hopps 48ceff2128 lib: northbound: also log debugs for new get callback
Signed-off-by: Christian Hopps <chopps@labn.net>
2025-03-14 14:36:13 +00:00
Donald Sharp dbecefb6c6
Merge pull request #18383 from LabNConsulting/chopps/fix-oper-state-list-query-bug
Fix bug with oper-state queries including list node
2025-03-14 10:33:16 -04:00
Donald Sharp e5848deedf
Merge pull request #18377 from kaffarell/master
isisd: fix bit flag collision in options field
2025-03-14 09:43:09 -04:00
Donald Sharp e7318ce845
Merge pull request #18388 from y-bharath14/srib-yang-v5
yang: Fixed pyang errors at frr-bgp-common.yang
2025-03-14 09:41:39 -04:00
Manpreet Kaur bc1008b970 bgpd: Fixed crash upon bgp network import-check command
BT:
```
3  <signal handler called>
4  0x00005616837546fc in bgp_static_update (bgp=bgp@entry=0x5616865eac50, p=0x561686639e40,
    bgp_static=0x561686639f50, afi=afi@entry=AFI_IP6, safi=safi@entry=SAFI_UNICAST) at ../bgpd/bgp_route.c:7232
5  0x0000561683754ad0 in bgp_static_add (bgp=0x5616865eac50) at ../bgpd/bgp_table.h:413
6  0x0000561683785e2e in no_bgp_network_import_check (self=<optimized out>, vty=0x5616865e04c0,
    argc=<optimized out>, argv=<optimized out>) at ../bgpd/bgp_vty.c:4609
7  0x00007fdbcc294820 in cmd_execute_command_real (vline=vline@entry=0x561686663000,
```

The program encountered a SEG FAULT when attempting to access pi->extra->vrfleak->bgp_orig because
pi->extra->vrfleak was NULL.
```
(gdb) p pi->extra->vrfleak
$1 = (struct bgp_path_info_extra_vrfleak *) 0x0
(gdb) p pi->extra->vrfleak->bgp_orig
Cannot access memory at address 0x8
```
Added NOT NULL check on pi->extra->vrfleak before accessing pi->extra->vrfleak->bgp_orig
to prevent the segmentation fault.

Signed-off-by: Manpreet Kaur <manpreetk@nvidia.com>
2025-03-14 05:40:16 -07:00
Y Bharath d7839f5ddd yang: Fixed pyang errors at frr-bgp-common.yang
Fixed pyang errors at frr-bgp-common.yang

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-14 14:31:52 +05:30
Christian Hopps 2586c4c3ed lib: make sure we update the darr_strlen from pruned string.
This fixes a bug when handling of queries which include list nodes in
the xpath.

Signed-off-by: Christian Hopps <chopps@labn.net>
2025-03-14 08:37:46 +00:00
Christian Hopps d58a8f473b lib: add darr_strlen_fixup() to update len based on NUL term
Signed-off-by: Christian Hopps <chopps@labn.net>
2025-03-14 08:37:46 +00:00
Donatas Abraitis 8982a81de1
Merge pull request #18380 from donaldsharp/non_peer_group
bgpd: Show bgp <afi> <safi> shouldn't display peers in groups
2025-03-14 07:30:52 +01:00
Donald Sharp 3fb72a03c3 bgpd: Show bgp <afi> <safi> shouldn't display peers in groups
The command `show bgp <afi> <safi>` has this output:

r1# show bgp ipv4 uni 10.0.0.0
BGP routing table entry for 10.0.0.0/32, version 1
Paths: (1 available, best #1, table default)
  Advertised to non peer-group peers:
  r1-eth0 r1-eth1 r1-eth2 r1-eth3
  ....

It specifically states `Advertised to non peer-group peers:` yet
the code is not filtering those out.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-13 15:19:02 -04:00
Gabriel Goller 86faed512f isisd: fix bit flag collision in options field
Resolve conflict between F_ISIS_UNIT_TEST and ISIS_OPT_DUMMY_AS_LOOPBACK
which were both using the same bit value (0x01). This collision caused
unit test mode to be unintentionally enabled when DUMMY_AS_LOOPBACK was set.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2025-03-13 16:51:04 +01:00
Donatas Abraitis 2ab8cce2e1
Merge pull request #18366 from pguibert6WIND/community_limit_zero_add_test
Add Testing for community and Extended community match limit zero
2025-03-12 21:47:17 +01:00
Mark Stapp c000c2144c tests: add bgp peer-shutdown topotest
Add a simple topotest using multiple bgp peers; based on the
ecmp_topo1 test.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-03-12 12:42:07 -04:00
Mark Stapp 6206e7e7ed zebra: move peer conn error list to connection struct
Move the peer connection error list to the peer_connection
struct; that seems to line up better with the way that struct
works.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-03-12 12:42:07 -04:00
Mark Stapp 58f924d287 bgpd: batch peer connection error clearing
When peer connections encounter errors, attempt to batch some
of the clearing processing that occurs. Add a new batch object,
add multiple peers to it, if possible. Do one rib walk for the
batch, rather than one walk per peer. Use a handler callback
per batch to check and remove peers' path-infos, rather than
a work-queue and callback per peer. The original clearing code
remains; it's used for single peers.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-03-12 12:42:06 -04:00
Mark Stapp 020245befd bgpd: remove apis from bgp_route.h
Remove a couple of apis that don't exist.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-03-12 12:40:07 -04:00
Mark Stapp 6a5962e1f8 bgpd: Replace per-peer connection error with per-bgp
Replace the per-peer connection error with a per-bgp event and
a list. The io pthread enqueues peers per-bgp-instance, and the
error-handing code can process multiple peers if there have been
multiple failures.

Signed-off-by: Mark Stapp <mjs@cisco.com>
2025-03-12 12:40:07 -04:00
Jafar Al-Gharaibeh ddf483c65c
Merge pull request #18367 from donaldsharp/static_nexthop_reinstall
staticd: Install known nexthops upon connection with zebra
2025-03-12 10:30:13 -05:00
Jafar Al-Gharaibeh e97b20576e
Merge pull request #18368 from donaldsharp/backout_bgp_if_up_down_changes
Revert "bgpd: upon if event, evaluate bnc with matching nexthop"
2025-03-12 10:29:11 -05:00
huachao01 01d6daea2f isisd: Fix the issue where redistributed routes do not change when the route-map is modified.
Signed-off-by: huachao01 <1945995178@qq.com>
2025-03-12 09:19:19 -04:00
Donald Sharp 052aea624e Revert "bgpd: upon if event, evaluate bnc with matching nexthop"
This reverts commit 58592be577.

This commit is being reverted because of several issues:

a) tcpdump -i <any interface that bgp happens to use>

This command causes bgp to dump its entire table to all
of its peers again.  This is a huge problem in any type
of scaled environment *and* it is not unusual to have an
operator do this.

b) This commit appears to be attempting to solve the problem
with route leaking across VRFs using labels (or some such).
Unfortunately we have absolutely no topotests that show the
behavior, and I have been unable to get any description of how
to reproduce the problem the commit solves.  I do know, though,
that the problem really stems from the fact that bgp has
decided to cheat and not create bncs for route leaking.
Thus when a nexthop changes, bgp is not notified.
This commit was being used as a hammer to solve the problem.

While I do agree that backing out a bug fix for some operator
is less than ideal, I cannot get the operator to tell me what
problem it solved, and sending large amounts of updates from
a simple tcpdump command (actually two: one when tcpdump
starts and one when it finishes) is more detrimental in my
eyes at this point in time.  Additionally, the solution used
is the wrong one for the problem.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-12 09:01:52 -04:00
Donald Sharp 918a1f85c2 staticd: Install known nexthops upon connection with zebra
CI tests are showing cases where staticd is connecting to
zebra after config is read in and the nexthops are never
being registered with zebra:

2025/03/11 15:39:44 STATIC: [T83RR-8SM5G] staticd 10.4-dev starting: vty@2616
2025/03/11 15:39:45 STATIC: [GH3PB-C7X4Y] Static Route to 13.13.13.13/32 not installed currently because dependent config not fully available
2025/03/11 15:39:45 STATIC: [RHJK1-M5FAR] static_zebra_nht_register: Failure to send nexthop 1.1.1.2/32 for 11.11.11.11/32 to zebra
2025/03/11 15:39:45 STATIC: [M7Q4P-46WDR] vty[14]@> enable

Zebra shows connection time as:

2025/03/11 15:39:45.933343 ZEBRA: [V98V0-MTWPF] client 5 says hello and bids fair to announce only static routes vrf=0

As a result staticd never installs the route because it has no nexthop
tracking to say that the route could be installed.

Modify staticd on startup to go through its nexthops and dump them to
zebra to allow the staticd state machine to get to work.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-12 08:30:43 -04:00
Mark Stapp 27953dd141
Merge pull request #18336 from routingrocks/rvaratharaj/bugfixmar
zebra: Fix neigh delete causing heap-use-after-free error
2025-03-12 08:09:29 -04:00
Christopher Dziomba b6fc0a1c5b
topotests: Add EVPN RT5 multipath flap test
Session flapping isn't tested which led to queuing / order issues
in the past. This adds a second path between R1 and R2, after that
both paths are flapped and the presence of the routerMac is checked

Signed-off-by: Christopher Dziomba <christopher.dziomba@telekom.de>
2025-03-12 09:13:54 +01:00
Rajesh Varatharaj 3060afc84d zebra: Fix neigh delete causing heap-use-after-free error
Issue:
zebra_neigh_del_all() iterates the neighbor tree and calls
zebra_neigh_del(), which re-looks-up and frees the neighbor n;
the iteration then accesses n after it has been freed.

Fix: do not access n after it is freed.
Directly free the neighbor entry (n) when its interface index matches
ifp->ifindex.

This fixes:
ERROR: AddressSanitizer: heap-use-after-free on address 0x6070001052e8 at pc 0x7f6bf7d09ddb bp 0x7ffd3366a000 sp 0x7ffd33669ff0
READ of size 8 at 0x6070001052e8 thread T0
    #0 0x7f6bf7d09dda in _rb_next lib/openbsd-tree.c:455
    #1 0x55f95a307261 in zebra_neigh_rb_head_RB_NEXT zebra/zebra_neigh.h:34
    #2 0x55f95a3082e9 in zebra_neigh_del_all zebra/zebra_neigh.c:162
    #3 0x55f95a121ee7 in zebra_interface_down_update zebra/redistribute.c:571
    #4 0x55f95a0f819d in if_down zebra/interface.c:1017
    #5 0x55f95a0fe168 in zebra_if_dplane_ifp_handling zebra/interface.c:2102
    #6 0x55f95a0ff10c in zebra_if_dplane_result zebra/interface.c:2241
    #7 0x55f95a27ce9c in rib_process_dplane_results zebra/zebra_rib.c:5015
    #8 0x7f6bf7da3ad9 in event_call lib/event.c:1984
    #9 0x7f6bf7c62141 in frr_run lib/libfrr.c:1246
    #10 0x55f95a11ca7f in main zebra/main.c:543
    #11 0x7f6bf7029d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
    #12 0x7f6bf7029e3f in __libc_start_main_impl ../csu/libc-start.c:392
    #13 0x55f95a0dd0b4 in _start (/usr/lib/frr/zebra+0x1a80b4)

Ticket: #18047

Signed-off-by: Rajesh Varatharaj <rvaratharaj@nvidia.com>
2025-03-11 13:41:40 -07:00
Mark Stapp d0cb3ad7cb
Merge pull request #16614 from louis-6wind/fix-otable-heap-after-free
zebra: fix table heap-after-free crash
2025-03-11 14:03:14 -04:00
Philippe Guibert ec703b0e08 topotests: check when extended community-limit is set to 0
Add a test to control that when the extended community limit is set
to 0, then only 2 BGP updates are received on R3.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-11 17:33:49 +01:00
Philippe Guibert c3038acf3c topotests: check when community-limit is set to 0
Add a test to control that when the community limit is set to 0, then
only 2 BGP updates are received on R3.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-11 17:31:22 +01:00
Russ White b1711c010f
Merge pull request #18362 from y-bharath14/srib-tests-v4
tests: Fixed NameError at bmpserver.py
2025-03-11 11:26:12 -04:00
Russ White aa0841e431
Merge pull request #18346 from donaldsharp/memory_leaks_bgp
Clean up some code and bad assumptions in zebra
2025-03-11 10:35:14 -04:00
Russ White 27a504fcef
Merge pull request #18342 from anlancs/ospfd-minor-change
ospfd: minor change for style
2025-03-11 10:33:41 -04:00
Dmytro Shytyi 83de17ca9b
lib: call to 'calloc' has an allocation size of 0 bytes
w->sects = calloc(sizeof(PyObject *), w->ehdr->e_shnum);

Signed-off-by: Dmytro Shytyi <dmytro.shytyi@6wind.com>
2025-03-11 11:16:01 +01:00
Y Bharath 2c17648015 tests: Fixed NameError at bmpserver.py
Fixed NameError at bmpserver.py

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-11 11:33:33 +05:30
Rajasekar Raja 5a63cf4c0d zebra: ensure proper return for failure for Sid allocation
The functions alloc_srv6_sid_func_explicit/dynamic are expected to return bool,
but in some places we return -1 or NULL, which the caller treats as
true/valid, ending up allocating a SID.

Without Fix:
2025/03/10 21:44:04.295350 ZEBRA: [XWV20-TGK70] alloc_srv6_sid_func_explicit: trying to allocate explicit SID function 65088 from block fcbb:bbbb::/32
2025/03/10 21:44:04.295351 ZEBRA: [MM61M-TQZNP] alloc_srv6_sid_func_explicit: elib s 10000 e 20000 wlib s 1000 ewlib s 30000 e 1000 SID_FUNC 65088
2025/03/10 21:44:04.295352 ZEBRA: [QGHMB-SWNFW] alloc_srv6_sid_func_explicit: function 65088 is outside ELIB [10000/20000] and EWLIB alloc ranges [30000/1000]
2025/03/10 21:44:04.295367 ZEBRA: [H0GZA-NNSWJ] get_srv6_sid_explicit: allocated explicit SRv6 SID fcbb:bbbb:1:fe40:: for context End.X nh6 2001::2
2025/03/10 21:44:04.295368 ZEBRA: [XBBYD-T1Q7P] srv6_manager_get_sid_internal: got new SRv6 SID for ctx End.X nh6 2001::2: sid_value=fcbb:bbbb:1:fe40:: (func=65088) (proto=4, instance=0, sessionId=0), notifying all clients

With Fix:
2025/03/10 22:04:25.052235 ZEBRA: [MM61M-TQZNP] alloc_srv6_sid_func_explicit: elib s 30000 e 31000 wlib s 31000 ewlib s 30000 e 31000 SID_FUNC 65056
2025/03/10 22:04:25.052236 ZEBRA: [YHMRC-EMYNX] alloc_srv6_sid_func_explicit: function 65056 is outside ELIB [30000/31000] and EWLIB alloc ranges [30000/31000]
2025/03/10 22:04:25.052254 ZEBRA: [XSG8X-Q2XJX] get_srv6_sid_explicit: invalid SM request arguments: failed to allocate SID function 65056 from block fcbb:bbbb::/32
2025/03/10 22:04:25.052257 ZEBRA: [YC52T-427SJ] srv6_manager_get_sid_internal: not got SRv6 SID for ctx End.DT6 vrf_id 4, sid_value=fcbb:bbbb:1:fe20::, locator_name=MAIN
root@rajasekarr:/tmp/topotests/static_srv6_sids.test_static_srv6_sids/r1#

Ticket :#
Signed-off-by: Rajasekar Raja <rajasekarr@nvidia.com>
2025-03-10 15:26:38 -07:00
Martin Winter b16752c26c
doc: Update frr-reload doc to include new option
Signed-off-by: Martin Winter <mwinter@opensourcerouting.org>
2025-03-10 23:23:11 +01:00
Martin Winter 29e8cf3a22
tools: Add option to frr-reload to specify alternate logfile
Adding option --logfile to specify a different logfile instead of
the default /var/log/frr/frr-reload.log

Signed-off-by: Martin Winter <mwinter@opensourcerouting.org>
2025-03-10 23:23:04 +01:00
Louis Scalbert c50fe21045 tests: zebra_rib, test vrf change
Test table ID move to a VRF and the removal of the VRF.

Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-03-10 09:54:18 +01:00
Louis Scalbert c6afe42455 lib, tests, zebra: keep table routes at vrf disabling
At VRF disabling, keep the route entries that were associated with its
table ID but not with the VRF itself. The kernel flushes these entries,
so we need to reinstall them.

To do so, add a flag to mean that a route entry is owned by a table ID
and not by a VRF. If the VRF associated to the table ID is deleted, the
route entry must not be deleted.

Update the tests with the new flag. 2057 is 0x809 in hex, meaning that the
new flag has been set on some prefixes.

Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-03-10 09:54:18 +01:00
Louis Scalbert 97c159e882 tests: zebra_rib, test vrf change
Test table ID move to a VRF and the removal of the VRF.

Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-03-10 09:54:18 +01:00
Louis Scalbert 52a35e9592 zebra: fix vanished blackhole route
Fix vanished blackhole route when kernel routes are updated.

> root@router# echo "100 my_table" | tee -a /etc/iproute2/rt_tables
> root@router# ip l add du0 type dummy
> root@router# ifconfig du0 192.168.0.1/24 up
> root@router# ip route add blackhole default table 100
> root@router# ip route show table 100
> blackhole default
> root@router# vtysh -c 'show ip route table 100'
> [...]
> Table 100:
> K>* 0.0.0.0/0 [0/0] unreachable (blackhole), weight 1, 00:00:05
> root@router# ip l add red type vrf table 100
> root@router# vtysh -c 'show ip route table 100'
> [...]
> Table 100:
> K>* 0.0.0.0/0 [0/0] unreachable (blackhole), weight 1, 00:00:16
> root@router# ip l set du0 master red
> root@router# vtysh -c 'show ip route table 100'
> [...]
> Table 100:
> C>* 192.168.0.0/24 is directly connected, du0, weight 1, 00:00:02
> L>* 192.168.0.1/32 is directly connected, du0, weight 1, 00:00:02
> root@router# ip route show table 100
> blackhole default
> 192.168.0.0/24 dev du0 proto kernel scope link src 192.168.0.1
> local 192.168.0.1 dev du0 proto kernel scope host src 192.168.0.1
> broadcast 192.168.0.255 dev du0 proto kernel scope link src 192.168.0.1

Fixes: d528c02a20 ("zebra: Handle kernel routes appropriately")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-03-10 09:54:18 +01:00
Louis Scalbert 5cde97678e zebra: fix removed default route at vrf enabling
When a routing table (RT) already has a default route before being
assigned to a VRF, the default route vanishes in zebra after the VRF
assignment.

> root@router:~# ip route add blackhole default table 100
> root@router:~# ip route show table 100
> blackhole default
> root@router:~# vtysh -c 'show ip route table 100'
> [...]
> VRF default table 100:
> K>* 0.0.0.0/0 [0/0] unreachable (blackhole), 00:00:05
> root@router:~# ip l add red type vrf table 100
> root@router:~# vtysh -c 'show ip route table 100'
> root@router:~#

Do not override the default route if it exists.

Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-03-10 09:54:18 +01:00
Louis Scalbert fb8bf9cf59 zebra: remove vrf route entries at vrf disabling
This is the continuation of the previous commit.

When a VRF is deleted, the kernel retains only its own routing entries
in the former VRF table and removes all others.

This change ensures that routing entries created by FRR daemons are also
removed from the former zebra VRF table when the VRF is disabled.

To test:

> echo "100 my_table" | tee -a /etc/iproute2/rt_tables
> ip l add du0 type dummy
> ifconfig du0 192.168.0.1/24 up
> ip route add blackhole default table 100
> ip route show table 100
> ip l add red type vrf table 100
> ip l set du0 master red
> vtysh -c 'configure' -c 'vrf red' -c 'ip route 10.0.0.0/24 192.168.0.254'
> vtysh -c 'show ip route table 100'
> sleep 0.1
> ip l del red
> sleep 0.1
> vtysh -c 'show ip route table 100'
> ip l add red type vrf table 100
> ip l set du0 master red
> vtysh -c 'configure' -c 'vrf red' -c 'ip route 10.0.0.0/24 192.168.0.254'
> vtysh -c 'show ip route table 100'
> sleep 0.1
> ip l del red
> sleep 0.1
> vtysh -c 'show ip route table 100'

Fixes: d8612e6 ("zebra: Track tables allocated by vrf and cleanup")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-03-10 09:54:18 +01:00
Louis Scalbert 7395e399b1 zebra: fix table heap-after-free crash
Fix a heap-after-free that causes zebra to crash even without
address-sanitizer. To reproduce:

> echo "100 my_table" | tee -a /etc/iproute2/rt_tables
> ip route add blackhole default table 100
> ip route show table 100
> ip l add red type vrf table 100
> ip l del red
> ip route del blackhole default table 100

Zebra manages a routing table for every existing Linux RT table,
regardless of whether it is assigned to a VRF interface. When a table
is not assigned to any VRF, zebra arbitrarily assigns it to the default
VRF, even though this is not strictly accurate (the code expects this
behavior).

When an RT table is created after a VRF, zebra correctly assigns the
table to the VRF. However, if a VRF interface is assigned to an existing
RT table, zebra does not update the table owner, which remains as the
default VRF. As a result, existing routing entries remain under the
default VRF, while new entries are correctly assigned to the VRF. The
VRF mismatch is unexpected in the code and creates crashes and memory
related issues.

Furthermore, Linux does not automatically delete RT tables when they are
unassigned from a VRF. It is incorrect to delete these tables from zebra.

Instead, at VRF disabling, do not release the table but reassign it to
the default VRF. At VRF enabling, change the table owner back to the
appropriate VRF.

> ==2866266==ERROR: AddressSanitizer: heap-use-after-free on address 0x606000154f54 at pc 0x7fa32474b83f bp 0x7ffe94f67d90 sp 0x7ffe94f67d88
> READ of size 1 at 0x606000154f54 thread T0
>     #0 0x7fa32474b83e in rn_hash_node_const_find lib/table.c:28
>     #1 0x7fa32474bab1 in rn_hash_node_find lib/table.c:28
>     #2 0x7fa32474d783 in route_node_get lib/table.c:283
>     #3 0x7fa3247328dd in srcdest_rnode_get lib/srcdest_table.c:231
>     #4 0x55b0e4fa8da4 in rib_find_rn_from_ctx zebra/zebra_rib.c:1957
>     #5 0x55b0e4fa8e31 in rib_process_result zebra/zebra_rib.c:1988
>     #6 0x55b0e4fb9d64 in rib_process_dplane_results zebra/zebra_rib.c:4894
>     #7 0x7fa32476689c in event_call lib/event.c:1996
>     #8 0x7fa32463b7b2 in frr_run lib/libfrr.c:1232
>     #9 0x55b0e4e6c32a in main zebra/main.c:526
>     #10 0x7fa32424fd09 in __libc_start_main ../csu/libc-start.c:308
>     #11 0x55b0e4e2d649 in _start (/usr/lib/frr/zebra+0x1a1649)
>
> 0x606000154f54 is located 20 bytes inside of 56-byte region [0x606000154f40,0x606000154f78)
> freed by thread T0 here:
>     #0 0x7fa324ca9b6f in __interceptor_free ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:123
>     #1 0x7fa324668d8f in qfree lib/memory.c:130
>     #2 0x7fa32474c421 in route_table_free lib/table.c:126
>     #3 0x7fa32474bf96 in route_table_finish lib/table.c:46
>     #4 0x55b0e4fbca3a in zebra_router_free_table zebra/zebra_router.c:191
>     #5 0x55b0e4fbccea in zebra_router_release_table zebra/zebra_router.c:214
>     #6 0x55b0e4fd428e in zebra_vrf_disable zebra/zebra_vrf.c:219
>     #7 0x7fa32476fabf in vrf_disable lib/vrf.c:326
>     #8 0x7fa32476f5d4 in vrf_delete lib/vrf.c:231
>     #9 0x55b0e4e4ad36 in interface_vrf_change zebra/interface.c:1478
>     #10 0x55b0e4e4d5d2 in zebra_if_dplane_ifp_handling zebra/interface.c:1949
>     #11 0x55b0e4e4fb89 in zebra_if_dplane_result zebra/interface.c:2268
>     #12 0x55b0e4fb9f26 in rib_process_dplane_results zebra/zebra_rib.c:4954
>     #13 0x7fa32476689c in event_call lib/event.c:1996
>     #14 0x7fa32463b7b2 in frr_run lib/libfrr.c:1232
>     #15 0x55b0e4e6c32a in main zebra/main.c:526
>     #16 0x7fa32424fd09 in __libc_start_main ../csu/libc-start.c:308
>
> previously allocated by thread T0 here:
>     #0 0x7fa324caa037 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
>     #1 0x7fa324668c4d in qcalloc lib/memory.c:105
>     #2 0x7fa32474bf33 in route_table_init_with_delegate lib/table.c:38
>     #3 0x7fa32474e73c in route_table_init lib/table.c:512
>     #4 0x55b0e4fbc353 in zebra_router_get_table zebra/zebra_router.c:137
>     #5 0x55b0e4fd4da0 in zebra_vrf_table_create zebra/zebra_vrf.c:358
>     #6 0x55b0e4fd3d30 in zebra_vrf_enable zebra/zebra_vrf.c:140
>     #7 0x7fa32476f9b2 in vrf_enable lib/vrf.c:286
>     #8 0x55b0e4e4af76 in interface_vrf_change zebra/interface.c:1533
>     #9 0x55b0e4e4d612 in zebra_if_dplane_ifp_handling zebra/interface.c:1968
>     #10 0x55b0e4e4fb89 in zebra_if_dplane_result zebra/interface.c:2268
>     #11 0x55b0e4fb9f26 in rib_process_dplane_results zebra/zebra_rib.c:4954
>     #12 0x7fa32476689c in event_call lib/event.c:1996
>     #13 0x7fa32463b7b2 in frr_run lib/libfrr.c:1232
>     #14 0x55b0e4e6c32a in main zebra/main.c:526
>     #15 0x7fa32424fd09 in __libc_start_main ../csu/libc-start.c:308

Fixes: d8612e6 ("zebra: Track tables allocated by vrf and cleanup")
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
2025-03-10 09:54:18 +01:00
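The ownership rule in the fix above can be sketched as follows; the struct and function names here are hypothetical and are not zebra's actual API:

```c
#include <stdint.h>

typedef uint32_t vrf_id_t;
#define VRF_DEFAULT ((vrf_id_t)0)

/* Hypothetical sketch of the fix: on VRF disable, keep the table alive
 * (Linux does not delete the RT table) and hand it back to the default
 * VRF; on VRF enable, adopt the pre-existing table instead of leaving
 * its routes owned by the default VRF. */
struct rt_table_sketch {
	vrf_id_t owner;
};

static void table_on_vrf_disable(struct rt_table_sketch *t)
{
	t->owner = VRF_DEFAULT; /* reassign, do not free */
}

static void table_on_vrf_enable(struct rt_table_sketch *t, vrf_id_t vrf)
{
	t->owner = vrf; /* change the table owner back to the VRF */
}
```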
Jafar Al-Gharaibeh 3f785c913d
Merge pull request #18348 from donaldsharp/topotest_startup_order
Topotest startup order
2025-03-08 21:52:16 -06:00
Donald Sharp 50708524c0 tests: Test that ipv6 forwarding state is reflected properly
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-07 22:25:13 -05:00
Donald Sharp 0f6b8e53f2 tests: Add tests for the new operational data added recently
ip-forwarding, ipv6-forwarding and mpls-forwarding were
not being looked at/tested for existence in the query
of frr.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-07 22:24:42 -05:00
Donald Sharp 9bf22f603e zebra: Add mpls-forwarding to yang state model
The mpls-forwarding state was missing from the model;
add it.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-07 22:24:42 -05:00
Donald Sharp 009f42dd5b tests: Have zebra startup look for the zserv.api socket
Ensure that the zserv.api socket is actually up and running
before moving onto other daemons after zebra.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-07 18:43:18 -05:00
Donald Sharp dd609bc069 tests: Allow mgmtd and zebra to fully come up before other daemons
Currently the topotest infrastructure is starting up daemons in the
order mgmtd, zebra, staticd, then everything else.

The problem that is happening, under heavy load, is that
zebra may not be fully started and when a daemon attempts
to connect to it, it will not be able to connect.
Some of the daemons do not have great retry mechanisms at all.
In addition our normal systemctl startup scripts actually
wait a small amount of time for zebra to be ready before
moving onto the other daemons.

Let's make the topotest startup a bit more nuanced
and have mgmtd fully up before starting zebra.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-07 18:43:18 -05:00
Mark Stapp 8f8d0923a7
Merge pull request #18335 from karthikeyav/bitfield_copy
lib: use memcpy in bf_copy
2025-03-07 17:10:54 -05:00
Jafar Al-Gharaibeh 826fed6f5b
Merge pull request #18344 from donaldsharp/fix_pytest_syntx_stuff
tests: bgp_evpn_route_map_match fix invalid escape sequence
2025-03-07 14:52:29 -06:00
Karthikeya Venkat Muppalla bb09c35592 lib: use memcpy in bf_copy
Use memcpy() in bf_copy() instead of copying word by word in a for loop.

Signed-off-by: Karthikeya Venkat Muppalla <kmuppalla@nvidia.com>
2025-03-07 11:59:20 -08:00
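A minimal sketch of the change, assuming a word-array bitfield layout (this is not FRR's actual bitfield type): a single memcpy over the backing array replaces the word-by-word for loop.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical bitfield backed by an array of words. */
struct bf_sketch {
	uint32_t *data; /* backing words */
	size_t nwords;  /* number of words allocated in data */
};

static void bf_copy_sketch(struct bf_sketch *dst, const struct bf_sketch *src)
{
	/* assumes dst->data holds at least src->nwords words */
	memcpy(dst->data, src->data, src->nwords * sizeof(src->data[0]));
	dst->nwords = src->nwords;
}
```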
Donald Sharp 633ef005bd zebra: Don't use MTYPE_TMP for l2 vni data
Convert over from MTYPE_TMP to MTYPE_L2_VNI as the
data type.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-07 11:50:41 -05:00
Donald Sharp b648479cb4 zebra: Declutter zebra_vxlan_if_add_update_vni
This function has equivalent code on both sides
of an if statement.  Let's consolidate it.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-07 11:48:05 -05:00
Donald Sharp 45e2f0fc6e zebra: malloc functions cannot fail
Let's remember that FRR's malloc functions can never fail (they abort
on allocation failure), so testing the result for NULL does nothing.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-07 11:48:05 -05:00
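For illustration, a sketch of an abort-on-failure allocator in the style this commit relies on; the name is hypothetical, and FRR's own wrappers differ in detail:

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of an allocation wrapper that can never return NULL: it
 * aborts on failure instead. With such a wrapper, any caller-side
 * "if (p == NULL)" check after allocation is dead code. */
static void *xmalloc_sketch(size_t size)
{
	void *p = malloc(size);

	if (!p) {
		fprintf(stderr, "out of memory allocating %zu bytes\n", size);
		abort();
	}
	return p;
}
```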
Mark Stapp 5af56c8bc9
Merge pull request #18338 from donaldsharp/documentation_typesafe
Documentation typesafe
2025-03-07 11:44:00 -05:00
Donald Sharp 4995d17237 tests: bgp_evpn_route_map_match fix invalid escape sequence
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-07 10:37:51 -05:00
Donatas Abraitis b221fd5f6c
Merge pull request #18337 from donaldsharp/revert_keepalive_connection
Revert "bgpd: Make keepalive pthread be connection based."
2025-03-07 09:03:15 +02:00
Y Bharath 0a7c43e706 tests: Corrected typo at path_attributes.py
Corrected typo at path_attributes.py

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-07 09:57:30 +05:30
Jafar Al-Gharaibeh aa197a930a
Merge pull request #18327 from donaldsharp/fixups_for_connection
bgpd: Fix dead code in bgp_route.c #1637664
2025-03-06 21:37:43 -06:00
Donald Sharp f1c75deb8e doc: The sbfd documentation was not being included
Add the sbfd documentation, such as it is, to the
developer documentation so that it can be read
by people.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-06 21:58:32 -05:00
Donald Sharp 4738cb51d2 doc: Developer documentation missing some build instructions
The building-frr-for-ubuntu2404 and building-doc were missing
from the compilation of developer documents.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-06 21:49:12 -05:00
Donald Sharp 39909f9fb9 doc: Add typesafe conversion examples
Try to give some good examples of various lists being
converted over to the typesafe way of doing things.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-06 21:43:10 -05:00
anlan_cs 2775cb77a8 lib: remove unused macro
This macro is unused and it leads to the compile error:

```
./ospfd/ospf_dump.h:78:60: error: pasting "OSPF_DEBUG_" and ""Global Graceful Restart - GR Mode\n"" does not give a valid preprocessing token
   78 | #define TERM_DEBUG_ON(a, b)      term_debug_ospf_ ## a |= (OSPF_DEBUG_ ## b)
```

Signed-off-by: anlan_cs <anlan_cs@126.com>
2025-03-07 10:17:18 +08:00
Donald Sharp aa736aa0aa Revert "bgpd: Make keepalive pthread be connection based."
This reverts commit 23bdaba147.
2025-03-06 20:50:41 -05:00
Jafar Al-Gharaibeh 19f8ed6aab
Merge pull request #18315 from gromit1811/bugfix/pim6_mld_vrf_fix
pimd: Fix PIM6 MLD VRF support (use recvmsg() pktinfo)
2025-03-06 12:42:03 -06:00
Donatas Abraitis 26d1e5ce17
Merge pull request #18214 from soumyar-roy/soumya/ra514nei
zebra: Bring up 514 BGP neighbor sessions
2025-03-06 20:15:19 +02:00
Donald Sharp 0758aa10a8 bgpd: Fix dead code in bgp_route.c #1637664
Coverity rightly points out that the worse pointer
cannot be null in this section of code.  Fix it.

Signed-off-by: Donald Sharp <donaldsharp72@gmail.com>
2025-03-06 09:59:19 -05:00
Mark Stapp f3a7077df0
Merge pull request #18313 from donaldsharp/log_always_documented
lib: Document --command-log-always in help
2025-03-06 09:17:52 -05:00
Mark Stapp 0641433aaf
Merge pull request #18319 from qlyoung/fix-overriding-automake-builtin-doc-targets
doc: don't override automake builtin targets
2025-03-06 07:55:34 -05:00
Quentin Young 5b1bae0c1b doc: don't override automake builtin targets
Automake generates default targets for `info`, `html`, `pdf` and
corresponding `install-info` and `install-html` targets to install the
artifacts generated by those rules. Prior to this change we are
overriding those targets which generates a warning.

The automake targets are designed to automatically build texinfo sources
without requiring user-specified rules. We do not have texinfo sources
so this functionality is not in use, but we are still overriding the
built in targets which is considered poor form. Automake has facilities
to modify the built in targets in the form of `-local` rules; this patch
renames the rules we had defined to use the `-local` ones.

The resulting targets generated by Automake look like this:

  html: html-am
  html-am: html-local

i.e. the final `html` target generated when using `html-local` to define
our custom rules is identical to the one we get by overriding the built
in `html` target. The same goes for the others.

So, the only effect this patch has is suppressing the warnings and
bringing us in line with Automake best practice.

Signed-off-by: Quentin Young <qlyoung@nvidia.com>
2025-03-05 12:25:31 -08:00
Russ White da2402adf4
Merge pull request #18195 from donaldsharp/more_connection_cleanup
More connection cleanup
2025-03-05 12:09:07 -05:00
Donald Sharp 4346f2ae69
Merge pull request #18310 from opensourcerouting/freebsd-snmp
configure.ac: fix sed failure on FreeBSD
2025-03-05 11:19:27 -05:00
Martin Buck 374c8dc4db pimd: Fix PIM6 MLD VRF support (use recvmsg() pktinfo)
When receiving MLD messages, prefer pktinfo over msghdr.msg_name for
determining the source interface. The latter is just the VRF master
interface in case of VRF and we need the true interface the packet was
received on instead.

Signed-off-by: Martin Buck <mb-tmp-tvguho.pbz@gromit.dyndns.org>
2025-03-05 16:27:23 +01:00
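As a sketch of the approach (the function name is hypothetical), the receive path can pull the true ingress interface out of the IPV6_PKTINFO control message rather than trusting msg_name:

```c
#define _GNU_SOURCE /* struct in6_pktinfo on glibc */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Walk the ancillary data of a recvmsg() result and return the
 * ifindex carried in IPV6_PKTINFO, or 0 if none is present. The
 * socket must have IPV6_RECVPKTINFO enabled for the kernel to attach
 * this control message. */
static int pktinfo_ifindex_sketch(struct msghdr *msg)
{
	struct cmsghdr *cmsg;

	for (cmsg = CMSG_FIRSTHDR(msg); cmsg != NULL;
	     cmsg = CMSG_NXTHDR(msg, cmsg)) {
		if (cmsg->cmsg_level == IPPROTO_IPV6 &&
		    cmsg->cmsg_type == IPV6_PKTINFO) {
			struct in6_pktinfo pi;

			memcpy(&pi, CMSG_DATA(cmsg), sizeof(pi));
			return (int)pi.ipi6_ifindex;
		}
	}
	return 0;
}
```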
Donald Sharp c83af86991 lib: Document --command-log-always in help
The --command-log-always was not being listed as a valid
option for when the operator issues a <daemon> --help
command line.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-03-05 10:10:48 -05:00
Z-Yivon 8088bc39eb isisd: IS-IS hello packets not sent with configured hello timer
Signed-off-by: Z-Yivon <202100460108@mail.sdu.edn.cn>
2025-03-05 20:08:14 +08:00
Christian Hopps 5cf533ba74
Merge pull request #18268 from donaldsharp/yang_correct_vrf_issue
lib: Correct handling of /frr-vrf:lib/vrf/state/active
2025-03-05 01:19:05 -05:00
Soumya Roy 10ff0d5e4c tests: add support for 514 unnumbered/v4/v6 BGP sessions
Signed-off-by: Soumya Roy <souroy@nvidia.com>
2025-03-05 06:16:06 +00:00
Soumya Roy fd80124cca tests: add support for bringing up 514 BGP neighbors
Signed-off-by: Soumya Roy <souroy@nvidia.com>
2025-03-05 06:16:06 +00:00
Soumya Roy 6a75d33b5c zebra: Bring up 514 BGP neighbor sessions
Issue:
When 514 interfaces/neighbors are configured, a socket error,
"Cannot allocate memory", occurs when back-to-back v6 RA messages are
sent over the socket. This prevents an interface from learning its
peer's link-local address. The socket error comes when we 1) try to
join the ICMPv6 all-routers multicast group back to back for all
interfaces, or 2) send back-to-back RAs for all interfaces.

Fix:
1) For the ICMPv6 join case, we check whether the interface has
already joined the all-routers group and join only if not. On failure,
we retry joining after a random delay between 1 ms and
ICMPV6_JOIN_TIMER_EXP_MS (100 ms).
2) For the RA case, batch the sending of RA messages using a wheel
timer.

Testing:
Monitor BGP sessions by running the `sh bgp summary` command

Before fix:
r1# sh bgp summary

IPv4 Unicast Summary:
BGP router identifier 192.168.1.1, local AS number 1001 VRF default vrf-id 0
BGP table version 0
RIB entries 0, using 0 bytes of memory
Peers 515, using 12 MiB of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
r1-eth0         4       1002        89        90        0    0    0 00:07:10            0        0 N/A
r1-eth1         4       1002        89        90        0    0    0 00:07:10            0        0 N/A
r1-eth2         4       1002        89        90        0    0    0 00:07:10            0        0 N/A
r1-eth3         4       1002        89        90        0    0    0 00:07:10            0        0 N/A
r1-eth4         4       1002        89        90        0    0    0 00:07:10            0        0 N/A
r1-eth5         4       1002        89        90        0    0    0 00:07:10            0        0 N/A

…..<snip>...
r1-eth252       4       1002        31        29        0    0    0 00:02:08            0        0 N/A
r1-eth253       4       1002        31        29        0    0    0 00:02:08            0        0 N/A
r1-eth254       4       1002        31        29        0    0    0 00:02:08            0        0 N/A
r1-eth255       4       1002        31        29        0    0    0 00:02:08            0        0 N/A
r1-eth256       4          0         0         0        0    0    0    never         Idle        0 N/A
r1-eth257       4          0         0         0        0    0    0    never         Idle        0 N/A
r1-eth258       4          0         0         0        0    0    0    never         Idle        0 N/A
r1-eth259       4          0         0         0        0    0    0    never         Idle        0 N/A
r1-eth260       4          0         0         0        0    0    0    never         Idle        0 N/A
……..<snip>…..
r1-eth511       4          0         0         0        0    0    0    never         Idle        0 N/A
r1-eth512       4          0         0         0        0    0    0    never         Idle        0 N/A
r1-eth513       4          0         0         0        0    0    0    never         Idle        0 N/A
r1-eth514       4          0         0         0        0    0    0    never         Idle        0 N/A
After fix:
r1# show bgp summary

IPv4 Unicast Summary:
BGP router identifier 192.168.1.1, local AS number 1001 VRF default vrf-id 0
BGP table version 0
RIB entries 0, using 0 bytes of memory
Peers 515, using 12 MiB of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
r1-eth0         4       1002        87        87        0    0    0 00:07:04            0        0 N/A
r1-eth1         4       1002        87        87        0    0    0 00:07:04            0        0 N/A
r1-eth2         4       1002        87        87        0    0    0 00:07:04            0        0 N/A
r1-eth3         4       1002        64        67        0    0    0 00:05:09            0        0 N/A
r1-eth4         4       1002        87        87        0    0    0 00:07:04            0        0 N/A
r1-eth5         4       1002        87        87        0    0    0 00:07:04            0        0 N/A
r1-eth6         4       1002        67        70        0    0    0 00:05:22            0        0 N/A
r1-eth7         4       1002        87        87        0    0    0 00:07:04            0        0 N/A
r1-eth8         4       1002        87        87        0    0    0 00:07:04            0        0 N/A
....
r1-eth499       4       1002        43        43        0    0    0 00:03:22            0        0 N/A
r1-eth500       4       1002        43        43        0    0    0 00:03:22            0        0 N/A
r1-eth501       4       1002        19        22        0    0    0 00:01:21            0        0 N/A
r1-eth502       4       1002        43        43        0    0    0 00:03:22            0        0 N/A
r1-eth503       4       1002        43        43        0    0    0 00:03:22            0        0 N/A
r1-eth504       4       1002        20        23        0    0    0 00:01:30            0        0 N/A
r1-eth505       4       1002        43        43        0    0    0 00:03:22            0        0 N/A
r1-eth506       4       1002        43        43        0    0    0 00:03:22            0        0 N/A
r1-eth507       4       1002        22        25        0    0    0 00:01:39            0        0 N/A
r1-eth508       4       1002        43        43        0    0    0 00:03:22            0        0 N/A
r1-eth509       4       1002        17        20        0    0    0 00:01:13            0        0 N/A
r1-eth510       4       1002        43        43        0    0    0 00:03:22            0        0 N/A
r1-eth511       4       1002        43        43        0    0    0 00:03:22            0        0 N/A
r1-eth512       4       1002        19        22        0    0    0 00:01:22            0        0 N/A
r1-eth513       4       1002        43        43        0    0    0 00:03:22            0        0 N/A
r1-eth514       4       1002        43        43        0    0    0 00:03:22            0        0 N/A

Signed-off-by: Soumya Roy <souroy@nvidia.com>
2025-03-05 06:15:56 +00:00
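The first half of the fix above (join once, retry with a randomized delay) can be sketched like this; the names are illustrative, not the daemon's actual API:

```c
#include <stdbool.h>
#include <stdlib.h>

#define ICMPV6_JOIN_TIMER_EXP_MS 100 /* upper bound from the commit */

/* Hypothetical per-interface state: attempt the all-routers join only
 * if it has not already succeeded; on failure, schedule a retry after
 * a random delay in [1, ICMPV6_JOIN_TIMER_EXP_MS] ms instead of
 * retrying back to back. */
struct if_join_sketch {
	bool joined_all_routers;
};

static bool needs_join(const struct if_join_sketch *ifp)
{
	return !ifp->joined_all_routers;
}

static unsigned int join_retry_delay_ms(void)
{
	return 1 + (unsigned int)(rand() % ICMPV6_JOIN_TIMER_EXP_MS);
}
```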
Christian Hopps e47a0557e5
Merge pull request #18293 from y-bharath14/srib-yang-v4
yang: Imported modules are not in use
2025-03-05 00:58:44 -05:00
Rafael Zalamena 2a7edc27d3 configure.ac: fix sed failure on FreeBSD
Simplify the sed expression to make sure it works on all platforms.

The previous expression failed on FreeBSD and it caused the SNMP_LIBS
variable to be empty. When SNMP_LIBS is empty it will cause binaries
and/or libraries to not link against the correct libraries.

Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
2025-03-04 16:54:18 -03:00
Y Bharath 9d3c89520f yang: Imported modules are not in use
Imported modules are not in use

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-03-04 21:46:08 +05:30
Russ White 93c2dc28bc
Merge pull request #18306 from LabNConsulting/aceelindem/bfd-log-session-changes
bfdd: Add "log-session-changes" command to BFD configuration and operational state via YANG Northbound API.
2025-03-04 09:48:36 -05:00
Russ White 6f48c7d785
Merge pull request #18301 from pguibert6WIND/vpn_prefix_aggregate_export_and_accept
Vpn prefix aggregate export and accept
2025-03-04 09:39:52 -05:00
Russ White 0b094a772c
Merge pull request #18253 from dksharp5/yang_zebra
Allow retrieval of v4/v6 forwarding state via NB
2025-03-04 09:25:24 -05:00
Russ White 21dc0a4d16
Merge pull request #17961 from opensourcerouting/fix/bgp_reject_as_aggregate
bgpd: Do not advertise aggregate routes to contributing ASes
2025-03-04 09:18:38 -05:00
Acee Lindem aa50f5ebb8 bfdd: Add BFD "log-session-changes" feature.
Add the BFD "log-session-changes" via the YANG and northbound API. Also
add the configured value to show and operational state.

Signed-off-by: Acee Lindem <acee@lindem.com>
2025-03-03 22:46:01 +00:00
Acee Lindem 4b0aeb6b29 yang: Add "log-session-changes" to BFD common session parameters.
Signed-off-by: Acee Lindem <acee@lindem.com>
2025-03-03 20:57:48 +00:00
Acee Lindem dc942c5043 doc: Add "log-session-changes" documentation.
Signed-off-by: Acee Lindem <acee@lindem.com>
2025-03-03 20:23:13 +00:00
Acee Lindem f5d1fe1af1 tests: Add "log-session-changes" to bfd_topo1 r1 and r2 configs.
Signed-off-by: Acee Lindem <acee@lindem.com>
2025-03-03 20:21:55 +00:00
Philippe Guibert f8438393ac topotests: add a test to configure aggregated summary-only prefix on VPN
That configured aggregated prefix should be present, but all other
suppressed prefixes should not be exported.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-03 20:43:11 +01:00
Philippe Guibert bf15730267 bgpd: fix syncs suppressed prefixes in VPN environments
When using the summary-only option for aggregated prefixes, the
suppressed prefixes are nevertheless exported as VPN prefixes, whereas
they should not be.

> r1# show bgp vrf vrf1 ipv4
> [..]
>  *>  172.31.1.0/24    0.0.0.0                  0         32768 ?
>  s>  172.31.1.1/32    0.0.0.0                  0         32768 ?
>  s>  172.31.1.2/32    0.0.0.0                  0         32768 ?
>  s>  172.31.1.3/32    0.0.0.0                  0         32768 ?
> [..]
> r1#
>
> r1# show bgp ipv4 vpn
> [..]
>  *>  172.31.1.0/24    0.0.0.0@4<               0         32768 ?
>     UN=0.0.0.0 EC{52:100} label=101 type=bgp, subtype=5
>  *>  172.31.1.1/32    0.0.0.0@4<               0         32768 ?
>     UN=0.0.0.0 EC{52:100} label=101 type=bgp, subtype=5
>  *>  172.31.1.2/32    0.0.0.0@4<               0         32768 ?
>     UN=0.0.0.0 EC{52:100} label=101 type=bgp, subtype=5
>  *>  172.31.1.3/32    0.0.0.0@4<               0         32768 ?
>     UN=0.0.0.0 EC{52:100} label=101 type=bgp, subtype=5
> [..]
> r1#

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-03 20:43:11 +01:00
Philippe Guibert e0f585fcab topotests: add a test to unconfigure aggregated prefix on VPN
That test will ensure the associated VPN prefix is removed.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-03 20:43:11 +01:00
Philippe Guibert 32088c43a8 bgpd: fix remove vpn aggregated prefix upon unconfiguration
When unconfiguring an aggregated prefix, the VPN prefix is not
removed. Fix this by refreshing the VPN leak when the aggregated route
is or is not available.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-03 20:43:11 +01:00
Philippe Guibert bccf1e5447 topotests: add vpn test to control aggregated prefix is exported
Add a test in bgp_vpnv4_ebgp test to control that the aggregated prefix
is exported and selected as a VPN prefix.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-03 20:43:08 +01:00
Philippe Guibert 1cbbc94e55 bgpd: fix export, and selects l3vpn aggregated prefix
On an L3VPN setup, an aggregated prefix cannot be exported and
selected. The example below illustrates the 172.31.0.0/24 aggregated
prefix, which is valid as a VRF prefix but invalid as a VPN prefix:

> r1# show bgp ipv4 vpn 172.31.0.0/24
> BGP routing table entry for 444:1:172.31.0.0/24, version 0
> not allocated
> Paths: (1 available, no best path)
>   Not advertised to any peer
>   Local, (aggregated by 65500 192.0.2.1)
>     0.0.0.0 from 0.0.0.0 (192.0.2.1) vrf vrf1(4) announce-nh-self
>       Origin incomplete, metric 0, weight 32768, invalid, sourced,
local, atomic-aggregate
>       Extended Community: RT:52:100
>       Originator: 192.0.2.1
>       Remote label: 101
>       Last update: Mon Mar  3 14:35:04 2025
> r1# show bgp vrf vrf1 ipv4 172.31.0.0/24
> BGP routing table entry for 172.31.0.0/24, version 1
> Paths: (1 available, best #1, vrf vrf1)
>   Not advertised to any peer
>   Local, (aggregated by 65500 192.0.2.1)
>     0.0.0.0 from 0.0.0.0 (192.0.2.1)
>       Origin incomplete, metric 0, weight 32768, valid, aggregated,
local, atomic-aggregate, best (First path received)
>       Last update: Mon Mar  3 14:35:03 2025
> r1#

Actually, the aggregated prefix nexthop is considered, and 0.0.0.0 is
an invalid nexthop.

> r1# show bgp vrf vrf1 nexthop
> Current BGP nexthop cache:
>  0.0.0.0 invalid, #paths 1
>   Is not Registered
>   Last update: Thu Feb 13 18:33:43 2025

Fix this by considering the L3VPN prefix selected, if the VRF prefix
is selected too.

Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
2025-03-03 18:51:18 +01:00
Mark Stapp b66145b8ca
Merge pull request #18030 from fdumontet6WIND/mem_alloc_stream
zebra: reduce memory usage by streams when redistributing routes
2025-03-03 11:09:47 -05:00
Donald Sharp 21a8f5277b
Merge pull request #18294 from Orange-OpenSource/isisd
isisd: Correct edge insertion into TED
2025-03-03 07:37:42 -05:00
Olivier Dugeon 605fc1dd64 isisd: Correct edge insertion into TED
Edges are not correctly linked to vertices during LSP processing. In
lsp_to_edge_cb(), once an edge is created or updated from the LSP TLVs,
the code tries to link the edge to its destination vertex. When the
reverse edge is not found, the code tries to find a destination vertex
to link to, but the sys_id used for this operation corresponds to the
source vertex. As a result, the edge is attached to the vertex as both
source and destination. When Traffic Engineering is stopped, the TED is
deleted, which results in a double free of the edge attributes. This
causes a crash when attempting to free the extended admin group the
second time.

This patch removes the wrong code that linked the edge to the source
vertex twice.

Signed-off-by: Olivier Dugeon <olivier.dugeon@orange.com>
2025-03-03 10:16:55 +01:00
anlan_cs 3af15029db ospfd: cosmetic change for one command
Just use the same style for all `DEFPY`s. It is a cosmetic change and
doesn't affect the actual function.

Signed-off-by: anlan_cs <anlan_cs@126.com>
2025-03-03 10:12:01 +08:00
Donna Sharp 9a073f663f zebra: allow retrieval of ipv6 forwarding state
Allow the retrieval of ipv6 forwarding state from
within the yang framework, as it was missing.

Signed-off-by: Donna Sharp <dksharp5@gmail.com>
2025-03-01 14:45:18 -05:00
Donna Sharp 453154497e zebra: allow retrieval of ip forwarding state
There was no ability to retrieve the ip-forwarding state
of zebra.  Add this to yang under the state container.

Signed-off-by: Donna Sharp <dksharp5@gmail.com>
2025-03-01 14:39:07 -05:00
Donald Sharp 23bdaba147 bgpd: Make keepalive pthread be connection based.
Again instead of making the keepalives be peer based
use the connection to make it happen.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-02-28 10:28:50 -05:00
Donald Sharp 2cd1d00dde bgpd: Convert bgp_keepalive_send to use a connection
The peer is eventually going to have an incoming and
an outgoing connection.  Let's send the data based
upon the connection, not the peer.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-02-28 10:28:50 -05:00
Donald Sharp f90af8abc3 bgpd: Rename peer1 to just peer
The bgp_accept function was calling the existing
peer data structure peer1 for some reason.  Let's
just call it peer instead of peer1.

Author's Note:  I am changing the bgp_accept function
in this manner because I find it incredibly confusing
remembering what is what direction and all my other
attempts at getting this straight have caused real
problems.  So I am resorting to doing really small
transformational changes at a time.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-02-28 10:28:50 -05:00
Donald Sharp f97429e665 bgpd: Call the doppelganger the doppelganger
Currently the code in bgp_accept is calling the
doppelganger `peer`.  This is confusing with
peer and peer1.  Let's just call it doppelganger.

Author's Note:  I am changing the bgp_accept function
in this manner because I find it incredibly confusing
remembering what is what direction and all my other
attempts at getting this straight have caused real
problems.  So I am resorting to doing really small
transformational changes at a time.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-02-28 10:28:50 -05:00
Donald Sharp b4e13db069 bgpd: Call newly created dynamic_peer appropriately
The dynamic peer being created is being called peer1
let's call it dynamic_peer instead.  This will make
what is being done clearer for future developers.

Author's Note:  I am changing the bgp_accept function
in this manner because I find it incredibly confusing
remembering what is what direction and all my other
attempts at getting this straight have caused real
problems.  So I am resorting to doing really small
transformational changes at a time.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-02-28 10:28:50 -05:00
Donald Sharp c32fdcdc35 bgpd: Change existing connection to be called connection
bgp_accept looks up the peer data structure.  The found
one represents the peer data structure that is created
when configuration is created.  This connection is being
called connection1.  Let's rename this to connection
to reduce some confusion.

Author's Note:  I am changing the bgp_accept function
in this manner because I find it incredibly confusing
remembering what is what direction and all my other
attempts at getting this straight have caused real
problems.  So I am resorting to doing really small
transformational changes at a time.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-02-28 10:28:50 -05:00
Donald Sharp 4f5d23402d bgpd: Call the new doppelganger connection incoming
In bgp_accept, the newly created doppelganger is
accepting a connection and setting it up to work
properly.  For this incoming connection let's call
it incoming as well.

Author's Note:  I am changing the bgp_accept function
in this manner because I find it incredibly confusing
remembering what is what direction and all my other
attempts at getting this straight have caused real
problems.  So I am resorting to doing really small
transformational changes at a time.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-02-28 10:28:50 -05:00
Donald Sharp b76d5c07c5 bgpd: Call dynamic peer incoming connection incoming
The bgp_accept code calls the different connections
connection and connection1.  Frankly this is confusing
and hard to keep track of what we are talking about
since they are poorly named.  Let's start naming
these variables things that make logical sense.

Author's Note:  I am changing the bgp_accept function
in this manner because I find it incredibly confusing
remembering what is what direction and all my other
attempts at getting this straight have caused real
problems.  So I am resorting to doing really small
transformational changes at a time.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-02-28 10:28:50 -05:00
Donald Sharp 543fc6dc56 bgpd: Add connection direction to debug logs
Currently the incoming and outgoing connections mix up their
logs and there is no way to tell which direction is being
talked about when both are operating.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-02-28 10:28:50 -05:00
Y Bharath a7e0bfe4e9 tests: Catch specific exceptions
Catch specific exceptions and handle them accordingly

Signed-off-by: y-bharath14 <y.bharath@samsung.com>
2025-02-27 21:31:41 +05:30
Francois Dumontet 8c9b007a0c zebra: reduce memory usage by streams when redistributing routes
The required stream size is evaluated as a fixed part plus a
variable part; the variable part depends on the number of nexthops.

Signed-off-by: Francois Dumontet <francois.dumontet@6wind.com>
2025-02-27 16:51:05 +01:00
Donald Sharp ad04988ad4 lib: Correct handling of /frr-vrf:lib/vrf/state/active
This value in the yang tree was returning NULL
when the state of the vrf was not active.  It should
return false.

Before:

eva# show mgmt get-data /frr-vrf:lib/vrf[name="vrf1"]
{
  "frr-vrf:lib": {
    "vrf": [
      {
        "name": "vrf1",
        "state": {
          "id": 4294967295
        }
eva# show mgmt get-data /frr-vrf:lib/vrf[name="BLUE"]
{
  "frr-vrf:lib": {
    "vrf": [
      {
        "name": "BLUE",
        "state": {
          "id": 68,
          "active": true
        },

After:

eva# show mgmt get-data /frr-vrf:lib/vrf[name="vrf1"]
{
  "frr-vrf:lib": {
    "vrf": [
      {
        "name": "vrf1",
        "state": {
          "id": 4294967295,
          "active": false
        }

eva# show mgmt get-data /frr-vrf:lib/vrf[name="BLUE"]
{
  "frr-vrf:lib": {
    "vrf": [
      {
        "name": "BLUE",
        "state": {
          "id": 68,
          "active": true
        },

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2025-02-26 16:01:04 -05:00
Donatas Abraitis 28178dde4c doc: Add more details for bgp reject-as-sets command
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-01-31 09:43:00 +02:00
Donatas Abraitis 25a37e9367 tests: Check if aggregated prefix is not advertised to contributing ASes
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-01-31 09:43:00 +02:00
Donatas Abraitis 925b365a87 bgpd: Do not advertise aggregate routes to contributing ASes
draft-ietf-idr-deprecate-as-set-confed-set-16 defines that we MUST NOT
advertise an aggregate prefix to the contributing ASes.

Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
2025-01-31 09:43:00 +02:00
575 changed files with 24354 additions and 5731 deletions


@ -1,5 +1,5 @@
[MASTER]
init-hook="import sys; sys.path.insert(0, '..')"
init-hook="import sys; sys.path.extend(['..', 'tests/topotests']);"
signature-mutators=common_config.retry,retry
[FORMAT]


@ -185,7 +185,6 @@ include grpc/subdir.am
include tools/subdir.am
include mgmtd/subdir.am
include rustlibd/subdir.am
include bgpd/subdir.am
include bgpd/rfp-example/librfp/subdir.am
@ -286,7 +285,6 @@ EXTRA_DIST += \
qpb/Makefile \
ripd/Makefile \
ripngd/Makefile \
rustlibd/Makefile \
staticd/Makefile \
tests/Makefile \
tools/Makefile \


@ -310,7 +310,8 @@ DEFPY (babel_set_wired,
babel_ifp = babel_get_if_nfo(ifp);
assert (babel_ifp != NULL);
babel_set_wired_internal(babel_ifp, no ? 0 : 1);
if ((CHECK_FLAG(babel_ifp->flags, BABEL_IF_WIRED) ? 1 : 0) != (no ? 0 : 1))
babel_set_wired_internal(babel_ifp, no ? 0 : 1);
return CMD_SUCCESS;
}
@ -328,7 +329,8 @@ DEFPY (babel_set_wireless,
babel_ifp = babel_get_if_nfo(ifp);
assert (babel_ifp != NULL);
babel_set_wired_internal(babel_ifp, no ? 1 : 0);
if ((CHECK_FLAG(babel_ifp->flags, BABEL_IF_WIRED) ? 1 : 0) != (no ? 1 : 0))
babel_set_wired_internal(babel_ifp, no ? 1 : 0);
return CMD_SUCCESS;
}
@ -364,12 +366,19 @@ DEFPY (babel_set_hello_interval,
{
VTY_DECLVAR_CONTEXT(interface, ifp);
babel_interface_nfo *babel_ifp;
unsigned int old_interval;
babel_ifp = babel_get_if_nfo(ifp);
assert (babel_ifp != NULL);
old_interval = babel_ifp->hello_interval;
babel_ifp->hello_interval = no ?
BABEL_DEFAULT_HELLO_INTERVAL : hello_interval;
if (old_interval != babel_ifp->hello_interval){
set_timeout(&babel_ifp->hello_timeout, babel_ifp->hello_interval);
send_hello(ifp);
}
return CMD_SUCCESS;
}
@ -746,8 +755,10 @@ babel_interface_close_all(void)
}
/* Disable babel redistribution */
for (type = 0; type < ZEBRA_ROUTE_MAX; type++) {
zclient_redistribute (ZEBRA_REDISTRIBUTE_DELETE, zclient, AFI_IP, type, 0, VRF_DEFAULT);
zclient_redistribute (ZEBRA_REDISTRIBUTE_DELETE, zclient, AFI_IP6, type, 0, VRF_DEFAULT);
zclient_redistribute(ZEBRA_REDISTRIBUTE_DELETE, babel_zclient, AFI_IP, type, 0,
VRF_DEFAULT);
zclient_redistribute(ZEBRA_REDISTRIBUTE_DELETE, babel_zclient, AFI_IP6, type, 0,
VRF_DEFAULT);
}
}
@ -965,6 +976,7 @@ DEFUN (show_babel_route,
{
struct route_stream *routes = NULL;
struct xroute_stream *xroutes = NULL;
routes = route_stream(0);
if(routes) {
while(1) {


@ -19,6 +19,7 @@ Copyright 2011 by Matthieu Boutier and Juliusz Chroboczek
#include "memory.h"
#include "libfrr.h"
#include "lib_errors.h"
#include "plist.h"
#include "babel_main.h"
#include "babeld.h"
@ -313,6 +314,7 @@ babel_exit_properly(void)
debugf(BABEL_DEBUG_COMMON, "Done.");
vrf_terminate();
prefix_list_reset();
frr_fini();
exit(0);


@ -19,7 +19,7 @@ void babelz_zebra_init(void);
/* we must use a pointer because of zclient.c's functions (new, free). */
struct zclient *zclient;
struct zclient *babel_zclient;
/* Debug types */
static const struct {
@ -94,9 +94,10 @@ DEFUN (babel_redistribute_type,
}
if (!negate)
zclient_redistribute (ZEBRA_REDISTRIBUTE_ADD, zclient, afi, type, 0, VRF_DEFAULT);
zclient_redistribute(ZEBRA_REDISTRIBUTE_ADD, babel_zclient, afi, type, 0, VRF_DEFAULT);
else {
zclient_redistribute (ZEBRA_REDISTRIBUTE_DELETE, zclient, afi, type, 0, VRF_DEFAULT);
zclient_redistribute(ZEBRA_REDISTRIBUTE_DELETE, babel_zclient, afi, type, 0,
VRF_DEFAULT);
/* perhaps should we remove xroutes having the same type... */
}
return CMD_SUCCESS;
@ -230,11 +231,11 @@ static zclient_handler *const babel_handlers[] = {
void babelz_zebra_init(void)
{
zclient = zclient_new(master, &zclient_options_default, babel_handlers,
array_size(babel_handlers));
zclient_init(zclient, ZEBRA_ROUTE_BABEL, 0, &babeld_privs);
babel_zclient = zclient_new(master, &zclient_options_default, babel_handlers,
array_size(babel_handlers));
zclient_init(babel_zclient, ZEBRA_ROUTE_BABEL, 0, &babeld_privs);
zclient->zebra_connected = babel_zebra_connected;
babel_zclient->zebra_connected = babel_zebra_connected;
install_element(BABEL_NODE, &babel_redistribute_type_cmd);
install_element(ENABLE_NODE, &debug_babel_cmd);
@ -248,6 +249,6 @@ void babelz_zebra_init(void)
void
babel_zebra_close_connexion(void)
{
zclient_stop(zclient);
zclient_free(zclient);
zclient_stop(babel_zclient);
zclient_free(babel_zclient);
}


@ -8,7 +8,7 @@ Copyright 2011 by Matthieu Boutier and Juliusz Chroboczek
#include "vty.h"
extern struct zclient *zclient;
extern struct zclient *babel_zclient;
void babelz_zebra_init(void);
void babel_zebra_close_connexion(void);


@ -108,8 +108,8 @@ babel_config_write (struct vty *vty)
/* list redistributed protocols */
for (afi = AFI_IP; afi <= AFI_IP6; afi++) {
for (i = 0; i < ZEBRA_ROUTE_MAX; i++) {
if (i != zclient->redist_default &&
vrf_bitmap_check(&zclient->redist[afi][i], VRF_DEFAULT)) {
if (i != babel_zclient->redist_default &&
vrf_bitmap_check(&babel_zclient->redist[afi][i], VRF_DEFAULT)) {
vty_out(vty, " redistribute %s %s\n",
(afi == AFI_IP) ? "ipv4" : "ipv6",
zebra_route_string(i));
@ -183,6 +183,10 @@ static void babel_read_protocol(struct event *thread)
flog_err_sys(EC_LIB_SOCKET, "recv: %s", safe_strerror(errno));
}
} else {
if(ntohs(sin6.sin6_port) != BABEL_PORT) {
return;
}
FOR_ALL_INTERFACES(vrf, ifp) {
if(!if_up(ifp))
continue;
@ -212,7 +216,8 @@ static void babel_init_routing_process(struct event *thread)
babel_main_loop(thread);/* this function self-add to the t_update thread */
}
/* fill "myid" with an unique id (only if myid != {0}). */
/* fill "myid" with an unique id (only if myid != {0} and myid != {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}). */
static void
babel_get_myid(void)
{
@ -222,7 +227,7 @@ babel_get_myid(void)
int i;
/* if we already have an id (from state file), we return. */
if (memcmp(myid, zeroes, 8) != 0) {
if (memcmp(myid, zeroes, 8) != 0 && memcmp(myid, ones, 8) != 0) {
return;
}


@ -21,6 +21,8 @@ Copyright 2011 by Matthieu Boutier and Juliusz Chroboczek
#undef MAX
#undef MIN
#define BABEL_PORT 6696
#define MAX(x,y) ((x)<=(y)?(y):(x))
#define MIN(x,y) ((x)<=(y)?(x):(y))


@ -176,8 +176,7 @@ zebra_route(int add, int family, const unsigned char *pref, unsigned short plen,
debugf(BABEL_DEBUG_ROUTE, "%s route (%s) to zebra",
add ? "adding" : "removing",
(family == AF_INET) ? "ipv4" : "ipv6");
return zclient_route_send (add ? ZEBRA_ROUTE_ADD : ZEBRA_ROUTE_DELETE,
zclient, &api);
return zclient_route_send(add ? ZEBRA_ROUTE_ADD : ZEBRA_ROUTE_DELETE, babel_zclient, &api);
}
int


@ -27,6 +27,7 @@ int split_horizon = 1;
unsigned short myseqno = 0;
#define UNICAST_BUFSIZE 1024
#define RESERVED 0
static int unicast_buffered = 0;
static unsigned char *unicast_buffer = NULL;
struct neighbour *unicast_neighbour = NULL;
@ -52,7 +53,17 @@ static const unsigned char tlv_min_length[MESSAGE_MAX + 1] =
static bool
known_ae(int ae)
{
return ae <= 4;
return ae <= 3;
}
static inline bool
is_all_zero(const unsigned char *data, int len) {
for (int j = 0; j < len; j++) {
if (data[j] != 0) {
return false;
}
}
return true;
}
/* Parse a network prefix, encoded in the somewhat baroque compressed
@ -151,7 +162,11 @@ static bool parse_update_subtlv(const unsigned char *a, int alen,
"Received Mandatory bit set but this FRR version is not prepared to handle it at this point");
return true;
} else if (type == SUBTLV_PADN) {
/* Nothing. */
if (!is_all_zero(a + i + 2, len)) {
debugf(BABEL_DEBUG_COMMON,
"Received pad%d with non zero MBZ field.",
len);
}
} else if (type == SUBTLV_DIVERSITY) {
if (len > DIVERSITY_HOPS) {
flog_err(
@ -214,7 +229,11 @@ parse_hello_subtlv(const unsigned char *a, int alen,
"Received subtlv with Mandatory bit, this version of FRR is not prepared to handle this currently");
return -2;
} else if (type == SUBTLV_PADN) {
/* Nothing to do. */
if (!is_all_zero(a + i + 2, len)) {
debugf(BABEL_DEBUG_COMMON,
"Received pad%d with non zero MBZ field.",
len);
}
} else if (type == SUBTLV_TIMESTAMP) {
if (len >= 4) {
DO_NTOHL(*hello_send_us, a + i + 2);
@ -261,7 +280,11 @@ parse_ihu_subtlv(const unsigned char *a, int alen,
}
if(type == SUBTLV_PADN) {
/* Nothing to do. */
if (!is_all_zero(a + i + 2, len)) {
debugf(BABEL_DEBUG_COMMON,
"Received pad%d with non zero MBZ field.",
len);
}
} else if(type == SUBTLV_TIMESTAMP) {
if(len >= 8) {
DO_NTOHL(*hello_send_us, a + i + 2);
@ -290,7 +313,7 @@ parse_request_subtlv(int ae, const unsigned char *a, int alen,
int have_src_prefix = 0;
while(i < alen) {
type = a[0];
type = a[i];
if(type == SUBTLV_PAD1) {
i++;
continue;
@ -441,6 +464,14 @@ parse_packet(const unsigned char *from, struct interface *ifp,
return;
}
if (v4mapped(from)) {
memcpy(v4_nh, from, 16);
have_v4_nh = 1;
} else {
memcpy(v6_nh, from, 16);
have_v6_nh = 1;
}
i = 0;
while(i < bodylen) {
message = packet + 4 + i;
@ -454,12 +485,23 @@ parse_packet(const unsigned char *from, struct interface *ifp,
len = message[1];
if(type == MESSAGE_PADN) {
if (!is_all_zero(message + 2, len)) {
debugf(BABEL_DEBUG_COMMON,
"Received pad%d with non zero MBZ field.",
len);
}
debugf(BABEL_DEBUG_COMMON,"Received pad%d from %s on %s.",
len, format_address(from), ifp->name);
} else if(type == MESSAGE_ACK_REQ) {
unsigned short nonce, interval;
unsigned short nonce, interval, Reserved;
DO_NTOHS(Reserved, message + 2);
DO_NTOHS(nonce, message + 4);
DO_NTOHS(interval, message + 6);
if (Reserved != RESERVED) {
debugf(BABEL_DEBUG_COMMON,"Received ack-req (%04X %d) with non zero Reserved from %s on %s.",
nonce, interval, format_address(from), ifp->name);
goto done;
}
debugf(BABEL_DEBUG_COMMON,"Received ack-req (%04X %d) from %s on %s.",
nonce, interval, format_address(from), ifp->name);
send_ack(neigh, nonce, interval);
@ -520,8 +562,15 @@ parse_packet(const unsigned char *from, struct interface *ifp,
}
} else if(type == MESSAGE_IHU) {
unsigned short txcost, interval;
unsigned char Reserved;
unsigned char address[16];
int rc;
Reserved = message[3];
if (Reserved != RESERVED) {
debugf(BABEL_DEBUG_COMMON,"Received ihu with non zero Reserved from %s on %s.",
format_address(from), ifp->name);
goto done;
}
DO_NTOHS(txcost, message + 4);
DO_NTOHS(interval, message + 6);
rc = network_address(message[2], message + 8, len - 6, address);
@ -552,6 +601,13 @@ parse_packet(const unsigned char *from, struct interface *ifp,
} else if(type == MESSAGE_NH) {
unsigned char nh[16];
int rc;
if(message[2] != 1 && message[2] != 3) {
debugf(BABEL_DEBUG_COMMON,"Received NH with incorrect AE %d.",
message[2]);
have_v4_nh = 0;
have_v6_nh = 0;
goto fail;
}
rc = network_address(message[2], message + 4, len - 2, nh);
if(rc <= 0) {
have_v4_nh = 0;
@ -576,6 +632,20 @@ parse_packet(const unsigned char *from, struct interface *ifp,
int rc, parsed_len;
bool ignore_update = false;
// Basic sanity check on length
if (len < 10) {
if (len < 2 || (message[3] & 0x80)) {
have_v4_prefix = have_v6_prefix = 0;
}
goto fail;
}
if(!known_ae(message[2])) {
debugf(BABEL_DEBUG_COMMON,"Received update with unknown AE %d. Ignoring.",
message[2]);
goto done;
}
DO_NTOHS(interval, message + 6);
DO_NTOHS(seqno, message + 8);
DO_NTOHS(metric, message + 10);
@ -614,7 +684,7 @@ parse_packet(const unsigned char *from, struct interface *ifp,
}
have_router_id = 1;
}
if(!have_router_id && message[2] != 0) {
if(metric < INFINITY && !have_router_id && message[2] != 0) {
flog_err(EC_BABEL_PACKET,
"Received prefix with no router id.");
goto fail;
@ -626,9 +696,15 @@ parse_packet(const unsigned char *from, struct interface *ifp,
format_address(from), ifp->name);
if(message[2] == 0) {
if(metric < 0xFFFF) {
if(metric < INFINITY) {
flog_err(EC_BABEL_PACKET,
"Received wildcard update with finite metric.");
"Received wildcard update with finite metric.");
goto done;
}
// Add check for Plen and Omitted
if(message[4] != 0 || message[5] != 0) {
flog_err(EC_BABEL_PACKET,
"Received wildcard retraction with non-zero Plen or Omitted.");
goto done;
}
retract_neighbour_routes(neigh);
@ -693,6 +769,10 @@ parse_packet(const unsigned char *from, struct interface *ifp,
memcpy(src_prefix, zeroes, 16);
src_plen = 0;
}
if(message[6] == 0) {
debugf(BABEL_DEBUG_COMMON, "Received seqno request with invalid hop count 0");
goto done;
}
rc = parse_request_subtlv(message[2], message + 4 + rc,
len - 2 - rc, src_prefix, &src_plen);
if(rc < 0)
@ -706,6 +786,11 @@ parse_packet(const unsigned char *from, struct interface *ifp,
"Received source-specific wildcard request.");
goto done;
}
if(message[3] != 0) {
flog_err(EC_BABEL_PACKET,
"Ignoring request with AE=0 and non-zero Plen");
goto done;
}
/* If a neighbour is requesting a full route dump from us,
we might as well send it an IHU. */
send_ihu(neigh, NULL);
@ -721,8 +806,14 @@ parse_packet(const unsigned char *from, struct interface *ifp,
send_update(neigh->ifp, 0, prefix, plen);
}
} else if(type == MESSAGE_MH_REQUEST) {
unsigned char prefix[16], plen;
unsigned char prefix[16], plen, Reserved;
unsigned short seqno;
Reserved = message[7];
if (Reserved != RESERVED) {
debugf(BABEL_DEBUG_COMMON,"Received request with non zero Reserved from %s on %s.",
format_address(from), ifp->name);
goto done;
}
int rc;
DO_NTOHS(seqno, message + 4);
rc = network_prefix(message[2], message[3], 0,
@ -734,6 +825,10 @@ parse_packet(const unsigned char *from, struct interface *ifp,
format_prefix(prefix, plen),
format_address(from), ifp->name,
format_eui64(message + 8), seqno);
if(message[6] == 0) {
debugf(BABEL_DEBUG_COMMON, "Received request with invalid hop count 0");
goto done;
}
handle_request(neigh, prefix, plen, message[6], seqno, message + 8);
} else {
debugf(BABEL_DEBUG_COMMON,"Received unknown packet type %d from %s on %s.",
@ -1905,8 +2000,14 @@ handle_request(struct neighbour *neigh, const unsigned char *prefix,
/* We were about to forward a request to its requestor. Try to
find a different neighbour to forward the request to. */
struct babel_route *other_route;
/* First try feasible routes as required by RFC */
other_route = find_best_route(prefix, plen, 1, neigh);
other_route = find_best_route(prefix, plen, 0, neigh);
if(!other_route || route_metric(other_route) >= INFINITY) {
/* If no feasible route found, try non-feasible routes */
other_route = find_best_route(prefix, plen, 0, neigh);
}
if(other_route && route_metric(other_route) < INFINITY)
successor = other_route->neigh;
}


@ -1078,17 +1078,26 @@ route_lost(struct source *src, unsigned oldmetric)
new_route = find_best_route(src->prefix, src->plen, 1, NULL);
if(new_route) {
consider_route(new_route);
} else if(oldmetric < INFINITY) {
/* Avoid creating a blackhole. */
send_update_resend(NULL, src->prefix, src->plen);
/* If the route was usable enough, try to get an alternate one.
If it was not, we could be dealing with oscillations around
the value of INFINITY. */
if(oldmetric <= INFINITY / 2)
} else {
struct babel_route *unfeasible = find_best_route(src->prefix, src->plen, 0, NULL);
if(unfeasible && !route_expired(unfeasible)) {
/* MUST send seqno request when we have unexpired unfeasible routes */
send_request_resend(NULL, src->prefix, src->plen,
src->metric >= INFINITY ?
src->seqno : seqno_plus(src->seqno, 1),
seqno_plus(src->seqno, 1),
src->id);
} else if(oldmetric < INFINITY) {
/* Avoid creating a blackhole. */
send_update_resend(NULL, src->prefix, src->plen);
/* If the route was usable enough, try to get an alternate one.
If it was not, we could be dealing with oscillations around
the value of INFINITY. */
if(oldmetric <= INFINITY / 2)
send_request_resend(NULL, src->prefix, src->plen,
src->metric >= INFINITY ?
src->seqno : seqno_plus(src->seqno, 1),
src->id);
}
}
}

View file

@ -38,7 +38,6 @@ struct babel_route {
struct route_stream;
extern struct babel_route **routes;
extern int kernel_metric;
extern enum babel_diversity diversity_kind;
extern int diversity_factor;


@ -79,6 +79,7 @@ static void bfd_profile_set_default(struct bfd_profile *bp)
bp->detection_multiplier = BFD_DEFDETECTMULT;
bp->echo_mode = false;
bp->passive = false;
bp->log_session_changes = false;
bp->minimum_ttl = BFD_DEF_MHOP_TTL;
bp->min_echo_rx = BFD_DEF_REQ_MIN_ECHO_RX;
bp->min_echo_tx = BFD_DEF_DES_MIN_ECHO_TX;
@ -210,6 +211,12 @@ void bfd_session_apply(struct bfd_session *bs)
else
bfd_set_shutdown(bs, bs->peer_profile.admin_shutdown);
/* Toggle 'no log-session-changes' if default value. */
if (bs->peer_profile.log_session_changes == false)
bfd_set_log_session_changes(bs, bp->log_session_changes);
else
bfd_set_log_session_changes(bs, bs->peer_profile.log_session_changes);
/* If session interval changed negotiate new timers. */
if (bs->ses_state == PTM_BFD_UP
&& (bs->timers.desired_min_tx != min_tx
@ -574,6 +581,9 @@ void ptm_bfd_sess_up(struct bfd_session *bfd)
zlog_debug("state-change: [%s] %s -> %s",
bs_to_string(bfd), state_list[old_state].str,
state_list[bfd->ses_state].str);
if (CHECK_FLAG(bfd->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES))
zlog_notice("Session-Change: [%s] %s -> %s", bs_to_string(bfd),
state_list[old_state].str, state_list[bfd->ses_state].str);
}
}
@ -621,6 +631,11 @@ void ptm_bfd_sess_dn(struct bfd_session *bfd, uint8_t diag)
bs_to_string(bfd), state_list[old_state].str,
state_list[bfd->ses_state].str,
get_diag_str(bfd->local_diag));
if (CHECK_FLAG(bfd->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES) &&
old_state == PTM_BFD_UP)
zlog_notice("Session-Change: [%s] %s -> %s reason:%s", bs_to_string(bfd),
state_list[old_state].str, state_list[bfd->ses_state].str,
get_diag_str(bfd->local_diag));
}
/* clear peer's mac address */
@ -651,6 +666,9 @@ void ptm_sbfd_sess_up(struct bfd_session *bfd)
if (bglobal.debug_peer_event)
zlog_info("state-change: [%s] %s -> %s", bs_to_string(bfd),
state_list[old_state].str, state_list[bfd->ses_state].str);
if (CHECK_FLAG(bfd->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES))
zlog_notice("Session-Change: [%s] %s -> %s", bs_to_string(bfd),
state_list[old_state].str, state_list[bfd->ses_state].str);
}
}
@ -693,6 +711,11 @@ void ptm_sbfd_init_sess_dn(struct bfd_session *bfd, uint8_t diag)
zlog_debug("state-change: [%s] %s -> %s reason:%s", bs_to_string(bfd),
state_list[old_state].str, state_list[bfd->ses_state].str,
get_diag_str(bfd->local_diag));
if (CHECK_FLAG(bfd->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES) &&
old_state == PTM_BFD_UP)
zlog_notice("Session-Change: [%s] %s -> %s reason:%s", bs_to_string(bfd),
state_list[old_state].str, state_list[bfd->ses_state].str,
get_diag_str(bfd->local_diag));
}
/* reset local address, it might have been changed after bfd is up */
//memset(&bfd->local_address, 0, sizeof(bfd->local_address));
@ -721,32 +744,18 @@ void ptm_sbfd_echo_sess_dn(struct bfd_session *bfd, uint8_t diag)
zlog_warn("state-change: [%s] %s -> %s reason:%s", bs_to_string(bfd),
state_list[old_state].str, state_list[bfd->ses_state].str,
get_diag_str(bfd->local_diag));
if (CHECK_FLAG(bfd->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES) &&
old_state == PTM_BFD_UP)
zlog_notice("Session-Change: [%s] %s -> %s reason:%s", bs_to_string(bfd),
state_list[old_state].str, state_list[bfd->ses_state].str,
get_diag_str(bfd->local_diag));
}
}
static struct bfd_session *bfd_find_disc(struct sockaddr_any *sa,
uint32_t ldisc)
{
struct bfd_session *bs;
bs = bfd_id_lookup(ldisc);
if (bs == NULL)
return NULL;
switch (bs->key.family) {
case AF_INET:
if (memcmp(&sa->sa_sin.sin_addr, &bs->key.peer,
sizeof(sa->sa_sin.sin_addr)))
return NULL;
break;
case AF_INET6:
if (memcmp(&sa->sa_sin6.sin6_addr, &bs->key.peer,
sizeof(sa->sa_sin6.sin6_addr)))
return NULL;
break;
}
return bs;
return bfd_id_lookup(ldisc);
}
struct bfd_session *ptm_bfd_sess_find(struct bfd_pkt *cp,
@ -944,6 +953,11 @@ static void _bfd_session_update(struct bfd_session *bs,
bs->peer_profile.echo_mode = bpc->bpc_echo;
bfd_set_echo(bs, bpc->bpc_echo);
if (bpc->bpc_log_session_changes)
SET_FLAG(bs->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES);
else
UNSET_FLAG(bs->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES);
/*
* Shutdown needs to be the last in order to avoid timers enable when
* the session is disabled.
@ -1531,6 +1545,7 @@ void bfd_set_shutdown(struct bfd_session *bs, bool shutdown)
return;
SET_FLAG(bs->flags, BFD_SESS_FLAG_SHUTDOWN);
bs->local_diag = BD_ADMIN_DOWN;
/* Handle data plane shutdown case. */
if (bs->bdc) {
@ -1608,6 +1623,14 @@ void bfd_set_passive_mode(struct bfd_session *bs, bool passive)
}
}
void bfd_set_log_session_changes(struct bfd_session *bs, bool log_session_changes)
{
if (log_session_changes)
SET_FLAG(bs->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES);
else
UNSET_FLAG(bs->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES);
}
/*
* Helper functions.
*/
@ -2495,7 +2518,7 @@ void sbfd_reflector_free(const uint32_t discr)
return;
}
void sbfd_reflector_flush()
void sbfd_reflector_flush(void)
{
sbfd_discr_iterate(_sbfd_reflector_free, NULL);
return;


@ -84,6 +84,7 @@ struct bfd_peer_cfg {
bool bpc_cbit;
bool bpc_passive;
bool bpc_log_session_changes;
bool bpc_has_profile;
char bpc_profile[64];
@ -224,21 +225,22 @@ enum bfd_diagnosticis {
/* BFD session flags */
enum bfd_session_flags {
BFD_SESS_FLAG_NONE = 0,
BFD_SESS_FLAG_ECHO = 1 << 0, /* BFD Echo functionality */
BFD_SESS_FLAG_ECHO_ACTIVE = 1 << 1, /* BFD Echo Packets are being sent
BFD_SESS_FLAG_ECHO = 1 << 0, /* BFD Echo functionality */
BFD_SESS_FLAG_ECHO_ACTIVE = 1 << 1, /* BFD Echo Packets are being sent
* actively
*/
BFD_SESS_FLAG_MH = 1 << 2, /* BFD Multi-hop session */
BFD_SESS_FLAG_IPV6 = 1 << 4, /* BFD IPv6 session */
BFD_SESS_FLAG_SEND_EVT_ACTIVE = 1 << 5, /* send event timer active */
BFD_SESS_FLAG_SEND_EVT_IGNORE = 1 << 6, /* ignore send event when timer
BFD_SESS_FLAG_MH = 1 << 2, /* BFD Multi-hop session */
BFD_SESS_FLAG_IPV6 = 1 << 4, /* BFD IPv6 session */
BFD_SESS_FLAG_SEND_EVT_ACTIVE = 1 << 5, /* send event timer active */
BFD_SESS_FLAG_SEND_EVT_IGNORE = 1 << 6, /* ignore send event when timer
* expires
*/
BFD_SESS_FLAG_SHUTDOWN = 1 << 7, /* disable BGP peer function */
BFD_SESS_FLAG_CONFIG = 1 << 8, /* Session configured with bfd NB API */
BFD_SESS_FLAG_CBIT = 1 << 9, /* CBIT is set */
BFD_SESS_FLAG_PASSIVE = 1 << 10, /* Passive mode */
BFD_SESS_FLAG_MAC_SET = 1 << 11, /* MAC of peer known */
BFD_SESS_FLAG_SHUTDOWN = 1 << 7, /* disable BGP peer function */
BFD_SESS_FLAG_CONFIG = 1 << 8, /* Session configured with bfd NB API */
BFD_SESS_FLAG_CBIT = 1 << 9, /* CBIT is set */
BFD_SESS_FLAG_PASSIVE = 1 << 10, /* Passive mode */
BFD_SESS_FLAG_MAC_SET = 1 << 11, /* MAC of peer known */
BFD_SESS_FLAG_LOG_SESSION_CHANGES = 1 << 12, /* Log session changes */
};
enum bfd_mode_type {
@ -297,6 +299,8 @@ struct bfd_profile {
bool admin_shutdown;
/** Passive mode. */
bool passive;
/** Log session changes. */
bool log_session_changes;
/** Minimum expected TTL value. */
uint8_t minimum_ttl;
@ -682,6 +686,14 @@ void bfd_set_shutdown(struct bfd_session *bs, bool shutdown);
*/
void bfd_set_passive_mode(struct bfd_session *bs, bool passive);
/**
* Set the BFD session to log or not log session changes.
*
* \param bs the BFD session.
* \param log_session indicates whether or not to log session changes.
*/
void bfd_set_log_session_changes(struct bfd_session *bs, bool log_session);
/**
* Picks the BFD session configuration from the appropriated source:
* if using the default peer configuration prefer profile (if it exists),


@ -754,6 +754,21 @@ void bfd_cli_show_passive(struct vty *vty, const struct lyd_node *dnode,
yang_dnode_get_bool(dnode, NULL) ? "" : "no ");
}
DEFPY_YANG(bfd_peer_log_session_changes, bfd_peer_log_session_changes_cmd,
"[no] log-session-changes",
NO_STR
"Log Up/Down changes for the session\n")
{
nb_cli_enqueue_change(vty, "./log-session-changes", NB_OP_MODIFY, no ? "false" : "true");
return nb_cli_apply_changes(vty, NULL);
}
void bfd_cli_show_log_session_changes(struct vty *vty, const struct lyd_node *dnode,
bool show_defaults)
{
vty_out(vty, " %slog-session-changes\n", yang_dnode_get_bool(dnode, NULL) ? "" : "no ");
}
DEFPY_YANG(
bfd_peer_minimum_ttl, bfd_peer_minimum_ttl_cmd,
"[no] minimum-ttl (1-254)$ttl",
@ -1063,6 +1078,9 @@ ALIAS_YANG(bfd_peer_passive, bfd_profile_passive_cmd,
NO_STR
"Don't attempt to start sessions\n")
ALIAS_YANG(bfd_peer_log_session_changes, bfd_profile_log_session_changes_cmd,
"[no] log-session-changes", NO_STR "Log Up/Down session changes in the profile\n")
ALIAS_YANG(bfd_peer_minimum_ttl, bfd_profile_minimum_ttl_cmd,
"[no] minimum-ttl (1-254)$ttl",
NO_STR
@ -1329,6 +1347,7 @@ bfdd_cli_init(void)
install_element(BFD_PEER_NODE, &bfd_peer_echo_receive_interval_cmd);
install_element(BFD_PEER_NODE, &bfd_peer_profile_cmd);
install_element(BFD_PEER_NODE, &bfd_peer_passive_cmd);
install_element(BFD_PEER_NODE, &bfd_peer_log_session_changes_cmd);
install_element(BFD_PEER_NODE, &bfd_peer_minimum_ttl_cmd);
install_element(BFD_PEER_NODE, &no_bfd_peer_minimum_ttl_cmd);
@ -1350,6 +1369,7 @@ bfdd_cli_init(void)
install_element(BFD_PROFILE_NODE, &bfd_profile_echo_transmit_interval_cmd);
install_element(BFD_PROFILE_NODE, &bfd_profile_echo_receive_interval_cmd);
install_element(BFD_PROFILE_NODE, &bfd_profile_passive_cmd);
install_element(BFD_PROFILE_NODE, &bfd_profile_log_session_changes_cmd);
install_element(BFD_PROFILE_NODE, &bfd_profile_minimum_ttl_cmd);
install_element(BFD_PROFILE_NODE, &no_bfd_profile_minimum_ttl_cmd);
}


@ -70,6 +70,13 @@ const struct frr_yang_module_info frr_bfdd_info = {
.cli_show = bfd_cli_show_passive,
}
},
{
.xpath = "/frr-bfdd:bfdd/bfd/profile/log-session-changes",
.cbs = {
.modify = bfdd_bfd_profile_log_session_changes_modify,
.cli_show = bfd_cli_show_log_session_changes,
}
},
{
.xpath = "/frr-bfdd:bfdd/bfd/profile/minimum-ttl",
.cbs = {
@ -160,6 +167,13 @@ const struct frr_yang_module_info frr_bfdd_info = {
.cli_show = bfd_cli_show_passive,
}
},
{
.xpath = "/frr-bfdd:bfdd/bfd/sessions/single-hop/log-session-changes",
.cbs = {
.modify = bfdd_bfd_sessions_single_hop_log_session_changes_modify,
.cli_show = bfd_cli_show_log_session_changes,
}
},
{
.xpath = "/frr-bfdd:bfdd/bfd/sessions/single-hop/echo-mode",
.cbs = {
@ -356,6 +370,13 @@ const struct frr_yang_module_info frr_bfdd_info = {
.cli_show = bfd_cli_show_passive,
}
},
{
.xpath = "/frr-bfdd:bfdd/bfd/sessions/multi-hop/log-session-changes",
.cbs = {
.modify = bfdd_bfd_sessions_single_hop_log_session_changes_modify,
.cli_show = bfd_cli_show_log_session_changes,
}
},
{
.xpath = "/frr-bfdd:bfdd/bfd/sessions/multi-hop/minimum-ttl",
.cbs = {
@ -572,6 +593,13 @@ const struct frr_yang_module_info frr_bfdd_info = {
.cli_show = bfd_cli_show_passive,
}
},
{
.xpath = "/frr-bfdd:bfdd/bfd/sessions/sbfd-echo/log-session-changes",
.cbs = {
.modify = bfdd_bfd_sessions_single_hop_log_session_changes_modify,
.cli_show = bfd_cli_show_log_session_changes,
}
},
{
.xpath = "/frr-bfdd:bfdd/bfd/sessions/sbfd-echo/bfd-mode",
.cbs = {
@ -788,6 +816,13 @@ const struct frr_yang_module_info frr_bfdd_info = {
.cli_show = bfd_cli_show_passive,
}
},
{
.xpath = "/frr-bfdd:bfdd/bfd/sessions/sbfd-init/log-session-changes",
.cbs = {
.modify = bfdd_bfd_sessions_single_hop_log_session_changes_modify,
.cli_show = bfd_cli_show_log_session_changes,
}
},
{
.xpath = "/frr-bfdd:bfdd/bfd/sessions/sbfd-init/bfd-mode",
.cbs = {


@ -24,6 +24,7 @@ int bfdd_bfd_profile_required_receive_interval_modify(
struct nb_cb_modify_args *args);
int bfdd_bfd_profile_administrative_down_modify(struct nb_cb_modify_args *args);
int bfdd_bfd_profile_passive_mode_modify(struct nb_cb_modify_args *args);
int bfdd_bfd_profile_log_session_changes_modify(struct nb_cb_modify_args *args);
int bfdd_bfd_profile_minimum_ttl_modify(struct nb_cb_modify_args *args);
int bfdd_bfd_profile_echo_mode_modify(struct nb_cb_modify_args *args);
int bfdd_bfd_profile_desired_echo_transmission_interval_modify(
@ -54,6 +55,7 @@ int bfdd_bfd_sessions_single_hop_administrative_down_modify(
struct nb_cb_modify_args *args);
int bfdd_bfd_sessions_single_hop_passive_mode_modify(
struct nb_cb_modify_args *args);
int bfdd_bfd_sessions_single_hop_log_session_changes_modify(struct nb_cb_modify_args *args);
int bfdd_bfd_sessions_single_hop_echo_mode_modify(
struct nb_cb_modify_args *args);
int bfdd_bfd_sessions_single_hop_desired_echo_transmission_interval_modify(
@ -229,6 +231,8 @@ void bfd_cli_peer_profile_show(struct vty *vty, const struct lyd_node *dnode,
bool show_defaults);
void bfd_cli_show_passive(struct vty *vty, const struct lyd_node *dnode,
bool show_defaults);
void bfd_cli_show_log_session_changes(struct vty *vty, const struct lyd_node *dnode,
bool show_defaults);
void bfd_cli_show_minimum_ttl(struct vty *vty, const struct lyd_node *dnode,
bool show_defaults);


@ -595,6 +595,23 @@ int bfdd_bfd_profile_passive_mode_modify(struct nb_cb_modify_args *args)
return NB_OK;
}
/*
* XPath: /frr-bfdd:bfdd/bfd/profile/log-session-changes
*/
int bfdd_bfd_profile_log_session_changes_modify(struct nb_cb_modify_args *args)
{
struct bfd_profile *bp;
if (args->event != NB_EV_APPLY)
return NB_OK;
bp = nb_running_get_entry(args->dnode, NULL, true);
bp->log_session_changes = yang_dnode_get_bool(args->dnode, NULL);
bfd_profile_update(bp);
return NB_OK;
}
/*
* XPath: /frr-bfdd:bfdd/bfd/profile/minimum-ttl
*/
@ -903,6 +920,38 @@ int bfdd_bfd_sessions_single_hop_passive_mode_modify(
return NB_OK;
}
/*
* XPath: /frr-bfdd:bfdd/bfd/sessions/single-hop/log-session-changes
* /frr-bfdd:bfdd/bfd/sessions/multi-hop/log-session-changes
* /frr-bfdd:bfdd/bfd/sessions/sbfd_echo/log-session-changes
* /frr-bfdd:bfdd/bfd/sessions/sbfd_init/log-session-changes
*/
int bfdd_bfd_sessions_single_hop_log_session_changes_modify(struct nb_cb_modify_args *args)
{
struct bfd_session *bs;
bool log_session_changes;
switch (args->event) {
case NB_EV_VALIDATE:
case NB_EV_PREPARE:
return NB_OK;
case NB_EV_APPLY:
break;
case NB_EV_ABORT:
return NB_OK;
}
log_session_changes = yang_dnode_get_bool(args->dnode, NULL);
bs = nb_running_get_entry(args->dnode, NULL, true);
bs->peer_profile.log_session_changes = log_session_changes;
bfd_session_apply(bs);
return NB_OK;
}
/*
* XPath: /frr-bfdd:bfdd/bfd/sessions/sbfd-init/bfd-mode
* /frr-bfdd:bfdd/bfd/sessions/sbfd-echo/bfd-mode


@ -164,9 +164,10 @@ static void _display_peer(struct vty *vty, struct bfd_session *bs)
vty_out(vty, "\t\tPassive mode\n");
else
vty_out(vty, "\t\tActive mode\n");
if (CHECK_FLAG(bs->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES))
vty_out(vty, "\t\tLog session changes\n");
if (CHECK_FLAG(bs->flags, BFD_SESS_FLAG_MH))
vty_out(vty, "\t\tMinimum TTL: %d\n", bs->mh_ttl);
vty_out(vty, "\t\tStatus: ");
switch (bs->ses_state) {
case PTM_BFD_ADM_DOWN:
@ -289,6 +290,8 @@ static struct json_object *__display_peer_json(struct bfd_session *bs)
json_object_int_add(jo, "remote-id", bs->discrs.remote_discr);
json_object_boolean_add(jo, "passive-mode",
CHECK_FLAG(bs->flags, BFD_SESS_FLAG_PASSIVE));
json_object_boolean_add(jo, "log-session-changes",
CHECK_FLAG(bs->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES));
if (CHECK_FLAG(bs->flags, BFD_SESS_FLAG_MH))
json_object_int_add(jo, "minimum-ttl", bs->mh_ttl);
@ -1194,6 +1197,7 @@ static int bfd_configure_peer(struct bfd_peer_cfg *bpc, bool mhop,
/* Defaults */
bpc->bpc_shutdown = false;
bpc->bpc_log_session_changes = false;
bpc->bpc_detectmultiplier = BPC_DEF_DETECTMULTIPLIER;
bpc->bpc_recvinterval = BPC_DEF_RECEIVEINTERVAL;
bpc->bpc_txinterval = BPC_DEF_TRANSMITINTERVAL;


@ -384,10 +384,15 @@ bfd_dplane_session_state_change(struct bfd_dplane_ctx *bdc,
break;
}
if (bglobal.debug_peer_event)
if (bglobal.debug_peer_event) {
zlog_debug("state-change: [data plane: %s] %s -> %s",
bs_to_string(bs), state_list[old_state].str,
state_list[bs->ses_state].str);
if (CHECK_FLAG(bs->flags, BFD_SESS_FLAG_LOG_SESSION_CHANGES) &&
old_state != bs->ses_state)
zlog_notice("Session-Change: [data plane: %s] %s -> %s", bs_to_string(bs),
state_list[old_state].str, state_list[bs->ses_state].str);
}
}
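
The hunk above gates the new "Session-Change" notice on two conditions: the per-session log flag must be set, and the state must actually have changed. A minimal sketch of that gating predicate (flag and state values here are illustrative stand-ins, not FRR's real types):

```c
#include <stdbool.h>

/* Emit a session-change notice only when logging is enabled for the
 * session AND the BFD state machine really transitioned; a redundant
 * notification with old_state == new_state is suppressed. */
static bool should_log_session_change(bool log_flag, int old_state, int new_state)
{
	return log_flag && old_state != new_state;
}
```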
/**


@ -36,7 +36,7 @@ struct ptm_client {
TAILQ_HEAD(pcqueue, ptm_client);
static struct pcqueue pcqueue;
static struct zclient *zclient;
static struct zclient *bfd_zclient;
/*
@ -209,7 +209,7 @@ int ptm_bfd_notify(struct bfd_session *bs, uint8_t notify_state)
*
* q(64), l(32), w(16), c(8)
*/
msg = zclient->obuf;
msg = bfd_zclient->obuf;
stream_reset(msg);
/* TODO: VRF handling */
@ -264,7 +264,7 @@ int ptm_bfd_notify(struct bfd_session *bs, uint8_t notify_state)
/* Write packet size. */
stream_putw_at(msg, 0, stream_get_endp(msg));
return zclient_send_message(zclient);
return zclient_send_message(bfd_zclient);
}
static void _ptm_msg_read_address(struct stream *msg, struct sockaddr_any *sa)
@ -600,7 +600,7 @@ stream_failure:
static int bfdd_replay(ZAPI_CALLBACK_ARGS)
{
struct stream *msg = zclient->ibuf;
struct stream *msg = bfd_zclient->ibuf;
uint32_t rcmd;
STREAM_GETL(msg, rcmd);
@ -653,7 +653,7 @@ static void bfdd_zebra_connected(struct zclient *zc)
zclient_create_header(msg, ZEBRA_INTERFACE_ADD, VRF_DEFAULT);
/* Send requests. */
zclient_send_message(zclient);
zclient_send_message(zc);
}
static void bfdd_sessions_enable_interface(struct interface *ifp)
@ -837,32 +837,32 @@ void bfdd_zclient_init(struct zebra_privs_t *bfdd_priv)
{
hook_register_prio(if_real, 0, bfd_ifp_create);
hook_register_prio(if_unreal, 0, bfd_ifp_destroy);
zclient = zclient_new(master, &zclient_options_default, bfd_handlers,
array_size(bfd_handlers));
assert(zclient != NULL);
zclient_init(zclient, ZEBRA_ROUTE_BFD, 0, bfdd_priv);
bfd_zclient = zclient_new(master, &zclient_options_default, bfd_handlers,
array_size(bfd_handlers));
assert(bfd_zclient != NULL);
zclient_init(bfd_zclient, ZEBRA_ROUTE_BFD, 0, bfdd_priv);
/* Send replay request on zebra connect. */
zclient->zebra_connected = bfdd_zebra_connected;
bfd_zclient->zebra_connected = bfdd_zebra_connected;
}
void bfdd_zclient_register(vrf_id_t vrf_id)
{
if (!zclient || zclient->sock < 0)
if (!bfd_zclient || bfd_zclient->sock < 0)
return;
zclient_send_reg_requests(zclient, vrf_id);
zclient_send_reg_requests(bfd_zclient, vrf_id);
}
void bfdd_zclient_unregister(vrf_id_t vrf_id)
{
if (!zclient || zclient->sock < 0)
if (!bfd_zclient || bfd_zclient->sock < 0)
return;
zclient_send_dereg_requests(zclient, vrf_id);
zclient_send_dereg_requests(bfd_zclient, vrf_id);
}
void bfdd_zclient_stop(void)
{
zclient_stop(zclient);
zclient_stop(bfd_zclient);
/* Clean-up and free ptm clients data memory. */
pc_free_all();
@ -870,7 +870,7 @@ void bfdd_zclient_stop(void)
void bfdd_zclient_terminate(void)
{
zclient_free(zclient);
zclient_free(bfd_zclient);
}


@ -424,8 +424,12 @@ static unsigned int aspath_count_hops_internal(const struct aspath *aspath)
/* Check if aspath has AS_SET or AS_CONFED_SET */
bool aspath_check_as_sets(struct aspath *aspath)
{
struct assegment *seg = aspath->segments;
struct assegment *seg;
if (!aspath || !aspath->segments)
return false;
seg = aspath->segments;
while (seg) {
if (seg->type == AS_SET || seg->type == AS_CONFED_SET)
return true;
@ -2512,3 +2516,39 @@ void bgp_remove_aspath_from_aggregate_hash(struct bgp_aggregate *aggregate,
}
}
struct aspath *aspath_delete_as_set_seq(struct aspath *aspath)
{
struct assegment *seg, *prev, *next;
bool removed = false;
if (!(aspath && aspath->segments))
return aspath;
seg = aspath->segments;
next = NULL;
prev = NULL;
while (seg) {
next = seg->next;
if (seg->type == AS_SET || seg->type == AS_CONFED_SET) {
if (aspath->segments == seg)
aspath->segments = seg->next;
else
prev->next = seg->next;
assegment_free(seg);
removed = true;
} else
prev = seg;
seg = next;
}
if (removed) {
aspath_str_update(aspath, false);
aspath->count = aspath_count_hops_internal(aspath);
}
return aspath;
}
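
The new aspath_delete_as_set_seq() walks the segment list with a `prev` pointer so that AS_SET/AS_CONFED_SET nodes can be spliced out whether they sit at the head or in the interior. A self-contained sketch of that removal pattern, using simplified stand-in types rather than FRR's real `struct assegment`:

```c
#include <stdlib.h>

/* Simplified stand-ins for FRR's segment types. */
enum seg_type { AS_SEQUENCE = 1, AS_SET = 2, AS_CONFED_SET = 4 };

struct seg {
	enum seg_type type;
	struct seg *next;
};

/* Unlink and free every AS_SET / AS_CONFED_SET node, returning the
 * (possibly new) list head.  `prev` tracks the last kept node so an
 * interior removal can be spliced out in O(1). */
static struct seg *delete_set_segs(struct seg *head)
{
	struct seg *seg = head, *prev = NULL;

	while (seg) {
		struct seg *next = seg->next;

		if (seg->type == AS_SET || seg->type == AS_CONFED_SET) {
			if (prev == NULL)
				head = next; /* removing the current head */
			else
				prev->next = next;
			free(seg);
		} else {
			prev = seg;
		}
		seg = next;
	}
	return head;
}
```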


@ -168,5 +168,6 @@ extern void bgp_remove_aspath_from_aggregate_hash(
struct aspath *aspath);
extern void bgp_aggr_aspath_remove(void *arg);
extern struct aspath *aspath_delete_as_set_seq(struct aspath *aspath);
#endif /* _QUAGGA_BGP_ASPATH_H */


@ -1444,11 +1444,11 @@ bgp_attr_malformed(struct bgp_attr_parser_args *args, uint8_t subcode,
uint8_t *notify_datap = (length > 0 ? args->startp : NULL);
if (bgp_debug_update(peer, NULL, NULL, 1)) {
char attr_str[BUFSIZ] = {0};
char str[BUFSIZ] = { 0 };
bgp_dump_attr(attr, attr_str, sizeof(attr_str));
bgp_dump_attr(attr, str, sizeof(str));
zlog_debug("%s: attributes: %s", __func__, attr_str);
zlog_debug("%s: attributes: %s", __func__, str);
}
/* Only relax error handling for eBGP peers */
@ -2043,11 +2043,11 @@ static int bgp_attr_aggregator(struct bgp_attr_parser_args *args)
peer->host, aspath_print(attr->aspath));
if (bgp_debug_update(peer, NULL, NULL, 1)) {
char attr_str[BUFSIZ] = {0};
char str[BUFSIZ] = { 0 };
bgp_dump_attr(attr, attr_str, sizeof(attr_str));
bgp_dump_attr(attr, str, sizeof(str));
zlog_debug("%s: attributes: %s", __func__, attr_str);
zlog_debug("%s: attributes: %s", __func__, str);
}
} else {
SET_FLAG(attr->flag, ATTR_FLAG_BIT(BGP_ATTR_AGGREGATOR));
@ -2094,11 +2094,11 @@ bgp_attr_as4_aggregator(struct bgp_attr_parser_args *args,
peer->host, aspath_print(attr->aspath));
if (bgp_debug_update(peer, NULL, NULL, 1)) {
char attr_str[BUFSIZ] = {0};
char str[BUFSIZ] = { 0 };
bgp_dump_attr(attr, attr_str, sizeof(attr_str));
bgp_dump_attr(attr, str, sizeof(str));
zlog_debug("%s: attributes: %s", __func__, attr_str);
zlog_debug("%s: attributes: %s", __func__, str);
}
} else {
SET_FLAG(attr->flag, ATTR_FLAG_BIT(BGP_ATTR_AS4_AGGREGATOR));
@ -5028,7 +5028,13 @@ void bgp_packet_mpunreach_prefix(struct stream *s, const struct prefix *p,
{
uint8_t wlabel[4] = {0x80, 0x00, 0x00};
if (safi == SAFI_LABELED_UNICAST) {
/* [RFC3107] also made it possible to withdraw a binding without
* specifying the label explicitly, by setting the Compatibility field
* to 0x800000. However, some implementations set it to 0x000000. In
* order to ensure backwards compatibility, it is RECOMMENDED by this
* document that the Compatibility field be set to 0x800000.
*/
if (safi == SAFI_LABELED_UNICAST || safi == SAFI_MPLS_VPN) {
label = (mpls_label_t *)wlabel;
num_labels = 1;
}
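
The comment above cites the RFC 3107 recommendation that a label binding be withdrawn with the 3-byte Compatibility field set to 0x800000 rather than 0x000000. A tiny sketch of that encoding, matching the `wlabel` initializer in the hunk (helper names here are hypothetical, not FRR functions):

```c
#include <stdint.h>

/* Fill the 3-byte Compatibility label used when withdrawing a
 * labeled-unicast / MPLS-VPN binding.  RFC 3107 recommends 0x800000,
 * though some implementations historically sent 0x000000. */
static void encode_withdraw_label(uint8_t out[3])
{
	out[0] = 0x80;
	out[1] = 0x00;
	out[2] = 0x00;
}

/* Reassemble the 3 bytes into a 24-bit value for comparison. */
static uint32_t label_value(const uint8_t in[3])
{
	return ((uint32_t)in[0] << 16) | ((uint32_t)in[1] << 8) | in[2];
}
```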


@ -30,7 +30,7 @@
DEFINE_MTYPE_STATIC(BGPD, BFD_CONFIG, "BFD configuration data");
extern struct zclient *zclient;
extern struct zclient *bgp_zclient;
static void bfd_session_status_update(struct bfd_session_params *bsp,
const struct bfd_session_status *bss,
@ -651,7 +651,7 @@ DEFUN(no_neighbor_bfd_profile, no_neighbor_bfd_profile_cmd,
void bgp_bfd_init(struct event_loop *tm)
{
/* Initialize BFD client functions */
bfd_protocol_integration_init(zclient, tm);
bfd_protocol_integration_init(bgp_zclient, tm);
/* "neighbor bfd" commands. */
install_element(BGP_NODE, &neighbor_bfd_cmd);


@ -3542,7 +3542,6 @@ static int bmp_bgp_attribute_updated(struct bgp *bgp, bool withdraw)
struct bmp_targets *bt;
struct listnode *node;
struct bmp_imported_bgp *bib;
int ret = 0;
struct stream *s = bmp_peerstate(bgp->peer_self, withdraw);
struct bmp *bmp;
afi_t afi;
@ -3553,8 +3552,8 @@ static int bmp_bgp_attribute_updated(struct bgp *bgp, bool withdraw)
if (bmpbgp) {
frr_each (bmp_targets, &bmpbgp->targets, bt) {
ret = bmp_bgp_attribute_updated_instance(bt, &bmpbgp->vrf_state, bgp,
withdraw, s);
bmp_bgp_attribute_updated_instance(bt, &bmpbgp->vrf_state, bgp,
withdraw, s);
if (withdraw)
continue;
frr_each (bmp_session, &bt->sessions, bmp) {
@ -3575,8 +3574,8 @@ static int bmp_bgp_attribute_updated(struct bgp *bgp, bool withdraw)
frr_each (bmp_imported_bgps, &bt->imported_bgps, bib) {
if (bgp_lookup_by_name(bib->name) != bgp)
continue;
ret += bmp_bgp_attribute_updated_instance(bt, &bib->vrf_state, bgp,
withdraw, s);
bmp_bgp_attribute_updated_instance(bt, &bib->vrf_state, bgp,
withdraw, s);
if (withdraw)
continue;
frr_each (bmp_session, &bt->sessions, bmp) {


@ -1441,14 +1441,14 @@ static char *_ecommunity_ecom2str(struct ecommunity *ecom, int format, int filte
snprintf(encbuf, sizeof(encbuf), "FS:action %s",
action);
} else if (sub_type == ECOMMUNITY_TRAFFIC_RATE) {
union traffic_rate data;
union traffic_rate rate;
data.rate_byte[3] = *(pnt+2);
data.rate_byte[2] = *(pnt+3);
data.rate_byte[1] = *(pnt+4);
data.rate_byte[0] = *(pnt+5);
rate.rate_byte[3] = *(pnt + 2);
rate.rate_byte[2] = *(pnt + 3);
rate.rate_byte[1] = *(pnt + 4);
rate.rate_byte[0] = *(pnt + 5);
snprintf(encbuf, sizeof(encbuf), "FS:rate %f",
data.rate_float);
rate.rate_float);
} else if (sub_type == ECOMMUNITY_TRAFFIC_MARKING) {
snprintf(encbuf, sizeof(encbuf),
"FS:marking %u", *(pnt + 5));


@ -905,7 +905,7 @@ static enum zclient_send_status bgp_zebra_send_remote_macip(
bool esi_valid;
/* Check socket. */
if (!zclient || zclient->sock < 0) {
if (!bgp_zclient || bgp_zclient->sock < 0) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("%s: No zclient or zclient->sock exists",
__func__);
@ -923,7 +923,7 @@ static enum zclient_send_status bgp_zebra_send_remote_macip(
if (!esi)
esi = zero_esi;
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(
@ -984,7 +984,7 @@ static enum zclient_send_status bgp_zebra_send_remote_macip(
frrtrace(5, frr_bgp, evpn_mac_ip_zsend, add, vpn, p, remote_vtep_ip,
esi);
return zclient_send_message(zclient);
return zclient_send_message(bgp_zclient);
}
/*
@ -998,7 +998,7 @@ bgp_zebra_send_remote_vtep(struct bgp *bgp, struct bgpevpn *vpn,
struct stream *s;
/* Check socket. */
if (!zclient || zclient->sock < 0) {
if (!bgp_zclient || bgp_zclient->sock < 0) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("%s: No zclient or zclient->sock exists",
__func__);
@ -1014,7 +1014,7 @@ bgp_zebra_send_remote_vtep(struct bgp *bgp, struct bgpevpn *vpn,
return ZCLIENT_SEND_SUCCESS;
}
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(
@ -1041,7 +1041,7 @@ bgp_zebra_send_remote_vtep(struct bgp *bgp, struct bgpevpn *vpn,
frrtrace(3, frr_bgp, evpn_bum_vtep_zsend, add, vpn, p);
return zclient_send_message(zclient);
return zclient_send_message(bgp_zclient);
}
/*
@ -2062,8 +2062,7 @@ static int update_evpn_route_entry(struct bgp *bgp, struct bgpevpn *vpn,
bgp_path_info_add(dest, tmp_pi);
} else {
tmp_pi = local_pi;
if (attrhash_cmp(tmp_pi->attr, attr)
&& !CHECK_FLAG(tmp_pi->flags, BGP_PATH_REMOVED))
if (!CHECK_FLAG(tmp_pi->flags, BGP_PATH_REMOVED) && attrhash_cmp(tmp_pi->attr, attr))
route_change = 0;
else {
/*
@ -3154,8 +3153,7 @@ static int install_evpn_route_entry_in_vrf(struct bgp *bgp_vrf,
pi = bgp_create_evpn_bgp_path_info(parent_pi, dest, &attr);
new_pi = true;
} else {
if (attrhash_cmp(pi->attr, &attr)
&& !CHECK_FLAG(pi->flags, BGP_PATH_REMOVED)) {
if (!CHECK_FLAG(pi->flags, BGP_PATH_REMOVED) && attrhash_cmp(pi->attr, &attr)) {
bgp_dest_unlock_node(dest);
return 0;
}
@ -3184,8 +3182,7 @@ static int install_evpn_route_entry_in_vrf(struct bgp *bgp_vrf,
/* Gateway IP nexthop should be resolved */
if (bre && bre->type == OVERLAY_INDEX_GATEWAY_IP) {
if (bgp_find_or_add_nexthop(bgp_vrf, bgp_vrf, afi, safi, pi,
NULL, 0, NULL))
if (bgp_find_or_add_nexthop(bgp_vrf, bgp_vrf, afi, safi, pi, NULL, 0, NULL, NULL))
bgp_path_info_set_flag(dest, pi, BGP_PATH_VALID);
else {
if (BGP_DEBUG(nht, NHT)) {
@ -3278,8 +3275,8 @@ static int install_evpn_route_entry_in_vni_common(
* install_evpn_route_entry_in_vni_mac() or
* install_evpn_route_entry_in_vni_ip()
*/
if (attrhash_cmp(pi->attr, parent_pi->attr) &&
!CHECK_FLAG(pi->flags, BGP_PATH_REMOVED))
if (!CHECK_FLAG(pi->flags, BGP_PATH_REMOVED) &&
attrhash_cmp(pi->attr, parent_pi->attr))
return 0;
/* The attribute has changed. */
/* Add (or update) attribute to hash. */


@ -212,8 +212,8 @@ static int bgp_evpn_es_route_install(struct bgp *bgp,
bgp_dest_lock_node((struct bgp_dest *)parent_pi->net);
bgp_path_info_add(dest, pi);
} else {
if (attrhash_cmp(pi->attr, parent_pi->attr)
&& !CHECK_FLAG(pi->flags, BGP_PATH_REMOVED)) {
if (!CHECK_FLAG(pi->flags, BGP_PATH_REMOVED) &&
attrhash_cmp(pi->attr, parent_pi->attr)) {
bgp_dest_unlock_node(dest);
return 0;
}
@ -421,8 +421,7 @@ int bgp_evpn_mh_route_update(struct bgp *bgp, struct bgp_evpn_es *es,
bgp_path_info_add(dest, tmp_pi);
} else {
tmp_pi = local_pi;
if (attrhash_cmp(tmp_pi->attr, attr)
&& !CHECK_FLAG(tmp_pi->flags, BGP_PATH_REMOVED))
if (!CHECK_FLAG(tmp_pi->flags, BGP_PATH_REMOVED) && attrhash_cmp(tmp_pi->attr, attr))
*route_changed = 0;
else {
/* The attribute has changed.
@ -1388,7 +1387,7 @@ bgp_zebra_send_remote_es_vtep(struct bgp *bgp, struct bgp_evpn_es_vtep *es_vtep,
uint32_t flags = 0;
/* Check socket. */
if (!zclient || zclient->sock < 0) {
if (!bgp_zclient || bgp_zclient->sock < 0) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("%s: No zclient or zclient->sock exists",
__func__);
@ -1406,7 +1405,7 @@ bgp_zebra_send_remote_es_vtep(struct bgp *bgp, struct bgp_evpn_es_vtep *es_vtep,
if (CHECK_FLAG(es_vtep->flags, BGP_EVPNES_VTEP_ESR))
SET_FLAG(flags, ZAPI_ES_VTEP_FLAG_ESR_RXED);
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s,
@ -1428,7 +1427,7 @@ bgp_zebra_send_remote_es_vtep(struct bgp *bgp, struct bgp_evpn_es_vtep *es_vtep,
frrtrace(3, frr_bgp, evpn_mh_vtep_zsend, add, es, es_vtep);
return zclient_send_message(zclient);
return zclient_send_message(bgp_zclient);
}
static enum zclient_send_status bgp_evpn_es_vtep_re_eval_active(
@ -2877,7 +2876,7 @@ static void bgp_evpn_l3nhg_zebra_add_v4_or_v6(struct bgp_evpn_es_vrf *es_vrf,
if (!api_nhg.nexthop_num)
return;
zclient_nhg_send(zclient, ZEBRA_NHG_ADD, &api_nhg);
zclient_nhg_send(bgp_zclient, ZEBRA_NHG_ADD, &api_nhg);
}
static bool bgp_evpn_l3nhg_zebra_ok(struct bgp_evpn_es_vrf *es_vrf)
@ -2886,7 +2885,7 @@ static bool bgp_evpn_l3nhg_zebra_ok(struct bgp_evpn_es_vrf *es_vrf)
return false;
/* Check socket. */
if (!zclient || zclient->sock < 0)
if (!bgp_zclient || bgp_zclient->sock < 0)
return false;
return true;
@ -2921,7 +2920,7 @@ static void bgp_evpn_l3nhg_zebra_del_v4_or_v6(struct bgp_evpn_es_vrf *es_vrf,
frrtrace(4, frr_bgp, evpn_mh_nhg_zsend, false, v4_nhg, api_nhg.id,
es_vrf);
zclient_nhg_send(zclient, ZEBRA_NHG_DEL, &api_nhg);
zclient_nhg_send(bgp_zclient, ZEBRA_NHG_DEL, &api_nhg);
}
static void bgp_evpn_l3nhg_zebra_del(struct bgp_evpn_es_vrf *es_vrf)
@ -4477,7 +4476,7 @@ static void bgp_evpn_nh_zebra_update_send(struct bgp_evpn_nh *nh, bool add)
struct bgp *bgp_vrf = nh->bgp_vrf;
/* Check socket. */
if (!zclient || zclient->sock < 0)
if (!bgp_zclient || bgp_zclient->sock < 0)
return;
/* Don't try to register if Zebra doesn't know of this instance. */
@ -4488,7 +4487,7 @@ static void bgp_evpn_nh_zebra_update_send(struct bgp_evpn_nh *nh, bool add)
return;
}
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(
@ -4513,7 +4512,7 @@ static void bgp_evpn_nh_zebra_update_send(struct bgp_evpn_nh *nh, bool add)
frrtrace(2, frr_bgp, evpn_mh_nh_rmac_zsend, add, nh);
zclient_send_message(zclient);
zclient_send_message(bgp_zclient);
}
static void bgp_evpn_nh_zebra_update(struct bgp_evpn_nh *nh, bool add)


@ -673,8 +673,6 @@ static inline bool bgp_evpn_is_path_local(struct bgp *bgp,
&& pi->sub_type == BGP_ROUTE_STATIC);
}
extern struct zclient *zclient;
extern void bgp_evpn_install_uninstall_default_route(struct bgp *bgp_vrf,
afi_t afi, safi_t safi,
bool add);


@ -1462,22 +1462,22 @@ static int bgp_show_ethernet_vpn(struct vty *vty, struct prefix_rd *prd,
output_count++;
if (use_json && json_array) {
const struct prefix *p =
const struct prefix *pfx =
bgp_dest_get_prefix(rm);
json_prefix_info = json_object_new_object();
json_object_string_addf(json_prefix_info,
"prefix", "%pFX", p);
"prefix", "%pFX", pfx);
json_object_int_add(json_prefix_info,
"prefixLen", p->prefixlen);
"prefixLen", pfx->prefixlen);
json_object_object_add(json_prefix_info,
"paths", json_array);
json_object_object_addf(json_nroute,
json_prefix_info,
"%pFX", p);
"%pFX", pfx);
json_array = NULL;
}
}
@ -6617,18 +6617,17 @@ static int add_rt(struct bgp *bgp, struct ecommunity *ecom, bool is_import,
{
/* Do nothing if we already have this route-target */
if (is_import) {
if (!bgp_evpn_vrf_rt_matches_existing(bgp->vrf_import_rtl,
ecom))
bgp_evpn_configure_import_rt_for_vrf(bgp, ecom,
is_wildcard);
else
if (CHECK_FLAG(bgp->vrf_flags, BGP_VRF_IMPORT_RT_CFGD) &&
bgp_evpn_vrf_rt_matches_existing(bgp->vrf_import_rtl, ecom))
return -1;
bgp_evpn_configure_import_rt_for_vrf(bgp, ecom, is_wildcard);
} else {
if (!bgp_evpn_vrf_rt_matches_existing(bgp->vrf_export_rtl,
ecom))
bgp_evpn_configure_export_rt_for_vrf(bgp, ecom);
else
if (CHECK_FLAG(bgp->vrf_flags, BGP_VRF_EXPORT_RT_CFGD) &&
bgp_evpn_vrf_rt_matches_existing(bgp->vrf_export_rtl, ecom))
return -1;
bgp_evpn_configure_export_rt_for_vrf(bgp, ecom);
}
return 0;
@ -7078,10 +7077,11 @@ DEFUN (bgp_evpn_vni_rt,
ecommunity_str(ecomadd);
/* Do nothing if we already have this import route-target */
if (!bgp_evpn_rt_matches_existing(vpn->import_rtl, ecomadd))
evpn_configure_import_rt(bgp, vpn, ecomadd);
else
if (CHECK_FLAG(vpn->flags, VNI_FLAG_IMPRT_CFGD) &&
bgp_evpn_rt_matches_existing(vpn->import_rtl, ecomadd))
ecommunity_free(&ecomadd);
else
evpn_configure_import_rt(bgp, vpn, ecomadd);
}
/* Add/update the export route-target */
@ -7096,10 +7096,11 @@ DEFUN (bgp_evpn_vni_rt,
ecommunity_str(ecomadd);
/* Do nothing if we already have this export route-target */
if (!bgp_evpn_rt_matches_existing(vpn->export_rtl, ecomadd))
evpn_configure_export_rt(bgp, vpn, ecomadd);
else
if (CHECK_FLAG(vpn->flags, VNI_FLAG_EXPRT_CFGD) &&
bgp_evpn_rt_matches_existing(vpn->export_rtl, ecomadd))
ecommunity_free(&ecomadd);
else
evpn_configure_export_rt(bgp, vpn, ecomadd);
}
return CMD_SUCCESS;


@ -105,13 +105,6 @@ int bgp_nlri_parse_flowspec(struct peer *peer, struct attr *attr,
if (!attr)
withdraw = true;
if (packet->length >= FLOWSPEC_NLRI_SIZELIMIT_EXTENDED) {
flog_err(EC_BGP_FLOWSPEC_PACKET,
"BGP flowspec nlri length maximum reached (%u)",
packet->length);
return BGP_NLRI_PARSE_ERROR_FLOWSPEC_NLRI_SIZELIMIT;
}
for (; pnt < lim; pnt += psize) {
/* Clear prefix structure. */
memset(&p, 0, sizeof(p));


@ -7,7 +7,6 @@
#define _FRR_BGP_FLOWSPEC_PRIVATE_H
#define FLOWSPEC_NLRI_SIZELIMIT 240
#define FLOWSPEC_NLRI_SIZELIMIT_EXTENDED 4095
/* Flowspec traffic action bit */
#define FLOWSPEC_TRAFFIC_ACTION_TERMINAL 1


@ -94,10 +94,8 @@ int bgp_peer_reg_with_nht(struct peer *peer)
connected = 1;
return bgp_find_or_add_nexthop(peer->bgp, peer->bgp,
family2afi(
peer->connection->su.sa.sa_family),
SAFI_UNICAST, NULL, peer, connected,
NULL);
family2afi(peer->connection->su.sa.sa_family), SAFI_UNICAST,
NULL, peer, connected, NULL, NULL);
}
static void peer_xfer_stats(struct peer *peer_dst, struct peer *peer_src)
@ -184,7 +182,11 @@ static struct peer *peer_xfer_conn(struct peer *from_peer)
EVENT_OFF(keeper->t_delayopen);
EVENT_OFF(keeper->t_connect_check_r);
EVENT_OFF(keeper->t_connect_check_w);
EVENT_OFF(keeper->t_process_packet);
frr_with_mutex (&bm->peer_connection_mtx) {
if (peer_connection_fifo_member(&bm->connection_fifo, keeper))
peer_connection_fifo_del(&bm->connection_fifo, keeper);
}
/*
* At this point in time, it is possible that there are packets pending
@ -305,8 +307,13 @@ static struct peer *peer_xfer_conn(struct peer *from_peer)
bgp_reads_on(keeper);
bgp_writes_on(keeper);
event_add_event(bm->master, bgp_process_packet, keeper, 0,
&keeper->t_process_packet);
frr_with_mutex (&bm->peer_connection_mtx) {
if (!peer_connection_fifo_member(&bm->connection_fifo, keeper)) {
peer_connection_fifo_add_tail(&bm->connection_fifo, keeper);
}
}
event_add_event(bm->master, bgp_process_packet, NULL, 0, &bm->e_process_packet);
return (peer);
}
@ -325,7 +332,7 @@ void bgp_timer_set(struct peer_connection *connection)
/* First entry point of peer's finite state machine. In Idle
status start timer is on unless peer is shutdown or peer is
inactive. All other timer must be turned off */
if (BGP_PEER_START_SUPPRESSED(peer) || !peer_active(connection) ||
if (BGP_PEER_START_SUPPRESSED(peer) || peer_active(connection) != BGP_PEER_ACTIVE ||
peer->bgp->vrf_id == VRF_UNKNOWN) {
EVENT_OFF(connection->t_start);
} else {
@ -472,7 +479,8 @@ static void bgp_start_timer(struct event *thread)
struct peer *peer = connection->peer;
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [FSM] Timer (start timer expire).", peer->host);
zlog_debug("%s [FSM] Timer (start timer expire for %s).", peer->host,
bgp_peer_get_connection_direction(connection));
EVENT_VAL(thread) = BGP_Start;
bgp_event(thread); /* bgp_event unlocks peer */
@ -491,8 +499,8 @@ static void bgp_connect_timer(struct event *thread)
assert(!connection->t_read);
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [FSM] Timer (connect timer (%us) expire)", peer->host,
peer->v_connect);
zlog_debug("%s [FSM] Timer (connect timer (%us) expire for %s)", peer->host,
peer->v_connect, bgp_peer_get_connection_direction(connection));
if (CHECK_FLAG(peer->sflags, PEER_STATUS_ACCEPT_PEER))
bgp_stop(connection);
@ -512,8 +520,8 @@ static void bgp_holdtime_timer(struct event *thread)
struct peer *peer = connection->peer;
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [FSM] Timer (holdtime timer expire)",
peer->host);
zlog_debug("%s [FSM] Timer (holdtime timer expire for %s)", peer->host,
bgp_peer_get_connection_direction(connection));
/*
* Given that we do not have any expectation of ordering
@ -528,9 +536,11 @@ static void bgp_holdtime_timer(struct event *thread)
frr_with_mutex (&connection->io_mtx) {
inq_count = atomic_load_explicit(&connection->ibuf->count, memory_order_relaxed);
}
if (inq_count)
if (inq_count) {
BGP_TIMER_ON(connection->t_holdtime, bgp_holdtime_timer,
peer->v_holdtime);
return;
}
EVENT_VAL(thread) = Hold_Timer_expired;
bgp_event(thread); /* bgp_event unlocks peer */
@ -542,7 +552,8 @@ void bgp_routeadv_timer(struct event *thread)
struct peer *peer = connection->peer;
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [FSM] Timer (routeadv timer expire)", peer->host);
zlog_debug("%s [FSM] Timer (routeadv timer expire for %s)", peer->host,
bgp_peer_get_connection_direction(connection));
peer->synctime = monotime(NULL);
@ -561,8 +572,8 @@ void bgp_delayopen_timer(struct event *thread)
struct peer *peer = connection->peer;
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [FSM] Timer (DelayOpentimer expire)",
peer->host);
zlog_debug("%s [FSM] Timer (DelayOpentimer expire for %s)", peer->host,
bgp_peer_get_connection_direction(connection));
EVENT_VAL(thread) = DelayOpen_timer_expired;
bgp_event(thread); /* bgp_event unlocks peer */
@ -628,8 +639,8 @@ static void bgp_graceful_restart_timer_off(struct peer_connection *connection,
if (peer_dynamic_neighbor(peer) &&
!(CHECK_FLAG(peer->flags, PEER_FLAG_DELETE))) {
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s (dynamic neighbor) deleted (%s)",
peer->host, __func__);
zlog_debug("%s (dynamic neighbor) deleted (%s) for %s", __func__,
peer->host, bgp_peer_get_connection_direction(connection));
peer_delete(peer);
}
@ -654,8 +665,9 @@ static void bgp_llgr_stale_timer_expire(struct event *thread)
* stale routes from the neighbor that it is retaining.
*/
if (bgp_debug_neighbor_events(peer))
zlog_debug("%pBP Long-lived stale timer (%s) expired", peer,
get_afi_safi_str(afi, safi, false));
zlog_debug("%pBP Long-lived stale timer (%s) expired for %s", peer,
get_afi_safi_str(afi, safi, false),
bgp_peer_get_connection_direction(peer->connection));
UNSET_FLAG(peer->af_sflags[afi][safi], PEER_STATUS_LLGR_WAIT);
@ -753,11 +765,9 @@ static void bgp_graceful_restart_timer_expire(struct event *thread)
afi_t afi;
safi_t safi;
if (bgp_debug_neighbor_events(peer)) {
zlog_debug("%pBP graceful restart timer expired", peer);
zlog_debug("%pBP graceful restart stalepath timer stopped",
peer);
}
if (bgp_debug_neighbor_events(peer))
zlog_debug("%pBP graceful restart timer expired and graceful restart stalepath timer stopped for %s",
peer, bgp_peer_get_connection_direction(connection));
FOREACH_AFI_SAFI (afi, safi) {
if (!peer->nsf[afi][safi])
@ -781,11 +791,10 @@ static void bgp_graceful_restart_timer_expire(struct event *thread)
continue;
if (bgp_debug_neighbor_events(peer))
zlog_debug(
"%pBP Long-lived stale timer (%s) started for %d sec",
peer,
get_afi_safi_str(afi, safi, false),
peer->llgr[afi][safi].stale_time);
zlog_debug("%pBP Long-lived stale timer (%s) started for %d sec for %s",
peer, get_afi_safi_str(afi, safi, false),
peer->llgr[afi][safi].stale_time,
bgp_peer_get_connection_direction(connection));
SET_FLAG(peer->af_sflags[afi][safi],
PEER_STATUS_LLGR_WAIT);
@ -816,8 +825,8 @@ static void bgp_graceful_stale_timer_expire(struct event *thread)
safi_t safi;
if (bgp_debug_neighbor_events(peer))
zlog_debug("%pBP graceful restart stalepath timer expired",
peer);
zlog_debug("%pBP graceful restart stalepath timer expired for %s", peer,
bgp_peer_get_connection_direction(connection));
/* NSF delete stale route */
FOREACH_AFI_SAFI_NSF (afi, safi)
@ -1242,10 +1251,10 @@ void bgp_fsm_change_status(struct peer_connection *connection,
if (bgp_debug_neighbor_events(peer)) {
struct vrf *vrf = vrf_lookup_by_id(bgp->vrf_id);
zlog_debug("%s : vrf %s(%u), Status: %s established_peers %u", __func__,
zlog_debug("%s : vrf %s(%u), Status: %s established_peers %u for %s", __func__,
vrf ? vrf->name : "Unknown", bgp->vrf_id,
lookup_msg(bgp_status_msg, status, NULL),
bgp->established_peers);
lookup_msg(bgp_status_msg, status, NULL), bgp->established_peers,
bgp_peer_get_connection_direction(connection));
}
/* Set to router ID to the value provided by RIB if there are no peers
@ -1258,7 +1267,7 @@ void bgp_fsm_change_status(struct peer_connection *connection,
/* Transition into Clearing or Deleted must /always/ clear all routes..
* (and must do so before actually changing into Deleted..
*/
if (status >= Clearing && (peer->established || peer == bgp->peer_self)) {
if (status >= Clearing && (peer->established || peer != bgp->peer_self)) {
bgp_clear_route_all(peer);
/* If no route was queued for the clear-node processing,
@ -1281,7 +1290,8 @@ void bgp_fsm_change_status(struct peer_connection *connection,
* Clearing
* (or Deleted).
*/
if (!work_queue_is_scheduled(peer->clear_node_queue) &&
if (!CHECK_FLAG(peer->flags, PEER_FLAG_CLEARING_BATCH) &&
!work_queue_is_scheduled(peer->clear_node_queue) &&
status != Deleted)
BGP_EVENT_ADD(connection, Clearing_Completed);
}
@ -1322,10 +1332,10 @@ void bgp_fsm_change_status(struct peer_connection *connection,
bgp_update_delay_process_status_change(peer);
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s fd %d went from %s to %s", peer->host,
connection->fd,
zlog_debug("%s fd %d went from %s to %s for %s", peer->host, connection->fd,
lookup_msg(bgp_status_msg, connection->ostatus, NULL),
lookup_msg(bgp_status_msg, connection->status, NULL));
lookup_msg(bgp_status_msg, connection->status, NULL),
bgp_peer_get_connection_direction(connection));
}
/* Flush the event queue and ensure the peer is shut down */
@ -1357,8 +1367,8 @@ enum bgp_fsm_state_progress bgp_stop(struct peer_connection *connection)
if (peer_dynamic_neighbor_no_nsf(peer) &&
!(CHECK_FLAG(peer->flags, PEER_FLAG_DELETE))) {
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s (dynamic neighbor) deleted (%s)",
peer->host, __func__);
zlog_debug("%s (dynamic neighbor) deleted (%s) for %s", __func__,
peer->host, bgp_peer_get_connection_direction(connection));
peer_delete(peer);
return BGP_FSM_FAILURE_AND_DELETE;
}
@ -1399,18 +1409,17 @@ enum bgp_fsm_state_progress bgp_stop(struct peer_connection *connection)
if (connection->t_gr_stale) {
EVENT_OFF(connection->t_gr_stale);
if (bgp_debug_neighbor_events(peer))
zlog_debug(
"%pBP graceful restart stalepath timer stopped",
peer);
zlog_debug("%pBP graceful restart stalepath timer stopped for %s",
peer, bgp_peer_get_connection_direction(connection));
}
if (CHECK_FLAG(peer->sflags, PEER_STATUS_NSF_WAIT)) {
if (bgp_debug_neighbor_events(peer)) {
zlog_debug(
"%pBP graceful restart timer started for %d sec",
peer, peer->v_gr_restart);
zlog_debug(
"%pBP graceful restart stalepath timer started for %d sec",
peer, peer->bgp->stalepath_time);
zlog_debug("%pBP graceful restart timer started for %d sec for %s",
peer, peer->v_gr_restart,
bgp_peer_get_connection_direction(connection));
zlog_debug("%pBP graceful restart stalepath timer started for %d sec for %s",
peer, peer->bgp->stalepath_time,
bgp_peer_get_connection_direction(connection));
}
BGP_TIMER_ON(connection->t_gr_restart,
bgp_graceful_restart_timer_expire,
@ -1430,9 +1439,8 @@ enum bgp_fsm_state_progress bgp_stop(struct peer_connection *connection)
EVENT_OFF(peer->t_refresh_stalepath);
if (bgp_debug_neighbor_events(peer))
zlog_debug(
"%pBP route-refresh restart stalepath timer stopped",
peer);
zlog_debug("%pBP route-refresh restart stalepath timer stopped for %s",
peer, bgp_peer_get_connection_direction(connection));
}
/* If peer reset before receiving EOR, decrement EOR count and
@ -1454,9 +1462,9 @@ enum bgp_fsm_state_progress bgp_stop(struct peer_connection *connection)
gr_info->eor_required--;
if (BGP_DEBUG(update, UPDATE_OUT))
zlog_debug("peer %s, EOR_required %d",
peer->host,
gr_info->eor_required);
zlog_debug("peer %s, EOR_required %d for %s", peer->host,
gr_info->eor_required,
bgp_peer_get_connection_direction(connection));
/* There is no pending EOR message */
if (gr_info->eor_required == 0) {
@ -1475,8 +1483,8 @@ enum bgp_fsm_state_progress bgp_stop(struct peer_connection *connection)
peer->resettime = peer->uptime = monotime(NULL);
if (BGP_DEBUG(update_groups, UPDATE_GROUPS))
zlog_debug("%s remove from all update group",
peer->host);
zlog_debug("%s remove from all update group for %s", peer->host,
bgp_peer_get_connection_direction(connection));
update_group_remove_peer_afs(peer);
/* Reset peer synctime */
@ -1522,6 +1530,7 @@ enum bgp_fsm_state_progress bgp_stop(struct peer_connection *connection)
if (connection->fd >= 0) {
close(connection->fd);
connection->fd = -1;
connection->dir = UNKNOWN;
}
/* Reset capabilities. */
@ -1596,8 +1605,8 @@ bgp_stop_with_error(struct peer_connection *connection)
if (peer_dynamic_neighbor_no_nsf(peer)) {
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s (dynamic neighbor) deleted (%s)",
peer->host, __func__);
zlog_debug("%s (dynamic neighbor) deleted (%s) for %s", __func__,
peer->host, bgp_peer_get_connection_direction(connection));
peer_delete(peer);
return BGP_FSM_FAILURE;
}
@ -1618,8 +1627,8 @@ bgp_stop_with_notify(struct peer_connection *connection, uint8_t code,
if (peer_dynamic_neighbor_no_nsf(peer)) {
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s (dynamic neighbor) deleted (%s)",
peer->host, __func__);
zlog_debug("%s (dynamic neighbor) deleted (%s) for %s", __func__,
peer->host, bgp_peer_get_connection_direction(connection));
peer_delete(peer);
return BGP_FSM_FAILURE;
}
@ -1684,8 +1693,9 @@ static void bgp_connect_check(struct event *thread)
return;
} else {
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [Event] Connect failed %d(%s)",
peer->host, status, safe_strerror(status));
zlog_debug("%s [Event] Connect failed %d(%s) for connection %s", peer->host,
status, safe_strerror(status),
bgp_peer_get_connection_direction(connection));
BGP_EVENT_ADD(connection, TCP_connection_open_failed);
return;
}
@ -1724,10 +1734,12 @@ bgp_connect_success(struct peer_connection *connection)
if (bgp_debug_neighbor_events(peer)) {
if (!CHECK_FLAG(peer->sflags, PEER_STATUS_ACCEPT_PEER))
zlog_debug("%s open active, local address %pSU", peer->host,
connection->su_local);
zlog_debug("%s open active, local address %pSU for %s", peer->host,
connection->su_local,
bgp_peer_get_connection_direction(connection));
else
zlog_debug("%s passive open", peer->host);
zlog_debug("%s passive open for %s", peer->host,
bgp_peer_get_connection_direction(connection));
}
/* Send an open message */
@ -1770,10 +1782,12 @@ bgp_connect_success_w_delayopen(struct peer_connection *connection)
if (bgp_debug_neighbor_events(peer)) {
if (!CHECK_FLAG(peer->sflags, PEER_STATUS_ACCEPT_PEER))
zlog_debug("%s open active, local address %pSU", peer->host,
connection->su_local);
zlog_debug("%s open active, local address %pSU for %s", peer->host,
connection->su_local,
bgp_peer_get_connection_direction(connection));
else
zlog_debug("%s passive open", peer->host);
zlog_debug("%s passive open for %s", peer->host,
bgp_peer_get_connection_direction(connection));
}
/* set the DelayOpenTime to the initial value */
@ -1785,8 +1799,9 @@ bgp_connect_success_w_delayopen(struct peer_connection *connection)
peer->v_delayopen);
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [FSM] BGP OPEN message delayed for %d seconds",
peer->host, peer->delayopen);
zlog_debug("%s [FSM] BGP OPEN message delayed for %d seconds for connection %s",
peer->host, peer->delayopen,
bgp_peer_get_connection_direction(connection));
return BGP_FSM_SUCCESS;
}
@ -1799,8 +1814,8 @@ bgp_connect_fail(struct peer_connection *connection)
if (peer_dynamic_neighbor_no_nsf(peer)) {
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s (dynamic neighbor) deleted (%s)",
peer->host, __func__);
zlog_debug("%s (dynamic neighbor) deleted (%s) for %s", __func__,
peer->host, bgp_peer_get_connection_direction(connection));
peer_delete(peer);
return BGP_FSM_FAILURE_AND_DELETE;
}
@ -1843,9 +1858,8 @@ static enum bgp_fsm_state_progress bgp_start(struct peer_connection *connection)
if (connection->su.sa.sa_family == AF_UNSPEC) {
if (bgp_debug_neighbor_events(peer))
zlog_debug(
"%s [FSM] Unable to get neighbor's IP address, waiting...",
peer->host);
zlog_debug("%s [FSM] Unable to get neighbor's IP address, waiting... for %s",
peer->host, bgp_peer_get_connection_direction(connection));
peer->last_reset = PEER_DOWN_NBR_ADDR;
return BGP_FSM_FAILURE;
}
@ -1888,9 +1902,9 @@ static enum bgp_fsm_state_progress bgp_start(struct peer_connection *connection)
if (!bgp_peer_reg_with_nht(peer)) {
if (bgp_zebra_num_connects()) {
if (bgp_debug_neighbor_events(peer))
zlog_debug(
"%s [FSM] Waiting for NHT, no path to neighbor present",
peer->host);
zlog_debug("%s [FSM] Waiting for NHT, no path to neighbor present for %s",
peer->host,
bgp_peer_get_connection_direction(connection));
peer->last_reset = PEER_DOWN_WAITING_NHT;
BGP_EVENT_ADD(connection, TCP_connection_open_failed);
return BGP_FSM_SUCCESS;
@ -1906,13 +1920,14 @@ static enum bgp_fsm_state_progress bgp_start(struct peer_connection *connection)
switch (status) {
case connect_error:
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [FSM] Connect error", peer->host);
zlog_debug("%s [FSM] Connect error for %s", peer->host,
bgp_peer_get_connection_direction(connection));
BGP_EVENT_ADD(connection, TCP_connection_open_failed);
break;
case connect_success:
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [FSM] Connect immediately success, fd %d",
peer->host, connection->fd);
zlog_debug("%s [FSM] Connect immediately success, fd %d for %s", peer->host,
connection->fd, bgp_peer_get_connection_direction(connection));
BGP_EVENT_ADD(connection, TCP_connection_open);
break;
@ -1920,8 +1935,9 @@ static enum bgp_fsm_state_progress bgp_start(struct peer_connection *connection)
/* To check nonblocking connect, we wait until socket is
readable or writable. */
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [FSM] Non blocking connect waiting result, fd %d",
peer->host, connection->fd);
zlog_debug("%s [FSM] Non blocking connect waiting result, fd %d for %s",
peer->host, connection->fd,
bgp_peer_get_connection_direction(connection));
if (connection->fd < 0) {
flog_err(EC_BGP_FSM, "%s peer's fd is negative value %d",
__func__, peer->connection->fd);
@ -1968,14 +1984,12 @@ bgp_reconnect(struct peer_connection *connection)
static enum bgp_fsm_state_progress
bgp_fsm_open(struct peer_connection *connection)
{
struct peer *peer = connection->peer;
/* If DelayOpen is active, we may still need to send an open message */
if ((connection->status == Connect) || (connection->status == Active))
bgp_open_send(connection);
/* Send keepalive and make keepalive timer */
bgp_keepalive_send(peer);
bgp_keepalive_send(connection);
return BGP_FSM_SUCCESS;
}
@ -2003,7 +2017,8 @@ bgp_fsm_holdtime_expire(struct peer_connection *connection)
struct peer *peer = connection->peer;
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [FSM] Hold timer expire", peer->host);
zlog_debug("%s [FSM] Hold timer expire for %s", peer->host,
bgp_peer_get_connection_direction(connection));
/* RFC8538 updates RFC 4724 by defining an extension that permits
* the Graceful Restart procedures to be performed when the BGP
@ -2184,9 +2199,11 @@ bgp_establish(struct peer_connection *connection)
UNSET_FLAG(peer->sflags, PEER_STATUS_NSF_WAIT);
if (bgp_debug_neighbor_events(peer)) {
if (BGP_PEER_RESTARTING_MODE(peer))
zlog_debug("%pBP BGP_RESTARTING_MODE", peer);
zlog_debug("%pBP BGP_RESTARTING_MODE %s", peer,
bgp_peer_get_connection_direction(connection));
else if (BGP_PEER_HELPER_MODE(peer))
zlog_debug("%pBP BGP_HELPER_MODE", peer);
zlog_debug("%pBP BGP_HELPER_MODE %s", peer,
bgp_peer_get_connection_direction(connection));
}
FOREACH_AFI_SAFI_NSF (afi, safi) {
@ -2259,16 +2276,16 @@ bgp_establish(struct peer_connection *connection)
if (connection->t_gr_stale) {
EVENT_OFF(connection->t_gr_stale);
if (bgp_debug_neighbor_events(peer))
zlog_debug(
"%pBP graceful restart stalepath timer stopped",
peer);
zlog_debug("%pBP graceful restart stalepath timer stopped for %s",
peer, bgp_peer_get_connection_direction(connection));
}
}
if (connection->t_gr_restart) {
EVENT_OFF(connection->t_gr_restart);
if (bgp_debug_neighbor_events(peer))
zlog_debug("%pBP graceful restart timer stopped", peer);
zlog_debug("%pBP graceful restart timer stopped for %s", peer,
bgp_peer_get_connection_direction(connection));
}
/* Reset uptime, turn on keepalives, send current table. */
@ -2284,9 +2301,9 @@ bgp_establish(struct peer_connection *connection)
if (peer->t_llgr_stale[afi][safi]) {
EVENT_OFF(peer->t_llgr_stale[afi][safi]);
if (bgp_debug_neighbor_events(peer))
zlog_debug(
"%pBP Long-lived stale timer stopped for afi/safi: %d/%d",
peer, afi, safi);
zlog_debug("%pBP Long-lived stale timer stopped for afi/safi: %d/%d for %s",
peer, afi, safi,
bgp_peer_get_connection_direction(connection));
}
if (CHECK_FLAG(peer->af_cap[afi][safi],
@ -2327,9 +2344,8 @@ bgp_establish(struct peer_connection *connection)
if (peer->doppelganger &&
(peer->doppelganger->connection->status != Deleted)) {
if (bgp_debug_neighbor_events(peer))
zlog_debug(
"[Event] Deleting stub connection for peer %s",
peer->host);
zlog_debug("[Event] Deleting stub connection for peer %s for %s", peer->host,
bgp_peer_get_connection_direction(peer->doppelganger->connection));
if (peer->doppelganger->connection->status > Active)
bgp_notify_send(peer->doppelganger->connection,
@ -2636,11 +2652,10 @@ int bgp_event_update(struct peer_connection *connection,
next = FSM[connection->status - 1][event - 1].next_state;
if (bgp_debug_neighbor_events(peer) && connection->status != next)
zlog_debug("%s [FSM] %s (%s->%s), fd %d", peer->host,
bgp_event_str[event],
zlog_debug("%s [FSM] %s (%s->%s), fd %d for %s", peer->host, bgp_event_str[event],
lookup_msg(bgp_status_msg, connection->status, NULL),
lookup_msg(bgp_status_msg, next, NULL),
connection->fd);
lookup_msg(bgp_status_msg, next, NULL), connection->fd,
bgp_peer_get_connection_direction(connection));
peer->last_event = peer->cur_event;
peer->cur_event = event;
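The hunks above thread `bgp_peer_get_connection_direction(connection)` through nearly every FSM debug message. A minimal sketch of what such a helper could look like, assuming a `dir` field carrying the `CONNECTION_INCOMING` / `CONNECTION_OUTGOING` / `UNKNOWN` values seen elsewhere in this diff (the enum layout and returned strings here are hypothetical, not FRR's actual definitions):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-ins: the real FRR definitions live in bgpd.h. */
enum connection_direction {
	UNKNOWN,
	CONNECTION_INCOMING,
	CONNECTION_OUTGOING,
};

struct peer_connection {
	enum connection_direction dir;
};

/* Map a connection's direction to a short string for log messages. */
static const char *bgp_peer_get_connection_direction(const struct peer_connection *connection)
{
	switch (connection->dir) {
	case CONNECTION_INCOMING:
		return "incoming";
	case CONNECTION_OUTGOING:
		return "outgoing";
	case UNKNOWN:
		break;
	}
	return "unknown";
}
```

Centralizing this in one helper keeps the "for %s" suffix consistent across the dozens of call sites touched above.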

View file

@ -99,8 +99,11 @@ void bgp_reads_off(struct peer_connection *connection)
assert(fpt->running);
event_cancel_async(fpt->master, &connection->t_read, NULL);
EVENT_OFF(connection->t_process_packet);
EVENT_OFF(connection->t_process_packet_error);
frr_with_mutex (&bm->peer_connection_mtx) {
if (peer_connection_fifo_member(&bm->connection_fifo, connection))
peer_connection_fifo_del(&bm->connection_fifo, connection);
}
UNSET_FLAG(connection->thread_flags, PEER_THREAD_READS_ON);
}
@ -252,8 +255,7 @@ static void bgp_process_reads(struct event *thread)
/* Handle the error in the main pthread, include the
* specific state change from 'bgp_read'.
*/
event_add_event(bm->master, bgp_packet_process_error, connection,
code, &connection->t_process_packet_error);
bgp_enqueue_conn_err(peer->bgp, connection, code);
goto done;
}
@ -294,9 +296,13 @@ done:
event_add_read(fpt->master, bgp_process_reads, connection,
connection->fd, &connection->t_read);
if (added_pkt)
event_add_event(bm->master, bgp_process_packet, connection, 0,
&connection->t_process_packet);
if (added_pkt) {
frr_with_mutex (&bm->peer_connection_mtx) {
if (!peer_connection_fifo_member(&bm->connection_fifo, connection))
peer_connection_fifo_add_tail(&bm->connection_fifo, connection);
}
event_add_event(bm->master, bgp_process_packet, NULL, 0, &bm->e_process_packet);
}
}
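The bgp_io.c hunks replace the per-connection `t_process_packet` event with a shared `bm->connection_fifo` drained by a single `bm->e_process_packet` event; membership is checked under `bm->peer_connection_mtx` so a connection is queued at most once. A simplified, self-contained sketch of that enqueue-once pattern (the list and type names below are illustrative, not FRR's DLIST macros):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative connection with an intrusive queue link and membership flag. */
struct conn {
	struct conn *next;
	bool queued;
};

struct conn_fifo {
	pthread_mutex_t mtx;
	struct conn *head, *tail;
};

/* Enqueue at most once: the membership check and the insertion happen
 * under the same lock, mirroring the frr_with_mutex block in the diff. */
static bool fifo_add_once(struct conn_fifo *f, struct conn *c)
{
	bool added = false;

	pthread_mutex_lock(&f->mtx);
	if (!c->queued) {
		c->queued = true;
		c->next = NULL;
		if (f->tail)
			f->tail->next = c;
		else
			f->head = c;
		f->tail = c;
		added = true;
	}
	pthread_mutex_unlock(&f->mtx);
	return added;
}

/* Pop from the head; clearing the flag lets the connection re-queue later. */
static struct conn *fifo_pop(struct conn_fifo *f)
{
	pthread_mutex_lock(&f->mtx);
	struct conn *c = f->head;

	if (c) {
		f->head = c->next;
		if (!f->head)
			f->tail = NULL;
		c->queued = false;
	}
	pthread_mutex_unlock(&f->mtx);
	return c;
}
```

The payoff of this design is that the I/O pthread and the main pthread share one event instead of one per connection, while the membership flag prevents a busy connection from being enqueued twice.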
/*

View file

@ -10,6 +10,7 @@
#define BGP_WRITE_PACKET_MAX 64U
#define BGP_READ_PACKET_MAX 10U
#define BGP_PACKET_PROCESS_LIMIT 100
#include "bgpd/bgpd.h"
#include "frr_pthread.h"

View file

@ -108,7 +108,7 @@ static void peer_process(struct hash_bucket *hb, void *arg)
zlog_debug("%s [FSM] Timer (keepalive timer expire)",
pkat->peer->host);
bgp_keepalive_send(pkat->peer);
bgp_keepalive_send(pkat->peer->connection);
monotime(&pkat->last);
memset(&elapsed, 0, sizeof(elapsed));
diff = ka;

View file

@ -26,7 +26,7 @@
#include "bgpd/bgp_debug.h"
#include "bgpd/bgp_errors.h"
extern struct zclient *zclient;
extern struct zclient *bgp_zclient;
/* MPLS Labels hash routines. */
@ -157,7 +157,7 @@ int bgp_parse_fec_update(void)
afi_t afi;
safi_t safi;
s = zclient->ibuf;
s = bgp_zclient->ibuf;
memset(&p, 0, sizeof(p));
p.family = stream_getw(s);
@ -249,7 +249,7 @@ static void bgp_send_fec_register_label_msg(struct bgp_dest *dest, bool reg,
p = bgp_dest_get_prefix(dest);
/* Check socket. */
if (!zclient || zclient->sock < 0)
if (!bgp_zclient || bgp_zclient->sock < 0)
return;
if (BGP_DEBUG(labelpool, LABELPOOL))
@ -258,7 +258,7 @@ static void bgp_send_fec_register_label_msg(struct bgp_dest *dest, bool reg,
/* If the route node has a local_label assigned or the
* path node has an MPLS SR label index allowing zebra to
* derive the label, proceed with registration. */
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
command = (reg) ? ZEBRA_FEC_REGISTER : ZEBRA_FEC_UNREGISTER;
zclient_create_header(s, command, VRF_DEFAULT);
@ -288,7 +288,7 @@ static void bgp_send_fec_register_label_msg(struct bgp_dest *dest, bool reg,
if (reg)
stream_putw_at(s, flags_pos, flags);
zclient_send_message(zclient);
zclient_send_message(bgp_zclient);
}
/**

View file

@ -7,8 +7,6 @@
#define _BGP_LABEL_H
#define BGP_LABEL_BYTES 3
#define BGP_LABEL_BITS 24
#define BGP_WITHDRAW_LABEL 0x800000
#define BGP_PREVENT_VRF_2_VRF_LEAK 0xFFFFFFFE
struct bgp_dest;

View file

@ -219,6 +219,9 @@ static void bgp_mac_rescan_evpn_table(struct bgp *bgp, struct ethaddr *macaddr)
if (!peer_established(peer->connection))
continue;
if (!peer->afc[afi][safi])
continue;
if (bgp_debug_update(peer, NULL, NULL, 1))
zlog_debug(
"Processing EVPN MAC interface change on peer %s %s",

View file

@ -161,6 +161,14 @@ __attribute__((__noreturn__)) void sigint(void)
bgp_exit(0);
/*
* This is being done after bgp_exit because items may be removed
* from the connection_fifo
*/
peer_connection_fifo_fini(&bm->connection_fifo);
EVENT_OFF(bm->e_process_packet);
pthread_mutex_destroy(&bm->peer_connection_mtx);
exit(0);
}

View file

@ -136,4 +136,6 @@ DECLARE_MTYPE(BGP_SOFT_VERSION);
DECLARE_MTYPE(BGP_EVPN_OVERLAY);
DECLARE_MTYPE(CLEARING_BATCH);
#endif /* _QUAGGA_BGP_MEMORY_H */

View file

@ -46,7 +46,7 @@ DEFINE_MTYPE_STATIC(BGPD, MPLSVPN_NH_LABEL_BIND_CACHE,
/*
* Definitions and external declarations.
*/
extern struct zclient *zclient;
extern struct zclient *bgp_zclient;
extern int argv_find_and_parse_vpnvx(struct cmd_token **argv, int argc,
int *index, afi_t *afi)
@ -317,7 +317,7 @@ void vpn_leak_zebra_vrf_label_update(struct bgp *bgp, afi_t afi)
if (label == BGP_PREVENT_VRF_2_VRF_LEAK)
label = MPLS_LABEL_NONE;
zclient_send_vrf_label(zclient, bgp->vrf_id, afi, label, ZEBRA_LSP_BGP);
zclient_send_vrf_label(bgp_zclient, bgp->vrf_id, afi, label, ZEBRA_LSP_BGP);
bgp->vpn_policy[afi].tovpn_zebra_vrf_label_last_sent = label;
}
@ -344,7 +344,7 @@ void vpn_leak_zebra_vrf_label_withdraw(struct bgp *bgp, afi_t afi)
bgp->name_pretty, bgp->vrf_id);
}
zclient_send_vrf_label(zclient, bgp->vrf_id, afi, label, ZEBRA_LSP_BGP);
zclient_send_vrf_label(bgp_zclient, bgp->vrf_id, afi, label, ZEBRA_LSP_BGP);
bgp->vpn_policy[afi].tovpn_zebra_vrf_label_last_sent = label;
}
@ -397,11 +397,13 @@ void vpn_leak_zebra_vrf_sid_update_per_af(struct bgp *bgp, afi_t afi)
ctx.argument_len =
bgp->vpn_policy[afi]
.tovpn_sid_locator->argument_bits_length;
if (CHECK_FLAG(bgp->vpn_policy[afi].tovpn_sid_locator->flags, SRV6_LOCATOR_USID))
SET_SRV6_FLV_OP(ctx.flv.flv_ops, ZEBRA_SEG6_LOCAL_FLV_OP_NEXT_CSID);
}
ctx.table = vrf->data.l.table_id;
act = afi == AFI_IP ? ZEBRA_SEG6_LOCAL_ACTION_END_DT4
: ZEBRA_SEG6_LOCAL_ACTION_END_DT6;
zclient_send_localsid(zclient, tovpn_sid, bgp->vrf_id, act, &ctx);
zclient_send_localsid(bgp_zclient, tovpn_sid, bgp->vrf_id, act, &ctx);
tovpn_sid_ls = XCALLOC(MTYPE_BGP_SRV6_SID, sizeof(struct in6_addr));
*tovpn_sid_ls = *tovpn_sid;
@ -454,10 +456,12 @@ void vpn_leak_zebra_vrf_sid_update_per_vrf(struct bgp *bgp)
ctx.node_len = bgp->tovpn_sid_locator->node_bits_length;
ctx.function_len = bgp->tovpn_sid_locator->function_bits_length;
ctx.argument_len = bgp->tovpn_sid_locator->argument_bits_length;
if (CHECK_FLAG(bgp->tovpn_sid_locator->flags, SRV6_LOCATOR_USID))
SET_SRV6_FLV_OP(ctx.flv.flv_ops, ZEBRA_SEG6_LOCAL_FLV_OP_NEXT_CSID);
}
ctx.table = vrf->data.l.table_id;
act = ZEBRA_SEG6_LOCAL_ACTION_END_DT46;
zclient_send_localsid(zclient, tovpn_sid, bgp->vrf_id, act, &ctx);
zclient_send_localsid(bgp_zclient, tovpn_sid, bgp->vrf_id, act, &ctx);
tovpn_sid_ls = XCALLOC(MTYPE_BGP_SRV6_SID, sizeof(struct in6_addr));
*tovpn_sid_ls = *tovpn_sid;
@ -519,7 +523,7 @@ void vpn_leak_zebra_vrf_sid_withdraw_per_af(struct bgp *bgp, afi_t afi)
bgp->vpn_policy[afi]
.tovpn_sid_locator->argument_bits_length;
}
zclient_send_localsid(zclient,
zclient_send_localsid(bgp_zclient,
bgp->vpn_policy[afi].tovpn_zebra_vrf_sid_last_sent,
bgp->vrf_id, ZEBRA_SEG6_LOCAL_ACTION_UNSPEC,
&seg6localctx);
@ -564,7 +568,7 @@ void vpn_leak_zebra_vrf_sid_withdraw_per_vrf(struct bgp *bgp)
seg6localctx.argument_len =
bgp->tovpn_sid_locator->argument_bits_length;
}
zclient_send_localsid(zclient, bgp->tovpn_zebra_vrf_sid_last_sent,
zclient_send_localsid(bgp_zclient, bgp->tovpn_zebra_vrf_sid_last_sent,
bgp->vrf_id, ZEBRA_SEG6_LOCAL_ACTION_UNSPEC,
&seg6localctx);
XFREE(MTYPE_BGP_SRV6_SID, bgp->tovpn_zebra_vrf_sid_last_sent);
@ -1088,32 +1092,37 @@ static bool leak_update_nexthop_valid(struct bgp *to_bgp, struct bgp_dest *bn,
/* the route is defined with the "network <prefix>" command */
if (CHECK_FLAG(bgp_nexthop->flags, BGP_FLAG_IMPORT_CHECK))
nh_valid = bgp_find_or_add_nexthop(to_bgp, bgp_nexthop,
afi, SAFI_UNICAST,
bpi_ultimate, NULL,
0, p);
nh_valid = bgp_find_or_add_nexthop(to_bgp, bgp_nexthop, afi, SAFI_UNICAST,
bpi_ultimate, NULL, 0, p, bpi_ultimate);
else
/* if "no bgp network import-check" is set,
* then mark the nexthop as valid.
*/
nh_valid = true;
} else if (bpi_ultimate->type == ZEBRA_ROUTE_BGP &&
bpi_ultimate->sub_type == BGP_ROUTE_AGGREGATE) {
nh_valid = true;
} else
/*
* TBD do we need to do anything about the
* 'connected' parameter?
*/
nh_valid = bgp_find_or_add_nexthop(to_bgp, bgp_nexthop, afi,
safi, bpi, NULL, 0, p);
/* VPN paths: the new bpi may be altered, e.g. by the
 * 'nexthop vpn export' command. Use the bpi_ultimate
 * to find the original nexthop
*/
nh_valid = bgp_find_or_add_nexthop(to_bgp, bgp_nexthop, afi, safi, bpi, NULL, 0, p,
bpi_ultimate);
/*
* If you are using SRv6 VPN instead of MPLS, the SID allocation
* needs to be checked. If the SID is not allocated, the rib
* will be invalid.
* If the SID per VRF is not available, also consider the rib as
* invalid.
*/
if (to_bgp->srv6_enabled &&
(!new_attr->srv6_l3vpn && !new_attr->srv6_vpn)) {
nh_valid = false;
}
if (to_bgp->srv6_enabled && nh_valid)
nh_valid = is_pi_srv6_valid(bpi, bgp_nexthop, afi, safi);
if (debug)
zlog_debug("%s: %pFX nexthop is %svalid (in %s)", __func__, p,
@ -1204,8 +1213,8 @@ leak_update(struct bgp *to_bgp, struct bgp_dest *bn,
return NULL;
}
if (attrhash_cmp(bpi->attr, new_attr) && labelssame &&
!CHECK_FLAG(bpi->flags, BGP_PATH_REMOVED) &&
if (labelssame && !CHECK_FLAG(bpi->flags, BGP_PATH_REMOVED) &&
attrhash_cmp(bpi->attr, new_attr) &&
leak_update_nexthop_valid(to_bgp, bn, new_attr, afi, safi, source_bpi, bpi,
bgp_orig, p,
debug) == !!CHECK_FLAG(bpi->flags, BGP_PATH_VALID)) {
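The `leak_update()` hunk above also reorders the reuse condition so the cheap `labelssame` and `BGP_PATH_REMOVED` checks run before the comparatively expensive `attrhash_cmp()`. The effect of such short-circuit reordering can be sketched as follows (the cost counter is purely illustrative):

```c
#include <assert.h>
#include <stdbool.h>

static int expensive_calls;

/* Stand-in for attrhash_cmp(): count how often we actually pay for it. */
static bool expensive_cmp(int a, int b)
{
	expensive_calls++;
	return a == b;
}

/* Cheap checks first: && short-circuits, so the expensive comparison
 * is skipped whenever a cheap predicate already decides the outcome. */
static bool should_reuse(bool labelssame, bool removed, int a, int b)
{
	return labelssame && !removed && expensive_cmp(a, b);
}
```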
@ -1591,8 +1600,8 @@ vpn_leak_from_vrf_get_per_nexthop_label(afi_t afi, struct bgp_path_info *pi,
bgp_nexthop = from_bgp;
nh_afi = BGP_ATTR_NH_AFI(afi, pi->attr);
nh_valid = bgp_find_or_add_nexthop(from_bgp, bgp_nexthop, nh_afi,
SAFI_UNICAST, pi, NULL, 0, NULL);
nh_valid = bgp_find_or_add_nexthop(from_bgp, bgp_nexthop, nh_afi, SAFI_UNICAST, pi, NULL, 0,
NULL, NULL);
if (!nh_valid && is_bgp_static_route &&
!CHECK_FLAG(from_bgp->flags, BGP_FLAG_IMPORT_CHECK)) {
@ -1693,6 +1702,14 @@ void vpn_leak_from_vrf_update(struct bgp *to_bgp, /* to */
return;
}
/* Aggregate-address suppress check. */
if (bgp_path_suppressed(path_vrf)) {
if (debug)
zlog_debug("%s: %s skipping: suppressed path will not be exported",
__func__, from_bgp->name);
return;
}
/* shallow copy */
static_attr = *path_vrf->attr;
@ -2326,8 +2343,8 @@ static void vpn_leak_to_vrf_update_onevrf(struct bgp *to_bgp, /* to */
break;
}
if (bpi && leak_update_nexthop_valid(to_bgp, bn, &static_attr, afi, safi,
path_vpn, bpi, src_vrf, p, debug))
if (bpi && leak_update_nexthop_valid(to_bgp, bn, &static_attr, afi, safi, path_vpn, bpi,
src_vrf, p, debug))
SET_FLAG(static_attr.nh_flags, BGP_ATTR_NH_VALID);
else
UNSET_FLAG(static_attr.nh_flags, BGP_ATTR_NH_VALID);

View file

@ -342,6 +342,37 @@ static inline bool is_pi_family_vpn(struct bgp_path_info *pi)
is_pi_family_matching(pi, AFI_IP6, SAFI_MPLS_VPN));
}
/*
* If you are using SRv6 VPN instead of MPLS, the SID allocation
* needs to be checked. If the SID is not allocated, the rib
* will be invalid.
* If the SID per VRF is not available, also consider the rib as
* invalid.
*/
static inline bool is_pi_srv6_valid(struct bgp_path_info *pi, struct bgp *bgp_nexthop, afi_t afi,
safi_t safi)
{
if (!pi->attr->srv6_l3vpn && !pi->attr->srv6_vpn)
return false;
/* imported paths from VPN: srv6 enabled and nht reachability
* are enough to know if that path is valid
*/
if (safi == SAFI_UNICAST)
return true;
if (bgp_nexthop->vpn_policy[afi].tovpn_sid == NULL && bgp_nexthop->tovpn_sid == NULL)
return false;
if (bgp_nexthop->tovpn_sid_index == 0 &&
!CHECK_FLAG(bgp_nexthop->vrf_flags, BGP_VRF_TOVPN_SID_AUTO) &&
bgp_nexthop->vpn_policy[afi].tovpn_sid_index == 0 &&
!CHECK_FLAG(bgp_nexthop->vpn_policy[afi].flags, BGP_VPN_POLICY_TOVPN_SID_AUTO))
return false;
return true;
}
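To make the rules in `is_pi_srv6_valid()` concrete, here is a reduced model exercising its three outcomes: a path with no SRv6 attribute, an imported unicast path, and VPN paths with and without a usable SID. All struct layouts below are simplified stand-ins, not FRR's real definitions, and the per-AF `vpn_policy` checks are collapsed into single fields:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the FRR types used by is_pi_srv6_valid(). */
enum { SAFI_UNICAST = 1, SAFI_MPLS_VPN = 4 };

struct attr {
	void *srv6_l3vpn;
	void *srv6_vpn;
};

struct bgp {
	void *tovpn_sid;          /* SID, if one has been allocated */
	unsigned tovpn_sid_index; /* explicitly configured SID index */
	bool tovpn_sid_auto;      /* stands in for the SID_AUTO flags */
};

static bool srv6_valid(const struct attr *attr, const struct bgp *bgp, int safi)
{
	/* No SRv6 attribute at all: the rib entry cannot be valid. */
	if (!attr->srv6_l3vpn && !attr->srv6_vpn)
		return false;
	/* Imported unicast paths: the SRv6 attribute plus NHT suffice. */
	if (safi == SAFI_UNICAST)
		return true;
	/* VPN paths additionally need an allocated SID... */
	if (!bgp->tovpn_sid)
		return false;
	/* ...that was either explicitly indexed or auto-allocated. */
	if (bgp->tovpn_sid_index == 0 && !bgp->tovpn_sid_auto)
		return false;
	return true;
}
```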
extern void vpn_policy_routemap_event(const char *rmap_name);
extern vrf_id_t get_first_vrf_for_redirect_with_rt(struct ecommunity *eckey);

View file

@ -1460,8 +1460,6 @@ static struct bgp_path_info *bgpL3vpnRte_lookup(struct variable *v, oid name[],
pi = bgp_lookup_route_next(l3vpn_bgp, dest, &prefix, policy,
&nexthop);
if (pi) {
uint8_t vrf_name_len =
strnlen((*l3vpn_bgp)->name, VRF_NAMSIZ);
const struct prefix *p = bgp_dest_get_prefix(*dest);
uint8_t oid_index;
bool v4 = (p->family == AF_INET);
@ -1469,6 +1467,8 @@ static struct bgp_path_info *bgpL3vpnRte_lookup(struct variable *v, oid name[],
: sizeof(struct in6_addr);
struct attr *attr = pi->attr;
vrf_name_len = strnlen((*l3vpn_bgp)->name, VRF_NAMSIZ);
/* copy the index parameters */
oid_copy_str(&name[namelen], (*l3vpn_bgp)->name,
vrf_name_len);

View file

@ -389,6 +389,23 @@ static void bgp_socket_set_buffer_size(const int fd)
setsockopt_so_recvbuf(fd, bm->socket_buffer);
}
static const char *bgp_peer_active2str(enum bgp_peer_active active)
{
switch (active) {
case BGP_PEER_ACTIVE:
return "active";
case BGP_PEER_CONNECTION_UNSPECIFIED:
return "unspecified connection";
case BGP_PEER_BFD_DOWN:
return "BFD down";
case BGP_PEER_AF_UNCONFIGURED:
return "no AF activated";
}
assert(!"We should never get here this is a dev escape");
return "ERROR";
}
/* Accept bgp connection. */
static void bgp_accept(struct event *thread)
{
@ -396,10 +413,11 @@ static void bgp_accept(struct event *thread)
int accept_sock;
union sockunion su;
struct bgp_listener *listener = EVENT_ARG(thread);
struct peer *peer, *peer1;
struct peer_connection *connection, *connection1;
struct peer *doppelganger, *peer;
struct peer_connection *connection, *incoming;
char buf[SU_ADDRSTRLEN];
struct bgp *bgp = NULL;
enum bgp_peer_active active;
sockunion_init(&su);
@ -475,53 +493,51 @@ static void bgp_accept(struct event *thread)
bgp_update_setsockopt_tcp_keepalive(bgp, bgp_sock);
/* Check remote IP address */
peer1 = peer_lookup(bgp, &su);
peer = peer_lookup(bgp, &su);
if (!peer1) {
peer1 = peer_lookup_dynamic_neighbor(bgp, &su);
if (peer1) {
connection1 = peer1->connection;
if (!peer) {
struct peer *dynamic_peer = peer_lookup_dynamic_neighbor(bgp, &su);
if (dynamic_peer) {
incoming = dynamic_peer->connection;
/* Dynamic neighbor has been created, let it proceed */
connection1->fd = bgp_sock;
incoming->fd = bgp_sock;
incoming->dir = CONNECTION_INCOMING;
connection1->su_local = sockunion_getsockname(connection1->fd);
connection1->su_remote = sockunion_dup(&su);
incoming->su_local = sockunion_getsockname(incoming->fd);
incoming->su_remote = sockunion_dup(&su);
if (bgp_set_socket_ttl(connection1) < 0) {
peer1->last_reset = PEER_DOWN_SOCKET_ERROR;
if (bgp_set_socket_ttl(incoming) < 0) {
dynamic_peer->last_reset = PEER_DOWN_SOCKET_ERROR;
zlog_err("%s: Unable to set min/max TTL on peer %s (dynamic), error received: %s(%d)",
__func__, peer1->host,
safe_strerror(errno), errno);
__func__, dynamic_peer->host, safe_strerror(errno), errno);
return;
}
/* Set the user configured MSS to TCP socket */
if (CHECK_FLAG(peer1->flags, PEER_FLAG_TCP_MSS))
sockopt_tcp_mss_set(bgp_sock, peer1->tcp_mss);
if (CHECK_FLAG(dynamic_peer->flags, PEER_FLAG_TCP_MSS))
sockopt_tcp_mss_set(bgp_sock, dynamic_peer->tcp_mss);
frr_with_privs (&bgpd_privs) {
vrf_bind(peer1->bgp->vrf_id, bgp_sock,
bgp_get_bound_name(connection1));
vrf_bind(dynamic_peer->bgp->vrf_id, bgp_sock,
bgp_get_bound_name(incoming));
}
bgp_peer_reg_with_nht(peer1);
bgp_fsm_change_status(connection1, Active);
EVENT_OFF(connection1->t_start);
bgp_peer_reg_with_nht(dynamic_peer);
bgp_fsm_change_status(incoming, Active);
EVENT_OFF(incoming->t_start);
if (peer_active(peer1->connection)) {
if (CHECK_FLAG(peer1->flags,
PEER_FLAG_TIMER_DELAYOPEN))
BGP_EVENT_ADD(connection1,
TCP_connection_open_w_delay);
if (peer_active(incoming) == BGP_PEER_ACTIVE) {
if (CHECK_FLAG(dynamic_peer->flags, PEER_FLAG_TIMER_DELAYOPEN))
BGP_EVENT_ADD(incoming, TCP_connection_open_w_delay);
else
BGP_EVENT_ADD(connection1,
TCP_connection_open);
BGP_EVENT_ADD(incoming, TCP_connection_open);
}
return;
}
}
if (!peer1) {
if (!peer) {
if (bgp_debug_neighbor_events(NULL)) {
zlog_debug(
"[Event] %s connection rejected(%s:%u:%s) - not configured and not valid for dynamic",
@ -532,10 +548,12 @@ static void bgp_accept(struct event *thread)
return;
}
connection1 = peer1->connection;
if (CHECK_FLAG(peer1->flags, PEER_FLAG_SHUTDOWN)
|| CHECK_FLAG(peer1->bgp->flags, BGP_FLAG_SHUTDOWN)) {
if (bgp_debug_neighbor_events(peer1))
/* bgp pointer may still be null here, but since we have a peer we can recover it from the peer */
bgp = peer->bgp;
connection = peer->connection;
if (CHECK_FLAG(peer->flags, PEER_FLAG_SHUTDOWN) ||
CHECK_FLAG(peer->bgp->flags, BGP_FLAG_SHUTDOWN)) {
if (bgp_debug_neighbor_events(peer))
zlog_debug(
"[Event] connection from %s rejected(%s:%u:%s) due to admin shutdown",
inet_sutop(&su, buf), bgp->name_pretty, bgp->as,
@ -550,21 +568,20 @@ static void bgp_accept(struct event *thread)
* Established and then the Clearing_Completed event is generated. Also,
* block incoming connection in Deleted state.
*/
if (connection1->status == Clearing || connection1->status == Deleted) {
if (bgp_debug_neighbor_events(peer1))
zlog_debug("[Event] Closing incoming conn for %s (%p) state %d",
peer1->host, peer1,
peer1->connection->status);
if (connection->status == Clearing || connection->status == Deleted) {
if (bgp_debug_neighbor_events(peer))
zlog_debug("[Event] Closing incoming conn for %s (%p) state %d", peer->host,
peer, connection->status);
close(bgp_sock);
return;
}
/* Check that at least one AF is activated for the peer. */
if (!peer_active(connection1)) {
if (bgp_debug_neighbor_events(peer1))
zlog_debug(
"%s - incoming conn rejected - no AF activated for peer",
peer1->host);
active = peer_active(connection);
if (active != BGP_PEER_ACTIVE) {
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s - incoming conn rejected - %s", peer->host,
bgp_peer_active2str(active));
close(bgp_sock);
return;
}
@ -573,117 +590,109 @@ static void bgp_accept(struct event *thread)
* prefixes, restart timer is still running or the peer
* is shutdown, or BGP identifier is not set (0.0.0.0).
*/
if (BGP_PEER_START_SUPPRESSED(peer1)) {
if (bgp_debug_neighbor_events(peer1)) {
if (peer1->shut_during_cfg)
zlog_debug(
"[Event] Incoming BGP connection rejected from %s due to configuration being currently read in",
peer1->host);
if (BGP_PEER_START_SUPPRESSED(peer)) {
if (bgp_debug_neighbor_events(peer)) {
if (peer->shut_during_cfg)
zlog_debug("[Event] Incoming BGP connection rejected from %s due to configuration being currently read in",
peer->host);
else
zlog_debug(
"[Event] Incoming BGP connection rejected from %s due to maximum-prefix or shutdown",
peer1->host);
zlog_debug("[Event] Incoming BGP connection rejected from %s due to maximum-prefix or shutdown",
peer->host);
}
close(bgp_sock);
return;
}
if (peer1->bgp->router_id.s_addr == INADDR_ANY) {
if (peer->bgp->router_id.s_addr == INADDR_ANY) {
zlog_warn("[Event] Incoming BGP connection rejected from %s due to missing BGP identifier, set it with `bgp router-id`",
peer1->host);
peer1->last_reset = PEER_DOWN_ROUTER_ID_ZERO;
peer->host);
peer->last_reset = PEER_DOWN_ROUTER_ID_ZERO;
close(bgp_sock);
return;
}
if (bgp_debug_neighbor_events(peer1))
if (bgp_debug_neighbor_events(peer))
zlog_debug("[Event] connection from %s fd %d, active peer status %d fd %d",
inet_sutop(&su, buf), bgp_sock, connection1->status,
connection1->fd);
inet_sutop(&su, buf), bgp_sock, connection->status, connection->fd);
if (peer1->doppelganger) {
if (peer->doppelganger) {
/* We have an existing connection. Kill the existing one and run
with this one.
*/
if (bgp_debug_neighbor_events(peer1))
zlog_debug(
"[Event] New active connection from peer %s, Killing previous active connection",
peer1->host);
peer_delete(peer1->doppelganger);
if (bgp_debug_neighbor_events(peer))
zlog_debug("[Event] New active connection from peer %s, Killing previous active connection",
peer->host);
peer_delete(peer->doppelganger);
}
peer = peer_create(&su, peer1->conf_if, peer1->bgp, peer1->local_as,
peer1->as, peer1->as_type, NULL, false, NULL);
doppelganger = peer_create(&su, peer->conf_if, bgp, peer->local_as, peer->as, peer->as_type,
NULL, false, NULL);
connection = peer->connection;
incoming = doppelganger->connection;
peer_xfer_config(peer, peer1);
bgp_peer_gr_flags_update(peer);
peer_xfer_config(doppelganger, peer);
bgp_peer_gr_flags_update(doppelganger);
BGP_GR_ROUTER_DETECT_AND_SEND_CAPABILITY_TO_ZEBRA(peer->bgp,
peer->bgp->peer);
BGP_GR_ROUTER_DETECT_AND_SEND_CAPABILITY_TO_ZEBRA(bgp, bgp->peer);
if (bgp_peer_gr_mode_get(peer) == PEER_DISABLE) {
if (bgp_peer_gr_mode_get(doppelganger) == PEER_DISABLE) {
UNSET_FLAG(doppelganger->sflags, PEER_STATUS_NSF_MODE);
UNSET_FLAG(peer->sflags, PEER_STATUS_NSF_MODE);
if (CHECK_FLAG(peer->sflags, PEER_STATUS_NSF_WAIT)) {
peer_nsf_stop(peer);
if (CHECK_FLAG(doppelganger->sflags, PEER_STATUS_NSF_WAIT)) {
peer_nsf_stop(doppelganger);
}
}
peer->doppelganger = peer1;
peer1->doppelganger = peer;
doppelganger->doppelganger = peer;
peer->doppelganger = doppelganger;
connection->fd = bgp_sock;
connection->su_local = sockunion_getsockname(connection->fd);
connection->su_remote = sockunion_dup(&su);
incoming->fd = bgp_sock;
incoming->dir = CONNECTION_INCOMING;
incoming->su_local = sockunion_getsockname(incoming->fd);
incoming->su_remote = sockunion_dup(&su);
if (bgp_set_socket_ttl(connection) < 0)
if (bgp_debug_neighbor_events(peer))
if (bgp_set_socket_ttl(incoming) < 0)
if (bgp_debug_neighbor_events(doppelganger))
zlog_debug("[Event] Unable to set min/max TTL on peer %s, Continuing",
peer->host);
doppelganger->host);
frr_with_privs(&bgpd_privs) {
vrf_bind(peer->bgp->vrf_id, bgp_sock,
bgp_get_bound_name(peer->connection));
vrf_bind(bgp->vrf_id, bgp_sock, bgp_get_bound_name(incoming));
}
bgp_peer_reg_with_nht(peer);
bgp_fsm_change_status(connection, Active);
EVENT_OFF(connection->t_start); /* created in peer_create() */
bgp_peer_reg_with_nht(doppelganger);
bgp_fsm_change_status(incoming, Active);
EVENT_OFF(incoming->t_start); /* created in peer_create() */
SET_FLAG(peer->sflags, PEER_STATUS_ACCEPT_PEER);
SET_FLAG(doppelganger->sflags, PEER_STATUS_ACCEPT_PEER);
/* Make dummy peer until read Open packet. */
if (peer_established(connection1) &&
CHECK_FLAG(peer1->sflags, PEER_STATUS_NSF_MODE)) {
if (peer_established(connection) && CHECK_FLAG(peer->sflags, PEER_STATUS_NSF_MODE)) {
/* If we have an existing established connection with graceful
* restart
* capability announced with one or more address families, then
* drop
* existing established connection and move state to connect.
*/
peer1->last_reset = PEER_DOWN_NSF_CLOSE_SESSION;
peer->last_reset = PEER_DOWN_NSF_CLOSE_SESSION;
if (CHECK_FLAG(peer1->flags, PEER_FLAG_GRACEFUL_RESTART)
|| CHECK_FLAG(peer1->flags,
PEER_FLAG_GRACEFUL_RESTART_HELPER))
SET_FLAG(peer1->sflags, PEER_STATUS_NSF_WAIT);
if (CHECK_FLAG(peer->flags, PEER_FLAG_GRACEFUL_RESTART) ||
CHECK_FLAG(peer->flags, PEER_FLAG_GRACEFUL_RESTART_HELPER))
SET_FLAG(peer->sflags, PEER_STATUS_NSF_WAIT);
bgp_event_update(connection1, TCP_connection_closed);
bgp_event_update(connection, TCP_connection_closed);
}
if (peer_active(peer->connection)) {
if (CHECK_FLAG(peer->flags, PEER_FLAG_TIMER_DELAYOPEN))
BGP_EVENT_ADD(connection, TCP_connection_open_w_delay);
if (peer_active(incoming) == BGP_PEER_ACTIVE) {
if (CHECK_FLAG(doppelganger->flags, PEER_FLAG_TIMER_DELAYOPEN))
BGP_EVENT_ADD(incoming, TCP_connection_open_w_delay);
else
BGP_EVENT_ADD(connection, TCP_connection_open);
BGP_EVENT_ADD(incoming, TCP_connection_open);
}
/*
* If we are doing nht for a peer that is v6 LL based
* massage the event system to make things happy
*/
bgp_nht_interface_events(peer);
bgp_nht_interface_events(doppelganger);
}
/* BGP socket bind. */
@ -801,6 +810,7 @@ enum connect_result bgp_connect(struct peer_connection *connection)
connection->fd =
vrf_sockunion_socket(&connection->su, peer->bgp->vrf_id,
bgp_get_bound_name(connection));
connection->dir = CONNECTION_OUTGOING;
}
if (connection->fd < 0) {
peer->last_reset = PEER_DOWN_SOCKET_ERROR;


@ -444,7 +444,7 @@ void bgp_connected_add(struct bgp *bgp, struct connected *ifc)
!peer_established(peer->connection) &&
!CHECK_FLAG(peer->flags, PEER_FLAG_IFPEER_V6ONLY)) {
connection = peer->connection;
if (peer_active(connection))
if (peer_active(connection) == BGP_PEER_ACTIVE)
BGP_EVENT_ADD(connection, BGP_Stop);
BGP_EVENT_ADD(connection, BGP_Start);
}


@ -34,11 +34,12 @@
#include "bgpd/bgp_mplsvpn.h"
#include "bgpd/bgp_ecommunity.h"
extern struct zclient *zclient;
extern struct zclient *bgp_zclient;
static void register_zebra_rnh(struct bgp_nexthop_cache *bnc);
static void unregister_zebra_rnh(struct bgp_nexthop_cache *bnc);
static bool make_prefix(int afi, struct bgp_path_info *pi, struct prefix *p);
static bool make_prefix(int afi, struct bgp_path_info *pi, struct prefix *p,
struct bgp *bgp_nexthop, struct bgp_path_info *pi_source);
static void bgp_nht_ifp_initial(struct event *thread);
DEFINE_HOOK(bgp_nht_path_update, (struct bgp *bgp, struct bgp_path_info *pi, bool valid),
@ -297,10 +298,9 @@ void bgp_unlink_nexthop_by_peer(struct peer *peer)
* A route and its nexthop might belong to different VRFs. Therefore,
* we need both the bgp_route and bgp_nexthop pointers.
*/
int bgp_find_or_add_nexthop(struct bgp *bgp_route, struct bgp *bgp_nexthop,
afi_t afi, safi_t safi, struct bgp_path_info *pi,
struct peer *peer, int connected,
const struct prefix *orig_prefix)
int bgp_find_or_add_nexthop(struct bgp *bgp_route, struct bgp *bgp_nexthop, afi_t afi, safi_t safi,
struct bgp_path_info *pi, struct peer *peer, int connected,
const struct prefix *orig_prefix, struct bgp_path_info *source_pi)
{
struct bgp_nexthop_cache_head *tree = NULL;
struct bgp_nexthop_cache *bnc;
@ -330,7 +330,7 @@ int bgp_find_or_add_nexthop(struct bgp *bgp_route, struct bgp *bgp_nexthop,
/* This will return true if the global IPv6 NH is a link local
* addr */
if (!make_prefix(afi, pi, &p))
if (!make_prefix(afi, pi, &p, bgp_nexthop, source_pi))
return 1;
/*
@ -667,7 +667,7 @@ static void bgp_process_nexthop_update(struct bgp_nexthop_cache *bnc,
nexthop->vrf_id);
if (ifp)
zclient_send_interface_radv_req(
zclient, nexthop->vrf_id, ifp,
bgp_zclient, nexthop->vrf_id, ifp,
true,
BGP_UNNUM_DEFAULT_RA_INTERVAL);
}
@ -763,10 +763,6 @@ static void bgp_nht_ifp_table_handle(struct bgp *bgp,
struct interface *ifp, bool up)
{
struct bgp_nexthop_cache *bnc;
struct nexthop *nhop;
uint16_t other_nh_count;
bool nhop_ll_found = false;
bool nhop_found = false;
if (ifp->ifindex == IFINDEX_INTERNAL) {
zlog_warn("%s: The interface %s ignored", __func__, ifp->name);
@ -774,42 +770,9 @@ static void bgp_nht_ifp_table_handle(struct bgp *bgp,
}
frr_each (bgp_nexthop_cache, table, bnc) {
other_nh_count = 0;
nhop_ll_found = bnc->ifindex_ipv6_ll == ifp->ifindex;
for (nhop = bnc->nexthop; nhop; nhop = nhop->next) {
if (nhop->ifindex == bnc->ifindex_ipv6_ll)
continue;
if (nhop->ifindex != ifp->ifindex) {
other_nh_count++;
continue;
}
if (nhop->vrf_id != ifp->vrf->vrf_id) {
other_nh_count++;
continue;
}
nhop_found = true;
}
if (!nhop_found && !nhop_ll_found)
/* The event interface does not match the nexthop cache
* entry */
if (bnc->ifindex_ipv6_ll != ifp->ifindex)
continue;
if (!up && other_nh_count > 0)
/* Down event ignored in case of multiple next-hop
* interfaces. The other interfaces might still be
* up. The cases where all interfaces are down or a bnc
* is invalid are processed by a separate zebra rnh
* messages.
*/
continue;
if (!nhop_ll_found) {
evaluate_paths(bnc);
continue;
}
bnc->last_update = monotime(NULL);
bnc->change_flags = 0;
@ -822,7 +785,6 @@ static void bgp_nht_ifp_table_handle(struct bgp *bgp,
if (up) {
SET_FLAG(bnc->flags, BGP_NEXTHOP_VALID);
SET_FLAG(bnc->change_flags, BGP_NEXTHOP_CHANGED);
/* change nexthop number only for ll */
bnc->nexthop_num = 1;
} else {
UNSET_FLAG(bnc->flags, BGP_NEXTHOP_PEER_NOTIFIED);
@ -842,6 +804,9 @@ static void bgp_nht_ifp_handle(struct interface *ifp, bool up)
if (!bgp)
return;
if (!up)
bgp_clearing_batch_begin(bgp);
bgp_nht_ifp_table_handle(bgp, &bgp->nexthop_cache_table[AFI_IP], ifp,
up);
bgp_nht_ifp_table_handle(bgp, &bgp->import_check_table[AFI_IP], ifp,
@ -850,6 +815,9 @@ static void bgp_nht_ifp_handle(struct interface *ifp, bool up)
up);
bgp_nht_ifp_table_handle(bgp, &bgp->import_check_table[AFI_IP6], ifp,
up);
if (!up)
bgp_clearing_batch_end_event_start(bgp);
}
void bgp_nht_ifp_up(struct interface *ifp)
@ -1026,7 +994,8 @@ void bgp_cleanup_nexthops(struct bgp *bgp)
* make_prefix - make a prefix structure from the path (essentially the
* path's node).
*/
static bool make_prefix(int afi, struct bgp_path_info *pi, struct prefix *p)
static bool make_prefix(int afi, struct bgp_path_info *pi, struct prefix *p,
struct bgp *bgp_nexthop, struct bgp_path_info *source_pi)
{
int is_bgp_static = ((pi->type == ZEBRA_ROUTE_BGP)
@ -1036,8 +1005,19 @@ static bool make_prefix(int afi, struct bgp_path_info *pi, struct prefix *p)
struct bgp_dest *net = pi->net;
const struct prefix *p_orig = bgp_dest_get_prefix(net);
struct in_addr ipv4;
struct peer *peer = pi->peer;
struct attr *attr = pi->attr;
struct peer *peer;
struct attr *attr;
bool local_sid = false;
struct bgp *bgp = bgp_get_default();
struct prefix_ipv6 tmp_prefix;
if (source_pi) {
attr = source_pi->attr;
peer = source_pi->peer;
} else {
peer = pi->peer;
attr = pi->attr;
}
if (p_orig->family == AF_FLOWSPEC) {
if (!peer)
@ -1067,37 +1047,50 @@ static bool make_prefix(int afi, struct bgp_path_info *pi, struct prefix *p)
break;
case AFI_IP6:
p->family = AF_INET6;
if (attr->srv6_l3vpn) {
if (bgp && bgp->srv6_locator && bgp->srv6_enabled && pi->attr->srv6_l3vpn) {
tmp_prefix.family = AF_INET6;
tmp_prefix.prefixlen = IPV6_MAX_BITLEN;
tmp_prefix.prefix = pi->attr->srv6_l3vpn->sid;
if (bgp_nexthop->vpn_policy[afi].tovpn_sid_locator &&
bgp_nexthop->vpn_policy[afi].tovpn_sid)
local_sid = prefix_match(&bgp_nexthop->vpn_policy[afi]
.tovpn_sid_locator->prefix,
&tmp_prefix);
else if (bgp_nexthop->tovpn_sid_locator && bgp_nexthop->tovpn_sid)
local_sid = prefix_match(&bgp_nexthop->tovpn_sid_locator->prefix,
&tmp_prefix);
}
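The locator check added above is, at its core, a prefix-containment test: the path's SRv6 SID is wrapped in a /128 prefix (`tmp_prefix`) and matched against the configured locator's prefix with `prefix_match()`. The containment test in isolation looks like the sketch below — a minimal standalone version, where `sid_matches_locator` and the raw 16-byte arrays are illustrative stand-ins, not FRR's prefix API:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Return true if the first 'plen' bits of 'addr' equal those of 'prefix',
 * i.e. the /128 SID falls inside the locator prefix. */
static bool sid_matches_locator(const uint8_t prefix[16], uint8_t plen,
				const uint8_t addr[16])
{
	uint8_t full = plen / 8, rem = plen % 8;

	if (full && memcmp(prefix, addr, full) != 0)
		return false;
	if (rem) {
		uint8_t mask = (uint8_t)(0xff << (8 - rem));

		if ((prefix[full] & mask) != (addr[full] & mask))
			return false;
	}
	return true;
}
```

When the SID is covered by a locally configured locator, `local_sid` is set and the nexthop-tracking prefix is built from the nexthop attribute instead of the SID.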
if (local_sid == false && pi->attr->srv6_l3vpn) {
p->prefixlen = IPV6_MAX_BITLEN;
if (attr->srv6_l3vpn->transposition_len != 0 &&
if (pi->attr->srv6_l3vpn->transposition_len != 0 &&
BGP_PATH_INFO_NUM_LABELS(pi)) {
IPV6_ADDR_COPY(&p->u.prefix6, &attr->srv6_l3vpn->sid);
IPV6_ADDR_COPY(&p->u.prefix6, &pi->attr->srv6_l3vpn->sid);
transpose_sid(&p->u.prefix6,
decode_label(&pi->extra->labels->label[0]),
attr->srv6_l3vpn->transposition_offset,
attr->srv6_l3vpn->transposition_len);
pi->attr->srv6_l3vpn->transposition_offset,
pi->attr->srv6_l3vpn->transposition_len);
} else
IPV6_ADDR_COPY(&(p->u.prefix6), &(attr->srv6_l3vpn->sid));
IPV6_ADDR_COPY(&(p->u.prefix6), &(pi->attr->srv6_l3vpn->sid));
} else if (is_bgp_static) {
p->u.prefix6 = p_orig->u.prefix6;
p->prefixlen = p_orig->prefixlen;
} else {
} else if (attr) {
/* If we receive MP_REACH nexthop with ::(LL)
* or LL(LL), use LL address as nexthop cache.
*/
if (attr && attr->mp_nexthop_len == BGP_ATTR_NHLEN_IPV6_GLOBAL_AND_LL &&
if (attr->mp_nexthop_len == BGP_ATTR_NHLEN_IPV6_GLOBAL_AND_LL &&
(IN6_IS_ADDR_UNSPECIFIED(&attr->mp_nexthop_global) ||
IN6_IS_ADDR_LINKLOCAL(&attr->mp_nexthop_global)))
p->u.prefix6 = attr->mp_nexthop_local;
/* If we receive MR_REACH with (GA)::(LL)
* then check for route-map to choose GA or LL
*/
else if (attr && attr->mp_nexthop_len == BGP_ATTR_NHLEN_IPV6_GLOBAL_AND_LL) {
else if (attr->mp_nexthop_len == BGP_ATTR_NHLEN_IPV6_GLOBAL_AND_LL) {
if (CHECK_FLAG(attr->nh_flags, BGP_ATTR_NH_MP_PREFER_GLOBAL))
p->u.prefix6 = attr->mp_nexthop_global;
else
p->u.prefix6 = attr->mp_nexthop_local;
} else if (attr && attr->mp_nexthop_len == BGP_ATTR_NHLEN_IPV6_GLOBAL &&
} else if (attr->mp_nexthop_len == BGP_ATTR_NHLEN_IPV6_GLOBAL &&
IN6_IS_ADDR_LINKLOCAL(&attr->mp_nexthop_global)) {
/* If we receive MP_REACH with GUA as LL, we should
* check if we have Link-Local Next Hop capability also.
@ -1138,11 +1131,11 @@ static bool make_prefix(int afi, struct bgp_path_info *pi, struct prefix *p)
*/
static void sendmsg_zebra_rnh(struct bgp_nexthop_cache *bnc, int command)
{
bool exact_match = false;
bool match_p = false;
bool resolve_via_default = false;
int ret;
if (!zclient)
if (!bgp_zclient)
return;
/* Don't try to register if Zebra doesn't know of this instance. */
@ -1162,7 +1155,7 @@ static void sendmsg_zebra_rnh(struct bgp_nexthop_cache *bnc, int command)
}
if (command == ZEBRA_NEXTHOP_REGISTER) {
if (CHECK_FLAG(bnc->flags, BGP_NEXTHOP_CONNECTED))
exact_match = true;
match_p = true;
if (CHECK_FLAG(bnc->flags, BGP_STATIC_ROUTE_EXACT_MATCH))
resolve_via_default = true;
}
@ -1172,8 +1165,8 @@ static void sendmsg_zebra_rnh(struct bgp_nexthop_cache *bnc, int command)
zserv_command_string(command), &bnc->prefix,
bnc->bgp->name_pretty);
ret = zclient_send_rnh(zclient, command, &bnc->prefix, SAFI_UNICAST,
exact_match, resolve_via_default,
ret = zclient_send_rnh(bgp_zclient, command, &bnc->prefix, SAFI_UNICAST,
match_p, resolve_via_default,
bnc->bgp->vrf_id);
if (ret == ZCLIENT_SEND_FAILURE) {
flog_warn(EC_BGP_ZEBRA_SEND,
@ -1600,7 +1593,7 @@ void bgp_nht_reg_enhe_cap_intfs(struct peer *peer)
if (!ifp)
continue;
zclient_send_interface_radv_req(zclient,
zclient_send_interface_radv_req(bgp_zclient,
nhop->vrf_id,
ifp, true,
BGP_UNNUM_DEFAULT_RA_INTERVAL);
@ -1650,7 +1643,7 @@ void bgp_nht_dereg_enhe_cap_intfs(struct peer *peer)
if (!ifp)
continue;
zclient_send_interface_radv_req(zclient, nhop->vrf_id, ifp, 0,
zclient_send_interface_radv_req(bgp_zclient, nhop->vrf_id, ifp, 0,
0);
}
}


@ -25,11 +25,10 @@ extern void bgp_nexthop_update(struct vrf *vrf, struct prefix *match,
* peer - The BGP peer associated with this NHT
* connected - True if NH MUST be a connected route
*/
extern int bgp_find_or_add_nexthop(struct bgp *bgp_route,
struct bgp *bgp_nexthop, afi_t a,
safi_t safi, struct bgp_path_info *p,
struct peer *peer, int connected,
const struct prefix *orig_prefix);
extern int bgp_find_or_add_nexthop(struct bgp *bgp_route, struct bgp *bgp_nexthop, afi_t a,
safi_t safi, struct bgp_path_info *p, struct peer *peer,
int connected, const struct prefix *orig_prefix,
struct bgp_path_info *source_pi);
/**
* bgp_unlink_nexthop() - Unlink the nexthop object from the path structure.


@ -613,7 +613,7 @@ void bgp_generate_updgrp_packets(struct event *thread)
/*
* Creates a BGP Keepalive packet and appends it to the peer's output queue.
*/
void bgp_keepalive_send(struct peer *peer)
void bgp_keepalive_send(struct peer_connection *connection)
{
struct stream *s;
@ -628,13 +628,13 @@ void bgp_keepalive_send(struct peer *peer)
/* Dump packet if debug option is set. */
/* bgp_packet_dump (s); */
if (bgp_debug_keepalive(peer))
zlog_debug("%s sending KEEPALIVE", peer->host);
if (bgp_debug_keepalive(connection->peer))
zlog_debug("%s sending KEEPALIVE", connection->peer->host);
/* Add packet to the peer. */
bgp_packet_add(peer->connection, peer, s);
bgp_packet_add(connection, connection->peer, s);
bgp_writes_on(peer->connection);
bgp_writes_on(connection);
}
struct stream *bgp_open_make(struct peer *peer, uint16_t send_holdtime, as_t local_as,
@ -658,17 +658,12 @@ struct stream *bgp_open_make(struct peer *peer, uint16_t send_holdtime, as_t loc
ext_opt_params = true;
(void)bgp_open_capability(s, peer, ext_opt_params);
} else {
struct stream *tmp = stream_new(STREAM_SIZE(s));
size_t endp = stream_get_endp(s);
stream_copy(tmp, s);
if (bgp_open_capability(tmp, peer, ext_opt_params) >
BGP_OPEN_NON_EXT_OPT_LEN) {
stream_free(tmp);
if (bgp_open_capability(s, peer, ext_opt_params) > BGP_OPEN_NON_EXT_OPT_LEN) {
stream_set_endp(s, endp);
ext_opt_params = true;
(void)bgp_open_capability(s, peer, ext_opt_params);
} else {
stream_copy(s, tmp);
stream_free(tmp);
}
}
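The `bgp_open_make` rewrite above drops the temporary stream copy: it saves the write pointer (`endp`) before encoding capabilities, writes them once, and simply rewinds and re-encodes in extended format when the classic optional-parameters length limit (255 bytes, per RFC 9072's Extended Optional Parameters) is exceeded. The checkpoint-and-rewind idea can be shown on its own — a sketch under assumed types (`struct buf`, `buf_put`, and the fill bytes are invented for illustration):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NON_EXT_LIMIT 255 /* max optional-parameters length in a classic OPEN */

struct buf {
	uint8_t data[4096];
	size_t endp; /* current write position */
};

/* Append 'len' bytes of 'fill'; return the bytes written by this call. */
static size_t buf_put(struct buf *b, uint8_t fill, size_t len)
{
	memset(b->data + b->endp, fill, len);
	b->endp += len;
	return len;
}

/* Write capabilities once; if the classic encoding overflows the limit,
 * rewind to the checkpoint and re-encode in extended format. */
static bool write_caps(struct buf *b, size_t cap_len)
{
	size_t checkpoint = b->endp;
	bool extended = false;

	if (buf_put(b, 0xAA, cap_len) > NON_EXT_LIMIT) {
		b->endp = checkpoint; /* discard the oversized encoding */
		extended = true;
		buf_put(b, 0xBB, cap_len); /* re-encode, extended format */
	}
	return extended;
}
```

Compared with the old copy-to-temporary approach, the rewind costs one re-encode in the overflow case but avoids an allocation and two stream copies on every OPEN.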
@ -1043,6 +1038,13 @@ static void bgp_notify_send_internal(struct peer_connection *connection,
/* Add packet to peer's output queue */
stream_fifo_push(connection->obuf, s);
/* If Graceful-Restart N-bit (Notification) is exchanged,
* and it's not a Hard Reset, let's retain the routes.
*/
if (bgp_has_graceful_restart_notification(peer) && !hard_reset &&
CHECK_FLAG(peer->sflags, PEER_STATUS_NSF_MODE))
SET_FLAG(peer->sflags, PEER_STATUS_NSF_WAIT);
bgp_peer_gr_flags_update(peer);
BGP_GR_ROUTER_DETECT_AND_SEND_CAPABILITY_TO_ZEBRA(peer->bgp,
peer->bgp->peer);
@ -3146,8 +3148,6 @@ static void bgp_dynamic_capability_paths_limit(uint8_t *pnt, int action,
SET_FLAG(peer->cap, PEER_CAP_PATHS_LIMIT_RCV);
while (data + CAPABILITY_CODE_PATHS_LIMIT_LEN <= end) {
afi_t afi;
safi_t safi;
iana_afi_t pkt_afi;
iana_safi_t pkt_safi;
uint16_t paths_limit = 0;
@ -3506,8 +3506,6 @@ static void bgp_dynamic_capability_llgr(uint8_t *pnt, int action,
SET_FLAG(peer->cap, PEER_CAP_LLGR_RCV);
while (data + BGP_CAP_LLGR_MIN_PACKET_LEN <= end) {
afi_t afi;
safi_t safi;
iana_afi_t pkt_afi;
iana_safi_t pkt_safi;
struct graceful_restart_af graf;
@ -3614,8 +3612,6 @@ static void bgp_dynamic_capability_graceful_restart(uint8_t *pnt, int action,
while (data + GRACEFUL_RESTART_CAPABILITY_PER_AFI_SAFI_SIZE <=
end) {
afi_t afi;
safi_t safi;
iana_afi_t pkt_afi;
iana_safi_t pkt_safi;
struct graceful_restart_af graf;
@ -3972,6 +3968,18 @@ int bgp_capability_receive(struct peer_connection *connection,
* would not, making event flow difficult to understand. Please think twice
* before hacking this.
*
* packet_processing is now a FIFO of connections that need to be handled.
* This loop has a maximum run of 100 (BGP_PACKET_PROCESS_LIMIT) packets,
* but each individual connection can only handle the quanta value as
* specified in bgp_vty.c. If the connection still has work to do, place it
* back on the back of the queue for more work. Do note that event_should_yield
* is also being called to figure out if processing should stop and work be
* picked up after other items can run. This was added *after* withdrawals
* started being processed at scale and this function was taking CPU for 40+
* seconds. On my machine we are getting 2-3 packets before a yield should
* happen in the update case. Withdrawal is 1 packet being processed (note
* this is a very, very fast computer) before other items should be run.
*
* Thread type: EVENT_EVENT
* @param thread
* @return 0
@ -3984,30 +3992,53 @@ void bgp_process_packet(struct event *thread)
uint32_t rpkt_quanta_old; // how many packets to read
int fsm_update_result; // return code of bgp_event_update()
int mprc; // message processing return code
uint32_t processed = 0, curr_connection_processed = 0;
bool more_work = false;
size_t count;
uint32_t total_packets_to_process;
connection = EVENT_ARG(thread);
frr_with_mutex (&bm->peer_connection_mtx)
connection = peer_connection_fifo_pop(&bm->connection_fifo);
if (!connection)
goto done;
total_packets_to_process = BGP_PACKET_PROCESS_LIMIT;
peer = connection->peer;
rpkt_quanta_old = atomic_load_explicit(&peer->bgp->rpkt_quanta,
memory_order_relaxed);
fsm_update_result = 0;
/* Guard against scheduled events that occur after peer deletion. */
if (connection->status == Deleted || connection->status == Clearing)
return;
while ((processed < total_packets_to_process) && connection) {
/* Guard against scheduled events that occur after peer deletion. */
if (connection->status == Deleted || connection->status == Clearing) {
frr_with_mutex (&bm->peer_connection_mtx)
connection = peer_connection_fifo_pop(&bm->connection_fifo);
unsigned int processed = 0;
if (connection)
peer = connection->peer;
continue;
}
while (processed < rpkt_quanta_old) {
uint8_t type = 0;
bgp_size_t size;
char notify_data_length[2];
frr_with_mutex (&connection->io_mtx) {
frr_with_mutex (&connection->io_mtx)
peer->curr = stream_fifo_pop(connection->ibuf);
}
if (peer->curr == NULL) // no packets to process, hmm...
return;
if (peer->curr == NULL) {
frr_with_mutex (&bm->peer_connection_mtx)
connection = peer_connection_fifo_pop(&bm->connection_fifo);
if (connection)
peer = connection->peer;
continue;
}
/* skip the marker and copy the packet length */
stream_forward_getp(peer->curr, BGP_MARKER_SIZE);
@ -4111,32 +4142,81 @@ void bgp_process_packet(struct event *thread)
stream_free(peer->curr);
peer->curr = NULL;
processed++;
curr_connection_processed++;
/* Update FSM */
if (mprc != BGP_PACKET_NOOP)
fsm_update_result = bgp_event_update(connection, mprc);
else
continue;
/*
* If peer was deleted, do not process any more packets. This
* is usually due to executing BGP_Stop or a stub deletion.
*/
if (fsm_update_result == FSM_PEER_TRANSFERRED
|| fsm_update_result == FSM_PEER_STOPPED)
break;
}
if (fsm_update_result == FSM_PEER_TRANSFERRED ||
fsm_update_result == FSM_PEER_STOPPED) {
frr_with_mutex (&bm->peer_connection_mtx)
connection = peer_connection_fifo_pop(&bm->connection_fifo);
if (connection)
peer = connection->peer;
continue;
}
bool yield = event_should_yield(thread);
if (curr_connection_processed >= rpkt_quanta_old || yield) {
curr_connection_processed = 0;
frr_with_mutex (&bm->peer_connection_mtx) {
if (!peer_connection_fifo_member(&bm->connection_fifo, connection))
peer_connection_fifo_add_tail(&bm->connection_fifo,
connection);
if (!yield)
connection = peer_connection_fifo_pop(&bm->connection_fifo);
else
connection = NULL;
}
if (connection)
peer = connection->peer;
continue;
}
if (fsm_update_result != FSM_PEER_TRANSFERRED
&& fsm_update_result != FSM_PEER_STOPPED) {
frr_with_mutex (&connection->io_mtx) {
// more work to do, come back later
if (connection->ibuf->count > 0)
event_add_event(bm->master, bgp_process_packet,
connection, 0,
&connection->t_process_packet);
more_work = true;
else
more_work = false;
}
if (!more_work) {
frr_with_mutex (&bm->peer_connection_mtx)
connection = peer_connection_fifo_pop(&bm->connection_fifo);
if (connection)
peer = connection->peer;
}
}
if (connection) {
frr_with_mutex (&connection->io_mtx) {
if (connection->ibuf->count > 0)
more_work = true;
else
more_work = false;
}
frr_with_mutex (&bm->peer_connection_mtx) {
if (more_work &&
!peer_connection_fifo_member(&bm->connection_fifo, connection))
peer_connection_fifo_add_tail(&bm->connection_fifo, connection);
}
}
done:
frr_with_mutex (&bm->peer_connection_mtx)
count = peer_connection_fifo_count(&bm->connection_fifo);
if (count)
event_add_event(bm->master, bgp_process_packet, NULL, 0, &bm->e_process_packet);
}
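Stripped of the BGP specifics (mutexes, FSM results, stream parsing), the scheduling in `bgp_process_packet` is a round-robin over a FIFO of work sources with a per-source quantum (`rpkt_quanta`) and a global per-event cap (`BGP_PACKET_PROCESS_LIMIT`). A toy model of that loop — the array-based queue, `GLOBAL_LIMIT`, and the function name are invented for illustration:

```c
#include <stddef.h>

#define GLOBAL_LIMIT 100 /* stands in for BGP_PACKET_PROCESS_LIMIT */

/* Drain 'nsrc' sources round-robin: each turn handles at most 'quantum'
 * items from one source, then that source goes to the back of the queue.
 * Returns the total items processed (capped at GLOBAL_LIMIT per call). */
static int process_round_robin(int pending[], size_t nsrc, int quantum)
{
	int processed = 0;
	size_t i = 0;

	while (processed < GLOBAL_LIMIT) {
		size_t scanned;

		/* find the next source with work, starting at i */
		for (scanned = 0; scanned < nsrc && pending[i] == 0; scanned++)
			i = (i + 1) % nsrc;
		if (scanned == nsrc)
			break; /* all sources drained */

		int take = pending[i] < quantum ? pending[i] : quantum;

		if (processed + take > GLOBAL_LIMIT)
			take = GLOBAL_LIMIT - processed;
		pending[i] -= take;
		processed += take;
		i = (i + 1) % nsrc; /* back of the queue */
	}
	return processed;
}
```

In the real function, a source whose input buffer is still non-empty is re-appended to `bm->connection_fifo`, and hitting the cap (or `event_should_yield`) reschedules `bgp_process_packet` instead of looping on.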
/* Send EOR when routes are processed by selection deferral timer */
@ -4149,37 +4229,3 @@ void bgp_send_delayed_eor(struct bgp *bgp)
for (ALL_LIST_ELEMENTS(bgp->peer, node, nnode, peer))
bgp_write_proceed_actions(peer);
}
/*
* Task callback to handle socket error encountered in the io pthread. We avoid
* having the io pthread try to enqueue fsm events or mess with the peer
* struct.
*/
void bgp_packet_process_error(struct event *thread)
{
struct peer_connection *connection;
struct peer *peer;
int code;
connection = EVENT_ARG(thread);
peer = connection->peer;
code = EVENT_VAL(thread);
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [Event] BGP error %d on fd %d", peer->host, code,
connection->fd);
/* Closed connection or error on the socket */
if (peer_established(connection)) {
if ((CHECK_FLAG(peer->flags, PEER_FLAG_GRACEFUL_RESTART)
|| CHECK_FLAG(peer->flags,
PEER_FLAG_GRACEFUL_RESTART_HELPER))
&& CHECK_FLAG(peer->sflags, PEER_STATUS_NSF_MODE)) {
peer->last_reset = PEER_DOWN_NSF_CLOSE_SESSION;
SET_FLAG(peer->sflags, PEER_STATUS_NSF_WAIT);
} else
peer->last_reset = PEER_DOWN_CLOSE_SESSION;
}
bgp_event_update(connection, code);
}


@ -48,7 +48,7 @@ DECLARE_HOOK(bgp_packet_send,
} while (0)
/* Packet send and receive function prototypes. */
extern void bgp_keepalive_send(struct peer *peer);
extern void bgp_keepalive_send(struct peer_connection *connection);
extern struct stream *bgp_open_make(struct peer *peer, uint16_t send_holdtime, as_t local_as,
struct in_addr *id);
extern void bgp_open_send(struct peer_connection *connection);
@ -82,8 +82,6 @@ extern void bgp_process_packet(struct event *event);
extern void bgp_send_delayed_eor(struct bgp *bgp);
/* Task callback to handle socket error encountered in the io pthread */
void bgp_packet_process_error(struct event *thread);
extern struct bgp_notify
bgp_notify_decapsulate_hard_reset(struct bgp_notify *notify);
extern bool bgp_has_graceful_restart_notification(struct peer *peer);


@ -279,6 +279,13 @@ static void bgp_pbr_policyroute_add_to_zebra_unit(struct bgp *bgp,
static void bgp_pbr_dump_entry(struct bgp_pbr_filter *bpf, bool add);
static void bgp_pbr_val_mask_free(void *arg)
{
struct bgp_pbr_val_mask *pbr_val_mask = arg;
XFREE(MTYPE_PBR_VALMASK, pbr_val_mask);
}
static bool bgp_pbr_extract_enumerate_unary_opposite(
uint8_t unary_operator,
struct bgp_pbr_val_mask *and_valmask,
@ -442,7 +449,7 @@ static bool bgp_pbr_extract(struct bgp_pbr_match_val list[],
struct bgp_pbr_range_port *range)
{
int i = 0;
bool exact_match = false;
bool match_p = false;
if (range)
memset(range, 0, sizeof(struct bgp_pbr_range_port));
@ -457,9 +464,9 @@ static bool bgp_pbr_extract(struct bgp_pbr_match_val list[],
OPERATOR_COMPARE_EQUAL_TO)) {
if (range)
range->min_port = list[i].value;
exact_match = true;
match_p = true;
}
if (exact_match && i > 0)
if (match_p && i > 0)
return false;
if (list[i].compare_operator ==
(OPERATOR_COMPARE_GREATER_THAN +
@ -965,7 +972,12 @@ int bgp_pbr_build_and_validate_entry(const struct prefix *p,
return 0;
}
static void bgp_pbr_match_entry_free(void *arg)
static void bgp_pbr_match_entry_free(struct bgp_pbr_match_entry *bpme)
{
XFREE(MTYPE_PBR_MATCH_ENTRY, bpme);
}
static void bgp_pbr_match_entry_hash_free(void *arg)
{
struct bgp_pbr_match_entry *bpme;
@ -976,16 +988,21 @@ static void bgp_pbr_match_entry_free(void *arg)
bpme->installed = false;
bpme->backpointer = NULL;
}
XFREE(MTYPE_PBR_MATCH_ENTRY, bpme);
bgp_pbr_match_entry_free(bpme);
}
static void bgp_pbr_match_free(void *arg)
static void bgp_pbr_match_free(struct bgp_pbr_match *bpm)
{
XFREE(MTYPE_PBR_MATCH, bpm);
}
static void bgp_pbr_match_hash_free(void *arg)
{
struct bgp_pbr_match *bpm;
bpm = (struct bgp_pbr_match *)arg;
hash_clean(bpm->entry_hash, bgp_pbr_match_entry_free);
hash_clean(bpm->entry_hash, bgp_pbr_match_entry_hash_free);
if (hashcount(bpm->entry_hash) == 0) {
/* delete iptable entry first */
@ -1004,7 +1021,7 @@ static void bgp_pbr_match_free(void *arg)
}
hash_clean_and_free(&bpm->entry_hash, NULL);
XFREE(MTYPE_PBR_MATCH, bpm);
bgp_pbr_match_free(bpm);
}
static void *bgp_pbr_match_alloc_intern(void *arg)
@ -1019,7 +1036,12 @@ static void *bgp_pbr_match_alloc_intern(void *arg)
return new;
}
static void bgp_pbr_rule_free(void *arg)
static void bgp_pbr_rule_free(struct bgp_pbr_rule *pbr)
{
XFREE(MTYPE_PBR_RULE, pbr);
}
static void bgp_pbr_rule_hash_free(void *arg)
{
struct bgp_pbr_rule *bpr;
@ -1032,7 +1054,7 @@ static void bgp_pbr_rule_free(void *arg)
bpr->action->refcnt--;
bpr->action = NULL;
}
XFREE(MTYPE_PBR_RULE, bpr);
bgp_pbr_rule_free(bpr);
}
static void *bgp_pbr_rule_alloc_intern(void *arg)
@ -1372,8 +1394,8 @@ struct bgp_pbr_match *bgp_pbr_match_iptable_lookup(vrf_id_t vrf_id,
void bgp_pbr_cleanup(struct bgp *bgp)
{
hash_clean_and_free(&bgp->pbr_match_hash, bgp_pbr_match_free);
hash_clean_and_free(&bgp->pbr_rule_hash, bgp_pbr_rule_free);
hash_clean_and_free(&bgp->pbr_match_hash, bgp_pbr_match_hash_free);
hash_clean_and_free(&bgp->pbr_rule_hash, bgp_pbr_rule_hash_free);
hash_clean_and_free(&bgp->pbr_action_hash, bgp_pbr_action_free);
if (bgp->bgp_pbr_cfg == NULL)
@ -1656,6 +1678,8 @@ static void bgp_pbr_flush_iprule(struct bgp *bgp, struct bgp_pbr_action *bpa,
}
}
hash_release(bgp->pbr_rule_hash, bpr);
bgp_pbr_rule_free(bpr);
bgp_pbr_bpa_remove(bpa);
}
@ -1685,6 +1709,7 @@ static void bgp_pbr_flush_entry(struct bgp *bgp, struct bgp_pbr_action *bpa,
}
}
hash_release(bpm->entry_hash, bpme);
bgp_pbr_match_entry_free(bpme);
if (hashcount(bpm->entry_hash) == 0) {
/* delete iptable entry first */
/* then delete ipset match */
@ -1700,6 +1725,7 @@ static void bgp_pbr_flush_entry(struct bgp *bgp, struct bgp_pbr_action *bpa,
bpm->action = NULL;
}
hash_release(bgp->pbr_match_hash, bpm);
bgp_pbr_match_free(bpm);
/* XXX release pbr_match_action if not used
* note that drop does not need to call send_pbr_action
*/
@ -2111,17 +2137,6 @@ static void bgp_pbr_policyroute_remove_from_zebra(
bgp, path, bpf, bpof, FLOWSPEC_ICMP_TYPE);
else
bgp_pbr_policyroute_remove_from_zebra_unit(bgp, path, bpf);
/* flush bpof */
if (bpof->tcpflags)
list_delete_all_node(bpof->tcpflags);
if (bpof->dscp)
list_delete_all_node(bpof->dscp);
if (bpof->flowlabel)
list_delete_all_node(bpof->flowlabel);
if (bpof->pkt_len)
list_delete_all_node(bpof->pkt_len);
if (bpof->fragment)
list_delete_all_node(bpof->fragment);
}
static void bgp_pbr_dump_entry(struct bgp_pbr_filter *bpf, bool add)
@ -2606,19 +2621,6 @@ static void bgp_pbr_policyroute_add_to_zebra(struct bgp *bgp,
bgp, path, bpf, bpof, nh, rate, FLOWSPEC_ICMP_TYPE);
else
bgp_pbr_policyroute_add_to_zebra_unit(bgp, path, bpf, nh, rate);
/* flush bpof */
if (bpof->tcpflags)
list_delete_all_node(bpof->tcpflags);
if (bpof->dscp)
list_delete_all_node(bpof->dscp);
if (bpof->pkt_len)
list_delete_all_node(bpof->pkt_len);
if (bpof->fragment)
list_delete_all_node(bpof->fragment);
if (bpof->icmp_type)
list_delete_all_node(bpof->icmp_type);
if (bpof->icmp_code)
list_delete_all_node(bpof->icmp_code);
}
static void bgp_pbr_handle_entry(struct bgp *bgp, struct bgp_path_info *path,
@ -2684,6 +2686,7 @@ static void bgp_pbr_handle_entry(struct bgp *bgp, struct bgp_path_info *path,
srcp = &range;
else {
bpof.icmp_type = list_new();
bpof.icmp_type->del = bgp_pbr_val_mask_free;
bgp_pbr_extract_enumerate(api->icmp_type,
api->match_icmp_type_num,
OPERATOR_UNARY_OR,
@ -2699,6 +2702,7 @@ static void bgp_pbr_handle_entry(struct bgp *bgp, struct bgp_path_info *path,
dstp = &range_icmp_code;
else {
bpof.icmp_code = list_new();
bpof.icmp_code->del = bgp_pbr_val_mask_free;
bgp_pbr_extract_enumerate(api->icmp_code,
api->match_icmp_code_num,
OPERATOR_UNARY_OR,
@ -2719,6 +2723,7 @@ static void bgp_pbr_handle_entry(struct bgp *bgp, struct bgp_path_info *path,
FLOWSPEC_TCP_FLAGS);
} else if (kind_enum == OPERATOR_UNARY_OR) {
bpof.tcpflags = list_new();
bpof.tcpflags->del = bgp_pbr_val_mask_free;
bgp_pbr_extract_enumerate(api->tcpflags,
api->match_tcpflags_num,
OPERATOR_UNARY_OR,
@ -2736,6 +2741,7 @@ static void bgp_pbr_handle_entry(struct bgp *bgp, struct bgp_path_info *path,
bpf.pkt_len = &pkt_len;
else {
bpof.pkt_len = list_new();
bpof.pkt_len->del = bgp_pbr_val_mask_free;
bgp_pbr_extract_enumerate(api->packet_length,
api->match_packet_length_num,
OPERATOR_UNARY_OR,
@ -2745,12 +2751,14 @@ static void bgp_pbr_handle_entry(struct bgp *bgp, struct bgp_path_info *path,
}
if (api->match_dscp_num >= 1) {
bpof.dscp = list_new();
bpof.dscp->del = bgp_pbr_val_mask_free;
bgp_pbr_extract_enumerate(api->dscp, api->match_dscp_num,
OPERATOR_UNARY_OR,
bpof.dscp, FLOWSPEC_DSCP);
}
if (api->match_fragment_num) {
bpof.fragment = list_new();
bpof.fragment->del = bgp_pbr_val_mask_free;
bgp_pbr_extract_enumerate(api->fragment,
api->match_fragment_num,
OPERATOR_UNARY_OR,
@ -2766,7 +2774,7 @@ static void bgp_pbr_handle_entry(struct bgp *bgp, struct bgp_path_info *path,
bpf.family = afi2family(api->afi);
if (!add) {
bgp_pbr_policyroute_remove_from_zebra(bgp, path, &bpf, &bpof);
return;
goto flush_bpof;
}
/* no action for add = true */
for (i = 0; i < api->action_num; i++) {
@ -2844,6 +2852,22 @@ static void bgp_pbr_handle_entry(struct bgp *bgp, struct bgp_path_info *path,
if (continue_loop == 0)
break;
}
flush_bpof:
if (bpof.tcpflags)
list_delete(&bpof.tcpflags);
if (bpof.dscp)
list_delete(&bpof.dscp);
if (bpof.flowlabel)
list_delete(&bpof.flowlabel);
if (bpof.pkt_len)
list_delete(&bpof.pkt_len);
if (bpof.fragment)
list_delete(&bpof.fragment);
if (bpof.icmp_type)
list_delete(&bpof.icmp_type);
if (bpof.icmp_code)
list_delete(&bpof.icmp_code);
}
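The pattern these hunks introduce — assign a `del` callback at `list_new()` time, then release everything through `list_delete()` at the single `flush_bpof` exit point — is what plugs the leak: the removed `list_delete_all_node()` calls freed the list nodes but never the attached `bgp_pbr_val_mask` data. The ownership model in miniature (the list type and helpers below are illustrative, not libfrr's linklist API):

```c
#include <stdlib.h>

struct node {
	void *data;
	struct node *next;
};

struct list {
	struct node *head;
	void (*del)(void *data); /* when set, the list owns node data */
};

static int freed_count; /* observable effect of the del callback */

static void counting_free(void *data)
{
	freed_count++;
	free(data);
}

static struct list *list_new_with_del(void (*del)(void *))
{
	struct list *l = calloc(1, sizeof(*l));

	l->del = del;
	return l;
}

static void list_push(struct list *l, void *data)
{
	struct node *n = malloc(sizeof(*n));

	n->data = data;
	n->next = l->head;
	l->head = n;
}

/* Free the nodes AND their data (via the del callback), then the list. */
static void list_delete_all(struct list **lp)
{
	struct node *n = (*lp)->head, *next;

	for (; n; n = next) {
		next = n->next;
		if ((*lp)->del)
			(*lp)->del(n->data);
		free(n);
	}
	free(*lp);
	*lp = NULL;
}
```

Funneling every early `return` through one flush label, as the diff does with `goto flush_bpof`, guarantees the callbacks run on all paths.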
void bgp_pbr_update_entry(struct bgp *bgp, const struct prefix *p,


@ -151,8 +151,6 @@ struct bgp_pbr_config {
bool pbr_interface_any_ipv6;
};
extern struct bgp_pbr_config *bgp_pbr_cfg;
struct bgp_pbr_rule {
uint32_t flags;
struct prefix src;


@ -80,6 +80,8 @@
DEFINE_MTYPE_STATIC(BGPD, BGP_EOIU_MARKER_INFO, "BGP EOIU Marker info");
DEFINE_MTYPE_STATIC(BGPD, BGP_METAQ, "BGP MetaQ");
/* Memory for batched clearing of peers from the RIB */
DEFINE_MTYPE(BGPD, CLEARING_BATCH, "Clearing batch");
DEFINE_HOOK(bgp_snmp_update_stats,
(struct bgp_dest *rn, struct bgp_path_info *pi, bool added),
@ -117,6 +119,8 @@ static const struct message bgp_pmsi_tnltype_str[] = {
#define VRFID_NONE_STR "-"
#define SOFT_RECONFIG_TASK_MAX_PREFIX 25000
static int clear_batch_rib_helper(struct bgp_clearing_info *cinfo);
static inline char *bgp_route_dump_path_info_flags(struct bgp_path_info *pi,
char *buf, size_t len)
{
@ -2621,15 +2625,32 @@ bool subgroup_announce_check(struct bgp_dest *dest, struct bgp_path_info *pi,
bgp_peer_remove_private_as(bgp, afi, safi, peer, attr);
bgp_peer_as_override(bgp, afi, safi, peer, attr);
/* draft-ietf-idr-deprecate-as-set-confed-set
* Filter routes having AS_SET or AS_CONFED_SET in the path.
* Eventually, This document (if approved) updates RFC 4271
* and RFC 5065 by eliminating AS_SET and AS_CONFED_SET types,
* and obsoletes RFC 6472.
*/
if (peer->bgp->reject_as_sets)
if (aspath_check_as_sets(attr->aspath))
/* draft-ietf-idr-deprecate-as-set-confed-set-16 */
if (peer->bgp->reject_as_sets && aspath_check_as_sets(attr->aspath)) {
struct aspath *aspath_new;
/* An aggregate prefix MUST NOT be announced to the contributing ASes */
if (pi->sub_type == BGP_ROUTE_AGGREGATE &&
aspath_loop_check(attr->aspath, peer->as)) {
zlog_warn("%pBP [Update:SEND] %pFX is filtered by `bgp reject-as-sets`",
peer, p);
return false;
}
/* When aggregating prefixes, network operators MUST use consistent brief
* aggregation as described in Section 5.2. In consistent brief aggregation,
* the AGGREGATOR and ATOMIC_AGGREGATE Path Attributes are included, but the
* AS_PATH does not have AS_SET or AS_CONFED_SET path segment types.
* The ATOMIC_AGGREGATE Path Attribute is subsequently attached to the BGP
* route, if AS_SETs are dropped.
*/
if (attr->aspath->refcnt)
aspath_new = aspath_dup(attr->aspath);
else
aspath_new = attr->aspath;
attr->aspath = aspath_delete_as_set_seq(aspath_new);
}
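The `attr->aspath->refcnt` test above is a copy-on-write guard: an interned (shared) AS-path must be duplicated before its AS_SET segments are stripped, while an unshared one can be handed straight to `aspath_delete_as_set_seq()`. The guard in miniature — `struct obj` and the helper below are illustrative, not the FRR aspath API:

```c
#include <stdlib.h>

struct obj {
	long refcnt; /* >0 means the object is shared (interned) */
	int value;
};

/* Return an object that is safe to mutate: duplicate when shared,
 * reuse in place when private. */
static struct obj *cow_get_mutable(struct obj *o)
{
	if (o->refcnt) {
		struct obj *copy = malloc(sizeof(*copy));

		copy->refcnt = 0; /* the copy is private */
		copy->value = o->value;
		return copy;
	}
	return o;
}
```

Mutating the shared original instead would corrupt every other route that holds a reference to the same interned path.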
/* If neighbor soo is configured, then check if the route has
* SoO extended community and validate against the configured
@ -3264,11 +3285,8 @@ void bgp_best_selection(struct bgp *bgp, struct bgp_dest *dest,
if (worse->prev)
worse->prev->next = first;
first->next = worse;
if (worse) {
first->prev = worse->prev;
worse->prev = first;
} else
first->prev = NULL;
first->prev = worse->prev;
worse->prev = first;
if (dest->info == worse) {
bgp_dest_set_bgp_path_info(dest, first);
@ -3392,13 +3410,14 @@ void subgroup_process_announce_selected(struct update_subgroup *subgrp,
safi_t safi, uint32_t addpath_tx_id)
{
const struct prefix *p;
struct peer *onlypeer;
struct peer *onlypeer, *peer;
struct attr attr = { 0 }, *pattr = &attr;
struct bgp *bgp;
bool advertise;
p = bgp_dest_get_prefix(dest);
bgp = SUBGRP_INST(subgrp);
peer = SUBGRP_PEER(subgrp);
onlypeer = ((SUBGRP_PCOUNT(subgrp) == 1) ? (SUBGRP_PFIRST(subgrp))->peer
: NULL);
@ -3433,6 +3452,26 @@ void subgroup_process_announce_selected(struct update_subgroup *subgrp,
pattr,
selected))
bgp_attr_flush(pattr);
/* Remove paths from Adj-RIB-Out if it's not a best (selected) path.
* Why should we keep Adj-RIB-Out with stale paths?
*/
if (!bgp_addpath_encode_tx(peer, afi, safi)) {
struct bgp_adj_out *adj, *adj_next;
RB_FOREACH_SAFE (adj, bgp_adj_out_rb,
&dest->adj_out, adj_next) {
if (adj->subgroup != subgrp)
continue;
if (!adj->adv &&
adj->addpath_tx_id != addpath_tx_id) {
bgp_adj_out_unset_subgroup(dest,
subgrp, 1,
adj->addpath_tx_id);
}
}
}
} else {
bgp_adj_out_unset_subgroup(
dest, subgrp, 1, addpath_tx_id);
@ -4162,12 +4201,30 @@ static wq_item_status meta_queue_process(struct work_queue *dummy, void *data)
{
struct meta_queue *mq = data;
uint32_t i;
uint32_t peers_on_fifo;
static uint32_t total_runs = 0;
total_runs++;
frr_with_mutex (&bm->peer_connection_mtx)
peers_on_fifo = peer_connection_fifo_count(&bm->connection_fifo);
/*
 * If the number of peers on the fifo is greater than 10,
 * let's yield this run of the MetaQ to allow the packet processing
 * to make progress against the incoming packets. But we should also
 * allow this to run occasionally: process the work queue on every
 * 10th attempt.
 */
if (peers_on_fifo > 10 && total_runs % 10 != 0)
return WQ_QUEUE_BLOCKED;
for (i = 0; i < MQ_SIZE; i++)
if (process_subq(mq->subq[i], i)) {
mq->size--;
break;
}
return mq->size ? WQ_REQUEUE : WQ_SUCCESS;
}
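The yield heuristic in `meta_queue_process` above can be modeled in isolation. This is a hedged sketch: the threshold of 10 peers and the every-10th-run override mirror the diff, but the function name `metaq_should_yield` is hypothetical, introduced only for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Return true when a meta-queue run should yield to packet
 * processing: many peers are waiting on the fifo AND this is not
 * the every-10th "forced" run that guarantees forward progress.
 */
static bool metaq_should_yield(uint32_t peers_on_fifo, uint32_t total_runs)
{
	return peers_on_fifo > 10 && total_runs % 10 != 0;
}
```

The modulo term is what keeps the work queue from starving: even under sustained input pressure, one run in ten is allowed through.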
@ -4284,9 +4341,14 @@ static void early_meta_queue_free(struct meta_queue *mq, struct bgp_dest_queue *
struct bgp_dest *dest;
while (!STAILQ_EMPTY(l)) {
struct bgp_table *table;
dest = STAILQ_FIRST(l);
STAILQ_REMOVE_HEAD(l, pq);
STAILQ_NEXT(dest, pq) = NULL; /* complete unlink */
table = bgp_dest_table(dest);
bgp_table_unlock(table);
mq->size--;
}
}
@ -4297,9 +4359,14 @@ static void other_meta_queue_free(struct meta_queue *mq, struct bgp_dest_queue *
struct bgp_dest *dest;
while (!STAILQ_EMPTY(l)) {
struct bgp_table *table;
dest = STAILQ_FIRST(l);
STAILQ_REMOVE_HEAD(l, pq);
STAILQ_NEXT(dest, pq) = NULL; /* complete unlink */
table = bgp_dest_table(dest);
bgp_table_unlock(table);
mq->size--;
}
}
@ -4870,6 +4937,7 @@ bgp_update_nexthop_reachability_check(struct bgp *bgp, struct peer *peer, struct
{
bool connected;
afi_t nh_afi;
struct bgp_path_info *bpi_ultimate = NULL;
if (((afi == AFI_IP || afi == AFI_IP6) &&
(safi == SAFI_UNICAST || safi == SAFI_LABELED_UNICAST ||
@ -4885,13 +4953,16 @@ bgp_update_nexthop_reachability_check(struct bgp *bgp, struct peer *peer, struct
struct bgp *bgp_nexthop = bgp;
if (pi->extra && pi->extra->vrfleak && pi->extra->vrfleak->bgp_orig)
if (pi->extra && pi->extra->vrfleak && pi->extra->vrfleak->bgp_orig) {
bgp_nexthop = pi->extra->vrfleak->bgp_orig;
if (pi->sub_type == BGP_ROUTE_IMPORTED)
bpi_ultimate = bgp_get_imported_bpi_ultimate(pi);
}
nh_afi = BGP_ATTR_NH_AFI(afi, pi->attr);
if (bgp_find_or_add_nexthop(bgp, bgp_nexthop, nh_afi, safi, pi, NULL, connected,
bgp_nht_param_prefix) ||
bgp_nht_param_prefix, bpi_ultimate) ||
CHECK_FLAG(peer->flags, PEER_FLAG_IS_RFAPI_HD)) {
if (accept_own)
bgp_path_info_set_flag(dest, pi, BGP_PATH_ACCEPT_OWN);
@ -5151,7 +5222,8 @@ void bgp_update(struct peer *peer, const struct prefix *p, uint32_t addpath_id,
* attr->evpn_overlay with evpn directly. Instead memcpy
* evpn to new_atr.evpn_overlay before it is interned.
*/
if (soft_reconfig && evpn && afi == AFI_L2VPN) {
if (evpn && afi == AFI_L2VPN &&
(soft_reconfig || !CHECK_FLAG(peer->af_flags[afi][safi], PEER_FLAG_SOFT_RECONFIG))) {
bgp_attr_set_evpn_overlay(&new_attr, evpn);
p_evpn = NULL;
}
@ -6440,11 +6512,380 @@ void bgp_clear_route(struct peer *peer, afi_t afi, safi_t safi)
peer_unlock(peer);
}
/*
* Clear one path-info during clearing processing
*/
static void clearing_clear_one_pi(struct bgp_table *table, struct bgp_dest *dest,
struct bgp_path_info *pi)
{
afi_t afi;
safi_t safi;
struct bgp *bgp;
bgp = table->bgp;
afi = table->afi;
safi = table->safi;
/* graceful restart STALE flag set. */
if (((CHECK_FLAG(pi->peer->sflags, PEER_STATUS_NSF_WAIT)
&& pi->peer->nsf[afi][safi])
|| CHECK_FLAG(pi->peer->af_sflags[afi][safi],
PEER_STATUS_ENHANCED_REFRESH))
&& !CHECK_FLAG(pi->flags, BGP_PATH_STALE)
&& !CHECK_FLAG(pi->flags, BGP_PATH_UNUSEABLE)) {
bgp_path_info_set_flag(dest, pi, BGP_PATH_STALE);
} else {
/* If this is an EVPN route, process for
* un-import. */
if (safi == SAFI_EVPN)
bgp_evpn_unimport_route(
bgp, afi, safi,
bgp_dest_get_prefix(dest), pi);
/* Handle withdraw for VRF route-leaking and L3VPN */
if (SAFI_UNICAST == safi
&& (bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {
vpn_leak_from_vrf_withdraw(bgp_get_default(),
bgp, pi);
}
if (SAFI_MPLS_VPN == safi &&
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT) {
vpn_leak_to_vrf_withdraw(pi);
}
bgp_rib_remove(dest, pi, pi->peer, afi, safi);
}
}
/*
* Helper to capture interrupt/resume context info for clearing processing. We
* may be iterating at two levels, so we may need to capture two levels of context
* or keying data.
*/
static void set_clearing_resume_info(struct bgp_clearing_info *cinfo,
const struct bgp_table *table,
const struct prefix *p, bool inner_p)
{
if (bgp_debug_neighbor_events(NULL))
zlog_debug("%s: %sinfo for %s/%s %pFX", __func__,
inner_p ? "inner " : "", afi2str(table->afi),
safi2str(table->safi), p);
SET_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_RESUME);
if (inner_p) {
cinfo->inner_afi = table->afi;
cinfo->inner_safi = table->safi;
memcpy(&cinfo->inner_pfx, p, sizeof(struct prefix));
SET_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_INNER);
} else {
cinfo->last_afi = table->afi;
cinfo->last_safi = table->safi;
memcpy(&cinfo->last_pfx, p, sizeof(struct prefix));
}
}
/*
* Helper to establish position in a table, possibly using "resume" info stored
* during an iteration
*/
static struct bgp_dest *clearing_dest_helper(struct bgp_table *table,
struct bgp_clearing_info *cinfo,
bool inner_p)
{
struct bgp_dest *dest;
const struct prefix *pfx;
/* Iterate at start of table, or resume using inner or outer prefix */
dest = bgp_table_top(table);
if (CHECK_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_RESUME)) {
pfx = NULL;
if (inner_p) {
if (CHECK_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_INNER))
pfx = &(cinfo->inner_pfx);
} else {
pfx = &(cinfo->last_pfx);
}
if (pfx) {
dest = bgp_node_match(table, pfx);
if (dest) {
/* if 'dest' matches or precedes the 'last' prefix
* visited, then advance.
*/
while (dest && (prefix_cmp(&(dest->rn->p), pfx) <= 0))
dest = bgp_route_next(dest);
}
}
}
return dest;
}
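The resume rule used by `clearing_dest_helper` — skip forward past every dest whose prefix compares less than or equal to the saved "last visited" prefix — can be sketched with plain sorted keys standing in for prefixes. A hedged sketch; `resume_index` is a hypothetical name, not part of the diff.

```c
#include <assert.h>
#include <stddef.h>

/* After a paused walk, restart at the first entry strictly greater
 * than the saved last-visited key, so that entry is not processed
 * twice. Returning n means the table walk is already finished.
 */
static size_t resume_index(const int *keys, size_t n, int last_key)
{
	size_t i = 0;

	while (i < n && keys[i] <= last_key)
		i++;
	return i;
}
```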
/*
* Callback to begin or resume the rib-walk for peer clearing, with info carried in
* a clearing context.
*/
static void clear_dests_callback(struct event *event)
{
int ret;
struct bgp_clearing_info *cinfo = EVENT_ARG(event);
/* Begin, or continue, work */
ret = clear_batch_rib_helper(cinfo);
if (ret == 0) {
/* All done, clean up context */
bgp_clearing_batch_completed(cinfo);
} else {
/* Need to resume the work, with 'cinfo' */
event_add_event(bm->master, clear_dests_callback, cinfo, 0,
&cinfo->t_sched);
}
}
/*
* Walk a single table for batch peer clearing processing. Limit the number of dests
* examined, and return when reaching the limit. Capture "last" info about the
* last dest we process so we can resume later.
*/
static int walk_batch_table_helper(struct bgp_clearing_info *cinfo,
struct bgp_table *table, bool inner_p)
{
int ret = 0;
struct bgp_dest *dest;
bool force = (cinfo->bgp->process_queue == NULL);
uint32_t examined = 0, processed = 0;
struct prefix pfx;
/* Locate starting dest, possibly using "resume" info */
dest = clearing_dest_helper(table, cinfo, inner_p);
if (dest == NULL) {
/* Nothing more to do for this table? */
return 0;
}
for ( ; dest; dest = bgp_route_next(dest)) {
struct bgp_path_info *pi, *next;
struct bgp_adj_in *ain;
struct bgp_adj_in *ain_next;
examined++;
cinfo->curr_counter++;
/* Save dest's prefix */
memcpy(&pfx, &dest->rn->p, sizeof(struct prefix));
ain = dest->adj_in;
while (ain) {
ain_next = ain->next;
if (bgp_clearing_batch_check_peer(cinfo, ain->peer))
bgp_adj_in_remove(&dest, ain);
ain = ain_next;
assert(dest != NULL);
}
for (pi = bgp_dest_get_bgp_path_info(dest); pi; pi = next) {
next = pi->next;
if (!bgp_clearing_batch_check_peer(cinfo, pi->peer))
continue;
processed++;
if (force) {
bgp_path_info_reap(dest, pi);
} else {
/* Do clearing for this pi */
clearing_clear_one_pi(table, dest, pi);
}
}
if (cinfo->curr_counter >= bm->peer_clearing_batch_max_dests) {
/* Capture info about last dest seen and break */
if (bgp_debug_neighbor_events(NULL))
zlog_debug("%s: %s/%s: pfx %pFX reached limit %u", __func__,
afi2str(table->afi), safi2str(table->safi), &pfx,
cinfo->curr_counter);
/* Reset the counter */
cinfo->curr_counter = 0;
set_clearing_resume_info(cinfo, table, &pfx, inner_p);
ret = -1;
break;
}
}
if (examined > 0) {
if (bgp_debug_neighbor_events(NULL))
zlog_debug("%s: %s/%s: examined %u dests, processed %u paths",
__func__, afi2str(table->afi),
safi2str(table->safi), examined, processed);
}
return ret;
}
/*
* RIB-walking helper for batch clearing work: walk all tables, identify
* dests that are affected by the peers in the batch, enqueue the dests for
* async processing.
*/
static int clear_batch_rib_helper(struct bgp_clearing_info *cinfo)
{
int ret = 0;
afi_t afi;
safi_t safi;
struct bgp_dest *dest;
struct bgp_table *table, *outer_table;
struct prefix pfx;
/* Maybe resume afi/safi iteration */
if (CHECK_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_RESUME)) {
afi = cinfo->last_afi;
safi = cinfo->last_safi;
} else {
afi = AFI_IP;
safi = SAFI_UNICAST;
}
/* Iterate through afi/safi combos */
for (; afi < AFI_MAX; afi++) {
for (; safi < SAFI_MAX; safi++) {
/* Identify table to be examined: special handling
* for some SAFIs
*/
if (bgp_debug_neighbor_events(NULL))
zlog_debug("%s: examining AFI/SAFI %s/%s", __func__, afi2str(afi),
safi2str(safi));
/* Record the tables we've seen and don't repeat */
if (cinfo->table_map[afi][safi] > 0)
continue;
if (safi != SAFI_MPLS_VPN && safi != SAFI_ENCAP && safi != SAFI_EVPN) {
table = cinfo->bgp->rib[afi][safi];
if (!table) {
/* Invalid table: don't use 'resume' info */
UNSET_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_RESUME);
continue;
}
ret = walk_batch_table_helper(cinfo, table, false /*inner*/);
if (ret != 0)
break;
cinfo->table_map[afi][safi] = 1;
} else {
/* Process "inner" table for these SAFIs */
outer_table = cinfo->bgp->rib[afi][safi];
/* Begin or resume iteration in "outer" table */
dest = clearing_dest_helper(outer_table, cinfo, false);
for (; dest; dest = bgp_route_next(dest)) {
table = bgp_dest_get_bgp_table_info(dest);
if (!table) {
/* If we resumed to an inner afi/safi, but
* it's no longer valid, reset resume info.
*/
UNSET_FLAG(cinfo->flags,
BGP_CLEARING_INFO_FLAG_RESUME);
continue;
}
/* Capture last prefix */
memcpy(&pfx, &dest->rn->p, sizeof(struct prefix));
/* This will resume the "inner" walk if necessary */
ret = walk_batch_table_helper(cinfo, table, true /*inner*/);
if (ret != 0) {
/* The "inner" resume info will be set;
* capture the resume info we need
* from the outer afi/safi and dest
*/
set_clearing_resume_info(cinfo, outer_table, &pfx,
false);
break;
}
}
if (ret != 0)
break;
cinfo->table_map[afi][safi] = 1;
}
/* We've finished with a table: ensure we don't try to use stale
* resume info.
*/
UNSET_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_RESUME);
}
/* Return immediately, otherwise the 'ret' state will be overwritten
 * by the next afi/safi, and the resume state stored for the current
 * afi/safi in walk_batch_table_helper will be lost. That could cause
 * nets to be skipped on the resumed walk, so they would never be
 * marked for deletion from the BGP table.
 */
if (ret != 0)
return ret;
safi = SAFI_UNICAST;
}
return ret;
}
/*
* Identify prefixes that need to be cleared for a batch of peers in 'cinfo'.
 * The actual clearing processing will be done asynchronously.
*/
void bgp_clear_route_batch(struct bgp_clearing_info *cinfo)
{
int ret;
if (bgp_debug_neighbor_events(NULL))
zlog_debug("%s: BGP %s, batch %u", __func__,
cinfo->bgp->name_pretty, cinfo->id);
/* Walk the rib, checking the peers in the batch. If the rib walk needs
* to continue, a task will be scheduled
*/
ret = clear_batch_rib_helper(cinfo);
if (ret == 0) {
/* All done - clean up. */
bgp_clearing_batch_completed(cinfo);
} else {
/* Handle pause/resume for the walk: we've captured key info
* in cinfo so we can resume later.
*/
if (bgp_debug_neighbor_events(NULL))
zlog_debug("%s: reschedule cinfo at %s/%s, %pFX", __func__,
afi2str(cinfo->last_afi),
safi2str(cinfo->last_safi), &(cinfo->last_pfx));
event_add_event(bm->master, clear_dests_callback, cinfo, 0,
&cinfo->t_sched);
}
}
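The control flow of `bgp_clear_route_batch` and `clear_dests_callback` — do a bounded amount of work, and reschedule only when the helper reports work remaining — follows a common chunked-work pattern. A hedged sketch under simplified assumptions: the event loop is replaced by a plain loop, and `work_ctx`, `work_helper`, and `run_to_completion` are hypothetical names; the "work" is just draining a counter in fixed-size batches.

```c
#include <assert.h>

struct work_ctx {
	int remaining;   /* total units of work left */
	int batch_size;  /* max units per invocation */
	int invocations; /* how many times we were (re)scheduled */
};

/* One bounded slice of work; nonzero return means "resume later",
 * matching the clear_batch_rib_helper() convention in the diff.
 */
static int work_helper(struct work_ctx *ctx)
{
	int todo = ctx->remaining < ctx->batch_size ? ctx->remaining
						    : ctx->batch_size;

	ctx->remaining -= todo;
	return ctx->remaining ? -1 : 0;
}

/* Stand-in for the event loop: keep "rescheduling" until done. */
static void run_to_completion(struct work_ctx *ctx)
{
	do
		ctx->invocations++;
	while (work_helper(ctx) != 0);
}
```

In the real code each loop iteration is a separate `event_add_event` callback, which is what lets other work interleave between slices.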
void bgp_clear_route_all(struct peer *peer)
{
afi_t afi;
safi_t safi;
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s: peer %pBP", __func__, peer);
/* We may be able to batch multiple peers' clearing work: check
* and see.
*/
if (bgp_clearing_batch_add_peer(peer->bgp, peer))
return;
FOREACH_AFI_SAFI (afi, safi)
bgp_clear_route(peer, afi, safi);
@ -6905,8 +7346,8 @@ static void bgp_nexthop_reachability_check(afi_t afi, safi_t safi,
/* Nexthop reachability check. */
if (safi == SAFI_UNICAST || safi == SAFI_LABELED_UNICAST) {
if (CHECK_FLAG(bgp->flags, BGP_FLAG_IMPORT_CHECK)) {
if (bgp_find_or_add_nexthop(bgp, bgp_nexthop, afi, safi,
bpi, NULL, 0, p))
if (bgp_find_or_add_nexthop(bgp, bgp_nexthop, afi, safi, bpi, NULL, 0, p,
NULL))
bgp_path_info_set_flag(dest, bpi,
BGP_PATH_VALID);
else {
@ -7078,9 +7519,9 @@ void bgp_static_update(struct bgp *bgp, const struct prefix *p,
break;
if (pi) {
if (attrhash_cmp(pi->attr, attr_new)
&& !CHECK_FLAG(pi->flags, BGP_PATH_REMOVED)
&& !CHECK_FLAG(bgp->flags, BGP_FLAG_FORCE_STATIC_PROCESS)) {
if (!CHECK_FLAG(pi->flags, BGP_PATH_REMOVED) &&
!CHECK_FLAG(bgp->flags, BGP_FLAG_FORCE_STATIC_PROCESS) &&
attrhash_cmp(pi->attr, attr_new)) {
bgp_dest_unlock_node(dest);
bgp_attr_unintern(&attr_new);
aspath_unintern(&attr.aspath);
@ -7127,7 +7568,7 @@ void bgp_static_update(struct bgp *bgp, const struct prefix *p,
&pi->extra->labels->label[0]);
}
#endif
if (pi->extra && pi->extra->vrfleak->bgp_orig)
if (pi->extra && pi->extra->vrfleak && pi->extra->vrfleak->bgp_orig)
bgp_nexthop = pi->extra->vrfleak->bgp_orig;
bgp_nexthop_reachability_check(afi, safi, pi, p, dest,
@ -7574,6 +8015,8 @@ void bgp_static_delete(struct bgp *bgp)
rm = bgp_dest_unlock_node(rm);
assert(rm);
}
bgp_table_unlock(table);
} else {
bgp_static = bgp_dest_get_bgp_static_info(dest);
bgp_static_withdraw(bgp,
@ -8015,6 +8458,9 @@ static void bgp_aggregate_install(
bgp_process(bgp, dest, new, afi, safi);
if (debug)
zlog_debug(" aggregate %pFX: installed", p);
if (SAFI_UNICAST == safi && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT))
vpn_leak_from_vrf_update(bgp_get_default(), bgp, new);
} else {
uninstall_aggregate_route:
/* Withdraw the aggregate route from routing table. */
@ -8023,6 +8469,11 @@ static void bgp_aggregate_install(
bgp_process(bgp, dest, pi, afi, safi);
if (debug)
zlog_debug(" aggregate %pFX: uninstall", p);
if (SAFI_UNICAST == safi &&
(bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {
vpn_leak_from_vrf_withdraw(bgp_get_default(), bgp, pi);
}
}
}
@ -8127,14 +8578,25 @@ void bgp_aggregate_toggle_suppressed(struct bgp_aggregate *aggregate,
/* We are toggling suppression back. */
if (suppress) {
/* Suppress route if not suppressed already. */
if (aggr_suppress_path(aggregate, pi))
if (aggr_suppress_path(aggregate, pi)) {
bgp_process(bgp, dest, pi, afi, safi);
if (SAFI_UNICAST == safi &&
(bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT))
vpn_leak_from_vrf_withdraw(bgp_get_default(), bgp,
pi);
}
continue;
}
/* Install route if there is no more suppression. */
if (aggr_unsuppress_path(aggregate, pi))
if (aggr_unsuppress_path(aggregate, pi)) {
bgp_process(bgp, dest, pi, afi, safi);
if (SAFI_UNICAST == safi &&
(bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT))
vpn_leak_from_vrf_update(bgp_get_default(), bgp, pi);
}
}
}
bgp_dest_unlock_node(top);
@ -8265,8 +8727,14 @@ bool bgp_aggregate_route(struct bgp *bgp, const struct prefix *p, afi_t afi,
*/
if (aggregate->summary_only
&& AGGREGATE_MED_VALID(aggregate)) {
if (aggr_suppress_path(aggregate, pi))
if (aggr_suppress_path(aggregate, pi)) {
bgp_process(bgp, dest, pi, afi, safi);
if (SAFI_UNICAST == safi &&
(bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT))
vpn_leak_from_vrf_withdraw(bgp_get_default(), bgp,
pi);
}
}
/*
@ -8281,8 +8749,14 @@ bool bgp_aggregate_route(struct bgp *bgp, const struct prefix *p, afi_t afi,
if (aggregate->suppress_map_name
&& AGGREGATE_MED_VALID(aggregate)
&& aggr_suppress_map_test(bgp, aggregate, pi)) {
if (aggr_suppress_path(aggregate, pi))
if (aggr_suppress_path(aggregate, pi)) {
bgp_process(bgp, dest, pi, afi, safi);
if (SAFI_UNICAST == safi &&
(bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT))
vpn_leak_from_vrf_withdraw(bgp_get_default(), bgp,
pi);
}
}
aggregate->count++;
@ -8430,8 +8904,13 @@ void bgp_aggregate_delete(struct bgp *bgp, const struct prefix *p, afi_t afi,
*/
if (pi->extra && pi->extra->aggr_suppressors &&
listcount(pi->extra->aggr_suppressors)) {
if (aggr_unsuppress_path(aggregate, pi))
if (aggr_unsuppress_path(aggregate, pi)) {
bgp_process(bgp, dest, pi, afi, safi);
if (SAFI_UNICAST == safi &&
(bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT))
vpn_leak_from_vrf_update(bgp_get_default(), bgp, pi);
}
}
if (aggregate->count > 0)
@ -8637,13 +9116,21 @@ static void bgp_remove_route_from_aggregate(struct bgp *bgp, afi_t afi,
return;
if (aggregate->summary_only && AGGREGATE_MED_VALID(aggregate))
if (aggr_unsuppress_path(aggregate, pi))
if (aggr_unsuppress_path(aggregate, pi)) {
bgp_process(bgp, pi->net, pi, afi, safi);
if (SAFI_UNICAST == safi && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT))
vpn_leak_from_vrf_update(bgp_get_default(), bgp, pi);
}
if (aggregate->suppress_map_name && AGGREGATE_MED_VALID(aggregate)
&& aggr_suppress_map_test(bgp, aggregate, pi))
if (aggr_unsuppress_path(aggregate, pi))
if (aggr_unsuppress_path(aggregate, pi)) {
bgp_process(bgp, pi->net, pi, afi, safi);
if (SAFI_UNICAST == safi && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT))
vpn_leak_from_vrf_update(bgp_get_default(), bgp, pi);
}
/*
* This must be called after `summary`, `suppress-map` check to avoid
@ -8902,7 +9389,6 @@ static int bgp_aggregate_set(struct vty *vty, const char *prefix_str, afi_t afi,
struct prefix p;
struct bgp_dest *dest;
struct bgp_aggregate *aggregate;
uint8_t as_set_new = as_set;
if (suppress_map && summary_only) {
vty_out(vty,
@ -8960,7 +9446,6 @@ static int bgp_aggregate_set(struct vty *vty, const char *prefix_str, afi_t afi,
*/
if (bgp->reject_as_sets) {
if (as_set == AGGREGATE_AS_SET) {
as_set_new = AGGREGATE_AS_UNSET;
zlog_warn(
"%s: Ignoring as-set because `bgp reject-as-sets` is enabled.",
__func__);
@ -8969,7 +9454,7 @@ static int bgp_aggregate_set(struct vty *vty, const char *prefix_str, afi_t afi,
}
}
aggregate->as_set = as_set_new;
aggregate->as_set = as_set;
/* Override ORIGIN attribute if defined.
* E.g.: Cisco and Juniper set ORIGIN for aggregated address
@ -9290,8 +9775,8 @@ void bgp_redistribute_add(struct bgp *bgp, struct prefix *p,
if (bpi) {
/* Ensure the (source route) type is updated. */
bpi->type = type;
if (attrhash_cmp(bpi->attr, new_attr)
&& !CHECK_FLAG(bpi->flags, BGP_PATH_REMOVED)) {
if (!CHECK_FLAG(bpi->flags, BGP_PATH_REMOVED) &&
attrhash_cmp(bpi->attr, new_attr)) {
bgp_attr_unintern(&new_attr);
aspath_unintern(&attr.aspath);
bgp_dest_unlock_node(bn);
@ -11561,8 +12046,6 @@ void route_vty_out_detail(struct vty *vty, struct bgp *bgp, struct bgp_dest *bn,
/* Line 7 display Originator, Cluster-id */
if (CHECK_FLAG(attr->flag, ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID)) ||
CHECK_FLAG(attr->flag, ATTR_FLAG_BIT(BGP_ATTR_CLUSTER_LIST))) {
char buf[BUFSIZ] = {0};
if (CHECK_FLAG(attr->flag, ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID))) {
if (json_paths)
json_object_string_addf(json_path,
@ -11574,9 +12057,7 @@ void route_vty_out_detail(struct vty *vty, struct bgp *bgp, struct bgp_dest *bn,
}
if (CHECK_FLAG(attr->flag, ATTR_FLAG_BIT(BGP_ATTR_CLUSTER_LIST))) {
struct cluster_list *cluster =
bgp_attr_get_cluster(attr);
int i;
struct cluster_list *cluster = bgp_attr_get_cluster(attr);
if (json_paths) {
json_cluster_list = json_object_new_object();
@ -12712,6 +13193,9 @@ void route_vty_out_detail_header(struct vty *vty, struct bgp *bgp,
* though then we must display Advertised to on a path-by-path basis. */
if (!bgp_addpath_is_addpath_used(&bgp->tx_addpath, afi, safi)) {
for (ALL_LIST_ELEMENTS(bgp->peer, node, nnode, peer)) {
if (peer->group)
continue;
if (bgp_adj_out_lookup(peer, dest, 0)) {
if (json && !json_adv_to)
json_adv_to = json_object_new_object();
@ -12853,8 +13337,6 @@ static int bgp_show_route_in_table(struct vty *vty, struct bgp *bgp, struct bgp_
return CMD_WARNING;
}
match.family = afi2family(afi);
if (use_json)
json = json_object_new_object();
@ -13103,7 +13585,7 @@ DEFUN (show_ip_bgp_large_community_list,
afi_t afi = AFI_IP6;
safi_t safi = SAFI_UNICAST;
int idx = 0;
bool exact_match = 0;
bool match_p = 0;
struct bgp *bgp = NULL;
bool uj = use_json(argc, argv);
@ -13120,10 +13602,10 @@ DEFUN (show_ip_bgp_large_community_list,
const char *clist_number_or_name = argv[++idx]->arg;
if (++idx < argc && strmatch(argv[idx]->text, "exact-match"))
exact_match = 1;
match_p = 1;
return bgp_show_lcommunity_list(vty, bgp, clist_number_or_name,
exact_match, afi, safi, uj);
match_p, afi, safi, uj);
}
DEFUN (show_ip_bgp_large_community,
show_ip_bgp_large_community_cmd,
@ -13142,7 +13624,7 @@ DEFUN (show_ip_bgp_large_community,
afi_t afi = AFI_IP6;
safi_t safi = SAFI_UNICAST;
int idx = 0;
bool exact_match = 0;
bool match_p = false;
struct bgp *bgp = NULL;
bool uj = use_json(argc, argv);
uint16_t show_flags = 0;
@ -13160,10 +13642,10 @@ DEFUN (show_ip_bgp_large_community,
if (argv_find(argv, argc, "AA:BB:CC", &idx)) {
if (argv_find(argv, argc, "exact-match", &idx)) {
argc--;
exact_match = 1;
match_p = true;
}
return bgp_show_lcommunity(vty, bgp, argc, argv,
exact_match, afi, safi, uj);
match_p, afi, safi, uj);
} else
return bgp_show(vty, bgp, afi, safi,
bgp_show_type_lcommunity_all, NULL, show_flags,
@ -13434,7 +13916,7 @@ DEFPY(show_ip_bgp, show_ip_bgp_cmd,
void *output_arg = NULL;
struct bgp *bgp = NULL;
int idx = 0;
int exact_match = 0;
int match_p = 0;
char *community = NULL;
bool first = true;
uint16_t show_flags = 0;
@ -13499,7 +13981,7 @@ DEFPY(show_ip_bgp, show_ip_bgp_cmd,
community = maybecomm;
if (argv_find(argv, argc, "exact-match", &idx))
exact_match = 1;
match_p = 1;
if (!community)
sh_type = bgp_show_type_community_all;
@ -13510,7 +13992,7 @@ DEFPY(show_ip_bgp, show_ip_bgp_cmd,
struct community_list *list;
if (argv_find(argv, argc, "exact-match", &idx))
exact_match = 1;
match_p = 1;
list = community_list_lookup(bgp_clist, clist_number_or_name, 0,
COMMUNITY_LIST_MASTER);
@ -13520,7 +14002,7 @@ DEFPY(show_ip_bgp, show_ip_bgp_cmd,
return CMD_WARNING;
}
if (exact_match)
if (match_p)
sh_type = bgp_show_type_community_list_exact;
else
sh_type = bgp_show_type_community_list;
@ -13630,7 +14112,7 @@ DEFPY(show_ip_bgp, show_ip_bgp_cmd,
/* show bgp: AFI_IP6, show ip bgp: AFI_IP */
if (community)
return bgp_show_community(vty, bgp, community,
exact_match, afi, safi,
match_p, afi, safi,
show_flags);
else
return bgp_show(vty, bgp, afi, safi, sh_type,
@ -13675,7 +14157,7 @@ DEFPY(show_ip_bgp, show_ip_bgp_cmd,
if (community)
bgp_show_community(
vty, abgp, community,
exact_match, afi, safi,
match_p, afi, safi,
show_flags);
else
bgp_show(vty, abgp, afi, safi,
@ -13723,7 +14205,7 @@ DEFPY(show_ip_bgp, show_ip_bgp_cmd,
if (community)
bgp_show_community(
vty, abgp, community,
exact_match, afi, safi,
match_p, afi, safi,
show_flags);
else
bgp_show(vty, abgp, afi, safi,
@ -14993,15 +15475,15 @@ show_adj_route(struct vty *vty, struct peer *peer, struct bgp_table *table,
json_net =
json_object_new_object();
struct bgp_path_info bpi;
struct bgp_path_info pathi;
struct bgp_dest buildit = *dest;
struct bgp_dest *pass_in;
if (route_filtered ||
ret == RMAP_DENY) {
bpi.attr = &attr;
bpi.peer = peer;
buildit.info = &bpi;
pathi.attr = &attr;
pathi.peer = peer;
buildit.info = &pathi;
pass_in = &buildit;
} else
@ -15272,11 +15754,15 @@ static int peer_adj_routes(struct vty *vty, struct peer *peer, afi_t afi,
} else {
json_object_object_add(json_ar, rd_str, json_routes);
}
}
} else if (json_routes)
json_object_free(json_routes);
output_count += output_count_per_rd;
filtered_count += filtered_count_per_rd;
}
if (json_ar &&
(type == bgp_show_adj_route_advertised || type == bgp_show_adj_route_received))
json_object_free(json_ar);
if (first == false && json_routes)
vty_out(vty, "}");
} else {
@ -16261,8 +16747,6 @@ static int bgp_clear_damp_route(struct vty *vty, const char *view_name,
return CMD_WARNING;
}
match.family = afi2family(afi);
if ((safi == SAFI_MPLS_VPN) || (safi == SAFI_ENCAP)
|| (safi == SAFI_EVPN)) {
for (dest = bgp_table_top(bgp->rib[AFI_IP][safi]); dest;


@ -88,7 +88,6 @@ enum bgp_show_adj_route_type {
#define BGP_NLRI_PARSE_ERROR_EVPN_TYPE4_SIZE -9
#define BGP_NLRI_PARSE_ERROR_EVPN_TYPE5_SIZE -10
#define BGP_NLRI_PARSE_ERROR_FLOWSPEC_IPV6_NOT_SUPPORTED -11
#define BGP_NLRI_PARSE_ERROR_FLOWSPEC_NLRI_SIZELIMIT -12
#define BGP_NLRI_PARSE_ERROR_FLOWSPEC_BAD_FORMAT -13
#define BGP_NLRI_PARSE_ERROR_ADDRESS_FAMILY -14
#define BGP_NLRI_PARSE_ERROR_EVPN_TYPE1_SIZE -15
@ -779,6 +778,9 @@ extern void bgp_soft_reconfig_table_task_cancel(const struct bgp *bgp,
extern bool bgp_soft_reconfig_in(struct peer *peer, afi_t afi, safi_t safi);
extern void bgp_clear_route(struct peer *, afi_t, safi_t);
extern void bgp_clear_route_all(struct peer *);
/* Clear routes for a batch of peers */
void bgp_clear_route_batch(struct bgp_clearing_info *cinfo);
extern void bgp_clear_adj_in(struct peer *, afi_t, safi_t);
extern void bgp_clear_stale_route(struct peer *, afi_t, safi_t);
extern void bgp_set_stale_route(struct peer *peer, afi_t afi, safi_t safi);
@ -934,9 +936,6 @@ extern bool subgroup_announce_check(struct bgp_dest *dest,
const struct prefix *p, struct attr *attr,
struct attr *post_attr);
extern void bgp_peer_clear_node_queue_drain_immediate(struct peer *peer);
extern void bgp_process_queues_drain_immediate(void);
/* for encap/vpn */
extern struct bgp_dest *bgp_safi_node_lookup(struct bgp_table *table,
safi_t safi,


@ -1441,7 +1441,7 @@ route_set_evpn_gateway_ip(void *rule, const struct prefix *prefix, void *object)
/* Set gateway-ip value. */
bre->type = OVERLAY_INDEX_GATEWAY_IP;
memcpy(&bre->gw_ip, &gw_ip->ip.addr, IPADDRSZ(gw_ip));
bre->gw_ip = *gw_ip;
bgp_attr_set_evpn_overlay(path->attr, bre);
return RMAP_OKAY;
@ -2615,6 +2615,9 @@ route_set_aspath_exclude(void *rule, const struct prefix *dummy, void *object)
path->attr->aspath =
aspath_filter_exclude_acl(new_path,
ase->exclude_aspath_acl);
else
aspath_free(new_path);
return RMAP_OKAY;
}


@ -1355,7 +1355,7 @@ lib_route_map_entry_match_condition_rmap_match_condition_comm_list_finish(
{
struct routemap_hook_context *rhc;
const char *value;
bool exact_match = false;
bool match_p = false;
bool any = false;
char *argstr;
const char *condition;
@ -1367,13 +1367,13 @@ lib_route_map_entry_match_condition_rmap_match_condition_comm_list_finish(
value = yang_dnode_get_string(args->dnode, "comm-list-name");
if (yang_dnode_exists(args->dnode, "comm-list-name-exact-match"))
exact_match = yang_dnode_get_bool(
match_p = yang_dnode_get_bool(
args->dnode, "./comm-list-name-exact-match");
if (yang_dnode_exists(args->dnode, "comm-list-name-any"))
any = yang_dnode_get_bool(args->dnode, "comm-list-name-any");
if (exact_match) {
if (match_p) {
argstr = XMALLOC(MTYPE_ROUTE_MAP_COMPILED,
strlen(value) + strlen("exact-match") + 2);


@ -529,7 +529,10 @@ static struct rtr_mgr_group *get_groups(struct list *cache_list)
inline bool is_synchronized(struct rpki_vrf *rpki_vrf)
{
return rpki_vrf->rtr_is_synced;
if (is_running(rpki_vrf))
return rpki_vrf->rtr_is_synced;
else
return false;
}
inline bool is_running(struct rpki_vrf *rpki_vrf)


@ -967,10 +967,10 @@ static int update_group_show_walkcb(struct update_group *updgrp, void *arg)
if (ctx->uj) {
json_peers = json_object_new_array();
SUBGRP_FOREACH_PEER (subgrp, paf) {
json_object *peer =
json_object *jpeer =
json_object_new_string(
paf->peer->host);
json_object_array_add(json_peers, peer);
json_object_array_add(json_peers, jpeer);
}
json_object_object_add(json_subgrp, "peers",
json_peers);


@ -1111,9 +1111,12 @@ static int bgp_clear(struct vty *vty, struct bgp *bgp, afi_t afi, safi_t safi,
int ret = 0;
bool found = false;
struct peer *peer;
bool afi_safi_unspec = false;
VTY_BGP_GR_DEFINE_LOOP_VARIABLE;
afi_safi_unspec = ((afi == AFI_UNSPEC) && (safi == SAFI_UNSPEC));
/* Clear all neighbors. */
/*
* Pass along pointer to next node to peer_clear() when walking all
@ -1121,6 +1124,8 @@ static int bgp_clear(struct vty *vty, struct bgp *bgp, afi_t afi, safi_t safi,
* doppelganger
*/
if (sort == clear_all) {
if (afi_safi_unspec)
bgp_clearing_batch_begin(bgp);
for (ALL_LIST_ELEMENTS(bgp->peer, node, nnode, peer)) {
bgp_peer_gr_flags_update(peer);
@ -1147,6 +1152,8 @@ static int bgp_clear(struct vty *vty, struct bgp *bgp, afi_t afi, safi_t safi,
if (stype == BGP_CLEAR_SOFT_NONE)
bgp->update_delay_over = 0;
if (afi_safi_unspec)
bgp_clearing_batch_end_event_start(bgp);
return CMD_SUCCESS;
}
@ -1202,6 +1209,8 @@ static int bgp_clear(struct vty *vty, struct bgp *bgp, afi_t afi, safi_t safi,
return CMD_WARNING;
}
if (afi_safi_unspec)
bgp_clearing_batch_begin(bgp);
for (ALL_LIST_ELEMENTS(group->peer, node, nnode, peer)) {
ret = bgp_peer_clear(peer, afi, safi, &nnode, stype);
@ -1210,6 +1219,8 @@ static int bgp_clear(struct vty *vty, struct bgp *bgp, afi_t afi, safi_t safi,
else
found = true;
}
if (afi_safi_unspec)
bgp_clearing_batch_end_event_start(bgp);
if (!found)
vty_out(vty,
@ -1221,6 +1232,8 @@ static int bgp_clear(struct vty *vty, struct bgp *bgp, afi_t afi, safi_t safi,
/* Clear all external (eBGP) neighbors. */
if (sort == clear_external) {
if (afi_safi_unspec)
bgp_clearing_batch_begin(bgp);
for (ALL_LIST_ELEMENTS(bgp->peer, node, nnode, peer)) {
if (peer->sort == BGP_PEER_IBGP)
continue;
@ -1245,7 +1258,8 @@ static int bgp_clear(struct vty *vty, struct bgp *bgp, afi_t afi, safi_t safi,
&& bgp->present_zebra_gr_state == ZEBRA_GR_ENABLE) {
bgp_zebra_send_capabilities(bgp, true);
}
if (afi_safi_unspec)
bgp_clearing_batch_end_event_start(bgp);
if (!found)
vty_out(vty,
"%% BGP: No external %s peer is configured\n",
@ -1263,6 +1277,8 @@ static int bgp_clear(struct vty *vty, struct bgp *bgp, afi_t afi, safi_t safi,
return CMD_WARNING;
}
if (afi_safi_unspec)
bgp_clearing_batch_begin(bgp);
for (ALL_LIST_ELEMENTS(bgp->peer, node, nnode, peer)) {
if (peer->as != as)
continue;
@ -1288,6 +1304,8 @@ static int bgp_clear(struct vty *vty, struct bgp *bgp, afi_t afi, safi_t safi,
bgp_zebra_send_capabilities(bgp, true);
}
if (afi_safi_unspec)
bgp_clearing_batch_end_event_start(bgp);
if (!found)
vty_out(vty,
"%% BGP: No %s peer is configured with AS %s\n",
@ -11062,7 +11080,6 @@ static int bgp_clear_prefix(struct vty *vty, const char *view_name,
return CMD_WARNING;
}
match.family = afi2family(afi);
rib = bgp->rib[afi][safi];
if (safi == SAFI_MPLS_VPN) {
@ -11486,7 +11503,7 @@ DEFPY (show_bgp_vrfs,
json_vrfs = json_object_new_object();
for (ALL_LIST_ELEMENTS_RO(inst, node, bgp)) {
const char *name;
const char *bname;
/* Skip Views. */
if (bgp->inst_type == BGP_INSTANCE_TYPE_VIEW)
@ -11505,18 +11522,18 @@ DEFPY (show_bgp_vrfs,
json_vrf = json_object_new_object();
if (bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT) {
name = VRF_DEFAULT_NAME;
bname = VRF_DEFAULT_NAME;
type = "DFLT";
} else {
name = bgp->name;
bname = bgp->name;
type = "VRF";
}
show_bgp_vrfs_detail_common(vty, bgp, json_vrf, name, type,
show_bgp_vrfs_detail_common(vty, bgp, json_vrf, bname, type,
false);
if (uj)
json_object_object_add(json_vrfs, name, json_vrf);
json_object_object_add(json_vrfs, bname, json_vrf);
}
if (uj) {
@ -14071,9 +14088,14 @@ static void bgp_show_peer_afi(struct vty *vty, struct peer *p, afi_t afi,
? "Advertise"
: "Withdraw");
/* Receive prefix count */
vty_out(vty, " %u accepted prefixes\n",
p->pcount[afi][safi]);
/* Receive and sent prefix count, if available */
paf = peer_af_find(p, afi, safi);
if (paf && PAF_SUBGRP(paf))
vty_out(vty, " %u accepted, %u sent prefixes\n",
p->pcount[afi][safi], PAF_SUBGRP(paf)->scount);
else
vty_out(vty, " %u accepted prefixes\n",
p->pcount[afi][safi]);
/* maximum-prefix-out */
if (CHECK_FLAG(p->af_flags[afi][safi],
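The prefix-count hunk above prints a sent count only when a subgroup is available via `peer_af_find()` / `PAF_SUBGRP()`. A hedged sketch of that conditional formatting (hypothetical `subgrp` type; the real code writes to the vty instead of a buffer):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the paf/PAF_SUBGRP lookup in bgp_show_peer_afi() */
struct subgrp { unsigned scount; };

static int format_pcount(char *buf, size_t len, unsigned accepted,
                         const struct subgrp *sg)
{
    if (sg)     /* subgroup known: report the sent count as well */
        return snprintf(buf, len, "%u accepted, %u sent prefixes",
                        accepted, sg->scount);
    return snprintf(buf, len, "%u accepted prefixes", accepted);
}
```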


@ -56,8 +56,8 @@
#include "bgpd/bgp_lcommunity.h"
/* All information about zebra. */
struct zclient *zclient = NULL;
struct zclient *zclient_sync;
struct zclient *bgp_zclient = NULL;
struct zclient *bgp_zclient_sync;
static bool bgp_zebra_label_manager_connect(void);
/* hook to indicate vrf status change for SNMP */
@ -69,7 +69,7 @@ DEFINE_MTYPE_STATIC(BGPD, BGP_IF_INFO, "BGP interface context");
/* Can we install into zebra? */
static inline bool bgp_install_info_to_zebra(struct bgp *bgp)
{
if (zclient->sock <= 0)
if (bgp_zclient->sock <= 0)
return false;
if (!IS_BGP_INST_KNOWN_TO_ZEBRA(bgp)) {
@ -137,7 +137,7 @@ static void bgp_start_interface_nbrs(struct bgp *bgp, struct interface *ifp)
for (ALL_LIST_ELEMENTS(bgp->peer, node, nnode, peer)) {
if (peer->conf_if && (strcmp(peer->conf_if, ifp->name) == 0) &&
!peer_established(peer->connection)) {
if (peer_active(peer->connection))
if (peer_active(peer->connection) == BGP_PEER_ACTIVE)
BGP_EVENT_ADD(peer->connection, BGP_Stop);
BGP_EVENT_ADD(peer->connection, BGP_Start);
}
@ -1010,15 +1010,15 @@ struct bgp *bgp_tm_bgp;
static void bgp_zebra_tm_connect(struct event *t)
{
struct zclient *zclient;
struct zclient *zc;
int delay = 10, ret = 0;
zclient = EVENT_ARG(t);
if (bgp_tm_status_connected && zclient->sock > 0)
zc = EVENT_ARG(t);
if (bgp_tm_status_connected && zc->sock > 0)
delay = 60;
else {
bgp_tm_status_connected = false;
ret = tm_table_manager_connect(zclient);
ret = tm_table_manager_connect(zc);
}
if (ret < 0) {
zlog_err("Error connecting to table manager!");
@ -1031,7 +1031,7 @@ static void bgp_zebra_tm_connect(struct event *t)
}
bgp_tm_status_connected = true;
if (!bgp_tm_chunk_obtained) {
if (bgp_zebra_get_table_range(zclient, bgp_tm_chunk_size,
if (bgp_zebra_get_table_range(zc, bgp_tm_chunk_size,
&bgp_tm_min,
&bgp_tm_max) >= 0) {
bgp_tm_chunk_obtained = true;
@ -1040,7 +1040,7 @@ static void bgp_zebra_tm_connect(struct event *t)
}
}
}
event_add_timer(bm->master, bgp_zebra_tm_connect, zclient, delay,
event_add_timer(bm->master, bgp_zebra_tm_connect, zc, delay,
&bgp_tm_thread_connect);
}
@ -1071,7 +1071,7 @@ void bgp_zebra_init_tm_connect(struct bgp *bgp)
bgp_tm_min = bgp_tm_max = 0;
bgp_tm_chunk_size = BGP_FLOWSPEC_TABLE_CHUNK;
bgp_tm_bgp = bgp;
event_add_timer(bm->master, bgp_zebra_tm_connect, zclient_sync, delay,
event_add_timer(bm->master, bgp_zebra_tm_connect, bgp_zclient_sync, delay,
&bgp_tm_thread_connect);
}
@ -1650,7 +1650,7 @@ bgp_zebra_announce_actual(struct bgp_dest *dest, struct bgp_path_info *info,
__func__, p, (allow_recursion ? "" : "NOT "));
}
return zclient_route_send(ZEBRA_ROUTE_ADD, zclient, &api);
return zclient_route_send(ZEBRA_ROUTE_ADD, bgp_zclient, &api);
}
@ -1747,7 +1747,7 @@ enum zclient_send_status bgp_zebra_withdraw_actual(struct bgp_dest *dest,
zlog_debug("Tx route delete %s (table id %u) %pFX",
bgp->name_pretty, api.tableid, &api.prefix);
return zclient_route_send(ZEBRA_ROUTE_DELETE, zclient, &api);
return zclient_route_send(ZEBRA_ROUTE_DELETE, bgp_zclient, &api);
}
/*
@ -2071,19 +2071,19 @@ int bgp_redistribute_set(struct bgp *bgp, afi_t afi, int type,
.table_id = instance,
.vrf_id = bgp->vrf_id,
};
if (redist_lookup_table_direct(&zclient->mi_redist[afi][type], &table) !=
NULL)
if (redist_lookup_table_direct(&bgp_zclient->mi_redist[afi][type],
&table) != NULL)
return CMD_WARNING;
redist_add_table_direct(&zclient->mi_redist[afi][type], &table);
redist_add_table_direct(&bgp_zclient->mi_redist[afi][type], &table);
} else {
if (redist_check_instance(&zclient->mi_redist[afi][type], instance))
if (redist_check_instance(&bgp_zclient->mi_redist[afi][type], instance))
return CMD_WARNING;
redist_add_instance(&zclient->mi_redist[afi][type], instance);
redist_add_instance(&bgp_zclient->mi_redist[afi][type], instance);
}
} else {
if (vrf_bitmap_check(&zclient->redist[afi][type], bgp->vrf_id))
if (vrf_bitmap_check(&bgp_zclient->redist[afi][type], bgp->vrf_id))
return CMD_WARNING;
#ifdef ENABLE_BGP_VNC
@ -2093,7 +2093,7 @@ int bgp_redistribute_set(struct bgp *bgp, afi_t afi, int type,
}
#endif
vrf_bitmap_set(&zclient->redist[afi][type], bgp->vrf_id);
vrf_bitmap_set(&bgp_zclient->redist[afi][type], bgp->vrf_id);
}
/*
@ -2111,7 +2111,7 @@ int bgp_redistribute_set(struct bgp *bgp, afi_t afi, int type,
instance);
/* Send distribute add message to zebra. */
zebra_redistribute_send(ZEBRA_REDISTRIBUTE_ADD, zclient, afi, type,
zebra_redistribute_send(ZEBRA_REDISTRIBUTE_ADD, bgp_zclient, afi, type,
instance, bgp->vrf_id);
return CMD_SUCCESS;
@ -2132,9 +2132,9 @@ int bgp_redistribute_resend(struct bgp *bgp, afi_t afi, int type,
instance);
/* Send distribute add message to zebra. */
zebra_redistribute_send(ZEBRA_REDISTRIBUTE_DELETE, zclient, afi, type,
zebra_redistribute_send(ZEBRA_REDISTRIBUTE_DELETE, bgp_zclient, afi, type,
instance, bgp->vrf_id);
zebra_redistribute_send(ZEBRA_REDISTRIBUTE_ADD, zclient, afi, type,
zebra_redistribute_send(ZEBRA_REDISTRIBUTE_ADD, bgp_zclient, afi, type,
instance, bgp->vrf_id);
return 0;
@ -2214,21 +2214,21 @@ int bgp_redistribute_unreg(struct bgp *bgp, afi_t afi, int type,
.table_id = instance,
.vrf_id = bgp->vrf_id,
};
if (redist_lookup_table_direct(&zclient->mi_redist[afi][type], &table) ==
if (redist_lookup_table_direct(&bgp_zclient->mi_redist[afi][type], &table) ==
NULL)
return CMD_WARNING;
redist_del_table_direct(&zclient->mi_redist[afi][type], &table);
redist_del_table_direct(&bgp_zclient->mi_redist[afi][type], &table);
} else {
if (!redist_check_instance(&zclient->mi_redist[afi][type], instance))
if (!redist_check_instance(&bgp_zclient->mi_redist[afi][type], instance))
return CMD_WARNING;
redist_del_instance(&zclient->mi_redist[afi][type], instance);
redist_del_instance(&bgp_zclient->mi_redist[afi][type], instance);
}
} else {
if (!vrf_bitmap_check(&zclient->redist[afi][type], bgp->vrf_id))
if (!vrf_bitmap_check(&bgp_zclient->redist[afi][type], bgp->vrf_id))
return CMD_WARNING;
vrf_bitmap_unset(&zclient->redist[afi][type], bgp->vrf_id);
vrf_bitmap_unset(&bgp_zclient->redist[afi][type], bgp->vrf_id);
}
if (bgp_install_info_to_zebra(bgp)) {
@ -2237,7 +2237,7 @@ int bgp_redistribute_unreg(struct bgp *bgp, afi_t afi, int type,
zlog_debug("Tx redistribute del %s afi %d %s %d",
bgp->name_pretty, afi,
zebra_route_string(type), instance);
zebra_redistribute_send(ZEBRA_REDISTRIBUTE_DELETE, zclient, afi,
zebra_redistribute_send(ZEBRA_REDISTRIBUTE_DELETE, bgp_zclient, afi,
type, instance, bgp->vrf_id);
}
@ -2325,7 +2325,7 @@ void bgp_redistribute_redo(struct bgp *bgp)
void bgp_zclient_reset(void)
{
zclient_reset(zclient);
zclient_reset(bgp_zclient);
}
/* Register this instance with Zebra. Invoked upon connect (for
@ -2335,14 +2335,14 @@ void bgp_zclient_reset(void)
void bgp_zebra_instance_register(struct bgp *bgp)
{
/* Don't try to register if we're not connected to Zebra */
if (!zclient || zclient->sock < 0)
if (!bgp_zclient || bgp_zclient->sock < 0)
return;
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("Registering %s", bgp->name_pretty);
/* Register for router-id, interfaces, redistributed routes. */
zclient_send_reg_requests(zclient, bgp->vrf_id);
zclient_send_reg_requests(bgp_zclient, bgp->vrf_id);
/* For EVPN instance, register to learn about VNIs, if appropriate. */
if (bgp->advertise_all_vni)
@ -2364,7 +2364,7 @@ void bgp_zebra_instance_register(struct bgp *bgp)
void bgp_zebra_instance_deregister(struct bgp *bgp)
{
/* Don't try to deregister if we're not connected to Zebra */
if (zclient->sock < 0)
if (bgp_zclient->sock < 0)
return;
if (BGP_DEBUG(zebra, ZEBRA))
@ -2375,7 +2375,7 @@ void bgp_zebra_instance_deregister(struct bgp *bgp)
bgp_zebra_advertise_all_vni(bgp, 0);
/* Deregister for router-id, interfaces, redistributed routes. */
zclient_send_dereg_requests(zclient, bgp->vrf_id);
zclient_send_dereg_requests(bgp_zclient, bgp->vrf_id);
}
void bgp_zebra_initiate_radv(struct bgp *bgp, struct peer *peer)
@ -2386,7 +2386,7 @@ void bgp_zebra_initiate_radv(struct bgp *bgp, struct peer *peer)
return;
/* Don't try to initiate if we're not connected to Zebra */
if (zclient->sock < 0)
if (bgp_zclient->sock < 0)
return;
if (BGP_DEBUG(zebra, ZEBRA))
@ -2398,7 +2398,7 @@ void bgp_zebra_initiate_radv(struct bgp *bgp, struct peer *peer)
* If we don't have an ifp pointer, call function to find the
* ifps for a numbered enhe peer to turn RAs on.
*/
peer->ifp ? zclient_send_interface_radv_req(zclient, bgp->vrf_id,
peer->ifp ? zclient_send_interface_radv_req(bgp_zclient, bgp->vrf_id,
peer->ifp, 1, ra_interval)
: bgp_nht_reg_enhe_cap_intfs(peer);
}
@ -2406,7 +2406,7 @@ void bgp_zebra_initiate_radv(struct bgp *bgp, struct peer *peer)
void bgp_zebra_terminate_radv(struct bgp *bgp, struct peer *peer)
{
/* Don't try to terminate if we're not connected to Zebra */
if (zclient->sock < 0)
if (bgp_zclient->sock < 0)
return;
if (BGP_DEBUG(zebra, ZEBRA))
@ -2418,7 +2418,7 @@ void bgp_zebra_terminate_radv(struct bgp *bgp, struct peer *peer)
* If we don't have an ifp pointer, call function to find the
* ifps for a numbered enhe peer to turn RAs off.
*/
peer->ifp ? zclient_send_interface_radv_req(zclient, bgp->vrf_id,
peer->ifp ? zclient_send_interface_radv_req(bgp_zclient, bgp->vrf_id,
peer->ifp, 0, 0)
: bgp_nht_dereg_enhe_cap_intfs(peer);
}
@ -2428,7 +2428,7 @@ int bgp_zebra_advertise_subnet(struct bgp *bgp, int advertise, vni_t vni)
struct stream *s = NULL;
/* Check socket. */
if (!zclient || zclient->sock < 0)
if (!bgp_zclient || bgp_zclient->sock < 0)
return 0;
/* Don't try to register if Zebra doesn't know of this instance. */
@ -2440,7 +2440,7 @@ int bgp_zebra_advertise_subnet(struct bgp *bgp, int advertise, vni_t vni)
return 0;
}
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s, ZEBRA_ADVERTISE_SUBNET, bgp->vrf_id);
@ -2448,7 +2448,7 @@ int bgp_zebra_advertise_subnet(struct bgp *bgp, int advertise, vni_t vni)
stream_put3(s, vni);
stream_putw_at(s, 0, stream_get_endp(s));
return zclient_send_message(zclient);
return zclient_send_message(bgp_zclient);
}
int bgp_zebra_advertise_svi_macip(struct bgp *bgp, int advertise, vni_t vni)
@ -2456,14 +2456,14 @@ int bgp_zebra_advertise_svi_macip(struct bgp *bgp, int advertise, vni_t vni)
struct stream *s = NULL;
/* Check socket. */
if (!zclient || zclient->sock < 0)
if (!bgp_zclient || bgp_zclient->sock < 0)
return 0;
/* Don't try to register if Zebra doesn't know of this instance. */
if (!IS_BGP_INST_KNOWN_TO_ZEBRA(bgp))
return 0;
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s, ZEBRA_ADVERTISE_SVI_MACIP, bgp->vrf_id);
@ -2471,7 +2471,7 @@ int bgp_zebra_advertise_svi_macip(struct bgp *bgp, int advertise, vni_t vni)
stream_putl(s, vni);
stream_putw_at(s, 0, stream_get_endp(s));
return zclient_send_message(zclient);
return zclient_send_message(bgp_zclient);
}
int bgp_zebra_advertise_gw_macip(struct bgp *bgp, int advertise, vni_t vni)
@ -2479,7 +2479,7 @@ int bgp_zebra_advertise_gw_macip(struct bgp *bgp, int advertise, vni_t vni)
struct stream *s = NULL;
/* Check socket. */
if (!zclient || zclient->sock < 0)
if (!bgp_zclient || bgp_zclient->sock < 0)
return 0;
/* Don't try to register if Zebra doesn't know of this instance. */
@ -2491,7 +2491,7 @@ int bgp_zebra_advertise_gw_macip(struct bgp *bgp, int advertise, vni_t vni)
return 0;
}
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s, ZEBRA_ADVERTISE_DEFAULT_GW, bgp->vrf_id);
@ -2499,7 +2499,7 @@ int bgp_zebra_advertise_gw_macip(struct bgp *bgp, int advertise, vni_t vni)
stream_putl(s, vni);
stream_putw_at(s, 0, stream_get_endp(s));
return zclient_send_message(zclient);
return zclient_send_message(bgp_zclient);
}
int bgp_zebra_vxlan_flood_control(struct bgp *bgp,
@ -2508,7 +2508,7 @@ int bgp_zebra_vxlan_flood_control(struct bgp *bgp,
struct stream *s;
/* Check socket. */
if (!zclient || zclient->sock < 0)
if (!bgp_zclient || bgp_zclient->sock < 0)
return 0;
/* Don't try to register if Zebra doesn't know of this instance. */
@ -2520,14 +2520,14 @@ int bgp_zebra_vxlan_flood_control(struct bgp *bgp,
return 0;
}
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s, ZEBRA_VXLAN_FLOOD_CONTROL, bgp->vrf_id);
stream_putc(s, flood_ctrl);
stream_putw_at(s, 0, stream_get_endp(s));
return zclient_send_message(zclient);
return zclient_send_message(bgp_zclient);
}
int bgp_zebra_advertise_all_vni(struct bgp *bgp, int advertise)
@ -2535,14 +2535,14 @@ int bgp_zebra_advertise_all_vni(struct bgp *bgp, int advertise)
struct stream *s;
/* Check socket. */
if (!zclient || zclient->sock < 0)
if (!bgp_zclient || bgp_zclient->sock < 0)
return 0;
/* Don't try to register if Zebra doesn't know of this instance. */
if (!IS_BGP_INST_KNOWN_TO_ZEBRA(bgp))
return 0;
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s, ZEBRA_ADVERTISE_ALL_VNI, bgp->vrf_id);
@ -2553,7 +2553,7 @@ int bgp_zebra_advertise_all_vni(struct bgp *bgp, int advertise)
stream_putc(s, bgp->vxlan_flood_ctrl);
stream_putw_at(s, 0, stream_get_endp(s));
return zclient_send_message(zclient);
return zclient_send_message(bgp_zclient);
}
int bgp_zebra_dup_addr_detection(struct bgp *bgp)
@ -2561,7 +2561,7 @@ int bgp_zebra_dup_addr_detection(struct bgp *bgp)
struct stream *s;
/* Check socket. */
if (!zclient || zclient->sock < 0)
if (!bgp_zclient || bgp_zclient->sock < 0)
return 0;
/* Don't try to register if Zebra doesn't know of this instance. */
@ -2578,7 +2578,7 @@ int bgp_zebra_dup_addr_detection(struct bgp *bgp)
"enable" : "disable",
bgp->evpn_info->dad_freeze_time);
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s, ZEBRA_DUPLICATE_ADDR_DETECTION,
bgp->vrf_id);
@ -2589,7 +2589,7 @@ int bgp_zebra_dup_addr_detection(struct bgp *bgp)
stream_putl(s, bgp->evpn_info->dad_freeze_time);
stream_putw_at(s, 0, stream_get_endp(s));
return zclient_send_message(zclient);
return zclient_send_message(bgp_zclient);
}
static int rule_notify_owner(ZAPI_CALLBACK_ARGS)
@ -3965,7 +3965,7 @@ void bgp_if_init(void)
static bool bgp_zebra_label_manager_ready(void)
{
return (zclient_sync->sock > 0);
return (bgp_zclient_sync->sock > 0);
}
static void bgp_start_label_manager(struct event *start)
@ -3979,29 +3979,29 @@ static void bgp_start_label_manager(struct event *start)
static bool bgp_zebra_label_manager_connect(void)
{
/* Connect to label manager. */
if (zclient_socket_connect(zclient_sync) < 0) {
if (zclient_socket_connect(bgp_zclient_sync) < 0) {
zlog_warn("%s: failed connecting synchronous zclient!",
__func__);
return false;
}
/* make socket non-blocking */
set_nonblocking(zclient_sync->sock);
set_nonblocking(bgp_zclient_sync->sock);
/* Send hello to notify zebra this is a synchronous client */
if (zclient_send_hello(zclient_sync) == ZCLIENT_SEND_FAILURE) {
if (zclient_send_hello(bgp_zclient_sync) == ZCLIENT_SEND_FAILURE) {
zlog_warn("%s: failed sending hello for synchronous zclient!",
__func__);
close(zclient_sync->sock);
zclient_sync->sock = -1;
close(bgp_zclient_sync->sock);
bgp_zclient_sync->sock = -1;
return false;
}
/* Connect to label manager */
if (lm_label_manager_connect(zclient_sync, 0) != 0) {
if (lm_label_manager_connect(bgp_zclient_sync, 0) != 0) {
zlog_warn("%s: failed connecting to label manager!", __func__);
if (zclient_sync->sock > 0) {
close(zclient_sync->sock);
zclient_sync->sock = -1;
if (bgp_zclient_sync->sock > 0) {
close(bgp_zclient_sync->sock);
bgp_zclient_sync->sock = -1;
}
return false;
}
@ -4030,22 +4030,22 @@ void bgp_zebra_init(struct event_loop *master, unsigned short instance)
hook_register_prio(if_unreal, 0, bgp_ifp_destroy);
/* Set default values. */
zclient = zclient_new(master, &zclient_options_default, bgp_handlers,
array_size(bgp_handlers));
zclient_init(zclient, ZEBRA_ROUTE_BGP, 0, &bgpd_privs);
zclient->zebra_buffer_write_ready = bgp_zebra_buffer_write_ready;
zclient->zebra_connected = bgp_zebra_connected;
zclient->zebra_capabilities = bgp_zebra_capabilities;
zclient->nexthop_update = bgp_nexthop_update;
zclient->instance = instance;
bgp_zclient = zclient_new(master, &zclient_options_default, bgp_handlers,
array_size(bgp_handlers));
zclient_init(bgp_zclient, ZEBRA_ROUTE_BGP, 0, &bgpd_privs);
bgp_zclient->zebra_buffer_write_ready = bgp_zebra_buffer_write_ready;
bgp_zclient->zebra_connected = bgp_zebra_connected;
bgp_zclient->zebra_capabilities = bgp_zebra_capabilities;
bgp_zclient->nexthop_update = bgp_nexthop_update;
bgp_zclient->instance = instance;
/* Initialize special zclient for synchronous message exchanges. */
zclient_sync = zclient_new(master, &zclient_options_sync, NULL, 0);
zclient_sync->sock = -1;
zclient_sync->redist_default = ZEBRA_ROUTE_BGP;
zclient_sync->instance = instance;
zclient_sync->session_id = 1;
zclient_sync->privs = &bgpd_privs;
bgp_zclient_sync = zclient_new(master, &zclient_options_sync, NULL, 0);
bgp_zclient_sync->sock = -1;
bgp_zclient_sync->redist_default = ZEBRA_ROUTE_BGP;
bgp_zclient_sync->instance = instance;
bgp_zclient_sync->session_id = 1;
bgp_zclient_sync->privs = &bgpd_privs;
if (!bgp_zebra_label_manager_ready())
event_add_timer(master, bgp_start_label_manager, NULL, 1,
@ -4054,17 +4054,17 @@ void bgp_zebra_init(struct event_loop *master, unsigned short instance)
void bgp_zebra_destroy(void)
{
if (zclient == NULL)
if (bgp_zclient == NULL)
return;
zclient_stop(zclient);
zclient_free(zclient);
zclient = NULL;
zclient_stop(bgp_zclient);
zclient_free(bgp_zclient);
bgp_zclient = NULL;
if (zclient_sync == NULL)
if (bgp_zclient_sync == NULL)
return;
zclient_stop(zclient_sync);
zclient_free(zclient_sync);
zclient_sync = NULL;
zclient_stop(bgp_zclient_sync);
zclient_free(bgp_zclient_sync);
bgp_zclient_sync = NULL;
}
int bgp_zebra_num_connects(void)
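Renaming the file-scope `zclient` to `bgp_zclient` (and `zclient_sync` to `bgp_zclient_sync`) removes the shadowing seen earlier in `bgp_zebra_tm_connect()`, where a local `struct zclient *zclient` hid the global of the same name. A tiny sketch of the hazard the rename avoids (hypothetical variables, not FRR code):

```c
#include <assert.h>

/* Hypothetical global illustrating the shadowing the rename avoids;
 * stands in for bgp_zclient->sock. */
static int bgp_zclient_sock = 42;

static int read_global_sock(void)
{
    return bgp_zclient_sock;    /* unambiguous: no local shares the name */
}

static int shadowed_sock(int bgp_zclient_sock)
{
    /* a parameter with the same name hides the global in this scope,
     * which is exactly what a local `struct zclient *zclient` did */
    return bgp_zclient_sock;
}
```

With distinct names, `-Wshadow`-style mistakes can no longer silently read the wrong object.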
@ -4090,7 +4090,7 @@ void bgp_send_pbr_rule_action(struct bgp_pbr_action *pbra,
zlog_debug("%s: table %d fwmark %d %d", __func__,
pbra->table_id, pbra->fwmark, install);
}
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s,
@ -4099,7 +4099,7 @@ void bgp_send_pbr_rule_action(struct bgp_pbr_action *pbra,
bgp_encode_pbr_rule_action(s, pbra, pbr);
if ((zclient_send_message(zclient) != ZCLIENT_SEND_FAILURE)
if ((zclient_send_message(bgp_zclient) != ZCLIENT_SEND_FAILURE)
&& install) {
if (!pbr)
pbra->install_in_progress = true;
@ -4118,7 +4118,7 @@ void bgp_send_pbr_ipset_match(struct bgp_pbr_match *pbrim, bool install)
zlog_debug("%s: name %s type %d %d, ID %u", __func__,
pbrim->ipset_name, pbrim->type, install,
pbrim->unique);
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s,
@ -4131,7 +4131,7 @@ void bgp_send_pbr_ipset_match(struct bgp_pbr_match *pbrim, bool install)
bgp_encode_pbr_ipset_match(s, pbrim);
stream_putw_at(s, 0, stream_get_endp(s));
if ((zclient_send_message(zclient) != ZCLIENT_SEND_FAILURE) && install)
if ((zclient_send_message(bgp_zclient) != ZCLIENT_SEND_FAILURE) && install)
pbrim->install_in_progress = true;
}
@ -4146,7 +4146,7 @@ void bgp_send_pbr_ipset_entry_match(struct bgp_pbr_match_entry *pbrime,
zlog_debug("%s: name %s %d %d, ID %u", __func__,
pbrime->backpointer->ipset_name, pbrime->unique,
install, pbrime->unique);
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s,
@ -4159,7 +4159,7 @@ void bgp_send_pbr_ipset_entry_match(struct bgp_pbr_match_entry *pbrime,
bgp_encode_pbr_ipset_entry_match(s, pbrime);
stream_putw_at(s, 0, stream_get_endp(s));
if ((zclient_send_message(zclient) != ZCLIENT_SEND_FAILURE) && install)
if ((zclient_send_message(bgp_zclient) != ZCLIENT_SEND_FAILURE) && install)
pbrime->install_in_progress = true;
}
@ -4218,7 +4218,7 @@ void bgp_send_pbr_iptable(struct bgp_pbr_action *pba,
zlog_debug("%s: name %s type %d mark %d %d, ID %u", __func__,
pbm->ipset_name, pbm->type, pba->fwmark, install,
pbm->unique2);
s = zclient->obuf;
s = bgp_zclient->obuf;
stream_reset(s);
zclient_create_header(s,
@ -4232,7 +4232,7 @@ void bgp_send_pbr_iptable(struct bgp_pbr_action *pba,
if (nb_interface)
bgp_encode_pbr_interface_list(pba->bgp, s, pbm->family);
stream_putw_at(s, 0, stream_get_endp(s));
ret = zclient_send_message(zclient);
ret = zclient_send_message(bgp_zclient);
if (install) {
if (ret != ZCLIENT_SEND_FAILURE)
pba->refcnt++;
@ -4319,7 +4319,7 @@ void bgp_zebra_announce_default(struct bgp *bgp, struct nexthop *nh,
}
zclient_route_send(announce ? ZEBRA_ROUTE_ADD : ZEBRA_ROUTE_DELETE,
zclient, &api);
bgp_zclient, &api);
}
/* Send capabilities to RIB */
@ -4332,7 +4332,7 @@ int bgp_zebra_send_capabilities(struct bgp *bgp, bool disable)
zlog_debug("%s: Sending %sable for %s", __func__,
disable ? "dis" : "en", bgp->name_pretty);
if (zclient == NULL) {
if (bgp_zclient == NULL) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("%s: %s zclient invalid", __func__,
bgp->name_pretty);
@ -4340,7 +4340,7 @@ int bgp_zebra_send_capabilities(struct bgp *bgp, bool disable)
}
/* Check if the client is connected */
if ((zclient->sock < 0) || (zclient->t_connect)) {
if ((bgp_zclient->sock < 0) || (bgp_zclient->t_connect)) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("%s: %s client not connected", __func__,
bgp->name_pretty);
@ -4365,7 +4365,7 @@ int bgp_zebra_send_capabilities(struct bgp *bgp, bool disable)
api.vrf_id = bgp->vrf_id;
}
if (zclient_capabilities_send(ZEBRA_CLIENT_CAPABILITIES, zclient, &api)
if (zclient_capabilities_send(ZEBRA_CLIENT_CAPABILITIES, bgp_zclient, &api)
== ZCLIENT_SEND_FAILURE) {
zlog_err("%s(%d): Error sending GR capability to zebra",
bgp->name_pretty, bgp->vrf_id);
@ -4394,7 +4394,7 @@ int bgp_zebra_update(struct bgp *bgp, afi_t afi, safi_t safi,
bgp->name_pretty, afi, safi,
zserv_gr_client_cap_string(type));
if (zclient == NULL) {
if (bgp_zclient == NULL) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("%s: %s zclient == NULL, invalid", __func__,
bgp->name_pretty);
@ -4402,7 +4402,7 @@ int bgp_zebra_update(struct bgp *bgp, afi_t afi, safi_t safi,
}
/* Check if the client is connected */
if ((zclient->sock < 0) || (zclient->t_connect)) {
if ((bgp_zclient->sock < 0) || (bgp_zclient->t_connect)) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("%s: %s client not connected", __func__,
bgp->name_pretty);
@ -4414,7 +4414,7 @@ int bgp_zebra_update(struct bgp *bgp, afi_t afi, safi_t safi,
api.vrf_id = bgp->vrf_id;
api.cap = type;
if (zclient_capabilities_send(ZEBRA_CLIENT_CAPABILITIES, zclient, &api)
if (zclient_capabilities_send(ZEBRA_CLIENT_CAPABILITIES, bgp_zclient, &api)
== ZCLIENT_SEND_FAILURE) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("%s: %s error sending capability", __func__,
@ -4434,14 +4434,14 @@ int bgp_zebra_stale_timer_update(struct bgp *bgp)
zlog_debug("%s: %s Timer Update to %u", __func__,
bgp->name_pretty, bgp->rib_stale_time);
if (zclient == NULL) {
if (bgp_zclient == NULL) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("zclient invalid");
return BGP_GR_FAILURE;
}
/* Check if the client is connected */
if ((zclient->sock < 0) || (zclient->t_connect)) {
if ((bgp_zclient->sock < 0) || (bgp_zclient->t_connect)) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("%s: %s client not connected", __func__,
bgp->name_pretty);
@ -4452,7 +4452,7 @@ int bgp_zebra_stale_timer_update(struct bgp *bgp)
api.cap = ZEBRA_CLIENT_RIB_STALE_TIME;
api.stale_removal_time = bgp->rib_stale_time;
api.vrf_id = bgp->vrf_id;
if (zclient_capabilities_send(ZEBRA_CLIENT_CAPABILITIES, zclient, &api)
if (zclient_capabilities_send(ZEBRA_CLIENT_CAPABILITIES, bgp_zclient, &api)
== ZCLIENT_SEND_FAILURE) {
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("%s: %s error sending capability", __func__,
@ -4465,12 +4465,12 @@ int bgp_zebra_stale_timer_update(struct bgp *bgp)
int bgp_zebra_srv6_manager_get_locator_chunk(const char *name)
{
return srv6_manager_get_locator_chunk(zclient, name);
return srv6_manager_get_locator_chunk(bgp_zclient, name);
}
int bgp_zebra_srv6_manager_release_locator_chunk(const char *name)
{
return srv6_manager_release_locator_chunk(zclient, name);
return srv6_manager_release_locator_chunk(bgp_zclient, name);
}
/**
@ -4488,7 +4488,7 @@ int bgp_zebra_srv6_manager_get_locator(const char *name)
* Send the Get Locator request to the SRv6 Manager and return the
* result
*/
return srv6_manager_get_locator(zclient, name);
return srv6_manager_get_locator(bgp_zclient, name);
}
/**
@ -4520,7 +4520,7 @@ bool bgp_zebra_request_srv6_sid(const struct srv6_sid_ctx *ctx,
* Send the Get SRv6 SID request to the SRv6 Manager and check the
* result
*/
ret = srv6_manager_get_sid(zclient, ctx, sid_value, locator_name,
ret = srv6_manager_get_sid(bgp_zclient, ctx, sid_value, locator_name,
sid_func);
if (ret < 0) {
zlog_warn("%s: error getting SRv6 SID!", __func__);
@ -4549,7 +4549,7 @@ void bgp_zebra_release_srv6_sid(const struct srv6_sid_ctx *ctx)
* Send the Release SRv6 SID request to the SRv6 Manager and check the
* result
*/
ret = srv6_manager_release_sid(zclient, ctx);
ret = srv6_manager_release_sid(bgp_zclient, ctx);
if (ret < 0) {
zlog_warn("%s: error releasing SRv6 SID!", __func__);
return;
@ -4592,7 +4592,7 @@ void bgp_zebra_send_nexthop_label(int cmd, mpls_label_t label,
znh->labels[i] = out_labels[i];
}
/* vrf_id is DEFAULT_VRF */
zebra_send_mpls_labels(zclient, cmd, &zl);
zebra_send_mpls_labels(bgp_zclient, cmd, &zl);
}
bool bgp_zebra_request_label_range(uint32_t base, uint32_t chunk_size,
@ -4601,10 +4601,10 @@ bool bgp_zebra_request_label_range(uint32_t base, uint32_t chunk_size,
int ret;
uint32_t start, end;
if (!zclient_sync || !bgp_zebra_label_manager_ready())
if (!bgp_zclient_sync || !bgp_zebra_label_manager_ready())
return false;
ret = lm_get_label_chunk(zclient_sync, 0, base, chunk_size, &start,
ret = lm_get_label_chunk(bgp_zclient_sync, 0, base, chunk_size, &start,
&end);
if (ret < 0) {
zlog_warn("%s: error getting label range!", __func__);
@ -4633,10 +4633,10 @@ void bgp_zebra_release_label_range(uint32_t start, uint32_t end)
{
int ret;
if (!zclient_sync || !bgp_zebra_label_manager_ready())
if (!bgp_zclient_sync || !bgp_zebra_label_manager_ready())
return;
ret = lm_release_label_chunk(zclient_sync, start, end);
ret = lm_release_label_chunk(bgp_zclient_sync, start, end);
if (ret < 0)
zlog_warn("%s: error releasing label range!", __func__);
}


@ -8,6 +8,9 @@
#include "vxlan.h"
/* The global zapi session handle */
extern struct zclient *bgp_zclient;
/* Macro to update bgp_original based on bpg_path_info */
#define BGP_ORIGINAL_UPDATE(_bgp_orig, _mpinfo, _bgp) \
((_mpinfo->extra && _mpinfo->extra->vrfleak && \


@ -88,6 +88,22 @@ DEFINE_HOOK(bgp_inst_delete, (struct bgp *bgp), (bgp));
DEFINE_HOOK(bgp_instance_state, (struct bgp *bgp), (bgp));
DEFINE_HOOK(bgp_routerid_update, (struct bgp *bgp, bool withdraw), (bgp, withdraw));
/* Peers with connection error/failure, per bgp instance */
DECLARE_DLIST(bgp_peer_conn_errlist, struct peer_connection, conn_err_link);
/* List of info about peers that are being cleared from BGP RIBs in a batch */
DECLARE_DLIST(bgp_clearing_info, struct bgp_clearing_info, link);
/* List of dests that need to be processed in a clearing batch */
DECLARE_LIST(bgp_clearing_destlist, struct bgp_clearing_dest, link);
/* Hash of peers in clearing info object */
static int peer_clearing_hash_cmp(const struct peer *p1, const struct peer *p2);
static uint32_t peer_clearing_hashfn(const struct peer *p1);
DECLARE_HASH(bgp_clearing_hash, struct peer, clear_hash_link,
peer_clearing_hash_cmp, peer_clearing_hashfn);
/* BGP process wide configuration. */
static struct bgp_master bgp_master;
@ -105,7 +121,7 @@ unsigned int bgp_suppress_fib_count;
static void bgp_if_finish(struct bgp *bgp);
static void peer_drop_dynamic_neighbor(struct peer *peer);
extern struct zclient *zclient;
extern struct zclient *bgp_zclient;
/* handle main socket creation or deletion */
static int bgp_check_main_socket(bool create, struct bgp *bgp)
@ -431,9 +447,9 @@ void bm_wait_for_fib_set(bool set)
send_msg = true;
}
if (send_msg && zclient)
if (send_msg && bgp_zclient)
zebra_route_notify_send(ZEBRA_ROUTE_NOTIFY_REQUEST,
zclient, set);
bgp_zclient, set);
/*
* If this is configed at a time when peers are already set
@ -491,9 +507,9 @@ void bgp_suppress_fib_pending_set(struct bgp *bgp, bool set)
if (BGP_DEBUG(zebra, ZEBRA))
zlog_debug("Sending ZEBRA_ROUTE_NOTIFY_REQUEST");
if (zclient)
if (bgp_zclient)
zebra_route_notify_send(ZEBRA_ROUTE_NOTIFY_REQUEST,
zclient, set);
bgp_zclient, set);
}
/*
@ -1180,6 +1196,22 @@ void bgp_peer_connection_free(struct peer_connection **connection)
connection = NULL;
}
const char *bgp_peer_get_connection_direction(struct peer_connection *connection)
{
switch (connection->dir) {
case UNKNOWN:
return "Unknown";
case CONNECTION_INCOMING:
return "Incoming";
case CONNECTION_OUTGOING:
return "Outgoing";
case ESTABLISHED:
return "Established";
}
assert(!"DEV Escape: Expected switch to take care of this state");
}
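The new `bgp_peer_get_connection_direction()` uses an exhaustive switch with no `default`, so the compiler warns if an enumerator is added without a case, and the trailing `assert` catches out-of-range values at runtime. A standalone sketch of the same pattern (hypothetical enum names mirroring the ones above):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical mirror of the connection-direction enum and its printer */
enum conn_dir { DIR_UNKNOWN, DIR_INCOMING, DIR_OUTGOING, DIR_ESTABLISHED };

static const char *conn_dir_str(enum conn_dir dir)
{
    switch (dir) {
    case DIR_UNKNOWN:
        return "Unknown";
    case DIR_INCOMING:
        return "Incoming";
    case DIR_OUTGOING:
        return "Outgoing";
    case DIR_ESTABLISHED:
        return "Established";
    }
    /* every enumerator returns above; reaching here is a programming error */
    assert(!"unhandled enum conn_dir value");
    return "Invalid";
}
```

Omitting `default` is deliberate: with `-Wswitch-enum`-style diagnostics, a future enumerator forces this function to be updated.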
struct peer_connection *bgp_peer_connection_new(struct peer *peer)
{
struct peer_connection *connection;
@ -1527,6 +1559,7 @@ struct peer *peer_new(struct bgp *bgp)
/* Create buffers. */
peer->connection = bgp_peer_connection_new(peer);
peer->connection->dir = CONNECTION_OUTGOING;
/* Set default value. */
peer->v_start = BGP_INIT_START_TIMER;
@ -1943,7 +1976,7 @@ struct peer *peer_create(union sockunion *su, const char *conf_if,
enum peer_asn_type as_type, struct peer_group *group,
bool config_node, const char *as_str)
{
int active;
enum bgp_peer_active active;
struct peer *peer;
char buf[SU_ADDRSTRLEN];
afi_t afi;
@ -1997,7 +2030,7 @@ struct peer *peer_create(union sockunion *su, const char *conf_if,
}
active = peer_active(peer->connection);
if (!active) {
if (active != BGP_PEER_ACTIVE) {
if (peer->connection->su.sa.sa_family == AF_UNSPEC)
peer->last_reset = PEER_DOWN_NBR_ADDR;
else
@ -2029,7 +2062,7 @@ struct peer *peer_create(union sockunion *su, const char *conf_if,
if (bgp->autoshutdown)
peer_flag_set(peer, PEER_FLAG_SHUTDOWN);
/* Set up peer's events and timers. */
else if (!active && peer_active(peer->connection)) {
else if (active != BGP_PEER_ACTIVE && peer_active(peer->connection) == BGP_PEER_ACTIVE) {
if (peer->last_reset == PEER_DOWN_NOAFI_ACTIVATED)
peer->last_reset = 0;
bgp_timer_set(peer->connection);
@ -2402,7 +2435,7 @@ static void peer_group2peer_config_copy_af(struct peer_group *group,
static int peer_activate_af(struct peer *peer, afi_t afi, safi_t safi)
{
int active;
enum bgp_peer_active active;
struct peer *other;
if (CHECK_FLAG(peer->sflags, PEER_STATUS_GROUP)) {
@ -2430,7 +2463,7 @@ static int peer_activate_af(struct peer *peer, afi_t afi, safi_t safi)
if (peer->group)
peer_group2peer_config_copy_af(peer->group, peer, afi, safi);
if (!active && peer_active(peer->connection)) {
if (active != BGP_PEER_ACTIVE && peer_active(peer->connection) == BGP_PEER_ACTIVE) {
bgp_timer_set(peer->connection);
} else {
peer->last_reset = PEER_DOWN_AF_ACTIVATE;
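These hunks change `peer_active()` call sites from truthiness tests (`if (!active)`) to explicit comparisons against `BGP_PEER_ACTIVE`, which an `enum bgp_peer_active` return type makes possible. A hedged sketch of the idea (the enumerator names and reason values here are assumptions, not FRR's exact definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the peer_active() return-type change: an enum
 * lets call sites name the state instead of relying on nonzero. */
enum bgp_peer_active {
    BGP_PEER_ACTIVE,
    BGP_PEER_CONNECTION_UNSPECIFIED,
    BGP_PEER_AF_UNCONFIGURED,
};

static enum bgp_peer_active peer_active_sketch(bool has_addr, bool af_cfg)
{
    if (!has_addr)
        return BGP_PEER_CONNECTION_UNSPECIFIED;
    if (!af_cfg)
        return BGP_PEER_AF_UNCONFIGURED;
    return BGP_PEER_ACTIVE;
}

static bool should_start_timers(bool has_addr, bool af_cfg)
{
    /* explicit comparison, as in the updated call sites above */
    return peer_active_sketch(has_addr, af_cfg) == BGP_PEER_ACTIVE;
}
```

As a side benefit, the non-active enumerators can encode *why* the peer is inactive, which the old `int` result could not.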
@ -2680,6 +2713,9 @@ int peer_delete(struct peer *peer)
assert(peer->connection->status != Deleted);
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s: peer %pBP", __func__, peer);
bgp = peer->bgp;
accept_peer = CHECK_FLAG(peer->sflags, PEER_STATUS_ACCEPT_PEER);
@ -2695,6 +2731,13 @@ int peer_delete(struct peer *peer)
PEER_THREAD_READS_ON));
assert(!CHECK_FLAG(peer->thread_flags, PEER_THREAD_KEEPALIVES_ON));
/* Ensure the peer is removed from the connection error list */
frr_with_mutex (&bgp->peer_errs_mtx) {
if (bgp_peer_conn_errlist_anywhere(peer->connection))
bgp_peer_conn_errlist_del(&bgp->peer_conn_errlist,
peer->connection);
}
if (CHECK_FLAG(peer->sflags, PEER_STATUS_NSF_WAIT))
peer_nsf_stop(peer);
@ -3374,7 +3417,7 @@ int peer_group_bind(struct bgp *bgp, union sockunion *su, struct peer *peer,
}
/* Set up peer's events and timers. */
if (peer_active(peer->connection))
if (peer_active(peer->connection) == BGP_PEER_ACTIVE)
bgp_timer_set(peer->connection);
}
@ -3546,7 +3589,8 @@ peer_init:
bgp->vpn_policy[afi].tovpn_zebra_vrf_label_last_sent =
MPLS_LABEL_NONE;
bgp->vpn_policy[afi].import_vrf = list_new();
if (!bgp->vpn_policy[afi].import_vrf)
bgp->vpn_policy[afi].import_vrf = list_new();
bgp->vpn_policy[afi].import_vrf->del =
bgp_vrf_string_name_delete;
if (!hidden) {
@ -3564,7 +3608,7 @@ peer_init:
bgp_mplsvpn_nh_label_bind_cache_init(&bgp->mplsvpn_nh_label_bind);
if (name)
if (name && !bgp->name)
bgp->name = XSTRDUP(MTYPE_BGP_NAME, name);
event_add_timer(bm->master, bgp_startup_timer_expire, bgp,
@ -3620,6 +3664,11 @@ peer_init:
memset(&bgp->ebgprequirespolicywarning, 0,
sizeof(bgp->ebgprequirespolicywarning));
/* Init peer connection error info */
pthread_mutex_init(&bgp->peer_errs_mtx, NULL);
bgp_peer_conn_errlist_init(&bgp->peer_conn_errlist);
bgp_clearing_info_init(&bgp->clearing_list);
return bgp;
}
@ -3782,6 +3831,7 @@ int bgp_lookup_by_as_name_type(struct bgp **bgp_val, as_t *as, const char *as_pr
hidden);
UNSET_FLAG(bgp->flags,
BGP_FLAG_INSTANCE_HIDDEN);
UNSET_FLAG(bgp->flags, BGP_FLAG_DELETE_IN_PROGRESS);
} else {
bgp->as = *as;
if (force_config == false)
@ -3879,16 +3929,16 @@ static void bgp_zclient_set_redist(afi_t afi, int type, unsigned short instance,
{
if (instance) {
if (set)
redist_add_instance(&zclient->mi_redist[afi][type],
redist_add_instance(&bgp_zclient->mi_redist[afi][type],
instance);
else
redist_del_instance(&zclient->mi_redist[afi][type],
redist_del_instance(&bgp_zclient->mi_redist[afi][type],
instance);
} else {
if (set)
vrf_bitmap_set(&zclient->redist[afi][type], vrf_id);
vrf_bitmap_set(&bgp_zclient->redist[afi][type], vrf_id);
else
vrf_bitmap_unset(&zclient->redist[afi][type], vrf_id);
vrf_bitmap_unset(&bgp_zclient->redist[afi][type], vrf_id);
}
}
@ -4002,11 +4052,13 @@ int bgp_delete(struct bgp *bgp)
struct bgp *bgp_to_proc = NULL;
struct bgp *bgp_to_proc_next = NULL;
struct bgp *bgp_default = bgp_get_default();
struct bgp_clearing_info *cinfo;
struct peer_connection *connection;
assert(bgp);
/*
* Iterate the pending dest list and remove all the dest pertaininig to
* Iterate the pending dest list and remove all the dest pertaining to
* the bgp under delete.
*/
b_ann_cnt = zebra_announce_count(&bm->zebra_announce_head);
@ -4052,6 +4104,10 @@ int bgp_delete(struct bgp *bgp)
a_l3_cnt);
}
/* Cleanup for peer connection batching */
while ((cinfo = bgp_clearing_info_first(&bgp->clearing_list)) != NULL)
bgp_clearing_batch_completed(cinfo);
bgp_soft_reconfig_table_task_cancel(bgp, NULL, NULL);
/* make sure we withdraw any exported routes */
@ -4098,6 +4154,7 @@ int bgp_delete(struct bgp *bgp)
EVENT_OFF(bgp->t_maxmed_onstartup);
EVENT_OFF(bgp->t_update_delay);
EVENT_OFF(bgp->t_establish_wait);
EVENT_OFF(bgp->clearing_end);
/* Set flag indicating bgp instance delete in progress */
SET_FLAG(bgp->flags, BGP_FLAG_DELETE_IN_PROGRESS);
@ -4176,26 +4233,56 @@ int bgp_delete(struct bgp *bgp)
if (i != ZEBRA_ROUTE_BGP)
bgp_redistribute_unset(bgp, afi, i, 0);
/* Clear list of peers with connection errors - each
* peer will need to check again, in case the io pthread is racing
* with us, but this batch cleanup should make the per-peer check
* cheaper.
*/
frr_with_mutex (&bgp->peer_errs_mtx) {
do {
connection = bgp_peer_conn_errlist_pop(
&bgp->peer_conn_errlist);
} while (connection != NULL);
}
/* Free peers and peer-groups. */
for (ALL_LIST_ELEMENTS(bgp->group, node, next, group))
peer_group_delete(group);
while (listcount(bgp->peer)) {
peer = listnode_head(bgp->peer);
peer_delete(peer);
if (peer->ifp || CHECK_FLAG(peer->flags, PEER_FLAG_CAPABILITY_ENHE))
bgp_zebra_terminate_radv(peer->bgp, peer);
if (BGP_PEER_GRACEFUL_RESTART_CAPABLE(peer)) {
if (bgp_debug_neighbor_events(peer))
zlog_debug("%pBP configured Graceful-Restart, skipping unconfig notification",
peer);
peer_delete(peer);
} else {
peer_notify_unconfig(peer->connection);
peer_delete(peer);
}
}
if (bgp->peer_self && !IS_BGP_INSTANCE_HIDDEN(bgp)) {
if (bgp->peer_self && (!IS_BGP_INSTANCE_HIDDEN(bgp) || bm->terminating)) {
peer_delete(bgp->peer_self);
bgp->peer_self = NULL;
}
update_bgp_group_free(bgp);
/* Cancel peer connection errors event */
EVENT_OFF(bgp->t_conn_errors);
/* Cleanup for peer connection batching */
while ((cinfo = bgp_clearing_info_pop(&bgp->clearing_list)) != NULL)
bgp_clearing_batch_completed(cinfo);
/* TODO - Other memory may need to be freed - e.g., NHT */
#ifdef ENABLE_BGP_VNC
if (!IS_BGP_INSTANCE_HIDDEN(bgp))
if (!IS_BGP_INSTANCE_HIDDEN(bgp) || bm->terminating)
rfapi_delete(bgp);
#endif
@ -4203,8 +4290,7 @@ int bgp_delete(struct bgp *bgp)
FOREACH_AFI_SAFI (afi, safi) {
struct bgp_aggregate *aggregate = NULL;
for (struct bgp_dest *dest =
bgp_table_top(bgp->aggregate[afi][safi]);
for (dest = bgp_table_top(bgp->aggregate[afi][safi]);
dest; dest = bgp_route_next(dest)) {
aggregate = bgp_dest_get_bgp_aggregate_info(dest);
if (aggregate == NULL)
@ -4246,7 +4332,7 @@ int bgp_delete(struct bgp *bgp)
bgp_zebra_instance_deregister(bgp);
}
if (!IS_BGP_INSTANCE_HIDDEN(bgp)) {
if (!IS_BGP_INSTANCE_HIDDEN(bgp) || bm->terminating) {
/* Remove visibility via the master list -
* there may however still be routes to be processed
* still referencing the struct bgp.
@ -4258,7 +4344,7 @@ int bgp_delete(struct bgp *bgp)
vrf = bgp_vrf_lookup_by_instance_type(bgp);
bgp_handle_socket(bgp, vrf, VRF_UNKNOWN, false);
if (vrf && !IS_BGP_INSTANCE_HIDDEN(bgp))
if (vrf && (!IS_BGP_INSTANCE_HIDDEN(bgp) || bm->terminating))
bgp_vrf_unlink(bgp, vrf);
/* Update EVPN VRF pointer */
@ -4269,7 +4355,7 @@ int bgp_delete(struct bgp *bgp)
bgp_set_evpn(bgp_get_default());
}
if (!IS_BGP_INSTANCE_HIDDEN(bgp)) {
if (!IS_BGP_INSTANCE_HIDDEN(bgp) || bm->terminating) {
if (bgp->process_queue)
work_queue_free_and_null(&bgp->process_queue);
bgp_unlock(bgp); /* initial reference */
@ -4367,6 +4453,9 @@ void bgp_free(struct bgp *bgp)
bgp_srv6_cleanup(bgp);
bgp_confederation_id_unset(bgp);
bgp_peer_conn_errlist_init(&bgp->peer_conn_errlist);
pthread_mutex_destroy(&bgp->peer_errs_mtx);
for (int i = 0; i < bgp->confed_peers_cnt; i++)
XFREE(MTYPE_BGP_NAME, bgp->confed_peers[i].as_pretty);
@ -4692,16 +4781,16 @@ bool bgp_path_attribute_treat_as_withdraw(struct peer *peer, char *buf,
}
/* If peer is configured at least one address family return 1. */
bool peer_active(struct peer_connection *connection)
enum bgp_peer_active peer_active(struct peer_connection *connection)
{
struct peer *peer = connection->peer;
if (BGP_CONNECTION_SU_UNSPEC(connection))
return false;
return BGP_PEER_CONNECTION_UNSPECIFIED;
if (peer->bfd_config) {
if (bfd_session_is_down(peer->bfd_config->session))
return false;
if (peer_established(connection) && bfd_session_is_down(peer->bfd_config->session))
return BGP_PEER_BFD_DOWN;
}
if (peer->afc[AFI_IP][SAFI_UNICAST] || peer->afc[AFI_IP][SAFI_MULTICAST]
@ -4715,8 +4804,9 @@ bool peer_active(struct peer_connection *connection)
|| peer->afc[AFI_IP6][SAFI_ENCAP]
|| peer->afc[AFI_IP6][SAFI_FLOWSPEC]
|| peer->afc[AFI_L2VPN][SAFI_EVPN])
return true;
return false;
return BGP_PEER_ACTIVE;
return BGP_PEER_AF_UNCONFIGURED;
}
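Because `BGP_PEER_ACTIVE` is the first enumerator (value 0), the old boolean-style `if (!active)` tests would invert their meaning under the new enum, which is why every call site in this diff is rewritten as an explicit `== BGP_PEER_ACTIVE` / `!= BGP_PEER_ACTIVE` comparison. A minimal standalone sketch of the pitfall (the `check_peer` helper is hypothetical, not FRR code):

```c
/* Mirrors the enum added in bgpd.h: the "success" value is the first
 * enumerator, so it compares equal to 0 and is falsy in C. */
enum bgp_peer_active {
	BGP_PEER_ACTIVE,
	BGP_PEER_CONNECTION_UNSPECIFIED,
	BGP_PEER_BFD_DOWN,
	BGP_PEER_AF_UNCONFIGURED,
};

/* Hypothetical stand-in for peer_active(): returns why a peer is
 * inactive, or BGP_PEER_ACTIVE (0) when it is usable. */
static enum bgp_peer_active check_peer(int af_configured)
{
	if (!af_configured)
		return BGP_PEER_AF_UNCONFIGURED;
	return BGP_PEER_ACTIVE;
}
```

Note that `!check_peer(1)` is true precisely when the peer *is* active, the opposite of what the old `bool` return meant for `!active`.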
/* If peer is negotiated at least one address family return 1. */
@ -6400,7 +6490,7 @@ int peer_timers_connect_set(struct peer *peer, uint32_t connect)
/* Skip peer-group mechanics for regular peers. */
if (!CHECK_FLAG(peer->sflags, PEER_STATUS_GROUP)) {
if (!peer_established(peer->connection)) {
if (peer_active(peer->connection))
if (peer_active(peer->connection) == BGP_PEER_ACTIVE)
BGP_EVENT_ADD(peer->connection, BGP_Stop);
BGP_EVENT_ADD(peer->connection, BGP_Start);
}
@ -6421,7 +6511,7 @@ int peer_timers_connect_set(struct peer *peer, uint32_t connect)
member->v_connect = connect;
if (!peer_established(member->connection)) {
if (peer_active(member->connection))
if (peer_active(member->connection) == BGP_PEER_ACTIVE)
BGP_EVENT_ADD(member->connection, BGP_Stop);
BGP_EVENT_ADD(member->connection, BGP_Start);
}
@ -6454,7 +6544,7 @@ int peer_timers_connect_unset(struct peer *peer)
/* Skip peer-group mechanics for regular peers. */
if (!CHECK_FLAG(peer->sflags, PEER_STATUS_GROUP)) {
if (!peer_established(peer->connection)) {
if (peer_active(peer->connection))
if (peer_active(peer->connection) == BGP_PEER_ACTIVE)
BGP_EVENT_ADD(peer->connection, BGP_Stop);
BGP_EVENT_ADD(peer->connection, BGP_Start);
}
@ -6475,7 +6565,7 @@ int peer_timers_connect_unset(struct peer *peer)
member->v_connect = peer->bgp->default_connect_retry;
if (!peer_established(member->connection)) {
if (peer_active(member->connection))
if (peer_active(member->connection) == BGP_PEER_ACTIVE)
BGP_EVENT_ADD(member->connection, BGP_Stop);
BGP_EVENT_ADD(member->connection, BGP_Start);
}
@ -8593,6 +8683,10 @@ void bgp_master_init(struct event_loop *master, const int buffer_size,
bm = &bgp_master;
/* Initialize the peer connection FIFO list */
peer_connection_fifo_init(&bm->connection_fifo);
pthread_mutex_init(&bm->peer_connection_mtx, NULL);
zebra_announce_init(&bm->zebra_announce_head);
zebra_l2_vni_init(&bm->zebra_l2_vni_head);
zebra_l3_vni_init(&bm->zebra_l3_vni_head);
@ -8622,6 +8716,11 @@ void bgp_master_init(struct event_loop *master, const int buffer_size,
bm->t_bgp_zebra_l2_vni = NULL;
bm->t_bgp_zebra_l3_vni = NULL;
bm->peer_clearing_batch_id = 1;
/* TODO -- make these configurable */
bm->peer_conn_errs_dequeue_limit = BGP_CONN_ERROR_DEQUEUE_MAX;
bm->peer_clearing_batch_max_dests = BGP_CLEARING_BATCH_MAX_DESTS;
bgp_mac_init();
/* init the rd id space.
assign 0th index in the bitfield,
@ -8754,7 +8853,8 @@ static int peer_unshut_after_cfg(struct bgp *bgp)
peer->host);
peer->shut_during_cfg = false;
if (peer_active(peer->connection) && peer->connection->status != Established) {
if (peer_active(peer->connection) == BGP_PEER_ACTIVE &&
peer->connection->status != Established) {
if (peer->connection->status != Idle)
BGP_EVENT_ADD(peer->connection, BGP_Stop);
BGP_EVENT_ADD(peer->connection, BGP_Start);
@ -8872,6 +8972,9 @@ void bgp_terminate(void)
EVENT_OFF(bm->t_bgp_zebra_l3_vni);
bgp_mac_finish();
#ifdef ENABLE_BGP_VNC
rfapi_terminate();
#endif
}
struct peer *peer_lookup_in_view(struct vty *vty, struct bgp *bgp,
@ -8960,6 +9063,373 @@ void bgp_gr_apply_running_config(void)
}
}
/* Hash of peers in clearing info object */
static int peer_clearing_hash_cmp(const struct peer *p1, const struct peer *p2)
{
if (p1 == p2)
return 0;
else if (p1 < p2)
return -1;
else
return 1;
}
static uint32_t peer_clearing_hashfn(const struct peer *p1)
{
return (uint32_t)((intptr_t)p1 & 0xffffffffULL);
}
/*
* Free a clearing batch: this really just does the memory cleanup; the
* clearing code is expected to manage the peer, dest, table, etc refcounts
*/
static void bgp_clearing_batch_free(struct bgp *bgp,
struct bgp_clearing_info **pinfo)
{
struct bgp_clearing_info *cinfo = *pinfo;
struct bgp_clearing_dest *destinfo;
if (bgp_clearing_info_anywhere(cinfo))
bgp_clearing_info_del(&bgp->clearing_list, cinfo);
while ((destinfo = bgp_clearing_destlist_pop(&cinfo->destlist)) != NULL)
XFREE(MTYPE_CLEARING_BATCH, destinfo);
bgp_clearing_hash_fini(&cinfo->peers);
XFREE(MTYPE_CLEARING_BATCH, *pinfo);
}
/*
* Done with a peer that was part of a clearing batch
*/
static void bgp_clearing_peer_done(struct peer *peer)
{
UNSET_FLAG(peer->flags, PEER_FLAG_CLEARING_BATCH);
/* Tickle FSM to start moving again */
BGP_EVENT_ADD(peer->connection, Clearing_Completed);
peer_unlock(peer); /* bgp_clear_route */
}
/*
* Initialize a new batch struct for clearing peer(s) from the RIB
*/
void bgp_clearing_batch_begin(struct bgp *bgp)
{
struct bgp_clearing_info *cinfo;
if (event_is_scheduled(bgp->clearing_end))
return;
cinfo = XCALLOC(MTYPE_CLEARING_BATCH, sizeof(struct bgp_clearing_info));
cinfo->bgp = bgp;
cinfo->id = bm->peer_clearing_batch_id++;
/* Init hash of peers and list of dests */
bgp_clearing_hash_init(&cinfo->peers);
bgp_clearing_destlist_init(&cinfo->destlist);
/* Batch is open for more peers */
SET_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_OPEN);
bgp_clearing_info_add_head(&bgp->clearing_list, cinfo);
}
/*
* Close a batch of clearing peers, and begin working on the RIB
*/
static void bgp_clearing_batch_end(struct bgp *bgp)
{
struct bgp_clearing_info *cinfo;
if (event_is_scheduled(bgp->clearing_end))
return;
cinfo = bgp_clearing_info_first(&bgp->clearing_list);
assert(cinfo != NULL);
assert(CHECK_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_OPEN));
/* Batch is closed */
UNSET_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_OPEN);
/* If we have no peers to examine, just discard the batch info */
if (bgp_clearing_hash_count(&cinfo->peers) == 0) {
bgp_clearing_batch_free(bgp, &cinfo);
return;
}
/* Do a RIB walk for the current batch. If it finds dests/prefixes
* to work on, this will schedule a task to process
* the dests/prefixes in the batch.
* NB this will free the batch if it finishes, or if there was no work
* to do.
*/
bgp_clear_route_batch(cinfo);
}
static void bgp_clearing_batch_end_event(struct event *event)
{
struct bgp *bgp = EVENT_ARG(event);
bgp_clearing_batch_end(bgp);
bgp_unlock(bgp);
}
void bgp_clearing_batch_end_event_start(struct bgp *bgp)
{
if (!event_is_scheduled(bgp->clearing_end))
bgp_lock(bgp);
EVENT_OFF(bgp->clearing_end);
event_add_timer_msec(bm->master, bgp_clearing_batch_end_event, bgp, 100, &bgp->clearing_end);
}
/* Check whether a dest's peer is relevant to a clearing batch */
bool bgp_clearing_batch_check_peer(struct bgp_clearing_info *cinfo,
const struct peer *peer)
{
struct peer *p;
p = bgp_clearing_hash_find(&cinfo->peers, peer);
return (p != NULL);
}
/*
* Check whether a clearing batch has any dests to process
*/
bool bgp_clearing_batch_dests_present(struct bgp_clearing_info *cinfo)
{
return (bgp_clearing_destlist_count(&cinfo->destlist) > 0);
}
/*
* Done with a peer clearing batch; deal with refcounts, free memory
*/
void bgp_clearing_batch_completed(struct bgp_clearing_info *cinfo)
{
struct peer *peer;
struct bgp_dest *dest;
struct bgp_clearing_dest *destinfo;
struct bgp_table *table;
/* Ensure event is not scheduled */
event_cancel_event(bm->master, &cinfo->t_sched);
/* Remove all peers and un-ref */
while ((peer = bgp_clearing_hash_pop(&cinfo->peers)) != NULL)
bgp_clearing_peer_done(peer);
/* Remove any dests/prefixes and unlock */
destinfo = bgp_clearing_destlist_pop(&cinfo->destlist);
while (destinfo) {
dest = destinfo->dest;
XFREE(MTYPE_CLEARING_BATCH, destinfo);
table = bgp_dest_table(dest);
bgp_dest_unlock_node(dest);
bgp_table_unlock(table);
destinfo = bgp_clearing_destlist_pop(&cinfo->destlist);
}
/* Free memory */
bgp_clearing_batch_free(cinfo->bgp, &cinfo);
}
/*
* Add a prefix/dest to a clearing batch
*/
void bgp_clearing_batch_add_dest(struct bgp_clearing_info *cinfo,
struct bgp_dest *dest)
{
struct bgp_clearing_dest *destinfo;
destinfo = XCALLOC(MTYPE_CLEARING_BATCH,
sizeof(struct bgp_clearing_dest));
destinfo->dest = dest;
bgp_clearing_destlist_add_tail(&cinfo->destlist, destinfo);
}
/*
* Return the next dest for batch clear processing
*/
struct bgp_dest *bgp_clearing_batch_next_dest(struct bgp_clearing_info *cinfo)
{
struct bgp_clearing_dest *destinfo;
struct bgp_dest *dest = NULL;
destinfo = bgp_clearing_destlist_pop(&cinfo->destlist);
if (destinfo) {
dest = destinfo->dest;
XFREE(MTYPE_CLEARING_BATCH, destinfo);
}
return dest;
}
/* If a clearing batch is available for 'peer', add it and return 'true',
* else return 'false'.
*/
bool bgp_clearing_batch_add_peer(struct bgp *bgp, struct peer *peer)
{
struct bgp_clearing_info *cinfo;
cinfo = bgp_clearing_info_first(&bgp->clearing_list);
if (cinfo && CHECK_FLAG(cinfo->flags, BGP_CLEARING_INFO_FLAG_OPEN)) {
if (!CHECK_FLAG(peer->flags, PEER_FLAG_CLEARING_BATCH)) {
/* Add a peer ref */
peer_lock(peer);
bgp_clearing_hash_add(&cinfo->peers, peer);
SET_FLAG(peer->flags, PEER_FLAG_CLEARING_BATCH);
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s: peer %pBP batched in %#x", __func__,
peer, cinfo->id);
}
return true;
}
return false;
}
/*
* Task callback in the main pthread to handle socket errors
* encountered in the io pthread. We avoid having the io pthread try
* to enqueue fsm events or mess with the peer struct.
*/
static void bgp_process_conn_error(struct event *event)
{
struct bgp *bgp;
struct peer *peer;
struct peer_connection *connection;
uint32_t counter = 0;
size_t list_count = 0;
bool more_p = false;
bgp = EVENT_ARG(event);
frr_with_mutex (&bgp->peer_errs_mtx) {
connection = bgp_peer_conn_errlist_pop(&bgp->peer_conn_errlist);
list_count =
bgp_peer_conn_errlist_count(&bgp->peer_conn_errlist);
}
/* If we have multiple peers with errors, try to batch some
* clearing work.
*/
if (list_count > 0)
bgp_clearing_batch_begin(bgp);
/* Dequeue peers from the error list */
while (connection != NULL) {
peer = connection->peer;
if (bgp_debug_neighbor_events(peer))
zlog_debug("%s [Event] BGP error %d on fd %d",
peer->host, connection->connection_errcode,
connection->fd);
/* Closed connection or error on the socket */
if (peer_established(connection)) {
if ((CHECK_FLAG(peer->flags, PEER_FLAG_GRACEFUL_RESTART)
|| CHECK_FLAG(peer->flags,
PEER_FLAG_GRACEFUL_RESTART_HELPER))
&& CHECK_FLAG(peer->sflags, PEER_STATUS_NSF_MODE)) {
peer->last_reset = PEER_DOWN_NSF_CLOSE_SESSION;
SET_FLAG(peer->sflags, PEER_STATUS_NSF_WAIT);
} else
peer->last_reset = PEER_DOWN_CLOSE_SESSION;
}
/* No need for keepalives, if enabled */
bgp_keepalives_off(peer->connection);
/* Drive into state-machine changes */
bgp_event_update(connection, connection->connection_errcode);
counter++;
if (counter >= bm->peer_conn_errs_dequeue_limit)
break;
connection = bgp_dequeue_conn_err(bgp, &more_p);
}
/* Reschedule event if necessary */
if (more_p)
bgp_conn_err_reschedule(bgp);
/* Done with a clearing batch */
if (list_count > 0)
bgp_clearing_batch_end(bgp);
if (bgp_debug_neighbor_events(NULL))
zlog_debug("%s: dequeued and processed %d peers", __func__,
counter);
}
/*
* Enqueue a connection with an error to be handled in the main pthread;
* this is called from the io pthread.
*/
int bgp_enqueue_conn_err(struct bgp *bgp, struct peer_connection *connection,
int errcode)
{
frr_with_mutex (&bgp->peer_errs_mtx) {
connection->connection_errcode = errcode;
/* Careful not to double-enqueue */
if (!bgp_peer_conn_errlist_anywhere(connection)) {
bgp_peer_conn_errlist_add_tail(&bgp->peer_conn_errlist,
connection);
}
}
/* Ensure an event is scheduled */
event_add_event(bm->master, bgp_process_conn_error, bgp, 0,
&bgp->t_conn_errors);
return 0;
}
/*
* Dequeue a connection that encountered a connection error; signal whether there
* are more queued peers.
*/
struct peer_connection *bgp_dequeue_conn_err(struct bgp *bgp, bool *more_p)
{
struct peer_connection *connection = NULL;
bool more = false;
frr_with_mutex (&bgp->peer_errs_mtx) {
connection = bgp_peer_conn_errlist_pop(&bgp->peer_conn_errlist);
if (bgp_peer_conn_errlist_const_first(
&bgp->peer_conn_errlist) != NULL)
more = true;
}
if (more_p)
*more_p = more;
return connection;
}
/*
* Reschedule the connection error event - probably after processing
* some of the peers on the list.
*/
void bgp_conn_err_reschedule(struct bgp *bgp)
{
event_add_event(bm->master, bgp_process_conn_error, bgp, 0,
&bgp->t_conn_errors);
}
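The enqueue/dequeue pair above is a classic mutex-guarded handoff: the io pthread only appends to the list under `peer_errs_mtx`, and the main pthread drains it in bounded batches, using the "more" flag to decide whether to reschedule itself. A self-contained sketch of that pattern, assuming nothing from FRR (all names here are illustrative):

```c
#include <pthread.h>
#include <stddef.h>

/* One errored connection; singly linked for FIFO order. */
struct err_conn {
	int errcode;
	struct err_conn *next;
};

struct err_queue {
	pthread_mutex_t mtx;
	struct err_conn *head, *tail;
};

/* Producer side (io pthread): append under the mutex. */
static void err_enqueue(struct err_queue *q, struct err_conn *c)
{
	pthread_mutex_lock(&q->mtx);
	c->next = NULL;
	if (q->tail)
		q->tail->next = c;
	else
		q->head = c;
	q->tail = c;
	pthread_mutex_unlock(&q->mtx);
}

/* Consumer side (main pthread): pop one entry and report via *more_p
 * whether work remains, so the caller can reschedule its event. */
static struct err_conn *err_dequeue(struct err_queue *q, int *more_p)
{
	struct err_conn *c;

	pthread_mutex_lock(&q->mtx);
	c = q->head;
	if (c) {
		q->head = c->next;
		if (!q->head)
			q->tail = NULL;
	}
	if (more_p)
		*more_p = (q->head != NULL);
	pthread_mutex_unlock(&q->mtx);
	return c;
}
```

Bounding the per-callback dequeue count (here, `bm->peer_conn_errs_dequeue_limit`) keeps the main pthread responsive when many peers fail at once.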
printfrr_ext_autoreg_p("BP", printfrr_bp);
static ssize_t printfrr_bp(struct fbuf *buf, struct printfrr_eargs *ea,
const void *ptr)

View file

@ -48,6 +48,11 @@ DECLARE_HOOK(bgp_hook_config_write_vrf, (struct vty *vty, struct vrf *vrf),
/* Default interval for IPv6 RAs when triggered by BGP unnumbered neighbor. */
#define BGP_UNNUM_DEFAULT_RA_INTERVAL 10
/* Max number of peers to process without rescheduling */
#define BGP_CONN_ERROR_DEQUEUE_MAX 10
/* Limit the number of clearing dests we'll process per callback */
#define BGP_CLEARING_BATCH_MAX_DESTS 100
struct update_subgroup;
struct bpacket;
struct bgp_pbr_config;
@ -102,6 +107,9 @@ enum bgp_af_index {
extern struct frr_pthread *bgp_pth_io;
extern struct frr_pthread *bgp_pth_ka;
/* FIFO list for peer connections */
PREDECL_LIST(peer_connection_fifo);
/* BGP master for system wide configurations and variables. */
struct bgp_master {
/* BGP instance list. */
@ -116,6 +124,11 @@ struct bgp_master {
/* BGP port number. */
uint16_t port;
/* FIFO list head for peer connections */
struct peer_connection_fifo_head connection_fifo;
struct event *e_process_packet;
pthread_mutex_t peer_connection_mtx;
/* Listener addresses */
struct list *addresses;
@ -214,6 +227,16 @@ struct bgp_master {
/* To preserve ordering of processing of BGP-VRFs for L3 VNIs */
struct zebra_l3_vni_head zebra_l3_vni_head;
/* ID value for peer clearing batches */
uint32_t peer_clearing_batch_id;
/* Limits for batched peer clearing code:
* Max number of errored peers to process without rescheduling
*/
uint32_t peer_conn_errs_dequeue_limit;
/* Limit the number of clearing dests we'll process per callback */
uint32_t peer_clearing_batch_max_dests;
QOBJ_FIELDS;
};
DECLARE_QOBJ_TYPE(bgp_master);
@ -385,6 +408,81 @@ struct as_confed {
struct bgp_mplsvpn_nh_label_bind_cache;
PREDECL_RBTREE_UNIQ(bgp_mplsvpn_nh_label_bind_cache);
/* List of peers that have connection errors in the io pthread */
PREDECL_DLIST(bgp_peer_conn_errlist);
/* List of info about peers that are being cleared from BGP RIBs in a batch */
PREDECL_DLIST(bgp_clearing_info);
/* Hash of peers in clearing info object */
PREDECL_HASH(bgp_clearing_hash);
/* List of dests that need to be processed in a clearing batch */
PREDECL_LIST(bgp_clearing_destlist);
struct bgp_clearing_dest {
struct bgp_dest *dest;
struct bgp_clearing_destlist_item link;
};
/* Info about a batch of peers that need to be cleared from the RIB.
* If many peers need to be cleared, we process them in batches, taking
* one walk through the RIB for each batch. This is only used for "all"
* afi/safis, typically when processing peer connection errors.
*/
struct bgp_clearing_info {
/* Owning bgp instance */
struct bgp *bgp;
/* Hash of peers */
struct bgp_clearing_hash_head peers;
/* Batch ID, for debugging/logging */
uint32_t id;
/* Flags */
uint32_t flags;
/* List of dests - wrapped by a small wrapper struct */
struct bgp_clearing_destlist_head destlist;
/* Event to schedule/reschedule processing */
struct event *t_sched;
/* Info for rescheduling the RIB walk */
afi_t last_afi;
safi_t last_safi;
struct prefix last_pfx;
/* For some afi/safi (vpn/evpn e.g.), bgp may do an inner walk
* for a related table; the 'last' info represents the outer walk,
* and this info represents the inner walk.
*/
afi_t inner_afi;
safi_t inner_safi;
struct prefix inner_pfx;
/* Map of afi/safi so we don't re-walk any tables */
uint8_t table_map[AFI_MAX][SAFI_MAX];
/* Counters: current iteration, overall total, and processed count. */
uint32_t curr_counter;
uint32_t total_counter;
uint32_t total_processed;
/* TODO -- id, serial number, for debugging/logging? */
/* Linkage for list of batches per bgp */
struct bgp_clearing_info_item link;
};
/* Batch is open, new peers can be added */
#define BGP_CLEARING_INFO_FLAG_OPEN (1 << 0)
/* Batch is resuming iteration after yielding */
#define BGP_CLEARING_INFO_FLAG_RESUME (1 << 1)
/* Batch has 'inner' resume info set */
#define BGP_CLEARING_INFO_FLAG_INNER (1 << 2)
/* BGP instance structure. */
struct bgp {
/* AS number of this BGP instance. */
@ -464,6 +562,8 @@ struct bgp {
/* start-up timer on only once at the beginning */
struct event *t_startup;
struct event *clearing_end;
uint32_t v_maxmed_onstartup; /* Duration of max-med on start-up */
#define BGP_MAXMED_ONSTARTUP_UNCONFIGURED 0 /* 0 means off, its the default */
uint32_t maxmed_onstartup_value; /* Max-med value when active on
@ -870,6 +970,21 @@ struct bgp {
uint16_t tcp_keepalive_intvl;
uint16_t tcp_keepalive_probes;
/* List of peers that have connection errors in the IO pthread */
struct bgp_peer_conn_errlist_head peer_conn_errlist;
/* Mutex that guards the connection-errors list */
pthread_mutex_t peer_errs_mtx;
/* Event indicating that there have been connection errors; this
* is typically signalled in the IO pthread; it's handled in the
* main pthread.
*/
struct event *t_conn_errors;
/* List of batches of peers being cleared from BGP RIBs */
struct bgp_clearing_info_head clearing_list;
struct timeval ebgprequirespolicywarning;
#define FIFTEENMINUTE2USEC (int64_t)15 * 60 * 1000000
@ -1213,8 +1328,28 @@ struct addpath_paths_limit {
uint16_t receive;
};
/*
The peer data structure has incoming and outgoing peer connection
variables. In the early stages of the FSM, it is possible to have
both an incoming and an outgoing connection at the same time. These
connections both have events scheduled that produce logs, and it is
very hard to tell those debugs apart when looking at the log files,
so the debugs now include direction strings to help figure out what
is going on. At a later stage in the FSM one of the connections is
closed and the other kept; the one being kept is moved to the
ESTABLISHED connection direction so that the debugs can be told
apart.
*/
enum connection_direction {
UNKNOWN,
CONNECTION_INCOMING,
CONNECTION_OUTGOING,
ESTABLISHED,
};
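The header also declares `bgp_peer_get_connection_direction()`, which presumably maps this enum to the direction strings the comment above describes. An illustrative (not verbatim-FRR) mapping, self-contained with a copy of the enum:

```c
#include <string.h>

enum connection_direction {
	UNKNOWN,
	CONNECTION_INCOMING,
	CONNECTION_OUTGOING,
	ESTABLISHED,
};

/* Hypothetical sketch of a direction-to-string helper for debug logs;
 * FRR's actual implementation may use different strings. */
static const char *conn_dir_str(enum connection_direction dir)
{
	switch (dir) {
	case CONNECTION_INCOMING:
		return "incoming";
	case CONNECTION_OUTGOING:
		return "outgoing";
	case ESTABLISHED:
		return "established";
	case UNKNOWN:
	default:
		return "unknown";
	}
}
```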
struct peer_connection {
struct peer *peer;
enum connection_direction dir;
/* Status of the peer connection. */
enum bgp_fsm_status status;
@ -1251,18 +1386,30 @@ struct peer_connection {
struct event *t_pmax_restart;
struct event *t_routeadv;
struct event *t_process_packet;
struct event *t_process_packet_error;
struct event *t_stop_with_notify;
/* Linkage for list connections with errors, from IO pthread */
struct bgp_peer_conn_errlist_item conn_err_link;
/* Connection error code */
uint16_t connection_errcode;
union sockunion su;
#define BGP_CONNECTION_SU_UNSPEC(connection) \
(connection->su.sa.sa_family == AF_UNSPEC)
union sockunion *su_local; /* Sockunion of local address. */
union sockunion *su_remote; /* Sockunion of remote address. */
/* For FIFO list */
struct peer_connection_fifo_item fifo_item;
};
/* Declare the FIFO list implementation */
DECLARE_LIST(peer_connection_fifo, struct peer_connection, fifo_item);
const char *bgp_peer_get_connection_direction(struct peer_connection *connection);
extern struct peer_connection *bgp_peer_connection_new(struct peer *peer);
extern void bgp_peer_connection_free(struct peer_connection **connection);
extern void bgp_peer_connection_buffers_free(struct peer_connection *connection);
@ -1547,6 +1694,8 @@ struct peer {
#define PEER_FLAG_EXTENDED_LINK_BANDWIDTH (1ULL << 39)
#define PEER_FLAG_DUAL_AS (1ULL << 40)
#define PEER_FLAG_CAPABILITY_LINK_LOCAL (1ULL << 41)
/* Peer is part of a batch clearing its routes */
#define PEER_FLAG_CLEARING_BATCH (1ULL << 42)
/*
*GR-Disabled mode means unset PEER_FLAG_GRACEFUL_RESTART
@ -1944,6 +2093,9 @@ struct peer {
/* Add-Path Paths-Limit */
struct addpath_paths_limit addpath_paths_limit[AFI_MAX][SAFI_MAX];
/* Linkage for hash of clearing peers being cleared in a batch */
struct bgp_clearing_hash_item clear_hash_link;
QOBJ_FIELDS;
};
DECLARE_QOBJ_TYPE(peer);
@ -2278,6 +2430,14 @@ enum bgp_martian_type {
BGP_MARTIAN_SOO, /* bgp->evpn_info->macvrf_soo */
};
/* Distinguish the reason why the peer is not active. */
enum bgp_peer_active {
BGP_PEER_ACTIVE,
BGP_PEER_CONNECTION_UNSPECIFIED,
BGP_PEER_BFD_DOWN,
BGP_PEER_AF_UNCONFIGURED,
};
extern const struct message bgp_martian_type_str[];
extern const char *bgp_martian_type2str(enum bgp_martian_type mt);
@ -2326,7 +2486,7 @@ extern struct peer *peer_unlock_with_caller(const char *, struct peer *);
extern enum bgp_peer_sort peer_sort(struct peer *peer);
extern enum bgp_peer_sort peer_sort_lookup(struct peer *peer);
extern bool peer_active(struct peer_connection *connection);
extern enum bgp_peer_active peer_active(struct peer_connection *connection);
extern bool peer_active_nego(struct peer *);
extern bool peer_afc_received(struct peer *peer);
extern bool peer_afc_advertised(struct peer *peer);
@ -2584,6 +2744,11 @@ void bgp_gr_apply_running_config(void);
int bgp_global_gr_init(struct bgp *bgp);
int bgp_peer_gr_init(struct peer *peer);
/* APIs for the per-bgp peer connection error list */
int bgp_enqueue_conn_err(struct bgp *bgp, struct peer_connection *connection,
int errcode);
struct peer_connection *bgp_dequeue_conn_err(struct bgp *bgp, bool *more_p);
void bgp_conn_err_reschedule(struct bgp *bgp);
#define BGP_GR_ROUTER_DETECT_AND_SEND_CAPABILITY_TO_ZEBRA(_bgp, _peer_list) \
do { \
@ -2902,6 +3067,27 @@ extern void srv6_function_free(struct bgp_srv6_function *func);
extern void bgp_session_reset_safe(struct peer *peer, struct listnode **nnode);
/* If a clearing batch is available for 'peer', add it and return 'true',
* else return 'false'.
*/
bool bgp_clearing_batch_add_peer(struct bgp *bgp, struct peer *peer);
/* Add a prefix/dest to a clearing batch */
void bgp_clearing_batch_add_dest(struct bgp_clearing_info *cinfo,
struct bgp_dest *dest);
/* Check whether a dest's peer is relevant to a clearing batch */
bool bgp_clearing_batch_check_peer(struct bgp_clearing_info *cinfo,
const struct peer *peer);
/* Check whether a clearing batch has any dests to process */
bool bgp_clearing_batch_dests_present(struct bgp_clearing_info *cinfo);
/* Returns the next dest for batch clear processing */
struct bgp_dest *bgp_clearing_batch_next_dest(struct bgp_clearing_info *cinfo);
/* Done with a peer clearing batch; deal with refcounts, free memory */
void bgp_clearing_batch_completed(struct bgp_clearing_info *cinfo);
/* Start a new batch of peers to clear */
void bgp_clearing_batch_begin(struct bgp *bgp);
/* Schedule the end of the currently-open batch of peers to clear */
void bgp_clearing_batch_end_event_start(struct bgp *bgp);
#ifdef _FRR_ATTRIBUTE_PRINTFRR
/* clang-format off */
#pragma FRR printfrr_ext "%pBP" (struct peer *)

View file

@ -946,8 +946,7 @@ void add_vnc_route(struct rfapi_descriptor *rfd, /* cookie, VPN UN addr, peer */
}
}
if (attrhash_cmp(bpi->attr, new_attr)
&& !CHECK_FLAG(bpi->flags, BGP_PATH_REMOVED)) {
if (!CHECK_FLAG(bpi->flags, BGP_PATH_REMOVED) && attrhash_cmp(bpi->attr, new_attr)) {
bgp_attr_unintern(&new_attr);
bgp_dest_unlock_node(bn);
@ -3546,6 +3545,8 @@ DEFUN (skiplist_debug_cli,
void rfapi_init(void)
{
rfapi_rib_init();
rfapi_import_init();
bgp_rfapi_cfg_init();
vnc_debug_init();
@ -3576,6 +3577,12 @@ void rfapi_init(void)
rfapi_vty_init();
}
void rfapi_terminate(void)
{
rfapi_import_terminate();
rfapi_rib_terminate();
}
#ifdef DEBUG_RFAPI
static void rfapi_print_exported(struct bgp *bgp)
{

View file

@ -14,6 +14,7 @@
#include "bgpd/bgp_nexthop.h"
extern void rfapi_init(void);
extern void rfapi_terminate(void);
extern void vnc_zebra_init(struct event_loop *master);
extern void vnc_zebra_destroy(void);

View file

@ -52,14 +52,23 @@
#undef DEBUG_IT_NODES
#undef DEBUG_BI_SEARCH
/*
* Hash to keep track of outstanding timers so we can force them to
* expire at shutdown time, thus freeing their allocated memory.
*/
PREDECL_HASH(rwcb);
/*
* Allocated for each withdraw timer instance; freed when the timer
* expires or is canceled
*/
struct rfapi_withdraw {
struct rwcb_item rwcbi;
struct rfapi_import_table *import_table;
struct agg_node *node;
struct bgp_path_info *info;
void (*timer_service_func)(struct event *t); /* for cleanup */
safi_t safi; /* used only for bulk operations */
/*
* For import table node reference count checking (i.e., debugging).
@ -72,6 +81,19 @@ struct rfapi_withdraw {
int lockoffset;
};
static int _rwcb_cmp(const struct rfapi_withdraw *w1, const struct rfapi_withdraw *w2)
{
return (w1 != w2);
}
static uint32_t _rwcb_hash(const struct rfapi_withdraw *w)
{
return (uintptr_t)w & 0xffffffff;
}
DECLARE_HASH(rwcb, struct rfapi_withdraw, rwcbi, _rwcb_cmp, _rwcb_hash);
static struct rwcb_head _rwcbhash;
/*
* DEBUG FUNCTION
* Count remote routes and compare with actively-maintained values.
@ -826,6 +848,7 @@ static void rfapiBgpInfoChainFree(struct bgp_path_info *bpi)
struct rfapi_withdraw *wcb =
EVENT_ARG(bpi->extra->vnc->vnc.import.timer);
rwcb_del(&_rwcbhash, wcb);
XFREE(MTYPE_RFAPI_WITHDRAW, wcb);
EVENT_OFF(bpi->extra->vnc->vnc.import.timer);
}
@ -1329,11 +1352,11 @@ rfapiRouteInfo2NextHopEntry(struct rfapi_ip_prefix *rprefix,
bgp_attr_extcom_tunnel_type(bpi->attr, &tun_type);
if (tun_type == BGP_ENCAP_TYPE_MPLS) {
struct prefix p;
struct prefix pfx;
/* MPLS carries UN address in next hop */
rfapiNexthop2Prefix(bpi->attr, &p);
if (p.family != AF_UNSPEC) {
rfapiQprefix2Raddr(&p, &new->un_address);
rfapiNexthop2Prefix(bpi->attr, &pfx);
if (pfx.family != AF_UNSPEC) {
rfapiQprefix2Raddr(&pfx, &new->un_address);
have_vnc_tunnel_un = 1;
}
}
@ -1750,7 +1773,7 @@ struct rfapi_next_hop_entry *rfapiRouteNode2NextHopList(
* Add non-withdrawn routes from less-specific prefix
*/
if (parent) {
const struct prefix *p = agg_node_get_prefix(parent);
p = agg_node_get_prefix(parent);
rib_rn = rfd_rib_table ? agg_node_get(rfd_rib_table, p) : NULL;
rfapiQprefix2Rprefix(p, &rprefix);
@ -2349,6 +2372,7 @@ static void rfapiWithdrawTimerVPN(struct event *t)
/* This callback is responsible for the withdraw object's memory */
if (early_exit) {
rwcb_del(&_rwcbhash, wcb);
XFREE(MTYPE_RFAPI_WITHDRAW, wcb);
return;
}
@ -2462,6 +2486,7 @@ done:
RFAPI_CHECK_REFCOUNT(wcb->node, SAFI_MPLS_VPN, 1 + wcb->lockoffset);
agg_unlock_node(wcb->node); /* decr ref count */
rwcb_del(&_rwcbhash, wcb);
XFREE(MTYPE_RFAPI_WITHDRAW, wcb);
}
@ -2705,6 +2730,7 @@ static void rfapiWithdrawTimerEncap(struct event *t)
done:
RFAPI_CHECK_REFCOUNT(wcb->node, SAFI_ENCAP, 1);
agg_unlock_node(wcb->node); /* decr ref count */
rwcb_del(&_rwcbhash, wcb);
XFREE(MTYPE_RFAPI_WITHDRAW, wcb);
skiplist_free(vpn_node_sl);
}
@ -2754,6 +2780,8 @@ rfapiBiStartWithdrawTimer(struct rfapi_import_table *import_table,
wcb->node = rn;
wcb->info = bpi;
wcb->import_table = import_table;
wcb->timer_service_func = timer_service_func;
rwcb_add(&_rwcbhash, wcb);
bgp_attr_intern(bpi->attr);
if (VNC_DEBUG(VERBOSE)) {
@ -2819,6 +2847,7 @@ static void rfapiExpireEncapNow(struct rfapi_import_table *it,
wcb->info = bpi;
wcb->node = rn;
wcb->import_table = it;
rwcb_add(&_rwcbhash, wcb);
memset(&t, 0, sizeof(t));
t.arg = wcb;
rfapiWithdrawTimerEncap(&t); /* frees wcb */
@ -3057,6 +3086,7 @@ static void rfapiBgpInfoFilteredImportEncap(
struct rfapi_withdraw *wcb = EVENT_ARG(
bpi->extra->vnc->vnc.import.timer);
rwcb_del(&_rwcbhash, wcb);
XFREE(MTYPE_RFAPI_WITHDRAW, wcb);
EVENT_OFF(bpi->extra->vnc->vnc.import
.timer);
@ -3083,6 +3113,7 @@ static void rfapiBgpInfoFilteredImportEncap(
wcb->info = bpi;
wcb->node = rn;
wcb->import_table = import_table;
rwcb_add(&_rwcbhash, wcb);
memset(&t, 0, sizeof(t));
t.arg = wcb;
rfapiWithdrawTimerEncap(
@ -3149,6 +3180,7 @@ static void rfapiBgpInfoFilteredImportEncap(
struct rfapi_withdraw *wcb =
EVENT_ARG(bpi->extra->vnc->vnc.import.timer);
rwcb_del(&_rwcbhash, wcb);
XFREE(MTYPE_RFAPI_WITHDRAW, wcb);
EVENT_OFF(bpi->extra->vnc->vnc.import.timer);
}
@ -3192,7 +3224,7 @@ static void rfapiBgpInfoFilteredImportEncap(
__func__, rn);
#endif
for (m = RFAPI_MONITOR_ENCAP(rn); m; m = m->next) {
const struct prefix *p;
const struct prefix *pfx;
/*
* For each referenced bpi/route, copy the ENCAP route's
@ -3220,9 +3252,9 @@ static void rfapiBgpInfoFilteredImportEncap(
* list
* per prefix.
*/
p = agg_node_get_prefix(m->node);
pfx = agg_node_get_prefix(m->node);
referenced_vpn_prefix =
agg_node_get(referenced_vpn_table, p);
agg_node_get(referenced_vpn_table, pfx);
assert(referenced_vpn_prefix);
for (mnext = referenced_vpn_prefix->info; mnext;
mnext = mnext->next) {
@ -3293,6 +3325,7 @@ static void rfapiExpireVpnNow(struct rfapi_import_table *it,
wcb->node = rn;
wcb->import_table = it;
wcb->lockoffset = lockoffset;
rwcb_add(&_rwcbhash, wcb);
memset(&t, 0, sizeof(t));
t.arg = wcb;
rfapiWithdrawTimerVPN(&t); /* frees wcb */
@ -3510,6 +3543,7 @@ void rfapiBgpInfoFilteredImportVPN(
struct rfapi_withdraw *wcb = EVENT_ARG(
bpi->extra->vnc->vnc.import.timer);
rwcb_del(&_rwcbhash, wcb);
XFREE(MTYPE_RFAPI_WITHDRAW, wcb);
EVENT_OFF(bpi->extra->vnc->vnc.import
.timer);
@ -3729,6 +3763,7 @@ void rfapiBgpInfoFilteredImportVPN(
struct rfapi_withdraw *wcb =
EVENT_ARG(bpi->extra->vnc->vnc.import.timer);
rwcb_del(&_rwcbhash, wcb);
XFREE(MTYPE_RFAPI_WITHDRAW, wcb);
EVENT_OFF(bpi->extra->vnc->vnc.import.timer);
}
@ -4480,6 +4515,7 @@ static void rfapiDeleteRemotePrefixesIt(
RFAPI_UPDATE_ITABLE_COUNT(
bpi, wcb->import_table,
afi, 1);
rwcb_del(&_rwcbhash, wcb);
XFREE(MTYPE_RFAPI_WITHDRAW,
wcb);
EVENT_OFF(bpi->extra->vnc->vnc
@ -4804,3 +4840,33 @@ uint32_t rfapiGetHolddownFromLifetime(uint32_t lifetime)
else
return RFAPI_LIFETIME_INFINITE_WITHDRAW_DELAY;
}
void rfapi_import_init(void)
{
rwcb_init(&_rwcbhash);
}
void rfapi_import_terminate(void)
{
struct rfapi_withdraw *wcb;
struct bgp_path_info *bpi;
void (*timer_service_func)(struct event *t);
struct event t;
vnc_zlog_debug_verbose("%s: cleaning up %zu pending timers", __func__,
rwcb_count(&_rwcbhash));
/*
* clean up memory allocations stored in pending timers
*/
while ((wcb = rwcb_pop(&_rwcbhash))) {
bpi = wcb->info;
assert(wcb == EVENT_ARG(bpi->extra->vnc->vnc.import.timer));
EVENT_OFF(bpi->extra->vnc->vnc.import.timer);
timer_service_func = wcb->timer_service_func;
memset(&t, 0, sizeof(t));
t.arg = wcb;
(*timer_service_func)(&t); /* frees wcb */
}
}


@ -225,4 +225,7 @@ extern void rfapiCountAllItRoutes(int *pALRcount, /* active local routes */
--------------------------------------------*/
extern uint32_t rfapiGetHolddownFromLifetime(uint32_t lifetime);
extern void rfapi_import_init(void);
extern void rfapi_import_terminate(void);
#endif /* QUAGGA_HGP_RFAPI_IMPORT_H */


@ -18,6 +18,7 @@
#include "lib/log.h"
#include "lib/skiplist.h"
#include "lib/workqueue.h"
#include <typesafe.h>
#include "bgpd/bgpd.h"
#include "bgpd/bgp_route.h"
@ -40,12 +41,11 @@
#define DEBUG_PENDING_DELETE_ROUTE 0
#define DEBUG_NHL 0
#define DEBUG_RIB_SL_RD 0
#define DEBUG_CLEANUP 0
#define DEBUG_CLEANUP 0
/* forward decl */
#if DEBUG_NHL
static void rfapiRibShowRibSl(void *stream, struct prefix *pfx,
struct skiplist *sl);
static void rfapiRibShowRibSl(void *stream, const struct prefix *pfx, struct skiplist *sl);
#endif
/*
@ -234,9 +234,45 @@ void rfapiFreeRfapiVnOptionChain(struct rfapi_vn_option *p)
}
/*
* Hash to keep track of outstanding timers so we can force them to
* expire at shutdown time, thus freeing their allocated memory.
*/
PREDECL_HASH(rrtcb);
/*
* Timer control block for recently-deleted and expired routes
*/
struct rfapi_rib_tcb {
struct rrtcb_item tcbi;
struct rfapi_descriptor *rfd;
struct skiplist *sl;
struct rfapi_info *ri;
struct agg_node *rn;
int flags;
#define RFAPI_RIB_TCB_FLAG_DELETED 0x00000001
};
static int _rrtcb_cmp(const struct rfapi_rib_tcb *t1, const struct rfapi_rib_tcb *t2)
{
return (t1 != t2);
}
static uint32_t _rrtcb_hash(const struct rfapi_rib_tcb *t)
{
return (uintptr_t)t & 0xffffffff;
}
DECLARE_HASH(rrtcb, struct rfapi_rib_tcb, tcbi, _rrtcb_cmp, _rrtcb_hash);
static struct rrtcb_head _rrtcbhash;
static void rfapi_info_free(struct rfapi_info *goner)
{
if (goner) {
#if DEBUG_CLEANUP
zlog_debug("%s: ri %p, timer %p", __func__, goner, goner->timer);
#endif
if (goner->tea_options) {
rfapiFreeBgpTeaOptionChain(goner->tea_options);
goner->tea_options = NULL;
@ -253,32 +289,19 @@ static void rfapi_info_free(struct rfapi_info *goner)
struct rfapi_rib_tcb *tcb;
tcb = EVENT_ARG(goner->timer);
#if DEBUG_CLEANUP
zlog_debug("%s: ri %p, tcb %p", __func__, goner, tcb);
#endif
EVENT_OFF(goner->timer);
rrtcb_del(&_rrtcbhash, tcb);
XFREE(MTYPE_RFAPI_RECENT_DELETE, tcb);
}
XFREE(MTYPE_RFAPI_INFO, goner);
}
}
/*
* Timer control block for recently-deleted and expired routes
*/
struct rfapi_rib_tcb {
struct rfapi_descriptor *rfd;
struct skiplist *sl;
struct rfapi_info *ri;
struct agg_node *rn;
int flags;
#define RFAPI_RIB_TCB_FLAG_DELETED 0x00000001
};
/*
* remove route from rib
*/
static void rfapiRibExpireTimer(struct event *t)
static void _rfapiRibExpireTimer(struct rfapi_rib_tcb *tcb)
{
struct rfapi_rib_tcb *tcb = EVENT_ARG(t);
RFAPI_RIB_CHECK_COUNTS(1, 0);
/*
@ -309,11 +332,22 @@ static void rfapiRibExpireTimer(struct event *t)
agg_unlock_node(tcb->rn);
}
rrtcb_del(&_rrtcbhash, tcb);
XFREE(MTYPE_RFAPI_RECENT_DELETE, tcb);
RFAPI_RIB_CHECK_COUNTS(1, 0);
}
/*
* remove route from rib
*/
static void rfapiRibExpireTimer(struct event *t)
{
struct rfapi_rib_tcb *tcb = EVENT_ARG(t);
_rfapiRibExpireTimer(tcb);
}
static void rfapiRibStartTimer(struct rfapi_descriptor *rfd,
struct rfapi_info *ri,
struct agg_node *rn, /* route node attached to */
@ -349,6 +383,8 @@ static void rfapiRibStartTimer(struct rfapi_descriptor *rfd,
event_add_timer(bm->master, rfapiRibExpireTimer, tcb, ri->lifetime,
&ri->timer);
rrtcb_add(&_rrtcbhash, tcb);
}
extern void rfapi_rib_key_init(struct prefix *prefix, /* may be NULL */
@ -519,6 +555,7 @@ void rfapiRibClear(struct rfapi_descriptor *rfd)
tcb = EVENT_ARG(
ri->timer);
EVENT_OFF(ri->timer);
rrtcb_del(&_rrtcbhash, tcb);
XFREE(MTYPE_RFAPI_RECENT_DELETE,
tcb);
}
@ -852,11 +889,6 @@ static void process_pending_node(struct bgp *bgp, struct rfapi_descriptor *rfd,
int rib_node_started_nonempty = 0;
int sendingsomeroutes = 0;
const struct prefix *p;
#if DEBUG_PROCESS_PENDING_NODE
unsigned int count_rib_initial = 0;
unsigned int count_pend_vn_initial = 0;
unsigned int count_pend_cost_initial = 0;
#endif
assert(pn);
p = agg_node_get_prefix(pn);
@ -885,19 +917,6 @@ static void process_pending_node(struct bgp *bgp, struct rfapi_descriptor *rfd,
slPendPt = (struct skiplist *)(pn->aggregate);
lPendCost = (struct list *)(pn->info);
#if DEBUG_PROCESS_PENDING_NODE
/* debugging */
if (slRibPt)
count_rib_initial = skiplist_count(slRibPt);
if (slPendPt)
count_pend_vn_initial = skiplist_count(slPendPt);
if (lPendCost && lPendCost != (struct list *)1)
count_pend_cost_initial = lPendCost->count;
#endif
/*
* Handle special case: delete all routes at prefix
*/
@ -920,6 +939,7 @@ static void process_pending_node(struct bgp *bgp, struct rfapi_descriptor *rfd,
tcb = EVENT_ARG(ri->timer);
EVENT_OFF(ri->timer);
rrtcb_del(&_rrtcbhash, tcb);
XFREE(MTYPE_RFAPI_RECENT_DELETE, tcb);
}
@ -1005,6 +1025,7 @@ static void process_pending_node(struct bgp *bgp, struct rfapi_descriptor *rfd,
tcb = EVENT_ARG(ori->timer);
EVENT_OFF(ori->timer);
rrtcb_del(&_rrtcbhash, tcb);
XFREE(MTYPE_RFAPI_RECENT_DELETE, tcb);
}
@ -1017,6 +1038,11 @@ static void process_pending_node(struct bgp *bgp, struct rfapi_descriptor *rfd,
#endif
} else {
#if DEBUG_PROCESS_PENDING_NODE
vnc_zlog_debug_verbose("%s: slRibPt ri %p matched in pending list",
__func__, ori);
#endif
/*
* Found in pending list. If same lifetime,
* cost, options,
@ -1040,14 +1066,10 @@ static void process_pending_node(struct bgp *bgp, struct rfapi_descriptor *rfd,
rfapi_info_free(
ri); /* grr... */
}
}
#if DEBUG_PROCESS_PENDING_NODE
vnc_zlog_debug_verbose(
"%s: slRibPt ri %p matched in pending list, %s",
__func__, ori,
(same ? "same info"
: "different info"));
vnc_zlog_debug_verbose("%s: same info", __func__);
#endif
}
}
}
/*
@ -1339,6 +1361,7 @@ callback:
tcb = EVENT_ARG(ri->timer);
EVENT_OFF(ri->timer);
rrtcb_del(&_rrtcbhash, tcb);
XFREE(MTYPE_RFAPI_RECENT_DELETE, tcb);
}
RFAPI_RIB_CHECK_COUNTS(0, delete_list->count);
@ -2285,8 +2308,7 @@ static int print_rib_sl(int (*fp)(void *, const char *, ...), struct vty *vty,
/*
* This one is for debugging (set stream to NULL to send output to log)
*/
static void rfapiRibShowRibSl(void *stream, struct prefix *pfx,
struct skiplist *sl)
static void rfapiRibShowRibSl(void *stream, const struct prefix *pfx, struct skiplist *sl)
{
int (*fp)(void *, const char *, ...);
struct vty *vty;
@ -2426,3 +2448,25 @@ void rfapiRibShowResponses(void *stream, struct prefix *pfx_match,
fp(out, "\n");
}
}
void rfapi_rib_init(void)
{
rrtcb_init(&_rrtcbhash);
}
void rfapi_rib_terminate(void)
{
struct rfapi_rib_tcb *tcb;
vnc_zlog_debug_verbose("%s: cleaning up %zu pending timers", __func__,
rrtcb_count(&_rrtcbhash));
/*
* Clean up memory allocations stored in pending timers
*/
while ((tcb = rrtcb_pop(&_rrtcbhash))) {
assert(tcb == EVENT_ARG(tcb->ri->timer));
EVENT_OFF(tcb->ri->timer);
_rfapiRibExpireTimer(tcb); /* deletes hash entry, frees tcb */
}
}


@ -138,4 +138,7 @@ extern int rfapi_rib_key_cmp(const void *k1, const void *k2);
extern void rfapiAdbFree(struct rfapi_adb *adb);
extern void rfapi_rib_init(void);
extern void rfapi_rib_terminate(void);
#endif /* QUAGGA_HGP_RFAPI_RIB_H */


@ -338,13 +338,12 @@ static int process_unicast_route(struct bgp *bgp, /* in */
hattr = *attr;
if (rmap) {
struct bgp_path_info info;
struct bgp_path_info pinfo = {};
route_map_result_t ret;
memset(&info, 0, sizeof(info));
info.peer = peer;
info.attr = &hattr;
ret = route_map_apply(rmap, prefix, &info);
pinfo.peer = peer;
pinfo.attr = &hattr;
ret = route_map_apply(rmap, prefix, &pinfo);
if (ret == RMAP_DENYMATCH) {
bgp_attr_flush(&hattr);
vnc_zlog_debug_verbose(
@ -768,13 +767,12 @@ static void vnc_import_bgp_add_route_mode_plain(struct bgp *bgp,
hattr = *attr;
if (rmap) {
struct bgp_path_info info;
struct bgp_path_info pinfo = {};
route_map_result_t ret;
memset(&info, 0, sizeof(info));
info.peer = peer;
info.attr = &hattr;
ret = route_map_apply(rmap, prefix, &info);
pinfo.peer = peer;
pinfo.attr = &hattr;
ret = route_map_apply(rmap, prefix, &pinfo);
if (ret == RMAP_DENYMATCH) {
bgp_attr_flush(&hattr);
vnc_zlog_debug_verbose(


@ -467,6 +467,7 @@ AC_C_FLAG([-Wbad-function-cast])
AC_C_FLAG([-Wwrite-strings])
AC_C_FLAG([-Wundef])
AC_C_FLAG([-Wimplicit-fallthrough])
AC_C_FLAG([-Wshadow])
if test "$enable_gcc_ultra_verbose" = "yes" ; then
AC_C_FLAG([-Wcast-qual])
AC_C_FLAG([-Wmissing-noreturn])
@ -474,7 +475,6 @@ if test "$enable_gcc_ultra_verbose" = "yes" ; then
AC_C_FLAG([-Wunreachable-code])
AC_C_FLAG([-Wpacked])
AC_C_FLAG([-Wpadded])
AC_C_FLAG([-Wshadow])
else
AC_C_FLAG([-Wno-unused-result])
fi
@ -732,8 +732,6 @@ AC_ARG_ENABLE([mgmtd_local_validations],
AS_HELP_STRING([--enable-mgmtd-local-validations], [dev: unimplemented local validation]))
AC_ARG_ENABLE([mgmtd_test_be_client],
AS_HELP_STRING([--enable-mgmtd-test-be-client], [build test backend client]))
AC_ARG_ENABLE([rustlibd],
AS_HELP_STRING([--enable-rustlibd], [enable rust library based daemon template]))
AC_ARG_ENABLE([fpm_listener],
AS_HELP_STRING([--enable-fpm-listener], [build fpm listener test program]))
AC_ARG_ENABLE([ripd],
@ -1056,6 +1054,11 @@ AC_MSG_FAILURE([Please specify a number from 0-12 for log precision ARG])
;;
esac
with_log_timestamp_precision=${with_log_timestamp_precision:-0}
if test "${with_log_timestamp_precision}" != 0; then
AC_SUBST([LOG_TIMESTAMP_PRECISION_CLI], ["
log timestamp precision ${with_log_timestamp_precision}"])
AM_SUBST_NOTMAKE([LOG_TIMESTAMP_PRECISION_CLI])
fi
AC_DEFINE_UNQUOTED([LOG_TIMESTAMP_PRECISION], [${with_log_timestamp_precision}], [Startup zlog timestamp precision])
AC_DEFINE_UNQUOTED([VTYSH_PAGER], ["$VTYSH_PAGER"], [What pager to use])
@ -1874,10 +1877,6 @@ AS_IF([test "$enable_ripngd" != "no"], [
AC_DEFINE([HAVE_RIPNGD], [1], [ripngd])
])
AS_IF([test "$enable_rustlibd" != "no"], [
AC_DEFINE([HAVE_RUSTLIBD], [1], [rustlibd])
])
AS_IF([test "$enable_ospfd" != "no"], [
AC_DEFINE([HAVE_OSPFD], [1], [ospfd])
])
@ -2052,7 +2051,7 @@ if test "$enable_snmp" != "" -a "$enable_snmp" != "no"; then
# net-snmp lists all of its own dependencies. we absolutely do not want that
# among other things we avoid a GPL vs. OpenSSL license conflict here
for removelib in crypto ssl sensors pci wrap; do
SNMP_LIBS="`echo $SNMP_LIBS | sed -e 's/\(^\|\s\)-l'$removelib'\b/ /g' -e 's/\(^\|\s\)\([^\s]*\/\)\?lib'$removelib'\.[^\s]\+\b/ /g'`"
SNMP_LIBS="`echo $SNMP_LIBS | sed -e 's/-l'$removelib'/ /g'`"
done
AC_MSG_CHECKING([whether we can link to Net-SNMP])
AC_LINK_IFELSE_FLAGS([$SNMP_CFLAGS], [$SNMP_LIBS], [AC_LANG_PROGRAM([
@ -2119,40 +2118,6 @@ if test "$enable_config_rollbacks" = "yes"; then
])
fi
dnl ------------------------------------------------------
dnl rust general (add to conditional any new rust daemons)
dnl ------------------------------------------------------
if test "$enable_rustlibd" = "yes"; then
AC_PATH_PROG([CARGO], [cargo], [notfound])
AS_IF([test "$CARGO" = "notfound"], [AC_MSG_ERROR([cargo is required])])
AC_PATH_PROG([RUSTC], [rustc], [notfound])
AS_IF([test "$RUSTC" = "notfound"], [AC_MSG_ERROR([rustc is required])])
if test "$enable_dev_build" = "yes"; then
CARGO_TARGET_DIR=debug
else
CARGO_TARGET_DIR=release
fi
AC_SUBST(CARGO_TARGET_DIR)
fi
dnl ---------------
dnl rustlibd
dnl ---------------
if test "$enable_rustlibd" = "yes"; then
AC_CONFIG_FILES([rustlibd/build.rs rustlibd/wrapper.h rustlibd/Cargo.toml])
AC_CONFIG_COMMANDS([gen-dot-cargo-config], [
if test "$ac_abs_top_builddir" != "$ac_abs_top_srcdir"; then
mkdir -p ${srcdir}/rustlibd/.cargo
if ! test -e "${srcdir}/rustlibd/.cargo/config.toml"; then
printf '[[build]]\ntarget-dir = "%s"\n' "${ac_abs_top_builddir}/rustlibd/target" > "${srcdir}/rustlibd/.cargo/config.toml"
fi
fi]
)
fi
dnl ---------------
dnl sysrepo
dnl ---------------
@ -2822,7 +2787,6 @@ AM_CONDITIONAL([ENABLE_BGP_VNC], [test "$enable_bgp_vnc" != "no"])
AM_CONDITIONAL([BGP_BMP], [$bgpd_bmp])
dnl northbound
AM_CONDITIONAL([SQLITE3], [$SQLITE3])
AM_CONDITIONAL([RUSTLIBD], [test "$enable_rustlibd" = "yes"])
AM_CONDITIONAL([SYSREPO], [test "$enable_sysrepo" = "yes"])
AM_CONDITIONAL([GRPC], [test "$enable_grpc" = "yes"])
AM_CONDITIONAL([ZEROMQ], [test "$ZEROMQ" = "true"])


@ -32,5 +32,7 @@ Building FRR
building-frr-for-ubuntu1804
building-frr-for-ubuntu2004
building-frr-for-ubuntu2204
building-frr-for-ubuntu2404
building-docker
cross-compiling
building-doc


@ -23,5 +23,5 @@ FRRouting Developer's Guide
path
pceplib
link-state
rust-dev
northbound/northbound
sbfd


@ -187,6 +187,17 @@ To switch between compatible data structures, only these two lines need to be
changed. To switch to a data structure with a different API, some source
changes are necessary.
As an example, here are some commits that convert code over to the
typesafe data structures:

+------------------------------------------------------+----------------------------+
| Commit Message                                       | SHA                        |
+======================================================+============================+
| bgpd: Convert the bgp_advertise_attr->adv to a fifo  | b2e0c12d723a6464f67491ceb9 |
+------------------------------------------------------+----------------------------+
| zebra: convert LSP nhlfe lists to use typesafe lists | ee70f629792b90f92ea7e6bece |
+------------------------------------------------------+----------------------------+
Common iteration macros
-----------------------
@ -762,6 +773,20 @@ Why is it ``PREDECL`` + ``DECLARE`` instead of ``DECLARE`` + ``DEFINE``?
2 ``.c`` files, but only **if the macro arguments are identical.** Maybe
don't do that unless you really need it.
Common problems
---------------

The ``fini`` call of the various typesafe structures closes the data
structure off; any attempt to use it afterwards triggers an intentional
crash. When converting an older data structure to a typesafe one, shutdown
code sometimes still accesses the structure after ``fini``. With the older
data structures such access was simply ignored or ran benign code; with the
new typesafe data structures it crashes. Be aware when modifying the code
base that this sort of change can introduce crashes on shutdown, and work
must be done to ensure that the converted code does not use the data
structure after the ``fini`` call.
FRR lists
---------


@ -429,3 +429,8 @@ The client and server sides of oper-state query
.. figure:: ../figures/cli-oper-state.svg
:align: center
Config datastore cleanup for non-implicit commits (i.e., file reads currently)
.. figure:: ../figures/datastores.svg
:align: center


@ -1,177 +0,0 @@
.. -*- coding: utf-8 -*-
..
.. SPDX-License-Identifier: GPL-2.0-or-later
..
.. February 26 2025, Christian Hopps <chopps@labn.net>
..
.. Copyright (c) 2025, LabN Consulting, L.L.C.
..
.. _rust_dev:
Rust Development
================
Overview
--------
The FRR project has started adding support for daemons written in Rust. The
following sections document the supporting infrastructure added to date. This
is the initial approach to Rust integration; we expect changes as best
practices within the community evolve.
General Structure
-----------------
An example template of the general structure of a Rust-based daemon can be
found in the ``rustlibd/`` sub-directory. The recommended structure so far is
to use a C main file and function to drive initialization of the daemon,
calling out to Rust at 3 critical points. The Rust code is then built as a
static library and linked into the daemon. Rust bindings are built for
``libfrr`` and accessed through a c_shim sub-module. Here are the files as of
this writing:
.. code-block:: make
rustlibd/
.gitignore
Cargo.toml.in
Makefile
README.org
build.rs.in
c_shim.rs
frrutil.rs (symlink)
rustlib_lib.rs
rustlib_main.c
sandbox.rs
subdir.am
wrapper.h.in
:file:`frrutil.rs` is a symlink to :file:`../lib/frrutil.rs` kept here to keep
various rust tools happy about files being inside or below the main source
directory.
NOTE: if you use a separate build dir (named `build` in the below example) and
you want your development environment to properly analyze code (e.g.,
vs-code/emacs LSP mode), you should create an additional 2 symlinks and create
a local :file:`Cargo.toml` file like so:
.. code-block:: sh
cd frr/rustlibd
sed -e 's,@srcdir@/,,g' < Cargo.toml.in > Cargo.toml
ln -s ../build/rustlibd/build.rs .
ln -s ../build/rustlibd/wrapper.h .
Logging
-------
FRR logging is transparently supported using some bridging code that connects
the native rust ``tracing`` calls directly to the ``zlog`` functionality in FRR.
The only thing you have to do is call the function :func:`bridge_rust_logging`
at startup. This is already done for you in the `rustlibd` template :func:`main`
if you started with that code.
.. code-block:: rust
use tracing::{debug, info};
fn myrustfunc(sval: &str, uval: u32) {
debug!("Some DEBUG level output of str value: {}", sval);
info!("Some INFO level output of uint value: {}", uval);
}
Northbound Integration
----------------------
Support for the FRR northbound callback system is handled through rust macros.
These rust macros define C shims which then call your rust functions which will
use natural rust types. The rust macros hide the unsafe and tricky conversion
code. You put pointers to the generated C shim functions into the
:struct:`frr_yang_module_info` structure.
NOTE: Locking will probably be important as your callbacks will be called in the
FRR event loop main thread while your rust code is probably running in its own
thread (perhaps using the tokio async runtime as set up in the
:file:`rustlibd` template).
Here's an example of defining a handler for a config leaf value `bvalue`:
.. code-block:: C
const struct frr_yang_module_info frr_my_module_nb_info = {
.name = "frr-my-module",
.nodes = {
{
.xpath = "/frr-my-module:lib/bvalue",
.cbs = {
.modify = my_module_bvalue_modify_shim,
.destroy = my_module_bvalue_destroy_shim
}
},
...
.. code-block:: rust
use crate::{define_nb_destroy_shim, define_nb_modify_shim};
pub(crate) fn my_module_bvalue_modify(
event: NbEvent,
_node: &DataNodeRef,
) -> Result<(), nb_error> {
debug!("RUST: bvalue modify: {}", event);
match event {
NbEvent::APPLY(_) => {
// handle the change to the `bvalue` leaf.
Ok(())
},
_ => Ok(()), // All other events just return Ok.
}
}
pub(crate) fn my_module_bvalue_destroy(
event: NbEvent,
_node: &DataNodeRef,
) -> Result<(), nb_error> {
// handle the removal of the `bvalue` leaf.
// ...
}
define_nb_modify_shim!(
my_module_bvalue_modify_shim,
my_module_bvalue_modify);
define_nb_destroy_shim!(
my_module_bvalue_destroy_shim,
my_module_bvalue_destroy);
CLI commands
~~~~~~~~~~~~
For CLI commands you should continue to write the DEFPY_YANG() calls in C,
which simply set your YANG config data based on the args to DEFPY_YANG(). The
actual configuration will be handled in the rust-based callbacks you defined
for your YANG model, as described above.
Operational State
~~~~~~~~~~~~~~~~~
You have 2 choices with operational state. You can implement the operational
state callbacks in rust and use the rust macros to bridge these to the
:struct:`frr_yang_module_info` definition as you did with your config
handlers, or you can keep your operational state in a ``yang-rs`` (i.e.,
``libyang``) based tree.
If you choose to do the latter and save all your operational state in a
``libyang`` :struct:`DataTree`, you only need to define 2 callback functions, a
:func:`get_tree_locked()` function which returns the :struct:`DataTree` in a
:struct:`MutexGuard` (i.e., a held lock), and an :func:`unlock_tree()` function
which is passed back the :struct:`MutexGuard` object for unlocking. You use 2
macros: :func:`define_nb_get_tree_locked`, and :func:`define_nb_unlock_tree` to
create the C based shims to plug into your :struct:`frr_yang_module_info`
structure.
NOTE: As with config, locking will probably be important as your callbacks will
be called in the FRR event loop main thread and your rust code is probably
running in its own thread.


@ -0,0 +1,291 @@
<mxfile host="Electron" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/26.2.2 Chrome/134.0.6998.178 Electron/35.1.2 Safari/537.36" version="26.2.2">
<diagram name="Page-1" id="i24xzCYeKZV1rkTH0XTW">
<mxGraphModel dx="1667" dy="1191" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="1100" pageHeight="850" math="0" shadow="0">
<root>
<mxCell id="0" />
<mxCell id="1" parent="0" />
<mxCell id="U9ftda_CDvz5WDsUi4ve-36" value="nb_candidate_commit_apply()" style="whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;rounded=1;fillStyle=auto;strokeWidth=1;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="890" y="670" width="180" height="136.87" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-29" value="&lt;i&gt;&lt;font style=&quot;font-size: 16px;&quot;&gt;Daemon CLI Parsing (lib/vty.c)&lt;/font&gt;&lt;/i&gt;" style="rounded=1;whiteSpace=wrap;html=1;dashed=1;fillColor=#dae8fc;strokeColor=default;fillStyle=solid;strokeWidth=1;perimeterSpacing=0;dashPattern=1 2;gradientColor=none;gradientDirection=radial;glass=0;shadow=0;opacity=50;verticalAlign=bottom;spacingBottom=30;" parent="1" vertex="1">
<mxGeometry x="50" y="220" width="660" height="170" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-7" value="&lt;div style=&quot;font-size: 12px;&quot;&gt;mgmtd&lt;/div&gt;&lt;div style=&quot;font-size: 12px;&quot;&gt;(new config path)&lt;/div&gt;" style="rounded=1;whiteSpace=wrap;html=1;arcSize=24;fillColor=#dae8fc;strokeColor=#6c8ebf;shadow=1;comic=0;labelBackgroundColor=none;fontFamily=Verdana;fontSize=12;align=center;verticalAlign=top;fontStyle=1" parent="1" vertex="1">
<mxGeometry x="230" y="40" width="490" height="270" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-13" value="&lt;div&gt;&lt;font&gt;vty_shared_&lt;/font&gt;&lt;/div&gt;&lt;div&gt;&lt;font&gt;candidate_config&lt;/font&gt;&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#fff2cc;strokeColor=#d6b656;" parent="1" vertex="1">
<mxGeometry x="136.25" y="70" width="97.5" height="130" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-14" value="&lt;div&gt;running_config&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#d5e8d4;strokeColor=#82b366;" parent="1" vertex="1">
<mxGeometry x="260" y="70" width="97.5" height="130" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-18" value="" style="group;shadow=0;" parent="1" vertex="1" connectable="0">
<mxGeometry x="80" y="60" width="270" height="210" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-19" value="&lt;div style=&quot;font-size: 12px;&quot;&gt;B daemon&amp;nbsp;&lt;span style=&quot;background-color: transparent; color: light-dark(rgb(0, 0, 0), rgb(255, 255, 255));&quot;&gt;(old direct vty)&lt;/span&gt;&lt;/div&gt;" style="rounded=1;whiteSpace=wrap;html=1;arcSize=24;fillColor=#fad9d5;strokeColor=#ae4132;shadow=1;comic=0;labelBackgroundColor=none;fontFamily=Verdana;fontSize=12;align=center;verticalAlign=top;fontStyle=1" parent="QL32OzfzetEIIOdSfswY-18" vertex="1">
<mxGeometry x="-10" width="270" height="190" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-20" value="&lt;div&gt;&lt;font&gt;vty_shared_&lt;/font&gt;&lt;/div&gt;&lt;div&gt;&lt;font&gt;candidate_config&lt;/font&gt;&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#fff2cc;strokeColor=#d6b656;" parent="QL32OzfzetEIIOdSfswY-18" vertex="1">
<mxGeometry x="20" y="30" width="97.5" height="130" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-21" value="&lt;div&gt;running_config&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#d5e8d4;strokeColor=#82b366;" parent="QL32OzfzetEIIOdSfswY-18" vertex="1">
<mxGeometry x="150" y="30" width="97.5" height="130" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-23" value="&lt;div style=&quot;font-size: 12px;&quot;&gt;A daemon (old direct vty)&lt;/div&gt;" style="rounded=1;whiteSpace=wrap;html=1;arcSize=24;fillColor=#fad9d5;strokeColor=#ae4132;shadow=1;comic=0;labelBackgroundColor=none;fontFamily=Verdana;fontSize=12;align=center;verticalAlign=top;fontStyle=1" parent="QL32OzfzetEIIOdSfswY-18" vertex="1">
<mxGeometry x="-40" y="20" width="270" height="190" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-25" value="&lt;div&gt;running_config&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#d5e8d4;strokeColor=#82b366;" parent="1" vertex="1">
<mxGeometry x="200" y="110" width="97.5" height="130" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-2" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=1;entryY=0.5;entryDx=0;entryDy=0;" parent="1" target="4hLhriEXD62TuEoW85Ij-1" edge="1">
<mxGeometry relative="1" as="geometry">
<mxPoint x="648.75" y="160" as="sourcePoint" />
<mxPoint x="487.5" y="585" as="targetPoint" />
<Array as="points">
<mxPoint x="790" y="160" />
<mxPoint x="790" y="530" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-8" value="&lt;div&gt;&lt;font&gt;vty_mgmt_&lt;/font&gt;&lt;/div&gt;&lt;div&gt;&lt;font&gt;candidate_config&lt;/font&gt;&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#fff2cc;strokeColor=#d6b656;" parent="1" vertex="1">
<mxGeometry x="551.25" y="70" width="97.5" height="130" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-26" value="mm-&amp;gt;running" style="rounded=0;whiteSpace=wrap;html=1;" parent="1" vertex="1">
<mxGeometry x="370" y="230" width="120" height="60" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-14" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=1;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="QL32OzfzetEIIOdSfswY-27" target="QL32OzfzetEIIOdSfswY-26">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-27" value="mm-&amp;gt;candidate" style="rounded=0;whiteSpace=wrap;html=1;" parent="1" vertex="1">
<mxGeometry x="540" y="230" width="120" height="60" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-1" value="vty_config_entry()" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f8cecc;strokeColor=#b85450;fillStyle=auto;strokeWidth=3;" parent="1" vertex="1">
<mxGeometry x="315" y="500" width="130" height="60" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-3" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" parent="1" source="QL32OzfzetEIIOdSfswY-24" target="4hLhriEXD62TuEoW85Ij-1" edge="1">
<mxGeometry relative="1" as="geometry">
<mxPoint x="120" y="260" as="sourcePoint" />
<mxPoint x="320" y="600" as="targetPoint" />
<Array as="points">
<mxPoint x="120" y="530" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-24" value="&lt;div&gt;&lt;font&gt;vty_shared_&lt;/font&gt;&lt;/div&gt;&lt;div&gt;&lt;font&gt;candidate_config&lt;/font&gt;&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#fff2cc;strokeColor=#d6b656;" parent="1" vertex="1">
<mxGeometry x="70" y="110" width="97.5" height="130" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-8" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" parent="1" source="4hLhriEXD62TuEoW85Ij-4" target="4hLhriEXD62TuEoW85Ij-7" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-4" value="CLI: config_exclusive()&lt;div&gt;(northbound_cli.c)&lt;/div&gt;" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d0cee2;strokeColor=#56517e;" parent="1" vertex="1">
<mxGeometry x="910" y="40" width="140" height="50" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-5" value="CLI: config_private()&lt;div&gt;(northbound_cli.c)&lt;/div&gt;" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d0cee2;strokeColor=#56517e;" parent="1" vertex="1">
<mxGeometry x="760" y="45" width="140" height="40" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-6" value="vty_config_entry()" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f8cecc;strokeColor=#b85450;fillStyle=auto;strokeWidth=3;" parent="1" vertex="1">
<mxGeometry x="860" y="230" width="120" height="60" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-10" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" parent="1" source="4hLhriEXD62TuEoW85Ij-7" target="4hLhriEXD62TuEoW85Ij-6" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-7" value="&lt;div&gt;private_config&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#fff2cc;strokeColor=#d6b656;" parent="1" vertex="1">
<mxGeometry x="871.25" y="130" width="97.5" height="70" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-9" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" parent="1" source="4hLhriEXD62TuEoW85Ij-5" edge="1">
<mxGeometry relative="1" as="geometry">
<mxPoint x="910" y="130" as="targetPoint" />
<Array as="points">
<mxPoint x="850" y="110" />
<mxPoint x="911" y="110" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-15" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=0.5;entryY=0;entryDx=0;entryDy=0;" parent="1" source="4hLhriEXD62TuEoW85Ij-11" target="4hLhriEXD62TuEoW85Ij-1" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-20" value="2" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" parent="1" source="4hLhriEXD62TuEoW85Ij-11" target="4hLhriEXD62TuEoW85Ij-1" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-16" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;startArrow=classic;startFill=1;strokeWidth=2;" edge="1" parent="1" source="4hLhriEXD62TuEoW85Ij-11" target="U9ftda_CDvz5WDsUi4ve-15">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-17" value="1" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="U9ftda_CDvz5WDsUi4ve-16">
<mxGeometry x="0.0305" y="2" relative="1" as="geometry">
<mxPoint as="offset" />
</mxGeometry>
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-19" value="1: (mgmtd only)" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="U9ftda_CDvz5WDsUi4ve-16">
<mxGeometry x="0.0074" y="1" relative="1" as="geometry">
<mxPoint as="offset" />
</mxGeometry>
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-11" value="CLI: config_terminal()&lt;div&gt;(command.c)&lt;/div&gt;" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d0cee2;strokeColor=#56517e;" parent="1" vertex="1">
<mxGeometry x="315" y="420" width="130" height="40" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-31" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" parent="1" source="4hLhriEXD62TuEoW85Ij-29" target="4hLhriEXD62TuEoW85Ij-11" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-27" value="&lt;div style=&quot;font-size: 12px;&quot;&gt;&lt;br&gt;&lt;/div&gt;" style="rounded=1;whiteSpace=wrap;html=1;arcSize=12;fillColor=#dae8fc;strokeColor=#6c8ebf;shadow=1;comic=0;labelBackgroundColor=none;fontFamily=Verdana;fontSize=12;align=center;verticalAlign=top;fontStyle=1;container=0;" parent="1" vertex="1">
<mxGeometry x="50" y="600" width="550" height="190" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-18" value="vty_read_config()" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" parent="1" vertex="1">
<mxGeometry x="260" y="670" width="130" height="40" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-21" value="vty_apply_config()" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" parent="1" vertex="1">
<mxGeometry x="260" y="730" width="130" height="40" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-22" value="&lt;b&gt;&lt;i&gt;&quot;copy FILE to running&quot;&lt;/i&gt;&lt;/b&gt;" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" parent="1" vertex="1">
<mxGeometry x="63.75" y="730" width="150" height="40" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-23" value="&lt;b&gt;&lt;i&gt;vtysh_main.c: main()&lt;/i&gt;&lt;/b&gt;" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" parent="1" vertex="1">
<mxGeometry x="430" y="730" width="150" height="40" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-19" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" parent="1" source="4hLhriEXD62TuEoW85Ij-18" target="4hLhriEXD62TuEoW85Ij-16" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-26" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" parent="1" source="4hLhriEXD62TuEoW85Ij-21" target="4hLhriEXD62TuEoW85Ij-18" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-25" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" parent="1" source="4hLhriEXD62TuEoW85Ij-22" target="4hLhriEXD62TuEoW85Ij-21" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-24" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" parent="1" source="4hLhriEXD62TuEoW85Ij-23" target="4hLhriEXD62TuEoW85Ij-21" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-34" value="VTYSH" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;strokeColor=default;fontColor=#333333;opacity=50;dashed=1;dashPattern=8 8;" parent="1" vertex="1">
<mxGeometry x="500" y="610" width="90" height="30" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-12" value="" style="curved=0;endArrow=none;html=1;rounded=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;exitX=0.25;exitY=1;exitDx=0;exitDy=0;dashed=1;startFill=0;" edge="1" parent="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="215" y="400" as="sourcePoint" />
<mxPoint x="380" y="400" as="targetPoint" />
<Array as="points">
<mxPoint x="215" y="370" />
<mxPoint x="380" y="370" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-16" value="vty_read_file()&lt;div&gt;&lt;b&gt;&lt;i&gt;&quot;conf term file-lock&quot;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" parent="1" vertex="1">
<mxGeometry x="260" y="610" width="130" height="40" as="geometry" />
</mxCell>
<mxCell id="4hLhriEXD62TuEoW85Ij-17" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;jumpStyle=line;exitX=0.5;exitY=0;exitDx=0;exitDy=0;shadow=1;" parent="1" source="4hLhriEXD62TuEoW85Ij-16" edge="1">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="325" y="580" />
<mxPoint x="215" y="580" />
</Array>
<mxPoint x="395" y="670" as="sourcePoint" />
<mxPoint x="215" y="390" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-28" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;startArrow=classic;startFill=1;endArrow=oval;endFill=1;" parent="1" source="QL32OzfzetEIIOdSfswY-14" target="QL32OzfzetEIIOdSfswY-26" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="QL32OzfzetEIIOdSfswY-29" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;endArrow=oval;startFill=1;startArrow=classic;endFill=1;" parent="1" source="QL32OzfzetEIIOdSfswY-8" target="QL32OzfzetEIIOdSfswY-27" edge="1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-15" value="&lt;div&gt;lock mm-&amp;gt;candidate&lt;/div&gt;" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;" vertex="1" parent="1">
<mxGeometry x="580" y="420" width="130" height="40" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-20" value="If we don&#39;t lock for non-mgmtd then&lt;div&gt;multiple vtysh conf t are allowed!&lt;/div&gt;" style="text;html=1;align=center;verticalAlign=middle;resizable=0;points=[];autosize=1;strokeColor=none;fillColor=none;fontStyle=3;fontColor=#FF0000;" vertex="1" parent="1">
<mxGeometry x="425" y="463" width="210" height="40" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-24" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" edge="1" parent="1" source="U9ftda_CDvz5WDsUi4ve-21" target="U9ftda_CDvz5WDsUi4ve-23">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-21" value="vty_config_node_exit()" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f8cecc;strokeColor=#b85450;fillStyle=auto;strokeWidth=3;" vertex="1" parent="1">
<mxGeometry x="830" y="340" width="180" height="45" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-26" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" edge="1" parent="1" source="U9ftda_CDvz5WDsUi4ve-23" target="U9ftda_CDvz5WDsUi4ve-25">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-29" value="pending == true" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="U9ftda_CDvz5WDsUi4ve-26">
<mxGeometry x="-0.0182" y="-3" relative="1" as="geometry">
<mxPoint as="offset" />
</mxGeometry>
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-23" value="&lt;div&gt;&amp;nbsp; &amp;nbsp;nb_cli_pending_commit_check()&lt;/div&gt;&lt;div&gt;&lt;br&gt;&lt;/div&gt;" style="whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;rounded=1;fillStyle=auto;strokeWidth=1;" vertex="1" parent="1">
<mxGeometry x="830" y="420" width="180" height="35" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-28" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" edge="1" parent="1" target="U9ftda_CDvz5WDsUi4ve-27">
<mxGeometry relative="1" as="geometry">
<mxPoint x="920" y="570" as="sourcePoint" />
<Array as="points">
<mxPoint x="920" y="569" />
<mxPoint x="920" y="596" />
<mxPoint x="910" y="596" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-35" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" edge="1" parent="1" source="U9ftda_CDvz5WDsUi4ve-27" target="U9ftda_CDvz5WDsUi4ve-36">
<mxGeometry relative="1" as="geometry">
<mxPoint x="920" y="574.37" as="sourcePoint" />
<mxPoint x="1000" y="610.0000000000001" as="targetPoint" />
<Array as="points">
<mxPoint x="960" y="635" />
<mxPoint x="980" y="635" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-47" value="success" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="U9ftda_CDvz5WDsUi4ve-35">
<mxGeometry x="-0.275" y="1" relative="1" as="geometry">
<mxPoint as="offset" />
</mxGeometry>
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-51" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" edge="1" parent="1" source="U9ftda_CDvz5WDsUi4ve-25" target="U9ftda_CDvz5WDsUi4ve-27">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-25" value="nb_cli_classic_commit()" style="whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;rounded=1;fillStyle=auto;strokeWidth=1;" vertex="1" parent="1">
<mxGeometry x="830" y="500" width="180" height="37.5" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-31" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" edge="1" parent="1" source="U9ftda_CDvz5WDsUi4ve-27" target="U9ftda_CDvz5WDsUi4ve-30">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="880" y="635" />
<mxPoint x="781" y="635" />
</Array>
</mxGeometry>
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-32" value="fail" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="U9ftda_CDvz5WDsUi4ve-31">
<mxGeometry x="-0.055" y="-3" relative="1" as="geometry">
<mxPoint as="offset" />
</mxGeometry>
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-27" value="nb_candidate_commit_prepare()" style="whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;rounded=1;fillStyle=auto;strokeWidth=1;" vertex="1" parent="1">
<mxGeometry x="830" y="566.25" width="180" height="33.75" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-30" value="" style="whiteSpace=wrap;html=1;fillColor=#e1d5e7;strokeColor=#9673a6;rounded=1;fillStyle=auto;strokeWidth=1;" vertex="1" parent="1">
<mxGeometry x="691.25" y="670.01" width="180" height="99.99" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-40" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" edge="1" parent="1" source="U9ftda_CDvz5WDsUi4ve-38" target="U9ftda_CDvz5WDsUi4ve-39">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-38" value="&lt;div&gt;running&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#d5e8d4;strokeColor=#82b366;" vertex="1" parent="1">
<mxGeometry x="706.25" y="685" width="50" height="70" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-39" value="&lt;div&gt;private or&lt;/div&gt;&lt;div&gt;candidate&lt;/div&gt;&lt;div&gt;&lt;br&gt;&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#fff2cc;strokeColor=#d6b656;" vertex="1" parent="1">
<mxGeometry x="796.25" y="685" width="60" height="70" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-42" value="&lt;div&gt;running&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#d5e8d4;strokeColor=#82b366;" vertex="1" parent="1">
<mxGeometry x="990" y="715" width="50" height="70" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-41" value="&lt;div&gt;private or&lt;/div&gt;&lt;div&gt;candidate&lt;/div&gt;&lt;div&gt;&lt;br&gt;&lt;/div&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;align=center;fillColor=#fff2cc;strokeColor=#d6b656;" vertex="1" parent="1">
<mxGeometry x="900" y="715" width="65" height="70" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-44" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;" edge="1" parent="1" source="U9ftda_CDvz5WDsUi4ve-36" target="U9ftda_CDvz5WDsUi4ve-36">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-48" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" edge="1" parent="1" source="U9ftda_CDvz5WDsUi4ve-41" target="U9ftda_CDvz5WDsUi4ve-42">
<mxGeometry relative="1" as="geometry">
<mxPoint x="960" y="705" as="sourcePoint" />
<mxPoint x="990" y="705" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="U9ftda_CDvz5WDsUi4ve-52" value="&lt;b&gt;&lt;font style=&quot;font-size: 15px;&quot;&gt;Config Datastore Non-Implicit Commit Cleanup&lt;/font&gt;&lt;/b&gt;" style="text;html=1;align=center;verticalAlign=middle;resizable=0;points=[];autosize=1;strokeColor=none;fillColor=none;" vertex="1" parent="1">
<mxGeometry x="400" y="10" width="360" height="30" as="geometry" />
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>



@@ -215,6 +215,11 @@ BFD peers and profiles share the same BFD session configuration commands.
The default value is 254 (which means we only expect one hop between
this system and the peer).
.. clicmd:: log-session-changes
Enables or disables logging of session state transitions into the Up
state, and of transitions from the Up state to the Down state.
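As a sketch, the new command might be enabled under a BFD profile like this (a hypothetical fragment; the profile name and peer address are made up):

```
bfd
 profile uplink-monitor
  log-session-changes
 !
 peer 192.0.2.1
  profile uplink-monitor
 !
```

Since peers and profiles share session configuration commands, the same line can equally be placed directly under a `peer` node.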
BFD Peer Specific Commands
--------------------------


@@ -537,6 +537,13 @@ Reject routes with AS_SET or AS_CONFED_SET types
This command enables rejection of incoming and outgoing routes having AS_SET or AS_CONFED_SET type.
The aggregated routes are not sent to the contributing neighbors.
.. seealso::
https://datatracker.ietf.org/doc/html/draft-ietf-idr-deprecate-as-set-confed-set
Default: disabled.
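For illustration, assuming the documented knob is bgpd's `bgp reject-as-sets` command, enabling it would look like (AS number hypothetical):

```
router bgp 65001
 bgp reject-as-sets
!
```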
Enforce first AS
----------------


@@ -25,6 +25,8 @@ There are several options that control the behavior of ``frr-reload``:
* ``--stdout``: print output to stdout
* ``--bindir BINDIR``: path to the vtysh executable
* ``--confdir CONFDIR``: path to the existing daemon config files
* ``--logfile FILENAME``: file (with path) to logfile for the reload operation.
Default is ``/var/log/frr/frr-reload.log``
* ``--rundir RUNDIR``: path to a folder to be used to write the temporary files
needed by the script to do its job. The script should have write access to it
* ``--daemon DAEMON``: by default ``frr-reload.py`` assumes that we are using


@@ -9,6 +9,7 @@ Protocols
zebra
bfd
sbfd
bgp
babeld
fabricd


@@ -46,8 +46,8 @@ a static prefix and gateway, with several possible forms.
NETWORK is destination prefix with a valid v4 or v6 network based upon
initial form of the command.
GATEWAY is the IP address to use as next-hop for the prefix. Currently, it must match
the v4 or v6 route type specified at the start of the command.
GATEWAY is the IP address to use as next-hop for the prefix. Routes of type v4 can use v4 and v6 next-hops,
v6 routes only support v6 next-hops.
IFNAME is the name of the interface to use as next-hop. If only IFNAME is specified
(without GATEWAY), a connected route will be created.
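A few hedged examples of the forms described above (addresses drawn from the documentation ranges, interface name hypothetical):

```
! v4 prefix with a v4 gateway
ip route 192.0.2.0/24 198.51.100.1
! v4 prefix with a v6 gateway (now permitted per the text above)
ip route 192.0.2.0/24 2001:db8::1
! interface-only form creates a connected route
ip route 192.0.2.0/24 eth0
! v6 prefixes only support v6 gateways
ipv6 route 2001:db8:1::/48 2001:db8::1
```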


@@ -84,9 +84,9 @@ endif
#
.PHONY: info html pdf
info: $(USERBUILD)/texinfo/frr.info
html: $(USERBUILD)/html/.buildinfo
pdf: $(USERBUILD)/latexpdf
info-local: $(USERBUILD)/texinfo/frr.info
html-local: $(USERBUILD)/html/.buildinfo
pdf-local: $(USERBUILD)/latexpdf
#
# hook-ins for clean / install / doc
@@ -100,7 +100,7 @@ clean-userdocs:
# INSTALL_INFO=install-info
.PHONY: install-info uninstall-info install-html uninstall-html
install-info: $(USERBUILD)/texinfo/frr.info
install-info-local: $(USERBUILD)/texinfo/frr.info
$(MKDIR_P) "$(DESTDIR)$(infodir)"
$(INSTALL_DATA) "$<" "$(DESTDIR)$(infodir)"
[ -z "${DESTDIR}" ] && $(INSTALL_INFO) --info-dir="$(DESTDIR)$(infodir)" "$<" || true
@@ -108,7 +108,7 @@ uninstall-info: $(USERBUILD)/texinfo/frr.info
-rm -f "$(DESTDIR)$(infodir)/$<"
[ -z "${DESTDIR}" ] && $(INSTALL_INFO) --delete --info-dir="$(DESTDIR)$(infodir)" "$<" || true
install-html: $(USERBUILD)/html/.buildinfo
install-html-local: $(USERBUILD)/html/.buildinfo
$(MKDIR_P) "$(DESTDIR)$(htmldir)"
cp -r "$(USERBUILD)/html" "$(DESTDIR)$(htmldir)"
uninstall-html:


@@ -81,11 +81,11 @@ static int config_write_debug(struct vty *vty)
static int eigrp_neighbor_packet_queue_sum(struct eigrp_interface *ei)
{
struct eigrp_neighbor *nbr;
struct listnode *node, *nnode;
int sum;
sum = 0;
for (ALL_LIST_ELEMENTS(ei->nbrs, node, nnode, nbr)) {
frr_each (eigrp_nbr_hash, &ei->nbr_hash_head, nbr) {
sum += nbr->retrans_queue->count;
}
@@ -152,7 +152,7 @@ void show_ip_eigrp_interface_sub(struct vty *vty, struct eigrp *eigrp,
vty_out(vty, "%-16s ", IF_NAME(ei));
vty_out(vty, "%-11u", ei->params.bandwidth);
vty_out(vty, "%-11u", ei->params.delay);
vty_out(vty, "%-7u", ei->nbrs->count);
vty_out(vty, "%-7zu", eigrp_nbr_hash_count(&ei->nbr_hash_head));
vty_out(vty, "%u %c %-10u", 0, '/',
eigrp_neighbor_packet_queue_sum(ei));
vty_out(vty, "%-7u %-14u %-12u %-8u", 0, 0, 0, 0);
@@ -228,7 +228,7 @@ void show_ip_eigrp_prefix_descriptor(struct vty *vty,
vty_out(vty, "%-3c", (tn->state > 0) ? 'A' : 'P');
vty_out(vty, "%pFX, ", tn->destination);
vty_out(vty, "%pFX, ", &tn->destination);
vty_out(vty, "%u successors, ", (successors) ? successors->count : 0);
vty_out(vty, "FD is %u, serno: %" PRIu64 " \n", tn->fdistance,
tn->serno);


@@ -42,6 +42,7 @@
#include "eigrpd/eigrp_const.h"
#include "eigrpd/eigrp_filter.h"
#include "eigrpd/eigrp_packet.h"
#include "eigrpd/eigrp_interface.h"
/*
* Distribute-list update functions.
@@ -126,10 +127,9 @@ void eigrp_distribute_update(struct distribute_ctx *ctx,
/*struct eigrp_if_info * info = ifp->info;
ei = info->eigrp_interface;*/
struct listnode *node, *nnode;
struct eigrp_interface *ei2;
/* Find proper interface */
for (ALL_LIST_ELEMENTS(e->eiflist, node, nnode, ei2)) {
frr_each (eigrp_interface_hash, &e->eifs, ei2) {
if (strcmp(ei2->ifp->name, ifp->name) == 0) {
ei = ei2;
break;


@@ -403,12 +403,10 @@ int eigrp_fsm_event(struct eigrp_fsm_action_message *msg)
{
enum eigrp_fsm_events event = eigrp_get_fsm_event(msg);
zlog_info(
"EIGRP AS: %d State: %s Event: %s Network: %pI4 Packet Type: %s Reply RIJ Count: %d change: %s",
msg->eigrp->AS, prefix_state2str(msg->prefix->state),
fsm_state2str(event), &msg->prefix->destination->u.prefix4,
packet_type2str(msg->packet_type), msg->prefix->rij->count,
change2str(msg->change));
zlog_info("EIGRP AS: %d State: %s Event: %s Network: %pFX Packet Type: %s Reply RIJ Count: %d change: %s",
msg->eigrp->AS, prefix_state2str(msg->prefix->state), fsm_state2str(event),
&msg->prefix->destination, packet_type2str(msg->packet_type),
msg->prefix->rij->count, change2str(msg->change));
(*(NSM[msg->prefix->state][event].func))(msg);
return 1;


@@ -496,7 +496,6 @@ static uint16_t eigrp_sequence_encode(struct eigrp *eigrp, struct stream *s)
{
uint16_t length = EIGRP_TLV_SEQ_BASE_LEN;
struct eigrp_interface *ei;
struct listnode *node, *node2, *nnode2;
struct eigrp_neighbor *nbr;
size_t backup_end, size_end;
int found;
@@ -509,8 +508,8 @@ static uint16_t eigrp_sequence_encode(struct eigrp *eigrp, struct stream *s)
stream_putc(s, IPV4_MAX_BYTELEN);
found = 0;
for (ALL_LIST_ELEMENTS_RO(eigrp->eiflist, node, ei)) {
for (ALL_LIST_ELEMENTS(ei->nbrs, node2, nnode2, nbr)) {
frr_each (eigrp_interface_hash, &eigrp->eifs, ei) {
frr_each (eigrp_nbr_hash, &ei->nbr_hash_head, nbr) {
if (nbr->multicast_queue->count > 0) {
length += (uint16_t)stream_put_ipv4(
s, nbr->src.s_addr);


@@ -45,6 +45,16 @@
DEFINE_MTYPE_STATIC(EIGRPD, EIGRP_IF, "EIGRP interface");
int eigrp_interface_cmp(const struct eigrp_interface *a, const struct eigrp_interface *b)
{
return if_cmp_func(a->ifp, b->ifp);
}
uint32_t eigrp_interface_hash(const struct eigrp_interface *ei)
{
return ei->ifp->ifindex;
}
struct eigrp_interface *eigrp_if_new(struct eigrp *eigrp, struct interface *ifp,
struct prefix *p)
{
@@ -61,12 +71,12 @@ struct eigrp_interface *eigrp_if_new(struct eigrp *eigrp, struct interface *ifp,
prefix_copy(&ei->address, p);
ifp->info = ei;
listnode_add(eigrp->eiflist, ei);
eigrp_interface_hash_add(&eigrp->eifs, ei);
ei->type = EIGRP_IFTYPE_BROADCAST;
/* Initialize neighbor list. */
ei->nbrs = list_new();
eigrp_nbr_hash_init(&ei->nbr_hash_head);
ei->crypt_seqnum = frr_sequence32_next();
@@ -102,10 +112,10 @@ int eigrp_if_delete_hook(struct interface *ifp)
if (!ei)
return 0;
list_delete(&ei->nbrs);
eigrp_nbr_hash_fini(&ei->nbr_hash_head);
eigrp = ei->eigrp;
listnode_delete(eigrp->eiflist, ei);
eigrp_interface_hash_del(&eigrp->eifs, ei);
eigrp_fifo_free(ei->obuf);
@@ -238,7 +248,6 @@ int eigrp_if_up(struct eigrp_interface *ei)
struct eigrp_route_descriptor *ne;
struct eigrp_metrics metric;
struct eigrp_interface *ei2;
struct listnode *node, *nnode;
struct eigrp *eigrp;
if (ei == NULL)
@@ -285,8 +294,7 @@ int eigrp_if_up(struct eigrp_interface *ei)
if (pe == NULL) {
pe = eigrp_prefix_descriptor_new();
pe->serno = eigrp->serno;
pe->destination = (struct prefix *)prefix_ipv4_new();
prefix_copy(pe->destination, &dest_addr);
prefix_copy(&pe->destination, &dest_addr);
pe->af = AF_INET;
pe->nt = EIGRP_TOPOLOGY_TYPE_CONNECTED;
@@ -300,9 +308,8 @@ int eigrp_if_up(struct eigrp_interface *ei)
eigrp_route_descriptor_add(eigrp, pe, ne);
for (ALL_LIST_ELEMENTS(eigrp->eiflist, node, nnode, ei2)) {
frr_each (eigrp_interface_hash, &eigrp->eifs, ei2)
eigrp_update_send(ei2);
}
pe->req_action &= ~EIGRP_FSM_NEED_UPDATE;
listnode_delete(eigrp->topology_changes_internalIPV4, pe);
@@ -327,9 +334,6 @@ int eigrp_if_up(struct eigrp_interface *ei)
int eigrp_if_down(struct eigrp_interface *ei)
{
struct listnode *node, *nnode;
struct eigrp_neighbor *nbr;
if (ei == NULL)
return 0;
@@ -340,9 +344,9 @@ int eigrp_if_down(struct eigrp_interface *ei)
/*Set infinite metrics to routes learned by this interface and start
* query process*/
for (ALL_LIST_ELEMENTS(ei->nbrs, node, nnode, nbr)) {
eigrp_nbr_delete(nbr);
}
while (eigrp_nbr_hash_count(&ei->nbr_hash_head) > 0)
eigrp_nbr_delete(eigrp_nbr_hash_first(&ei->nbr_hash_head));
return 1;
}
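The replacement loop above drains the container by always deleting its first element, rather than deleting entries while iterating. A standalone sketch of the same pattern, with a toy singly-linked list standing in for the FRR hash table (all names here are hypothetical, not FRR API):

```c
#include <assert.h>
#include <stdlib.h>

struct node {
	struct node *next;
};

struct container {
	struct node *head;
	size_t count;
};

/* Push a fresh element onto the front of the container. */
static void container_add(struct container *c)
{
	struct node *n = malloc(sizeof(*n));

	n->next = c->head;
	c->head = n;
	c->count++;
}

static struct node *container_first(struct container *c)
{
	return c->head;
}

/* Unlink and free one element (always the head in this sketch). */
static void container_del(struct container *c, struct node *n)
{
	c->head = n->next;
	c->count--;
	free(n);
}

/* Teardown: pop the first element until the container is empty,
 * so no iterator is ever invalidated by the deletion. */
static void container_drain(struct container *c)
{
	while (c->count > 0)
		container_del(c, container_first(c));
}
```

This mirrors the `while (eigrp_nbr_hash_count(...) > 0)` loop: each pass removes the current first element, so the next lookup always starts from a consistent container.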
@@ -436,8 +440,6 @@ void eigrp_if_free(struct eigrp_interface *ei, int source)
pe);
eigrp_if_down(ei);
listnode_delete(ei->eigrp->eiflist, ei);
}
/* Simulate down/up on the interface. This is needed, for example, when
@@ -457,10 +459,9 @@ struct eigrp_interface *eigrp_if_lookup_by_local_addr(struct eigrp *eigrp,
struct interface *ifp,
struct in_addr address)
{
struct listnode *node;
struct eigrp_interface *ei;
for (ALL_LIST_ELEMENTS_RO(eigrp->eiflist, node, ei)) {
frr_each (eigrp_interface_hash, &eigrp->eifs, ei) {
if (ifp && ei->ifp != ifp)
continue;
@@ -486,10 +487,10 @@ struct eigrp_interface *eigrp_if_lookup_by_name(struct eigrp *eigrp,
const char *if_name)
{
struct eigrp_interface *ei;
struct listnode *node;
/* iterate over all eigrp interfaces */
for (ALL_LIST_ELEMENTS_RO(eigrp->eiflist, node, ei)) {
// XXX
frr_each (eigrp_interface_hash, &eigrp->eifs, ei) {
/* compare int name with eigrp interface's name */
if (strcmp(ei->ifp->name, if_name) == 0) {
return ei;


@@ -43,4 +43,10 @@ extern struct eigrp_interface *eigrp_if_lookup_by_name(struct eigrp *,
/* Simulate down/up on the interface. */
extern void eigrp_if_reset(struct interface *);
extern int eigrp_interface_cmp(const struct eigrp_interface *a, const struct eigrp_interface *b);
extern uint32_t eigrp_interface_hash(const struct eigrp_interface *ei);
DECLARE_HASH(eigrp_interface_hash, struct eigrp_interface, eif_item, eigrp_interface_cmp,
eigrp_interface_hash);
#endif /* ZEBRA_EIGRP_INTERFACE_H_ */


@@ -98,6 +98,9 @@ static void sigint(void)
keychain_terminate();
route_map_finish();
prefix_list_reset();
eigrp_terminate();
exit(0);


@@ -41,6 +41,21 @@
DEFINE_MTYPE_STATIC(EIGRPD, EIGRP_NEIGHBOR, "EIGRP neighbor");
int eigrp_nbr_comp(const struct eigrp_neighbor *a, const struct eigrp_neighbor *b)
{
if (a->src.s_addr == b->src.s_addr)
return 0;
else if (a->src.s_addr < b->src.s_addr)
return -1;
return 1;
}
uint32_t eigrp_nbr_hash(const struct eigrp_neighbor *a)
{
return a->src.s_addr;
}
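The comparator added above follows the usual three-way convention (negative / zero / positive) keyed on the neighbor's source address, and the hash function simply reuses the 32-bit address. A minimal standalone equivalent, with plain `uint32_t` standing in for `struct in_addr` (names hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Three-way comparator, mirroring eigrp_nbr_comp() but on a bare
 * uint32_t key instead of struct eigrp_neighbor. */
static int addr_cmp(uint32_t a, uint32_t b)
{
	if (a == b)
		return 0;
	else if (a < b)
		return -1;
	return 1;
}

/* Identity hash, mirroring eigrp_nbr_hash(): the address itself is
 * already a well-distributed 32-bit key. */
static uint32_t addr_hash(uint32_t a)
{
	return a;
}
```

A comparator/hash pair like this is exactly what `DECLARE_HASH` expects: the hash narrows the bucket, the comparator resolves collisions and gives a stable ordering.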
struct eigrp_neighbor *eigrp_nbr_new(struct eigrp_interface *ei)
{
struct eigrp_neighbor *nbr;
@@ -80,17 +95,18 @@ struct eigrp_neighbor *eigrp_nbr_get(struct eigrp_interface *ei,
struct eigrp_header *eigrph,
struct ip *iph)
{
struct eigrp_neighbor *nbr;
struct listnode *node, *nnode;
struct eigrp_neighbor lookup, *nbr;
for (ALL_LIST_ELEMENTS(ei->nbrs, node, nnode, nbr)) {
if (iph->ip_src.s_addr == nbr->src.s_addr) {
return nbr;
}
lookup.src = iph->ip_src;
lookup.ei = ei;
nbr = eigrp_nbr_hash_find(&ei->nbr_hash_head, &lookup);
if (nbr) {
return nbr;
}
nbr = eigrp_nbr_add(ei, eigrph, iph);
listnode_add(ei->nbrs, nbr);
eigrp_nbr_hash_add(&ei->nbr_hash_head, nbr);
return nbr;
}
@@ -110,16 +126,12 @@ struct eigrp_neighbor *eigrp_nbr_get(struct eigrp_interface *ei,
struct eigrp_neighbor *eigrp_nbr_lookup_by_addr(struct eigrp_interface *ei,
struct in_addr *addr)
{
struct eigrp_neighbor *nbr;
struct listnode *node, *nnode;
struct eigrp_neighbor lookup, *nbr;
for (ALL_LIST_ELEMENTS(ei->nbrs, node, nnode, nbr)) {
if (addr->s_addr == nbr->src.s_addr) {
return nbr;
}
}
lookup.src = *addr;
nbr = eigrp_nbr_hash_find(&ei->nbr_hash_head, &lookup);
return NULL;
return nbr;
}
/**
@@ -138,17 +150,15 @@ struct eigrp_neighbor *eigrp_nbr_lookup_by_addr_process(struct eigrp *eigrp,
struct in_addr nbr_addr)
{
struct eigrp_interface *ei;
struct listnode *node, *node2, *nnode2;
struct eigrp_neighbor *nbr;
struct eigrp_neighbor lookup, *nbr;
/* iterate over all eigrp interfaces */
for (ALL_LIST_ELEMENTS_RO(eigrp->eiflist, node, ei)) {
frr_each (eigrp_interface_hash, &eigrp->eifs, ei) {
/* iterate over all neighbors on eigrp interface */
for (ALL_LIST_ELEMENTS(ei->nbrs, node2, nnode2, nbr)) {
/* compare if neighbor address is same as arg address */
if (nbr->src.s_addr == nbr_addr.s_addr) {
return nbr;
}
lookup.src = nbr_addr;
nbr = eigrp_nbr_hash_find(&ei->nbr_hash_head, &lookup);
if (nbr) {
return nbr;
}
}
@@ -170,7 +180,7 @@ void eigrp_nbr_delete(struct eigrp_neighbor *nbr)
EVENT_OFF(nbr->t_holddown);
if (nbr->ei)
listnode_delete(nbr->ei->nbrs, nbr);
eigrp_nbr_hash_del(&nbr->ei->nbr_hash_head, nbr);
XFREE(MTYPE_EIGRP_NEIGHBOR, nbr);
}
@@ -278,18 +288,12 @@ void eigrp_nbr_state_update(struct eigrp_neighbor *nbr)
int eigrp_nbr_count_get(struct eigrp *eigrp)
{
struct eigrp_interface *iface;
struct listnode *node, *node2, *nnode2;
struct eigrp_neighbor *nbr;
uint32_t counter;
counter = 0;
for (ALL_LIST_ELEMENTS_RO(eigrp->eiflist, node, iface)) {
for (ALL_LIST_ELEMENTS(iface->nbrs, node2, nnode2, nbr)) {
if (nbr->state == EIGRP_NEIGHBOR_UP) {
counter++;
}
}
}
frr_each (eigrp_interface_hash, &eigrp->eifs, iface)
counter += eigrp_nbr_hash_count(&iface->nbr_hash_head);
return counter;
}
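The `frr_each()` calls throughout these hunks come from FRR's typesafe container macros, which expand to an ordinary for-loop over the container. As a shape-only illustration of such an iteration macro (a minimal singly linked list, not FRR's implementation; all names here are hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical item type; frr_each() in FRR expands to a
 * comparable for-loop over its typesafe containers. */
struct item {
	int value;
	struct item *next;
};

/* iterate 'it' over every element reachable from 'head' */
#define each_item(head, it) \
	for ((it) = (head); (it) != NULL; (it) = (it)->next)
```

With three chained items holding 1, 2, and 3, summing `it->value` inside `each_item` visits each element exactly once and yields 6.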


@@ -26,8 +26,6 @@ extern void eigrp_nbr_delete(struct eigrp_neighbor *neigh);
extern void holddown_timer_expired(struct event *thread);
extern int eigrp_neighborship_check(struct eigrp_neighbor *neigh,
struct TLV_Parameter_Type *tlv);
extern void eigrp_nbr_state_update(struct eigrp_neighbor *neigh);
extern void eigrp_nbr_state_set(struct eigrp_neighbor *neigh, uint8_t state);
extern uint8_t eigrp_nbr_state_get(struct eigrp_neighbor *neigh);
@@ -41,4 +39,9 @@ extern void eigrp_nbr_hard_restart(struct eigrp_neighbor *nbr, struct vty *vty);
 extern int eigrp_nbr_split_horizon_check(struct eigrp_route_descriptor *ne,
 					 struct eigrp_interface *ei);
+extern int eigrp_nbr_comp(const struct eigrp_neighbor *a, const struct eigrp_neighbor *b);
+extern uint32_t eigrp_nbr_hash(const struct eigrp_neighbor *a);
+DECLARE_HASH(eigrp_nbr_hash, struct eigrp_neighbor, nbr_hash_item, eigrp_nbr_comp, eigrp_nbr_hash);
 #endif /* _ZEBRA_EIGRP_NEIGHBOR_H */


@@ -219,6 +219,21 @@ int eigrp_network_set(struct eigrp *eigrp, struct prefix *p)
 	return 1;
 }
+static void eigrp_network_delete_all(struct eigrp *eigrp, struct route_table *table)
+{
+	struct route_node *rn;
+	for (rn = route_top(table); rn; rn = route_next(rn)) {
+		prefix_free((struct prefix **)&rn->info);
+	}
+}
+
+void eigrp_network_free(struct eigrp *eigrp, struct route_table *table)
+{
+	eigrp_network_delete_all(eigrp, table);
+	route_table_finish(table);
+}
 /* Check whether interface matches given network
  * returns: 1, true. 0, false
  */
@@ -262,7 +277,6 @@ static void eigrp_network_run_interface(struct eigrp *eigrp, struct prefix *p,
 void eigrp_if_update(struct interface *ifp)
 {
-	struct listnode *node, *nnode;
 	struct route_node *rn;
 	struct eigrp *eigrp;
@@ -270,7 +284,7 @@ void eigrp_if_update(struct interface *ifp)
 	 * In the event there are multiple eigrp autonymnous systems running,
 	 * we need to check eac one and add the interface as approperate
 	 */
-	for (ALL_LIST_ELEMENTS(eigrp_om->eigrp, node, nnode, eigrp)) {
+	frr_each (eigrp_master_hash, &eigrp_om->eigrp, eigrp) {
 		if (ifp->vrf->vrf_id != eigrp->vrf_id)
 			continue;
@@ -289,7 +303,6 @@ void eigrp_if_update(struct interface *ifp)
 int eigrp_network_unset(struct eigrp *eigrp, struct prefix *p)
 {
 	struct route_node *rn;
-	struct listnode *node, *nnode;
 	struct eigrp_interface *ei;
 	struct prefix *pref;
@@ -307,7 +320,7 @@ int eigrp_network_unset(struct eigrp *eigrp, struct prefix *p)
 	route_unlock_node(rn); /* initial reference */
 	/* Find interfaces that not configured already. */
-	for (ALL_LIST_ELEMENTS(eigrp->eiflist, node, nnode, ei)) {
+	frr_each (eigrp_interface_hash, &eigrp->eifs, ei) {
 		bool found = false;
 		for (rn = route_top(eigrp->networks); rn; rn = route_next(rn)) {


@@ -19,6 +19,7 @@ extern int eigrp_sock_init(struct vrf *vrf);
 extern int eigrp_if_ipmulticast(struct eigrp *, struct prefix *, unsigned int);
 extern int eigrp_network_set(struct eigrp *eigrp, struct prefix *p);
 extern int eigrp_network_unset(struct eigrp *eigrp, struct prefix *p);
+extern void eigrp_network_free(struct eigrp *eigrp, struct route_table *table);
 extern void eigrp_hello_timer(struct event *thread);
 extern void eigrp_if_update(struct interface *);


@@ -43,13 +43,11 @@ static void redistribute_get_metrics(const struct lyd_node *dnode,
 	em->reliability = yang_dnode_get_uint32(dnode, "reliability");
 }
-static struct eigrp_interface *eigrp_interface_lookup(const struct eigrp *eigrp,
-						      const char *ifname)
+static struct eigrp_interface *eigrp_interface_lookup(struct eigrp *eigrp, const char *ifname)
 {
 	struct eigrp_interface *eif;
-	struct listnode *ln;
-	for (ALL_LIST_ELEMENTS_RO(eigrp->eiflist, ln, eif)) {
+	frr_each (eigrp_interface_hash, &eigrp->eifs, eif) {
 		if (strcmp(ifname, eif->ifp->name))
 			continue;
@@ -741,7 +739,7 @@ static int eigrpd_instance_redistribute_create(struct nb_cb_create_args *args)
 	else
 		vrfid = VRF_DEFAULT;
-	if (vrf_bitmap_check(&zclient->redist[AFI_IP][proto], vrfid))
+	if (vrf_bitmap_check(&eigrp_zclient->redist[AFI_IP][proto], vrfid))
 		return NB_ERR_INCONSISTENCY;
 	break;
 case NB_EV_PREPARE:


@@ -532,8 +532,8 @@ void eigrp_read(struct event *thread)
 		return;
 	/* Self-originated packet should be discarded silently. */
-	if (eigrp_if_lookup_by_local_addr(eigrp, NULL, iph->ip_src)
-	    || (IPV4_ADDR_SAME(&srcaddr, &ei->address.u.prefix4))) {
+	if (eigrp_if_lookup_by_local_addr(eigrp, ifp, iph->ip_src) ||
+	    (IPV4_ADDR_SAME(&srcaddr, &ei->address.u.prefix4))) {
 		if (IS_DEBUG_EIGRP_TRANSMIT(0, RECV))
 			zlog_debug(
 				"eigrp_read[%pI4]: Dropping self-originated packet",
@@ -1129,7 +1129,7 @@ uint16_t eigrp_add_internalTLV_to_stream(struct stream *s,
 	uint16_t length;
 	stream_putw(s, EIGRP_TLV_IPv4_INT);
-	switch (pe->destination->prefixlen) {
+	switch (pe->destination.prefixlen) {
 	case 0:
 	case 1:
 	case 2:
@@ -1176,8 +1176,8 @@ uint16_t eigrp_add_internalTLV_to_stream(struct stream *s,
 		stream_putw(s, length);
 		break;
 	default:
-		flog_err(EC_LIB_DEVELOPMENT, "%s: Unexpected prefix length: %d",
-			 __func__, pe->destination->prefixlen);
+		flog_err(EC_LIB_DEVELOPMENT, "%s: Unexpected prefix length: %d", __func__,
+			 pe->destination.prefixlen);
 		return 0;
 	}
 	stream_putl(s, 0x00000000);
@@ -1194,15 +1194,15 @@ uint16_t eigrp_add_internalTLV_to_stream(struct stream *s,
 	stream_putc(s, pe->reported_metric.tag);
 	stream_putc(s, pe->reported_metric.flags);
-	stream_putc(s, pe->destination->prefixlen);
+	stream_putc(s, pe->destination.prefixlen);
-	stream_putc(s, (ntohl(pe->destination->u.prefix4.s_addr) >> 24) & 0xFF);
-	if (pe->destination->prefixlen > 8)
-		stream_putc(s, (ntohl(pe->destination->u.prefix4.s_addr) >> 16) & 0xFF);
-	if (pe->destination->prefixlen > 16)
-		stream_putc(s, (ntohl(pe->destination->u.prefix4.s_addr) >> 8) & 0xFF);
-	if (pe->destination->prefixlen > 24)
-		stream_putc(s, ntohl(pe->destination->u.prefix4.s_addr) & 0xFF);
+	stream_putc(s, (ntohl(pe->destination.u.prefix4.s_addr) >> 24) & 0xFF);
+	if (pe->destination.prefixlen > 8)
+		stream_putc(s, (ntohl(pe->destination.u.prefix4.s_addr) >> 16) & 0xFF);
+	if (pe->destination.prefixlen > 16)
+		stream_putc(s, (ntohl(pe->destination.u.prefix4.s_addr) >> 8) & 0xFF);
+	if (pe->destination.prefixlen > 24)
+		stream_putc(s, ntohl(pe->destination.u.prefix4.s_addr) & 0xFF);
 	return length;
 }
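The TLV code above emits the prefix length byte followed by only as many address octets as that length requires (one to four), which is how EIGRP keeps internal-route TLVs compact. A standalone sketch of the same variable-length encoding into a plain byte buffer (the `pack_prefix` helper is invented for illustration, not an FRR API; the address is taken in host byte order so no `ntohl` is needed):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Pack prefixlen plus the minimum number of address octets,
 * mirroring the stream_putc() sequence in the TLV code above.
 * addr is in host byte order. Returns bytes written. */
static size_t pack_prefix(uint8_t *out, uint32_t addr, uint8_t prefixlen)
{
	size_t n = 0;

	out[n++] = prefixlen;
	out[n++] = (addr >> 24) & 0xFF;	/* first octet is always sent */
	if (prefixlen > 8)
		out[n++] = (addr >> 16) & 0xFF;
	if (prefixlen > 16)
		out[n++] = (addr >> 8) & 0xFF;
	if (prefixlen > 24)
		out[n++] = addr & 0xFF;
	return n;
}
```

For example, 10.1.2.0/24 packs to four bytes (24, 0x0A, 0x01, 0x02), while 10.0.0.0/8 packs to just two (8, 0x0A).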


@@ -41,7 +41,7 @@
 uint32_t eigrp_query_send_all(struct eigrp *eigrp)
 {
 	struct eigrp_interface *iface;
-	struct listnode *node, *node2, *nnode2;
+	struct listnode *node2, *nnode2;
 	struct eigrp_prefix_descriptor *pe;
 	uint32_t counter;
@@ -51,7 +51,7 @@ uint32_t eigrp_query_send_all(struct eigrp *eigrp)
 	}
 	counter = 0;
-	for (ALL_LIST_ELEMENTS_RO(eigrp->eiflist, node, iface)) {
+	frr_each (eigrp_interface_hash, &eigrp->eifs, iface) {
 		eigrp_send_query(iface);
 		counter++;
 	}
@@ -146,7 +146,7 @@ void eigrp_send_query(struct eigrp_interface *ei)
 {
 	struct eigrp_packet *ep = NULL;
 	uint16_t length = EIGRP_HEADER_LEN;
-	struct listnode *node, *nnode, *node2, *nnode2;
+	struct listnode *node, *nnode;
 	struct eigrp_neighbor *nbr;
 	struct eigrp_prefix_descriptor *pe;
 	bool has_tlv = false;
@@ -177,7 +177,7 @@ void eigrp_send_query(struct eigrp_interface *ei)
 		length += eigrp_add_internalTLV_to_stream(ep->s, pe);
 		has_tlv = true;
-		for (ALL_LIST_ELEMENTS(ei->nbrs, node2, nnode2, nbr)) {
+		frr_each (eigrp_nbr_hash, &ei->nbr_hash_head, nbr) {
 			if (nbr->state == EIGRP_NEIGHBOR_UP)
 				listnode_add(pe->rij, nbr);
 		}
@@ -197,7 +197,7 @@ void eigrp_send_query(struct eigrp_interface *ei)
 	ep->sequence_number = ei->eigrp->sequence_number;
 	ei->eigrp->sequence_number++;
-	for (ALL_LIST_ELEMENTS(ei->nbrs, node2, nnode2, nbr)) {
+	frr_each (eigrp_nbr_hash, &ei->nbr_hash_head, nbr) {
 		struct eigrp_packet *dup;
 		if (nbr->state != EIGRP_NEIGHBOR_UP)
@@ -237,7 +237,7 @@ void eigrp_send_query(struct eigrp_interface *ei)
 	ep->sequence_number = ei->eigrp->sequence_number;
 	ei->eigrp->sequence_number++;
-	for (ALL_LIST_ELEMENTS(ei->nbrs, node2, nnode2, nbr)) {
+	frr_each (eigrp_nbr_hash, &ei->nbr_hash_head, nbr) {
 		struct eigrp_packet *dup;
 		if (nbr->state != EIGRP_NEIGHBOR_UP)


@@ -61,8 +61,7 @@ void eigrp_send_reply(struct eigrp_neighbor *nbr,
 			       sizeof(struct eigrp_prefix_descriptor));
 	memcpy(pe2, pe, sizeof(struct eigrp_prefix_descriptor));
-	if (eigrp_update_prefix_apply(eigrp, ei, EIGRP_FILTER_OUT,
-				      pe2->destination)) {
+	if (eigrp_update_prefix_apply(eigrp, ei, EIGRP_FILTER_OUT, &pe2->destination)) {
 		zlog_info("REPLY SEND: Setting Metric to max");
 		pe2->reported_metric.delay = EIGRP_MAX_METRIC;
 	}
