/* BGP routing information
 * Copyright (C) 1996, 97, 98, 99 Kunihiro Ishiguro
 * Copyright (C) 2016 Job Snijders <job@instituut.net>
 *
 * This file is part of GNU Zebra.
 *
 * GNU Zebra is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation; either version 2, or (at your option) any
 * later version.
 *
 * GNU Zebra is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; see the file COPYING; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
 */

#include <zebra.h>
#include <math.h>

#include "printfrr.h"
#include "prefix.h"
#include "linklist.h"
#include "memory.h"
#include "command.h"
#include "stream.h"
#include "filter.h"
#include "log.h"
#include "routemap.h"
#include "buffer.h"
#include "sockunion.h"
#include "plist.h"
#include "thread.h"
#include "workqueue.h"
#include "queue.h"
#include "lib/json.h"
#include "lib_errors.h"

#include "bgpd/bgpd.h"
#include "bgpd/bgp_table.h"
#include "bgpd/bgp_route.h"
#include "bgpd/bgp_attr.h"
#include "bgpd/bgp_debug.h"
#include "bgpd/bgp_errors.h"
#include "bgpd/bgp_aspath.h"
#include "bgpd/bgp_regex.h"
#include "bgpd/bgp_community.h"
#include "bgpd/bgp_ecommunity.h"
#include "bgpd/bgp_lcommunity.h"
#include "bgpd/bgp_clist.h"
#include "bgpd/bgp_packet.h"
#include "bgpd/bgp_filter.h"
#include "bgpd/bgp_fsm.h"
#include "bgpd/bgp_mplsvpn.h"
#include "bgpd/bgp_nexthop.h"
#include "bgpd/bgp_damp.h"
#include "bgpd/bgp_advertise.h"
#include "bgpd/bgp_zebra.h"
#include "bgpd/bgp_vty.h"
#include "bgpd/bgp_mpath.h"
#include "bgpd/bgp_nht.h"
#include "bgpd/bgp_updgrp.h"
#include "bgpd/bgp_label.h"
|
bgpd: Re-use TX Addpath IDs where possible
The motivation for this patch is to address a concerning behavior of
tx-addpath-bestpath-per-AS. Prior to this patch, all paths' TX ID was
pre-determined as the path was received from a peer. However, this meant
that any time the path selected as best from an AS changed, bgpd had no
choice but to withdraw the previous best path, and advertise the new
best-path under a new TX ID. This could cause significant network
disruption, especially for the subset of prefixes coming from only one
AS that were also communicated over a bestpath-per-AS session.
The patch's general approach is best illustrated by
txaddpath_update_ids. After a bestpath run (required for best-per-AS to
know what will and will not be sent as addpaths) ID numbers will be
stripped from paths that no longer need to be sent, and held in a pool.
Then, paths that will be sent as addpaths and do not already have ID
numbers will allocate new ID numbers, pulling first from that pool.
Finally, anything left in the pool will be returned to the allocator.
In order for this to work, ID numbers had to be split by strategy. The
tx-addpath-All strategy would keep every ID number "in use" constantly,
preventing IDs from being transferred to different paths. Rather than
create two variables for ID, this patch create a more generic array that
will easily enable more addpath strategies to be implemented. The
previously described ID manipulations will happen per addpath strategy,
and will only be run for strategies that are enabled on at least one
peer.
Finally, the ID numbers are allocated from an allocator that tracks per
AFI/SAFI/Addpath Strategy which IDs are in use. Though it would be very
improbable, there was the possibility with the free-running counter
approach for rollover to cause two paths on the same prefix to get
assigned the same TX ID. As remote as the possibility is, we prefer to
not leave it to chance.
This ID re-use method is not perfect. In some cases you could still get
withdraw-then-add behaviors where not strictly necessary. In the case of
bestpath-per-AS this requires one AS to advertise a prefix for the first
time, then a second AS withdraws that prefix, all within the space of an
already pending MRAI timer. In those situations a withdraw-then-add is
more forgivable, and fixing it would probably require a much more
significant effort, as IDs would need to be moved to ADVs instead of
paths.
Signed-off-by Mitchell Skiba <mskiba@amazon.com>
2018-05-10 01:10:02 +02:00
|
|
|
#include "bgpd/bgp_addpath.h"
#include "bgpd/bgp_mac.h"
#if ENABLE_BGP_VNC
#include "bgpd/rfapi/rfapi_backend.h"
#include "bgpd/rfapi/vnc_import_bgp.h"
#include "bgpd/rfapi/vnc_export_bgp.h"
#endif
#include "bgpd/bgp_encap_types.h"
#include "bgpd/bgp_encap_tlv.h"
#include "bgpd/bgp_evpn.h"
#include "bgpd/bgp_evpn_vty.h"
#include "bgpd/bgp_flowspec.h"
#include "bgpd/bgp_flowspec_util.h"
#include "bgpd/bgp_pbr.h"
#ifndef VTYSH_EXTRACT_PL
#include "bgpd/bgp_route_clippy.c"
#endif
/* Extern from bgp_dump.c */
extern const char *bgp_origin_str[];
extern const char *bgp_origin_long_str[];
/* PMSI strings. */
#define PMSI_TNLTYPE_STR_NO_INFO "No info"
#define PMSI_TNLTYPE_STR_DEFAULT PMSI_TNLTYPE_STR_NO_INFO
static const struct message bgp_pmsi_tnltype_str[] = {
	{PMSI_TNLTYPE_NO_INFO, PMSI_TNLTYPE_STR_NO_INFO},
	{PMSI_TNLTYPE_RSVP_TE_P2MP, "RSVP-TE P2MP"},
	{PMSI_TNLTYPE_MLDP_P2MP, "mLDP P2MP"},
	{PMSI_TNLTYPE_PIM_SSM, "PIM-SSM"},
	{PMSI_TNLTYPE_PIM_SM, "PIM-SM"},
	{PMSI_TNLTYPE_PIM_BIDIR, "PIM-BIDIR"},
	{PMSI_TNLTYPE_INGR_REPL, "Ingress Replication"},
	{PMSI_TNLTYPE_MLDP_MP2MP, "mLDP MP2MP"},
	{0}
};
#define VRFID_NONE_STR "-"
DEFINE_HOOK(bgp_process,
	    (struct bgp *bgp, afi_t afi, safi_t safi,
	     struct bgp_node *bn, struct peer *peer, bool withdraw),
	    (bgp, afi, safi, bn, peer, withdraw))
struct bgp_node *bgp_afi_node_get(struct bgp_table *table, afi_t afi,
				  safi_t safi, struct prefix *p,
				  struct prefix_rd *prd)
{
	struct bgp_node *rn;
	struct bgp_node *prn = NULL;

	assert(table);
	if (!table)
		return NULL;

	if ((safi == SAFI_MPLS_VPN) || (safi == SAFI_ENCAP)
	    || (safi == SAFI_EVPN)) {
		prn = bgp_node_get(table, (struct prefix *)prd);

		if (!bgp_node_has_bgp_path_info_data(prn))
			bgp_node_set_bgp_table_info(
				prn, bgp_table_init(table->bgp, afi, safi));
		else
			bgp_unlock_node(prn);
		table = bgp_node_get_bgp_table_info(prn);
	}

	rn = bgp_node_get(table, p);

	if ((safi == SAFI_MPLS_VPN) || (safi == SAFI_ENCAP)
	    || (safi == SAFI_EVPN))
		rn->prn = prn;

	return rn;
}
struct bgp_node *bgp_afi_node_lookup(struct bgp_table *table, afi_t afi,
				     safi_t safi, struct prefix *p,
				     struct prefix_rd *prd)
{
	struct bgp_node *rn;
	struct bgp_node *prn = NULL;

	if (!table)
		return NULL;

	if ((safi == SAFI_MPLS_VPN) || (safi == SAFI_ENCAP)
	    || (safi == SAFI_EVPN)) {
		prn = bgp_node_lookup(table, (struct prefix *)prd);
		if (!prn)
			return NULL;

		if (!bgp_node_has_bgp_path_info_data(prn)) {
			bgp_unlock_node(prn);
			return NULL;
		}

		table = bgp_node_get_bgp_table_info(prn);
	}

	rn = bgp_node_lookup(table, p);

	return rn;
}
/* Allocate bgp_path_info_extra */
static struct bgp_path_info_extra *bgp_path_info_extra_new(void)
{
	struct bgp_path_info_extra *new;
	new = XCALLOC(MTYPE_BGP_ROUTE_EXTRA,
		      sizeof(struct bgp_path_info_extra));
	new->label[0] = MPLS_INVALID_LABEL;
	new->num_labels = 0;
	new->bgp_fs_pbr = NULL;
	new->bgp_fs_iprule = NULL;
	return new;
}
void bgp_path_info_extra_free(struct bgp_path_info_extra **extra)
{
	struct bgp_path_info_extra *e;

	if (!extra || !*extra)
		return;

	e = *extra;
	if (e->damp_info)
		bgp_damp_info_free(e->damp_info, 0, e->damp_info->afi,
				   e->damp_info->safi);

	e->damp_info = NULL;
	if (e->parent) {
		struct bgp_path_info *bpi = (struct bgp_path_info *)e->parent;

		if (bpi->net) {
			/* FIXME: since multiple e may have the same e->parent
			 * and e->parent->net is holding a refcount for each
			 * of them, we need to do some fudging here.
			 *
			 * WARNING: if bpi->net->lock drops to 0, bpi may be
			 * freed as well (because bpi->net was holding the
			 * last reference to bpi) => write after free!
			 */
			unsigned refcount;

			bpi = bgp_path_info_lock(bpi);
			refcount = bpi->net->lock - 1;
			bgp_unlock_node((struct bgp_node *)bpi->net);
			if (!refcount)
				bpi->net = NULL;
			bgp_path_info_unlock(bpi);
		}
		bgp_path_info_unlock(e->parent);
		e->parent = NULL;
	}

	if (e->bgp_orig)
		bgp_unlock(e->bgp_orig);

	if ((*extra)->bgp_fs_iprule)
		list_delete(&((*extra)->bgp_fs_iprule));
	if ((*extra)->bgp_fs_pbr)
		list_delete(&((*extra)->bgp_fs_pbr));
	XFREE(MTYPE_BGP_ROUTE_EXTRA, *extra);

	*extra = NULL;
}

/* Get bgp_path_info extra information for the given bgp_path_info, lazy
 * allocated if required.
 */
struct bgp_path_info_extra *bgp_path_info_extra_get(struct bgp_path_info *pi)
{
	if (!pi->extra)
		pi->extra = bgp_path_info_extra_new();
	return pi->extra;
}

/* Free bgp route information. */
static void bgp_path_info_free(struct bgp_path_info *path)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2019-10-16 16:25:19 +02:00
|
|
|
bgp_attr_unintern(&path->attr);
|
2015-05-20 02:40:34 +02:00
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
bgp_unlink_nexthop(path);
|
|
|
|
bgp_path_info_extra_free(&path->extra);
|
|
|
|
bgp_path_info_mpath_free(&path->mpath);
|
bgpd: fix null pointer dereference bug
If path->net is NULL in the bgp_path_info_free() function, then
bgpd would crash in bgp_addpath_free_info_data() with the following
backtrace:
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007ff7b267a42a in __GI_abort () at abort.c:89
#2 0x00007ff7b39c1ca0 in core_handler (signo=11, siginfo=0x7ffff66414f0, context=<optimized out>) at lib/sigevent.c:249
#3 <signal handler called>
#4 idalloc_free_to_pool (pool_ptr=pool_ptr@entry=0x0, id=3) at lib/id_alloc.c:368
#5 0x0000560096246688 in bgp_addpath_free_info_data (d=d@entry=0x560098665468, nd=0x0) at bgpd/bgp_addpath.c:100
#6 0x00005600961bb522 in bgp_path_info_free (path=0x560098665400) at bgpd/bgp_route.c:252
#7 bgp_path_info_unlock (path=0x560098665400) at bgpd/bgp_route.c:276
#8 0x00005600961bb719 in bgp_path_info_reap (rn=rn@entry=0x5600986b2110, pi=pi@entry=0x560098665400) at bgpd/bgp_route.c:320
#9 0x00005600961bf4db in bgp_process_main_one (safi=SAFI_MPLS_VPN, afi=AFI_IP, rn=0x5600986b2110, bgp=0x560098587320) at bgpd/bgp_route.c:2476
#10 bgp_process_wq (wq=<optimized out>, data=0x56009869b8f0) at bgpd/bgp_route.c:2503
#11 0x00007ff7b39d5fcc in work_queue_run (thread=0x7ffff6641e10) at lib/workqueue.c:294
#12 0x00007ff7b39ce3b1 in thread_call (thread=thread@entry=0x7ffff6641e10) at lib/thread.c:1606
#13 0x00007ff7b39a3538 in frr_run (master=0x5600980795b0) at lib/libfrr.c:1011
#14 0x000056009618a5a3 in main (argc=3, argv=0x7ffff6642078) at bgpd/bgp_main.c:481
Add a null-check protection to fix this problem.
Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
2019-02-20 19:37:29 +01:00
|
|
|
if (path->net)
|
|
|
|
bgp_addpath_free_info_data(&path->tx_addpath,
|
|
|
|
&path->net->tx_addpath);
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
peer_unlock(path->peer); /* bgp_path_info peer reference */
|
2005-06-01 Paul Jakma <paul.jakma@sun.com>
* bgpd/(general) refcount struct peer and bgp_info, hence allowing us
add work_queues for bgp_process.
* bgpd/bgp_route.h: (struct bgp_info) Add 'lock' field for refcount.
Add bgp_info_{lock,unlock} helper functions.
Add bgp_info_{add,delete} helpers, to remove need for
users managing locking/freeing of bgp_info and bgp_node's.
* bgpd/bgp_table.h: (struct bgp_node) Add a flags field, and
BGP_NODE_PROCESS_SCHEDULED to merge redundant processing of
nodes.
* bgpd/bgp_fsm.h: Make the ON/OFF/ADD/REMOVE macros lock and unlock
peer reference as appropriate.
* bgpd/bgp_damp.c: Remove its internal prototypes for
bgp_info_delete/free. Just use bgp_info_delete.
* bgpd/bgpd.h: (struct bgp_master) Add work_queue pointers.
(struct peer) Add reference count 'lock'
(peer_lock,peer_unlock) New helpers to take/release reference
on struct peer.
* bgpd/bgp_advertise.c: (general) Add peer and bgp_info refcounting
and balance how references are taken and released.
(bgp_advertise_free) release bgp_info reference, if appropriate
(bgp_adj_out_free) unlock peer
(bgp_advertise_clean) leave the adv references alone, or else
call bgp_advertise_free cant unlock them.
(bgp_adj_out_set) lock the peer on new adj's, leave the reference
alone otherwise. lock the new bgp_info reference.
(bgp_adj_in_set) lock the peer reference
(bgp_adj_in_remove) and unlock it here
(bgp_sync_delete) make hash_free on peer conditional, just in
case.
* bgpd/bgp_fsm.c: (general) document that the timers depend on
bgp_event to release a peer reference.
(bgp_fsm_change_status) moved up the file, unchanged.
(bgp_stop) Decrement peer lock as many times as cancel_event
canceled - shouldnt be needed but just in case.
stream_fifo_clean of obuf made conditional, just in case.
(bgp_event) always unlock the peer, regardless of return value
of bgp_fsm_change_status.
* bgpd/bgp_packet.c: (general) change several bgp_stop's to BGP_EVENT's.
(bgp_read) Add a mysterious extra peer_unlock for ACCEPT_PEERs
along with a comment on it.
* bgpd/bgp_route.c: (general) Add refcounting of bgp_info, cleanup
some of the resource management around bgp_info. Refcount peer.
Add workqueues for bgp_process and clear_table.
(bgp_info_new) make static
(bgp_info_free) Ditto, and unlock the peer reference.
(bgp_info_lock,bgp_info_unlock) new exported functions
(bgp_info_add) Add a bgp_info to a bgp_node in correct fashion,
taking care of reference counts.
(bgp_info_delete) do the opposite of bgp_info_add.
(bgp_process_rsclient) Converted into a work_queue work function.
(bgp_process_main) ditto.
(bgp_processq_del) process work queue item deconstructor
(bgp_process_queue_init) process work queue init
(bgp_process) call init function if required, set up queue item
and add to queue, rather than calling process functions directly.
(bgp_rib_remove) let bgp_info_delete manage bgp_info refcounts
(bgp_rib_withdraw) ditto
(bgp_update_rsclient) let bgp_info_add manage refcounts
(bgp_update_main) ditto
(bgp_clear_route_node) clear_node_queue work function, does
per-node aspects of what bgp_clear_route_table did previously
(bgp_clear_node_queue_del) clear_node_queue item delete function
(bgp_clear_node_complete) clear_node_queue completion function,
it unplugs the process queues, which have to be blocked while
clear_node_queue is being processed to prevent a race.
(bgp_clear_node_queue_init) init function for clear_node_queue
work queues
(bgp_clear_route_table) Sets up items onto a workqueue now, rather
than clearing each node directly. Plugs both process queues to
avoid potential race.
(bgp_static_withdraw_rsclient) let bgp_info_{add,delete} manage
bgp_info refcounts.
(bgp_static_update_rsclient) ditto
(bgp_static_update_main) ditto
(bgp_static_update_vpnv4) ditto, remove unneeded cast.
(bgp_static_withdraw) see bgp_static_withdraw_rsclient
(bgp_static_withdraw_vpnv4) ditto
(bgp_aggregate_{route,add,delete}) ditto
(bgp_redistribute_{add,delete,withdraw}) ditto
* bgpd/bgp_vty.c: (peer_rsclient_set_vty) lock rsclient list peer
reference
(peer_rsclient_unset_vty) ditto, but unlock same reference
* bgpd/bgpd.c: (peer_free) handle frees of info to be kept for lifetime
of struct peer.
(peer_lock,peer_unlock) peer refcount helpers
(peer_new) add initial refcounts
(peer_create,peer_create_accept) lock peer as appropriate
(peer_delete) unlock as appropriate, move some frees out to
peer_free.
(peer_group_bind,peer_group_unbind) peer refcounting as
appropriate.
(bgp_create) check CALLOC return value.
(bgp_terminate) free workqueues too.
* lib/memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
MTYPE_BGP_CLEAR_NODE_QUEUE
	XFREE(MTYPE_BGP_ROUTE, path);
}

struct bgp_path_info *bgp_path_info_lock(struct bgp_path_info *path)
{
	path->lock++;

	return path;
}
struct bgp_path_info *bgp_path_info_unlock(struct bgp_path_info *path)
|
2005-06-01 Paul Jakma <paul.jakma@sun.com>
* bgpd/(general) refcount struct peer and bgp_info, hence allowing us
add work_queues for bgp_process.
* bgpd/bgp_route.h: (struct bgp_info) Add 'lock' field for refcount.
Add bgp_info_{lock,unlock} helper functions.
Add bgp_info_{add,delete} helpers, to remove need for
users managing locking/freeing of bgp_info and bgp_node's.
* bgpd/bgp_table.h: (struct bgp_node) Add a flags field, and
BGP_NODE_PROCESS_SCHEDULED to merge redundant processing of
nodes.
* bgpd/bgp_fsm.h: Make the ON/OFF/ADD/REMOVE macros lock and unlock
peer reference as appropriate.
* bgpd/bgp_damp.c: Remove its internal prototypes for
bgp_info_delete/free. Just use bgp_info_delete.
* bgpd/bgpd.h: (struct bgp_master) Add work_queue pointers.
(struct peer) Add reference count 'lock'
(peer_lock,peer_unlock) New helpers to take/release reference
on struct peer.
* bgpd/bgp_advertise.c: (general) Add peer and bgp_info refcounting
and balance how references are taken and released.
(bgp_advertise_free) release bgp_info reference, if appropriate
(bgp_adj_out_free) unlock peer
(bgp_advertise_clean) leave the adv references alone, or else
call bgp_advertise_free cant unlock them.
(bgp_adj_out_set) lock the peer on new adj's, leave the reference
alone otherwise. lock the new bgp_info reference.
(bgp_adj_in_set) lock the peer reference
(bgp_adj_in_remove) and unlock it here
(bgp_sync_delete) make hash_free on peer conditional, just in
case.
* bgpd/bgp_fsm.c: (general) document that the timers depend on
bgp_event to release a peer reference.
(bgp_fsm_change_status) moved up the file, unchanged.
(bgp_stop) Decrement peer lock as many times as cancel_event
canceled - shouldnt be needed but just in case.
stream_fifo_clean of obuf made conditional, just in case.
(bgp_event) always unlock the peer, regardless of return value
of bgp_fsm_change_status.
* bgpd/bgp_packet.c: (general) change several bgp_stop's to BGP_EVENT's.
(bgp_read) Add a mysterious extra peer_unlock for ACCEPT_PEERs
along with a comment on it.
* bgpd/bgp_route.c: (general) Add refcounting of bgp_info, cleanup
some of the resource management around bgp_info. Refcount peer.
Add workqueues for bgp_process and clear_table.
(bgp_info_new) make static
(bgp_info_free) Ditto, and unlock the peer reference.
(bgp_info_lock,bgp_info_unlock) new exported functions
(bgp_info_add) Add a bgp_info to a bgp_node in correct fashion,
taking care of reference counts.
(bgp_info_delete) do the opposite of bgp_info_add.
(bgp_process_rsclient) Converted into a work_queue work function.
(bgp_process_main) ditto.
(bgp_processq_del) process work queue item deconstructor
(bgp_process_queue_init) process work queue init
(bgp_process) call init function if required, set up queue item
and add to queue, rather than calling process functions directly.
(bgp_rib_remove) let bgp_info_delete manage bgp_info refcounts
(bgp_rib_withdraw) ditto
(bgp_update_rsclient) let bgp_info_add manage refcounts
(bgp_update_main) ditto
(bgp_clear_route_node) clear_node_queue work function, does
per-node aspects of what bgp_clear_route_table did previously
(bgp_clear_node_queue_del) clear_node_queue item delete function
(bgp_clear_node_complete) clear_node_queue completion function,
it unplugs the process queues, which have to be blocked while
clear_node_queue is being processed to prevent a race.
(bgp_clear_node_queue_init) init function for clear_node_queue
work queues
(bgp_clear_route_table) Sets up items onto a workqueue now, rather
than clearing each node directly. Plugs both process queues to
avoid potential race.
(bgp_static_withdraw_rsclient) let bgp_info_{add,delete} manage
bgp_info refcounts.
(bgp_static_update_rsclient) ditto
(bgp_static_update_main) ditto
(bgp_static_update_vpnv4) ditto, remove unneeded cast.
(bgp_static_withdraw) see bgp_static_withdraw_rsclient
(bgp_static_withdraw_vpnv4) ditto
(bgp_aggregate_{route,add,delete}) ditto
(bgp_redistribute_{add,delete,withdraw}) ditto
* bgpd/bgp_vty.c: (peer_rsclient_set_vty) lock rsclient list peer
reference
(peer_rsclient_unset_vty) ditto, but unlock same reference
* bgpd/bgpd.c: (peer_free) handle frees of info to be kept for lifetime
of struct peer.
(peer_lock,peer_unlock) peer refcount helpers
(peer_new) add initial refcounts
(peer_create,peer_create_accept) lock peer as appropriate
(peer_delete) unlock as appropriate, move out some free's to
peer_free.
(peer_group_bind,peer_group_unbind) peer refcounting as
appropriate.
(bgp_create) check CALLOC return value.
(bgp_terminate) free workqueues too.
* lib/memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
MTYPE_BGP_CLEAR_NODE_QUEUE
2005-06-01 13:17:05 +02:00
|
|
|
{
|
2018-10-03 00:34:03 +02:00
|
|
|
assert(path && path->lock > 0);
|
|
|
|
path->lock--;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
if (path->lock == 0) {
|
2005-06-01 Paul Jakma <paul.jakma@sun.com>
* bgpd/(general) refcount struct peer and bgp_info, hence allowing us
add work_queues for bgp_process.
* bgpd/bgp_route.h: (struct bgp_info) Add 'lock' field for refcount.
Add bgp_info_{lock,unlock} helper functions.
Add bgp_info_{add,delete} helpers, to remove need for
users managing locking/freeing of bgp_info and bgp_node's.
* bgpd/bgp_table.h: (struct bgp_node) Add a flags field, and
BGP_NODE_PROCESS_SCHEDULED to merge redundant processing of
nodes.
* bgpd/bgp_fsm.h: Make the ON/OFF/ADD/REMOVE macros lock and unlock
peer reference as appropriate.
* bgpd/bgp_damp.c: Remove its internal prototypes for
bgp_info_delete/free. Just use bgp_info_delete.
* bgpd/bgpd.h: (struct bgp_master) Add work_queue pointers.
(struct peer) Add reference count 'lock'
(peer_lock,peer_unlock) New helpers to take/release reference
on struct peer.
* bgpd/bgp_advertise.c: (general) Add peer and bgp_info refcounting
and balance how references are taken and released.
(bgp_advertise_free) release bgp_info reference, if appropriate
(bgp_adj_out_free) unlock peer
(bgp_advertise_clean) leave the adv references alone, or else
call bgp_advertise_free cant unlock them.
(bgp_adj_out_set) lock the peer on new adj's, leave the reference
alone otherwise. lock the new bgp_info reference.
(bgp_adj_in_set) lock the peer reference
(bgp_adj_in_remove) and unlock it here
(bgp_sync_delete) make hash_free on peer conditional, just in
case.
* bgpd/bgp_fsm.c: (general) document that the timers depend on
bgp_event to release a peer reference.
(bgp_fsm_change_status) moved up the file, unchanged.
(bgp_stop) Decrement peer lock as many times as cancel_event
canceled - shouldnt be needed but just in case.
stream_fifo_clean of obuf made conditional, just in case.
(bgp_event) always unlock the peer, regardless of return value
of bgp_fsm_change_status.
* bgpd/bgp_packet.c: (general) change several bgp_stop's to BGP_EVENT's.
(bgp_read) Add a mysterious extra peer_unlock for ACCEPT_PEERs
along with a comment on it.
* bgpd/bgp_route.c: (general) Add refcounting of bgp_info, cleanup
some of the resource management around bgp_info. Refcount peer.
Add workqueues for bgp_process and clear_table.
(bgp_info_new) make static
(bgp_info_free) Ditto, and unlock the peer reference.
(bgp_info_lock,bgp_info_unlock) new exported functions
(bgp_info_add) Add a bgp_info to a bgp_node in correct fashion,
taking care of reference counts.
(bgp_info_delete) do the opposite of bgp_info_add.
(bgp_process_rsclient) Converted into a work_queue work function.
(bgp_process_main) ditto.
(bgp_processq_del) process work queue item deconstructor
(bgp_process_queue_init) process work queue init
(bgp_process) call init function if required, set up queue item
and add to queue, rather than calling process functions directly.
(bgp_rib_remove) let bgp_info_delete manage bgp_info refcounts
(bgp_rib_withdraw) ditto
(bgp_update_rsclient) let bgp_info_add manage refcounts
(bgp_update_main) ditto
(bgp_clear_route_node) clear_node_queue work function, does
per-node aspects of what bgp_clear_route_table did previously
(bgp_clear_node_queue_del) clear_node_queue item delete function
(bgp_clear_node_complete) clear_node_queue completion function,
it unplugs the process queues, which have to be blocked while
clear_node_queue is being processed to prevent a race.
(bgp_clear_node_queue_init) init function for clear_node_queue
work queues
(bgp_clear_route_table) Sets up items onto a workqueue now, rather
than clearing each node directly. Plugs both process queues to
avoid potential race.
(bgp_static_withdraw_rsclient) let bgp_info_{add,delete} manage
bgp_info refcounts.
(bgp_static_update_rsclient) ditto
(bgp_static_update_main) ditto
(bgp_static_update_vpnv4) ditto, remove unneeded cast.
(bgp_static_withdraw) see bgp_static_withdraw_rsclient
(bgp_static_withdraw_vpnv4) ditto
(bgp_aggregate_{route,add,delete}) ditto
(bgp_redistribute_{add,delete,withdraw}) ditto
* bgpd/bgp_vty.c: (peer_rsclient_set_vty) lock rsclient list peer
reference
(peer_rsclient_unset_vty) ditto, but unlock same reference
* bgpd/bgpd.c: (peer_free) handle frees of info to be kept for lifetime
of struct peer.
(peer_lock,peer_unlock) peer refcount helpers
(peer_new) add initial refcounts
(peer_create,peer_create_accept) lock peer as appropriate
(peer_delete) unlock as appropriate, move out some free's to
peer_free.
(peer_group_bind,peer_group_unbind) peer refcounting as
appropriate.
(bgp_create) check CALLOC return value.
(bgp_terminate) free workqueues too.
* lib/memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
MTYPE_BGP_CLEAR_NODE_QUEUE
2005-06-01 13:17:05 +02:00
|
|
|
#if 0
|
|
|
|
zlog_debug ("%s: unlocked and freeing", __func__);
|
|
|
|
zlog_backtrace (LOG_DEBUG);
|
|
|
|
#endif
|
2018-10-03 00:34:03 +02:00
|
|
|
bgp_path_info_free(path);
|
2017-07-17 14:03:14 +02:00
|
|
|
return NULL;
|
|
|
|
}
|
2005-06-01 Paul Jakma <paul.jakma@sun.com>
* bgpd/(general) refcount struct peer and bgp_info, hence allowing us
add work_queues for bgp_process.
* bgpd/bgp_route.h: (struct bgp_info) Add 'lock' field for refcount.
Add bgp_info_{lock,unlock} helper functions.
Add bgp_info_{add,delete} helpers, to remove need for
users managing locking/freeing of bgp_info and bgp_node's.
* bgpd/bgp_table.h: (struct bgp_node) Add a flags field, and
BGP_NODE_PROCESS_SCHEDULED to merge redundant processing of
nodes.
* bgpd/bgp_fsm.h: Make the ON/OFF/ADD/REMOVE macros lock and unlock
peer reference as appropriate.
* bgpd/bgp_damp.c: Remove its internal prototypes for
bgp_info_delete/free. Just use bgp_info_delete.
* bgpd/bgpd.h: (struct bgp_master) Add work_queue pointers.
(struct peer) Add reference count 'lock'
(peer_lock,peer_unlock) New helpers to take/release reference
on struct peer.
* bgpd/bgp_advertise.c: (general) Add peer and bgp_info refcounting
and balance how references are taken and released.
(bgp_advertise_free) release bgp_info reference, if appropriate
(bgp_adj_out_free) unlock peer
(bgp_advertise_clean) leave the adv references alone, or else
call bgp_advertise_free cant unlock them.
(bgp_adj_out_set) lock the peer on new adj's, leave the reference
alone otherwise. lock the new bgp_info reference.
(bgp_adj_in_set) lock the peer reference
(bgp_adj_in_remove) and unlock it here
(bgp_sync_delete) make hash_free on peer conditional, just in
case.
* bgpd/bgp_fsm.c: (general) document that the timers depend on
bgp_event to release a peer reference.
(bgp_fsm_change_status) moved up the file, unchanged.
(bgp_stop) Decrement peer lock as many times as cancel_event
canceled - shouldnt be needed but just in case.
stream_fifo_clean of obuf made conditional, just in case.
(bgp_event) always unlock the peer, regardless of return value
of bgp_fsm_change_status.
* bgpd/bgp_packet.c: (general) change several bgp_stop's to BGP_EVENT's.
(bgp_read) Add a mysterious extra peer_unlock for ACCEPT_PEERs
along with a comment on it.
* bgpd/bgp_route.c: (general) Add refcounting of bgp_info, cleanup
some of the resource management around bgp_info. Refcount peer.
Add workqueues for bgp_process and clear_table.
(bgp_info_new) make static
(bgp_info_free) Ditto, and unlock the peer reference.
(bgp_info_lock,bgp_info_unlock) new exported functions
(bgp_info_add) Add a bgp_info to a bgp_node in correct fashion,
taking care of reference counts.
(bgp_info_delete) do the opposite of bgp_info_add.
(bgp_process_rsclient) Converted into a work_queue work function.
(bgp_process_main) ditto.
(bgp_processq_del) process work queue item deconstructor
(bgp_process_queue_init) process work queue init
(bgp_process) call init function if required, set up queue item
and add to queue, rather than calling process functions directly.
(bgp_rib_remove) let bgp_info_delete manage bgp_info refcounts
(bgp_rib_withdraw) ditto
(bgp_update_rsclient) let bgp_info_add manage refcounts
(bgp_update_main) ditto
(bgp_clear_route_node) clear_node_queue work function, does
per-node aspects of what bgp_clear_route_table did previously
(bgp_clear_node_queue_del) clear_node_queue item delete function
(bgp_clear_node_complete) clear_node_queue completion function,
it unplugs the process queues, which have to be blocked while
clear_node_queue is being processed to prevent a race.
(bgp_clear_node_queue_init) init function for clear_node_queue
work queues
(bgp_clear_route_table) Sets up items onto a workqueue now, rather
than clearing each node directly. Plugs both process queues to
avoid potential race.
(bgp_static_withdraw_rsclient) let bgp_info_{add,delete} manage
bgp_info refcounts.
(bgp_static_update_rsclient) ditto
(bgp_static_update_main) ditto
(bgp_static_update_vpnv4) ditto, remove unneeded cast.
(bgp_static_withdraw) see bgp_static_withdraw_rsclient
(bgp_static_withdraw_vpnv4) ditto
(bgp_aggregate_{route,add,delete}) ditto
(bgp_redistribute_{add,delete,withdraw}) ditto
* bgpd/bgp_vty.c: (peer_rsclient_set_vty) lock rsclient list peer
reference
(peer_rsclient_unset_vty) ditto, but unlock same reference
* bgpd/bgpd.c: (peer_free) handle frees of info to be kept for lifetime
of struct peer.
(peer_lock,peer_unlock) peer refcount helpers
(peer_new) add initial refcounts
(peer_create,peer_create_accept) lock peer as appropriate
(peer_delete) unlock as appropriate, move out some free's to
peer_free.
(peer_group_bind,peer_group_unbind) peer refcounting as
appropriate.
(bgp_create) check CALLOC return value.
(bgp_terminate) free workqueues too.
* lib/memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
MTYPE_BGP_CLEAR_NODE_QUEUE
2005-06-01 13:17:05 +02:00
|
|
|
|
|
|
|
#if 0
|
2018-10-03 00:34:03 +02:00
|
|
|
if (path->lock == 1)
|
2005-06-01 Paul Jakma <paul.jakma@sun.com>
* bgpd/(general) refcount struct peer and bgp_info, hence allowing us
add work_queues for bgp_process.
* bgpd/bgp_route.h: (struct bgp_info) Add 'lock' field for refcount.
Add bgp_info_{lock,unlock} helper functions.
Add bgp_info_{add,delete} helpers, to remove need for
users managing locking/freeing of bgp_info and bgp_node's.
* bgpd/bgp_table.h: (struct bgp_node) Add a flags field, and
BGP_NODE_PROCESS_SCHEDULED to merge redundant processing of
nodes.
* bgpd/bgp_fsm.h: Make the ON/OFF/ADD/REMOVE macros lock and unlock
peer reference as appropriate.
* bgpd/bgp_damp.c: Remove its internal prototypes for
bgp_info_delete/free. Just use bgp_info_delete.
* bgpd/bgpd.h: (struct bgp_master) Add work_queue pointers.
(struct peer) Add reference count 'lock'
(peer_lock,peer_unlock) New helpers to take/release reference
on struct peer.
* bgpd/bgp_advertise.c: (general) Add peer and bgp_info refcounting
and balance how references are taken and released.
(bgp_advertise_free) release bgp_info reference, if appropriate
(bgp_adj_out_free) unlock peer
(bgp_advertise_clean) leave the adv references alone, or else
call bgp_advertise_free cant unlock them.
(bgp_adj_out_set) lock the peer on new adj's, leave the reference
alone otherwise. lock the new bgp_info reference.
(bgp_adj_in_set) lock the peer reference
(bgp_adj_in_remove) and unlock it here
(bgp_sync_delete) make hash_free on peer conditional, just in
case.
* bgpd/bgp_fsm.c: (general) document that the timers depend on
bgp_event to release a peer reference.
(bgp_fsm_change_status) moved up the file, unchanged.
(bgp_stop) Decrement peer lock as many times as cancel_event
canceled - shouldnt be needed but just in case.
stream_fifo_clean of obuf made conditional, just in case.
(bgp_event) always unlock the peer, regardless of return value
of bgp_fsm_change_status.
* bgpd/bgp_packet.c: (general) change several bgp_stop's to BGP_EVENT's.
(bgp_read) Add a mysterious extra peer_unlock for ACCEPT_PEERs
along with a comment on it.
* bgpd/bgp_route.c: (general) Add refcounting of bgp_info, cleanup
some of the resource management around bgp_info. Refcount peer.
Add workqueues for bgp_process and clear_table.
(bgp_info_new) make static
(bgp_info_free) Ditto, and unlock the peer reference.
(bgp_info_lock,bgp_info_unlock) new exported functions
(bgp_info_add) Add a bgp_info to a bgp_node in correct fashion,
taking care of reference counts.
(bgp_info_delete) do the opposite of bgp_info_add.
(bgp_process_rsclient) Converted into a work_queue work function.
(bgp_process_main) ditto.
(bgp_processq_del) process work queue item deconstructor
(bgp_process_queue_init) process work queue init
(bgp_process) call init function if required, set up queue item
and add to queue, rather than calling process functions directly.
(bgp_rib_remove) let bgp_info_delete manage bgp_info refcounts
(bgp_rib_withdraw) ditto
(bgp_update_rsclient) let bgp_info_add manage refcounts
(bgp_update_main) ditto
(bgp_clear_route_node) clear_node_queue work function, does
per-node aspects of what bgp_clear_route_table did previously
(bgp_clear_node_queue_del) clear_node_queue item delete function
(bgp_clear_node_complete) clear_node_queue completion function,
it unplugs the process queues, which have to be blocked while
clear_node_queue is being processed to prevent a race.
(bgp_clear_node_queue_init) init function for clear_node_queue
work queues
(bgp_clear_route_table) Sets up items onto a workqueue now, rather
than clearing each node directly. Plugs both process queues to
avoid potential race.
(bgp_static_withdraw_rsclient) let bgp_info_{add,delete} manage
bgp_info refcounts.
(bgp_static_update_rsclient) ditto
(bgp_static_update_main) ditto
(bgp_static_update_vpnv4) ditto, remove unneeded cast.
(bgp_static_withdraw) see bgp_static_withdraw_rsclient
(bgp_static_withdraw_vpnv4) ditto
(bgp_aggregate_{route,add,delete}) ditto
(bgp_redistribute_{add,delete,withdraw}) ditto
* bgpd/bgp_vty.c: (peer_rsclient_set_vty) lock rsclient list peer
reference
(peer_rsclient_unset_vty) ditto, but unlock same reference
* bgpd/bgpd.c: (peer_free) handle frees of info to be kept for lifetime
of struct peer.
(peer_lock,peer_unlock) peer refcount helpers
(peer_new) add initial refcounts
(peer_create,peer_create_accept) lock peer as appropriate
(peer_delete) unlock as appropriate, move out some free's to
peer_free.
(peer_group_bind,peer_group_unbind) peer refcounting as
appropriate.
(bgp_create) check CALLOC return value.
(bgp_terminate) free workqueues too.
* lib/memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
MTYPE_BGP_CLEAR_NODE_QUEUE
2005-06-01 13:17:05 +02:00
|
|
|
{
|
|
|
|
zlog_debug ("%s: unlocked to 1", __func__);
|
|
|
|
zlog_backtrace (LOG_DEBUG);
|
|
|
|
}
|
|
|
|
#endif
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
return path;
|
2005-06-01 Paul Jakma <paul.jakma@sun.com>
* bgpd/(general) refcount struct peer and bgp_info, hence allowing us
add work_queues for bgp_process.
* bgpd/bgp_route.h: (struct bgp_info) Add 'lock' field for refcount.
Add bgp_info_{lock,unlock} helper functions.
Add bgp_info_{add,delete} helpers, to remove need for
users managing locking/freeing of bgp_info and bgp_node's.
* bgpd/bgp_table.h: (struct bgp_node) Add a flags field, and
BGP_NODE_PROCESS_SCHEDULED to merge redundant processing of
nodes.
* bgpd/bgp_fsm.h: Make the ON/OFF/ADD/REMOVE macros lock and unlock
peer reference as appropriate.
* bgpd/bgp_damp.c: Remove its internal prototypes for
bgp_info_delete/free. Just use bgp_info_delete.
* bgpd/bgpd.h: (struct bgp_master) Add work_queue pointers.
(struct peer) Add reference count 'lock'
(peer_lock,peer_unlock) New helpers to take/release reference
on struct peer.
* bgpd/bgp_advertise.c: (general) Add peer and bgp_info refcounting
and balance how references are taken and released.
(bgp_advertise_free) release bgp_info reference, if appropriate
(bgp_adj_out_free) unlock peer
(bgp_advertise_clean) leave the adv references alone, or else
call bgp_advertise_free cant unlock them.
(bgp_adj_out_set) lock the peer on new adj's, leave the reference
alone otherwise. lock the new bgp_info reference.
(bgp_adj_in_set) lock the peer reference
(bgp_adj_in_remove) and unlock it here
(bgp_sync_delete) make hash_free on peer conditional, just in
case.
* bgpd/bgp_fsm.c: (general) document that the timers depend on
bgp_event to release a peer reference.
(bgp_fsm_change_status) moved up the file, unchanged.
(bgp_stop) Decrement peer lock as many times as cancel_event
canceled - shouldnt be needed but just in case.
stream_fifo_clean of obuf made conditional, just in case.
(bgp_event) always unlock the peer, regardless of return value
of bgp_fsm_change_status.
* bgpd/bgp_packet.c: (general) change several bgp_stop's to BGP_EVENT's.
(bgp_read) Add a mysterious extra peer_unlock for ACCEPT_PEERs
along with a comment on it.
* bgpd/bgp_route.c: (general) Add refcounting of bgp_info, cleanup
some of the resource management around bgp_info. Refcount peer.
Add workqueues for bgp_process and clear_table.
(bgp_info_new) make static
(bgp_info_free) Ditto, and unlock the peer reference.
(bgp_info_lock,bgp_info_unlock) new exported functions
(bgp_info_add) Add a bgp_info to a bgp_node in correct fashion,
taking care of reference counts.
(bgp_info_delete) do the opposite of bgp_info_add.
(bgp_process_rsclient) Converted into a work_queue work function.
(bgp_process_main) ditto.
(bgp_processq_del) process work queue item deconstructor
(bgp_process_queue_init) process work queue init
(bgp_process) call init function if required, set up queue item
and add to queue, rather than calling process functions directly.
(bgp_rib_remove) let bgp_info_delete manage bgp_info refcounts
(bgp_rib_withdraw) ditto
(bgp_update_rsclient) let bgp_info_add manage refcounts
(bgp_update_main) ditto
(bgp_clear_route_node) clear_node_queue work function, does
per-node aspects of what bgp_clear_route_table did previously
(bgp_clear_node_queue_del) clear_node_queue item delete function
(bgp_clear_node_complete) clear_node_queue completion function,
it unplugs the process queues, which have to be blocked while
clear_node_queue is being processed to prevent a race.
(bgp_clear_node_queue_init) init function for clear_node_queue
work queues
(bgp_clear_route_table) Sets up items onto a workqueue now, rather
than clearing each node directly. Plugs both process queues to
avoid potential race.
(bgp_static_withdraw_rsclient) let bgp_info_{add,delete} manage
bgp_info refcounts.
(bgp_static_update_rsclient) ditto
(bgp_static_update_main) ditto
(bgp_static_update_vpnv4) ditto, remove unneeded cast.
(bgp_static_withdraw) see bgp_static_withdraw_rsclient
(bgp_static_withdraw_vpnv4) ditto
(bgp_aggregate_{route,add,delete}) ditto
(bgp_redistribute_{add,delete,withdraw}) ditto
* bgpd/bgp_vty.c: (peer_rsclient_set_vty) lock rsclient list peer
reference
(peer_rsclient_unset_vty) ditto, but unlock same reference
* bgpd/bgpd.c: (peer_free) handle frees of info to be kept for lifetime
of struct peer.
(peer_lock,peer_unlock) peer refcount helpers
(peer_new) add initial refcounts
(peer_create,peer_create_accept) lock peer as appropriate
(peer_delete) unlock as appropriate, move out some free's to
peer_free.
(peer_group_bind,peer_group_unbind) peer refcounting as
appropriate.
(bgp_create) check CALLOC return value.
(bgp_terminate) free workqueues too.
* lib/memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
MTYPE_BGP_CLEAR_NODE_QUEUE
2005-06-01 13:17:05 +02:00
}

void bgp_path_info_add(struct bgp_node *rn, struct bgp_path_info *pi)
{
	struct bgp_path_info *top;

	top = bgp_node_get_bgp_path_info(rn);

	pi->next = top;
	pi->prev = NULL;
	if (top)
		top->prev = pi;
	bgp_node_set_bgp_path_info(rn, pi);

	bgp_path_info_lock(pi);
	bgp_lock_node(rn);
	peer_lock(pi->peer); /* bgp_path_info peer reference */
}

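The pointer surgery in bgp_path_info_add() is a plain head-insert into the doubly linked path list hung off the route node. A minimal standalone sketch of the same pattern, using hypothetical `node`/`path` stand-ins rather than the real FRR structures (and omitting the refcounting):

```c
#include <stddef.h>

/* Hypothetical stand-ins for bgp_node / bgp_path_info. */
struct path {
	struct path *next, *prev;
	int id;
};

struct node {
	struct path *info; /* head of the path list */
};

/* Head-insert, mirroring the pointer updates in bgp_path_info_add(). */
static void path_add(struct node *n, struct path *p)
{
	p->next = n->info;
	p->prev = NULL;
	if (n->info)
		n->info->prev = p;
	n->info = p;
}
```

The newest path always lands at the head, so iteration visits paths in reverse arrival order.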
/* Do the actual removal of info from RIB, for use by bgp_process
2005-08-22 Paul Jakma <paul.jakma@sun.com>
* bgp_route.h: (struct bgp_info) add a new flag, BGP_INFO_REMOVED.
BGP_INFO_VALID is already overloaded, don't care to do same thing
to STALE or HISTORY.
* bgpd.h: (BGP_INFO_HOLDDOWN) Add INFO_REMOVED to the macro, as a
route which should generally be ignored.
* bgp_route.c: (bgp_info_delete) Just set the REMOVE flag, rather
than doing actual work, so that bgp_process (called directly,
or indirectly via the scanner) can catch withdrawn routes.
(bgp_info_reap) Actually remove the route, what bgp_info_delete
used to do, only for use by bgp_process.
(bgp_best_selection) reap any REMOVED routes, other than the old
selected route.
(bgp_process_rsclient) reap the old-selected route, if appropriate
(bgp_process_main) ditto
(bgp_rib_withdraw, bgp_rib_remove) make them more consistent with
each other. Don't play games with the VALID flag, bgp_process
is async now, so it didn't make a difference anyway.
Remove the 'force' argument from bgp_rib_withdraw, withdraw+force
is equivalent to bgp_rib_remove. Update all its callers.
(bgp_update_rsclient) bgp_rib_withdraw and force set is same as
bgp_rib_remove.
(route_vty_short_status_out) new helper to print the leading
route-status string used in many command outputs. Consolidate.
(route_vty_out, route_vty_out_tag, damp_route_vty_out,
flap_route_vty_out) use route_vty_short_status_out rather than
duplicate.
(route_vty_out_detail) print state of REMOVED flag.
(BGP_SHOW_SCODE_HEADER) update for Removed flag.
2005-08-23 00:34:41 +02:00
   completion callback *only* */
void bgp_path_info_reap(struct bgp_node *rn, struct bgp_path_info *pi)
{
	if (pi->next)
		pi->next->prev = pi->prev;
	if (pi->prev)
		pi->prev->next = pi->next;
	else
		bgp_node_set_bgp_path_info(rn, pi->next);

	bgp_path_info_mpath_dequeue(pi);
	bgp_path_info_unlock(pi);
	bgp_unlock_node(rn);
}

void bgp_path_info_delete(struct bgp_node *rn, struct bgp_path_info *pi)
{
	bgp_path_info_set_flag(rn, pi, BGP_PATH_REMOVED);
	/* set of previous already took care of pcount */
	UNSET_FLAG(pi->flags, BGP_PATH_VALID);
}

/* undo the effects of a previous call to bgp_path_info_delete; typically
   called when a route is deleted and then quickly re-added before the
   deletion has been processed */
void bgp_path_info_restore(struct bgp_node *rn, struct bgp_path_info *pi)
{
	bgp_path_info_unset_flag(rn, pi, BGP_PATH_REMOVED);
	/* unset of previous already took care of pcount */
	SET_FLAG(pi->flags, BGP_PATH_VALID);
}

/* Adjust pcount as required */
static void bgp_pcount_adjust(struct bgp_node *rn, struct bgp_path_info *pi)
[bgpd] Handle pcount as flags are changed, fixing pcount issues
2006-09-06 Paul Jakma <paul.jakma@sun.com>
* (general) Squash any and all prefix-count issues by
abstracting route flag changes, and maintaining count as and
when flags are modified (rather than relying on explicit
modifications of count being sprinkled in just the right
places throughout the code).
* bgp_route.c: (bgp_pcount_{dec,inc}rement) removed.
(bgp_pcount_adjust) new, update prefix count as
needed for a given route.
(bgp_info_{uns,s}et_flag) set/unset a BGP_INFO route status
flag, calling previous function when appropriate.
(general) Update all set/unsets of flags to use previous.
Remove pcount_{dec,inc}rement calls.
No need to unset BGP_INFO_VALID in places where
bgp_info_delete is called, it does that anyway.
* bgp_{damp,nexthop}.c: Update to use bgp_info_{un,}set_flag.
* bgp_route.h: Export bgp_info_{un,}set_flag.
Add a 'meta' BGP_INFO flag, BGP_INFO_UNUSEABLE.
Move BGP_INFO_HOLDDOWN macro to here from bgpd.h
2006-09-07 02:24:49 +02:00
{
	struct bgp_table *table;

	assert(rn && bgp_node_table(rn));
	assert(pi && pi->peer && pi->peer->bgp);

	table = bgp_node_table(rn);

	if (pi->peer == pi->peer->bgp->peer_self)
		return;

	if (!BGP_PATH_COUNTABLE(pi)
	    && CHECK_FLAG(pi->flags, BGP_PATH_COUNTED)) {

		UNSET_FLAG(pi->flags, BGP_PATH_COUNTED);

		/* slight hack, but more robust against errors. */
		if (pi->peer->pcount[table->afi][table->safi])
			pi->peer->pcount[table->afi][table->safi]--;
		else
			flog_err(EC_LIB_DEVELOPMENT,
				 "Asked to decrement 0 prefix count for peer");
	} else if (BGP_PATH_COUNTABLE(pi)
		   && !CHECK_FLAG(pi->flags, BGP_PATH_COUNTED)) {
		SET_FLAG(pi->flags, BGP_PATH_COUNTED);
		pi->peer->pcount[table->afi][table->safi]++;
	}
}
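bgp_pcount_adjust() keeps a per-peer counter reconciled with a per-path COUNTED flag, so the count stays correct no matter which flag transition triggered the call. The same idea in a self-contained sketch (hypothetical `demo_*` types, with countability passed in directly instead of derived via BGP_PATH_COUNTABLE):

```c
#include <stdint.h>
#include <stdbool.h>

#define PATH_COUNTED 0x1 /* hypothetical flag bit */

struct demo_peer {
	unsigned int pcount;
};

struct demo_path {
	uint32_t flags;
	struct demo_peer *peer;
};

/* Reconcile flag and counter in one place, as bgp_pcount_adjust() does:
 * only transitions between counted/uncounted touch the counter, so
 * calling this repeatedly with the same state is a no-op. */
static void pcount_adjust(struct demo_path *p, bool countable)
{
	bool counted = (p->flags & PATH_COUNTED) != 0;

	if (!countable && counted) {
		p->flags &= ~PATH_COUNTED;
		if (p->peer->pcount) /* guard against underflow */
			p->peer->pcount--;
	} else if (countable && !counted) {
		p->flags |= PATH_COUNTED;
		p->peer->pcount++;
	}
}
```

Centralizing the transition is what lets the real code drop the scattered explicit increment/decrement calls the 2006-09-06 changelog describes.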

static int bgp_label_index_differs(struct bgp_path_info *pi1,
				   struct bgp_path_info *pi2)
{
	return (!(pi1->attr->label_index == pi2->attr->label_index));
}

/* Set/unset bgp_path_info flags, adjusting any other state as needed.
 * This is here primarily to keep prefix-count in check.
 */
void bgp_path_info_set_flag(struct bgp_node *rn, struct bgp_path_info *pi,
			    uint32_t flag)
{
	SET_FLAG(pi->flags, flag);

	/* early bath if we know it's not a flag that changes countability
	 * state
	 */
	if (!CHECK_FLAG(flag,
			BGP_PATH_VALID | BGP_PATH_HISTORY | BGP_PATH_REMOVED))
		return;

	bgp_pcount_adjust(rn, pi);
}

void bgp_path_info_unset_flag(struct bgp_node *rn, struct bgp_path_info *pi,
			      uint32_t flag)
{
	UNSET_FLAG(pi->flags, flag);

	/* early bath if we know it's not a flag that changes countability
	 * state
	 */
	if (!CHECK_FLAG(flag,
			BGP_PATH_VALID | BGP_PATH_HISTORY | BGP_PATH_REMOVED))
		return;

	bgp_pcount_adjust(rn, pi);
}
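SET_FLAG/UNSET_FLAG/CHECK_FLAG are simple bitmask helpers from lib/zebra.h; the delete/restore pair above just toggles REMOVED and VALID with them. A sketch with local re-statements of the macros and hypothetical `DEMO_*` flag values (not the real BGP_PATH_* bits):

```c
#include <stdint.h>

/* Local re-statements of the zebra.h helpers, for illustration only. */
#define SET_FLAG(V, F)   ((V) |= (F))
#define UNSET_FLAG(V, F) ((V) &= ~(F))
#define CHECK_FLAG(V, F) ((V) & (F))

/* Hypothetical flag values in the style of BGP_PATH_*. */
#define DEMO_VALID   (1 << 0)
#define DEMO_REMOVED (1 << 1)

/* Same shape as bgp_path_info_delete(): mark removed, drop valid. */
static uint32_t demo_delete(uint32_t flags)
{
	SET_FLAG(flags, DEMO_REMOVED);
	UNSET_FLAG(flags, DEMO_VALID);
	return flags;
}

/* Same shape as bgp_path_info_restore(): the exact inverse. */
static uint32_t demo_restore(uint32_t flags)
{
	UNSET_FLAG(flags, DEMO_REMOVED);
	SET_FLAG(flags, DEMO_VALID);
	return flags;
}
```

Because delete only sets a flag, the actual unlink is deferred to bgp_path_info_reap(), which is why a quick delete-then-re-add can be undone by restore alone.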

/* Get MED value.  If MED value is missing and "bgp bestpath
   missing-as-worst" is specified, treat it as the worst value. */
static uint32_t bgp_med_value(struct attr *attr, struct bgp *bgp)
{
	if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_MULTI_EXIT_DISC))
		return attr->med;
	else {
		if (bgp_flag_check(bgp, BGP_FLAG_MED_MISSING_AS_WORST))
			return BGP_MED_MAX;
		else
			return 0;
	}
}
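The MED fallback above is a three-way choice: attribute present wins, otherwise the "missing-as-worst" knob selects the worst or the best value. A sketch with the decision inputs passed in directly (hypothetical names; `DEMO_MED_MAX` stands in for BGP_MED_MAX):

```c
#include <stdint.h>
#include <stdbool.h>

#define DEMO_MED_MAX 0xffffffffU /* stand-in for BGP_MED_MAX */

/* Same decision table as bgp_med_value(), with the attribute presence
 * and the "missing-as-worst" setting as plain parameters. */
static uint32_t med_value(bool med_present, uint32_t med,
			  bool missing_as_worst)
{
	if (med_present)
		return med;
	return missing_as_worst ? DEMO_MED_MAX : 0;
}
```

Since lower MED is preferred in bestpath selection, returning the maximum makes a route with no MED lose every MED comparison.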

void bgp_path_info_path_with_addpath_rx_str(struct bgp_path_info *pi, char *buf)
{
	if (pi->addpath_rx_id)
		sprintf(buf, "path %s (addpath rxid %d)", pi->peer->host,
			pi->addpath_rx_id);
	else
		sprintf(buf, "path %s", pi->peer->host);
}
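The sprintf() calls above rely on every caller supplying a PATH_ADDPATH_STR_BUFFER-sized buffer. A bounds-checked variant using snprintf() would look like this (a sketch with hypothetical parameter names, taking the host string and rx id directly instead of a bgp_path_info):

```c
#include <stdio.h>

/* Bounds-checked variant of the formatting above; buf_len would be
 * PATH_ADDPATH_STR_BUFFER at the call sites.  snprintf() truncates
 * instead of overrunning if the buffer is too small. */
static void path_with_addpath_rx_str(const char *host, unsigned int rx_id,
				     char *buf, size_t buf_len)
{
	if (rx_id)
		snprintf(buf, buf_len, "path %s (addpath rxid %u)", host,
			 rx_id);
	else
		snprintf(buf, buf_len, "path %s", host);
}
```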

/* Compare two bgp route entities.  If 'new' is preferable over 'exist'
 * return 1.
 */
static int bgp_path_info_cmp(struct bgp *bgp, struct bgp_path_info *new,
			     struct bgp_path_info *exist, int *paths_eq,
			     struct bgp_maxpaths_cfg *mpath_cfg, int debug,
			     char *pfx_buf, afi_t afi, safi_t safi,
			     enum bgp_path_selection_reason *reason)
{
	struct attr *newattr, *existattr;
	bgp_peer_sort_t new_sort;
	bgp_peer_sort_t exist_sort;
	uint32_t new_pref;
	uint32_t exist_pref;
	uint32_t new_med;
	uint32_t exist_med;
	uint32_t new_weight;
	uint32_t exist_weight;
	uint32_t newm, existm;
	struct in_addr new_id;
	struct in_addr exist_id;
	int new_cluster;
	int exist_cluster;
	int internal_as_route;
	int confed_as_route;
	int ret = 0;
	char new_buf[PATH_ADDPATH_STR_BUFFER];
	char exist_buf[PATH_ADDPATH_STR_BUFFER];
	uint32_t new_mm_seq;
	uint32_t exist_mm_seq;
	int nh_cmp;

	*paths_eq = 0;

	/* 0. Null check. */
	if (new == NULL) {
		*reason = bgp_path_selection_none;
		if (debug)
			zlog_debug("%s: new is NULL", pfx_buf);
		return 0;
	}

	if (debug)
		bgp_path_info_path_with_addpath_rx_str(new, new_buf);

	if (exist == NULL) {
		*reason = bgp_path_selection_first;
		if (debug)
			zlog_debug("%s: %s is the initial bestpath", pfx_buf,
				   new_buf);
		return 1;
	}

	if (debug) {
		bgp_path_info_path_with_addpath_rx_str(exist, exist_buf);
		zlog_debug("%s: Comparing %s flags 0x%x with %s flags 0x%x",
			   pfx_buf, new_buf, new->flags, exist_buf,
			   exist->flags);
	}

	newattr = new->attr;
	existattr = exist->attr;

	/* For EVPN routes, we cannot just go by local vs remote, we have to
	 * look at the MAC mobility sequence number, if present.
	 */
	if (safi == SAFI_EVPN) {
		/* This is an error condition described in RFC 7432 Section
		 * 15.2.  The RFC states that in this scenario "the PE MUST
		 * alert the operator" but it does not state what other
		 * action to take.  In order to provide some consistency in
		 * this scenario we are going to prefer the path with the
		 * sticky flag.
		 */
		if (newattr->sticky != existattr->sticky) {
			if (!debug) {
				prefix2str(&new->net->p, pfx_buf,
					   sizeof(*pfx_buf)
						   * PREFIX2STR_BUFFER);
				bgp_path_info_path_with_addpath_rx_str(new,
								       new_buf);
				bgp_path_info_path_with_addpath_rx_str(
					exist, exist_buf);
			}

			if (newattr->sticky && !existattr->sticky) {
				*reason = bgp_path_selection_evpn_sticky_mac;
				if (debug)
					zlog_debug(
						"%s: %s wins over %s due to sticky MAC flag",
						pfx_buf, new_buf, exist_buf);
				return 1;
			}

			if (!newattr->sticky && existattr->sticky) {
				*reason = bgp_path_selection_evpn_sticky_mac;
				if (debug)
					zlog_debug(
						"%s: %s loses to %s due to sticky MAC flag",
						pfx_buf, new_buf, exist_buf);
				return 0;
			}
		}

		new_mm_seq = mac_mobility_seqnum(newattr);
		exist_mm_seq = mac_mobility_seqnum(existattr);

		if (new_mm_seq > exist_mm_seq) {
			*reason = bgp_path_selection_evpn_seq;
			if (debug)
				zlog_debug(
					"%s: %s wins over %s due to MM seq %u > %u",
					pfx_buf, new_buf, exist_buf, new_mm_seq,
					exist_mm_seq);
			return 1;
		}

		if (new_mm_seq < exist_mm_seq) {
			*reason = bgp_path_selection_evpn_seq;
			if (debug)
				zlog_debug(
					"%s: %s loses to %s due to MM seq %u < %u",
					pfx_buf, new_buf, exist_buf, new_mm_seq,
					exist_mm_seq);
			return 0;
		}

		/*
		 * if sequence numbers are the same path with the lowest IP
		 * wins
		 */
		nh_cmp = bgp_path_info_nexthop_cmp(new, exist);
		if (nh_cmp < 0) {
			*reason = bgp_path_selection_evpn_lower_ip;
			if (debug)
				zlog_debug(
					"%s: %s wins over %s due to same MM seq %u and lower IP %s",
					pfx_buf, new_buf, exist_buf, new_mm_seq,
					inet_ntoa(new->attr->nexthop));
			return 1;
		}
		if (nh_cmp > 0) {
			*reason = bgp_path_selection_evpn_lower_ip;
			if (debug)
				zlog_debug(
					"%s: %s loses to %s due to same MM seq %u and higher IP %s",
					pfx_buf, new_buf, exist_buf, new_mm_seq,
					inet_ntoa(new->attr->nexthop));
			return 0;
		}
	}

	/* 1. Weight check. */
	new_weight = newattr->weight;
	exist_weight = existattr->weight;

	if (new_weight > exist_weight) {
		*reason = bgp_path_selection_weight;
		if (debug)
			zlog_debug("%s: %s wins over %s due to weight %d > %d",
				   pfx_buf, new_buf, exist_buf, new_weight,
				   exist_weight);
		return 1;
	}

	if (new_weight < exist_weight) {
		*reason = bgp_path_selection_weight;
		if (debug)
			zlog_debug("%s: %s loses to %s due to weight %d < %d",
				   pfx_buf, new_buf, exist_buf, new_weight,
				   exist_weight);
		return 0;
	}

	/* 2. Local preference check. */
	new_pref = exist_pref = bgp->default_local_pref;

	if (newattr->flag & ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF))
		new_pref = newattr->local_pref;
	if (existattr->flag & ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF))
		exist_pref = existattr->local_pref;

	if (new_pref > exist_pref) {
		*reason = bgp_path_selection_local_pref;
		if (debug)
			zlog_debug(
				"%s: %s wins over %s due to localpref %d > %d",
				pfx_buf, new_buf, exist_buf, new_pref,
				exist_pref);
		return 1;
	}

	if (new_pref < exist_pref) {
		*reason = bgp_path_selection_local_pref;
		if (debug)
			zlog_debug(
				"%s: %s loses to %s due to localpref %d < %d",
				pfx_buf, new_buf, exist_buf, new_pref,
				exist_pref);
		return 0;
	}

	/* 3. Local route check. We prefer:
	 *  - BGP_ROUTE_STATIC
	 *  - BGP_ROUTE_AGGREGATE
	 *  - BGP_ROUTE_REDISTRIBUTE
	 */
	if (!(new->sub_type == BGP_ROUTE_NORMAL ||
	      new->sub_type == BGP_ROUTE_IMPORTED)) {
		*reason = bgp_path_selection_local_route;
		if (debug)
			zlog_debug(
				"%s: %s wins over %s due to preferred BGP_ROUTE type",
				pfx_buf, new_buf, exist_buf);
		return 1;
	}

	if (!(exist->sub_type == BGP_ROUTE_NORMAL ||
	      exist->sub_type == BGP_ROUTE_IMPORTED)) {
		*reason = bgp_path_selection_local_route;
		if (debug)
			zlog_debug(
				"%s: %s loses to %s due to preferred BGP_ROUTE type",
				pfx_buf, new_buf, exist_buf);
		return 0;
	}

	/* 4. AS path length check. */
	if (!bgp_flag_check(bgp, BGP_FLAG_ASPATH_IGNORE)) {
		int exist_hops = aspath_count_hops(existattr->aspath);
		int exist_confeds = aspath_count_confeds(existattr->aspath);

		if (bgp_flag_check(bgp, BGP_FLAG_ASPATH_CONFED)) {
			int aspath_hops;

			aspath_hops = aspath_count_hops(newattr->aspath);
			aspath_hops += aspath_count_confeds(newattr->aspath);

			if (aspath_hops < (exist_hops + exist_confeds)) {
				*reason = bgp_path_selection_confed_as_path;
				if (debug)
					zlog_debug(
						"%s: %s wins over %s due to aspath (with confeds) hopcount %d < %d",
						pfx_buf, new_buf, exist_buf,
						aspath_hops,
						(exist_hops + exist_confeds));
				return 1;
			}

			if (aspath_hops > (exist_hops + exist_confeds)) {
				*reason = bgp_path_selection_confed_as_path;
				if (debug)
					zlog_debug(
						"%s: %s loses to %s due to aspath (with confeds) hopcount %d > %d",
						pfx_buf, new_buf, exist_buf,
						aspath_hops,
						(exist_hops + exist_confeds));
				return 0;
			}
		} else {
			int newhops = aspath_count_hops(newattr->aspath);

			if (newhops < exist_hops) {
				*reason = bgp_path_selection_as_path;
				if (debug)
					zlog_debug(
						"%s: %s wins over %s due to aspath hopcount %d < %d",
						pfx_buf, new_buf, exist_buf,
						newhops, exist_hops);
				return 1;
			}

			if (newhops > exist_hops) {
				*reason = bgp_path_selection_as_path;
				if (debug)
					zlog_debug(
						"%s: %s loses to %s due to aspath hopcount %d > %d",
						pfx_buf, new_buf, exist_buf,
						newhops, exist_hops);
				return 0;
			}
		}
	}

	/* 5. Origin check. */
	if (newattr->origin < existattr->origin) {
		*reason = bgp_path_selection_origin;
		if (debug)
			zlog_debug("%s: %s wins over %s due to ORIGIN %s < %s",
				   pfx_buf, new_buf, exist_buf,
				   bgp_origin_long_str[newattr->origin],
				   bgp_origin_long_str[existattr->origin]);
		return 1;
	}

	if (newattr->origin > existattr->origin) {
		*reason = bgp_path_selection_origin;
		if (debug)
			zlog_debug("%s: %s loses to %s due to ORIGIN %s > %s",
				   pfx_buf, new_buf, exist_buf,
				   bgp_origin_long_str[newattr->origin],
				   bgp_origin_long_str[existattr->origin]);
		return 0;
	}

	/* 6. MED check. */
	internal_as_route = (aspath_count_hops(newattr->aspath) == 0
			     && aspath_count_hops(existattr->aspath) == 0);
	confed_as_route = (aspath_count_confeds(newattr->aspath) > 0
			   && aspath_count_confeds(existattr->aspath) > 0
			   && aspath_count_hops(newattr->aspath) == 0
			   && aspath_count_hops(existattr->aspath) == 0);

	if (bgp_flag_check(bgp, BGP_FLAG_ALWAYS_COMPARE_MED)
	    || (bgp_flag_check(bgp, BGP_FLAG_MED_CONFED) && confed_as_route)
	    || aspath_cmp_left(newattr->aspath, existattr->aspath)
	    || aspath_cmp_left_confed(newattr->aspath, existattr->aspath)
	    || internal_as_route) {
		new_med = bgp_med_value(new->attr, bgp);
		exist_med = bgp_med_value(exist->attr, bgp);

		if (new_med < exist_med) {
			*reason = bgp_path_selection_med;
			if (debug)
				zlog_debug(
					"%s: %s wins over %s due to MED %d < %d",
					pfx_buf, new_buf, exist_buf, new_med,
					exist_med);
			return 1;
		}

		if (new_med > exist_med) {
			*reason = bgp_path_selection_med;
			if (debug)
				zlog_debug(
					"%s: %s loses to %s due to MED %d > %d",
					pfx_buf, new_buf, exist_buf, new_med,
					exist_med);
			return 0;
		}
	}

	/* 7. Peer type check. */
	new_sort = new->peer->sort;
	exist_sort = exist->peer->sort;

	if (new_sort == BGP_PEER_EBGP
	    && (exist_sort == BGP_PEER_IBGP || exist_sort == BGP_PEER_CONFED)) {
		*reason = bgp_path_selection_peer;
		if (debug)
			zlog_debug(
				"%s: %s wins over %s due to eBGP peer > iBGP peer",
				pfx_buf, new_buf, exist_buf);
		return 1;
	}

	if (exist_sort == BGP_PEER_EBGP
	    && (new_sort == BGP_PEER_IBGP || new_sort == BGP_PEER_CONFED)) {
		*reason = bgp_path_selection_peer;
		if (debug)
			zlog_debug(
				"%s: %s loses to %s due to iBGP peer < eBGP peer",
				pfx_buf, new_buf, exist_buf);
		return 0;
	}

	/* 8. IGP metric check. */
	newm = existm = 0;

	if (new->extra)
		newm = new->extra->igpmetric;
	if (exist->extra)
		existm = exist->extra->igpmetric;

	if (newm < existm) {
		if (debug)
			zlog_debug(
				"%s: %s wins over %s due to IGP metric %d < %d",
				pfx_buf, new_buf, exist_buf, newm, existm);
		ret = 1;
	}

	if (newm > existm) {
		if (debug)
			zlog_debug(
				"%s: %s loses to %s due to IGP metric %d > %d",
				pfx_buf, new_buf, exist_buf, newm, existm);
		ret = 0;
	}

	/* 9. Same IGP metric. Compare the cluster list length as
	 * representative of IGP hops metric. Rewrite the metric value
	 * pair (newm, existm) with the cluster list length. Prefer the
	 * path with smaller cluster list length.
	 */
	if (newm == existm) {
		if (peer_sort(new->peer) == BGP_PEER_IBGP
		    && peer_sort(exist->peer) == BGP_PEER_IBGP
		    && (mpath_cfg == NULL
			|| CHECK_FLAG(
				   mpath_cfg->ibgp_flags,
				   BGP_FLAG_IBGP_MULTIPATH_SAME_CLUSTERLEN))) {
			newm = BGP_CLUSTER_LIST_LENGTH(new->attr);
			existm = BGP_CLUSTER_LIST_LENGTH(exist->attr);

			if (newm < existm) {
				if (debug)
					zlog_debug(
						"%s: %s wins over %s due to CLUSTER_LIST length %d < %d",
						pfx_buf, new_buf, exist_buf,
						newm, existm);
				ret = 1;
			}

			if (newm > existm) {
				if (debug)
					zlog_debug(
						"%s: %s loses to %s due to CLUSTER_LIST length %d > %d",
						pfx_buf, new_buf, exist_buf,
						newm, existm);
				ret = 0;
			}
		}
	}

	/* 10. confed-external vs. confed-internal */
	if (CHECK_FLAG(bgp->config, BGP_CONFIG_CONFEDERATION)) {
		if (new_sort == BGP_PEER_CONFED
		    && exist_sort == BGP_PEER_IBGP) {
			*reason = bgp_path_selection_confed;
			if (debug)
				zlog_debug(
					"%s: %s wins over %s due to confed-external peer > confed-internal peer",
					pfx_buf, new_buf, exist_buf);
			return 1;
		}

		if (exist_sort == BGP_PEER_CONFED
		    && new_sort == BGP_PEER_IBGP) {
			*reason = bgp_path_selection_confed;
			if (debug)
				zlog_debug(
					"%s: %s loses to %s due to confed-internal peer < confed-external peer",
					pfx_buf, new_buf, exist_buf);
			return 0;
		}
	}

	/* 11. Maximum path check. */
	if (newm == existm) {
		/* If one path has a label but the other does not, do not treat
		 * them as equals for multipath
		 */
		if ((new->extra && bgp_is_valid_label(&new->extra->label[0]))
		    != (exist->extra
			&& bgp_is_valid_label(&exist->extra->label[0]))) {
			if (debug)
				zlog_debug(
					"%s: %s and %s cannot be multipath, one has a label while the other does not",
					pfx_buf, new_buf, exist_buf);
		} else if (bgp_flag_check(bgp,
					  BGP_FLAG_ASPATH_MULTIPATH_RELAX)) {

			/*
			 * For the two paths, all comparison steps till IGP
			 * metric have succeeded - including AS_PATH hop count.
			 * Since the 'bgp bestpath as-path multipath-relax'
			 * knob is on, we don't need an exact match of AS_PATH.
			 * Thus, mark the paths as equal. That will trigger
			 * both these paths to get into the multipath array.
			 */
			*paths_eq = 1;

			if (debug)
				zlog_debug(
					"%s: %s and %s are equal via multipath-relax",
					pfx_buf, new_buf, exist_buf);
		} else if (new->peer->sort == BGP_PEER_IBGP) {
			if (aspath_cmp(new->attr->aspath,
				       exist->attr->aspath)) {
				*paths_eq = 1;

				if (debug)
					zlog_debug(
						"%s: %s and %s are equal via matching aspaths",
						pfx_buf, new_buf, exist_buf);
			}
		} else if (new->peer->as == exist->peer->as) {
			*paths_eq = 1;

			if (debug)
				zlog_debug(
					"%s: %s and %s are equal via same remote-as",
					pfx_buf, new_buf, exist_buf);
		}
	} else {
		/*
		 * TODO: If unequal cost ibgp multipath is enabled we can
		 * mark the paths as equal here instead of returning
		 */
		if (debug) {
			if (ret == 1)
				zlog_debug(
					"%s: %s wins over %s after IGP metric comparison",
					pfx_buf, new_buf, exist_buf);
			else
				zlog_debug(
					"%s: %s loses to %s after IGP metric comparison",
					pfx_buf, new_buf, exist_buf);
		}
		*reason = bgp_path_selection_igp_metric;
		return ret;
	}

	/* 12. If both paths are external, prefer the path that was received
	 * first (the oldest one). This step minimizes route-flap, since a
	 * newer path won't displace an older one, even if it was the
	 * preferred route based on the additional decision criteria below.
	 */
	if (!bgp_flag_check(bgp, BGP_FLAG_COMPARE_ROUTER_ID)
	    && new_sort == BGP_PEER_EBGP && exist_sort == BGP_PEER_EBGP) {
		if (CHECK_FLAG(new->flags, BGP_PATH_SELECTED)) {
			*reason = bgp_path_selection_older;
			if (debug)
				zlog_debug(
					"%s: %s wins over %s due to oldest external",
					pfx_buf, new_buf, exist_buf);
			return 1;
		}

		if (CHECK_FLAG(exist->flags, BGP_PATH_SELECTED)) {
			*reason = bgp_path_selection_older;
			if (debug)
				zlog_debug(
					"%s: %s loses to %s due to oldest external",
					pfx_buf, new_buf, exist_buf);
			return 0;
		}
	}

	/* 13. Router-ID comparison. */
	/* If one of the paths is "stale", the corresponding peer router-id will
	 * be 0 and would always win over the other path. If originator id is
	 * used for the comparison, it will decide which path is better.
	 */
	if (newattr->flag & ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID))
		new_id.s_addr = newattr->originator_id.s_addr;
	else
		new_id.s_addr = new->peer->remote_id.s_addr;
	if (existattr->flag & ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID))
		exist_id.s_addr = existattr->originator_id.s_addr;
	else
		exist_id.s_addr = exist->peer->remote_id.s_addr;

	if (ntohl(new_id.s_addr) < ntohl(exist_id.s_addr)) {
		*reason = bgp_path_selection_router_id;
		if (debug)
			zlog_debug(
				"%s: %s wins over %s due to Router-ID comparison",
				pfx_buf, new_buf, exist_buf);
		return 1;
	}

	if (ntohl(new_id.s_addr) > ntohl(exist_id.s_addr)) {
		*reason = bgp_path_selection_router_id;
		if (debug)
			zlog_debug(
				"%s: %s loses to %s due to Router-ID comparison",
				pfx_buf, new_buf, exist_buf);
		return 0;
	}

	/* 14. Cluster length comparison. */
	new_cluster = BGP_CLUSTER_LIST_LENGTH(new->attr);
	exist_cluster = BGP_CLUSTER_LIST_LENGTH(exist->attr);

	if (new_cluster < exist_cluster) {
		*reason = bgp_path_selection_cluster_length;
		if (debug)
			zlog_debug(
				"%s: %s wins over %s due to CLUSTER_LIST length %d < %d",
				pfx_buf, new_buf, exist_buf, new_cluster,
				exist_cluster);
		return 1;
	}

	if (new_cluster > exist_cluster) {
		*reason = bgp_path_selection_cluster_length;
		if (debug)
			zlog_debug(
				"%s: %s loses to %s due to CLUSTER_LIST length %d > %d",
				pfx_buf, new_buf, exist_buf, new_cluster,
				exist_cluster);
		return 0;
	}

	/* 15. Neighbor address comparison. */
	/* Do this only if neither path is "stale" as stale paths do not have
	 * valid peer information (as the connection may or may not be up).
	 */
	if (CHECK_FLAG(exist->flags, BGP_PATH_STALE)) {
		*reason = bgp_path_selection_stale;
		if (debug)
			zlog_debug(
				"%s: %s wins over %s due to latter path being STALE",
				pfx_buf, new_buf, exist_buf);
		return 1;
	}

	if (CHECK_FLAG(new->flags, BGP_PATH_STALE)) {
		*reason = bgp_path_selection_stale;
		if (debug)
			zlog_debug(
				"%s: %s loses to %s due to former path being STALE",
				pfx_buf, new_buf, exist_buf);
		return 0;
	}

	/* locally configured routes to advertise do not have su_remote */
	if (new->peer->su_remote == NULL) {
		*reason = bgp_path_selection_local_configured;
		return 0;
	}

	if (exist->peer->su_remote == NULL) {
		*reason = bgp_path_selection_local_configured;
		return 1;
	}

	ret = sockunion_cmp(new->peer->su_remote, exist->peer->su_remote);

	if (ret == 1) {
		*reason = bgp_path_selection_neighbor_ip;
		if (debug)
			zlog_debug(
				"%s: %s loses to %s due to Neighbor IP comparison",
				pfx_buf, new_buf, exist_buf);
		return 0;
	}

	if (ret == -1) {
		*reason = bgp_path_selection_neighbor_ip;
		if (debug)
			zlog_debug(
				"%s: %s wins over %s due to Neighbor IP comparison",
				pfx_buf, new_buf, exist_buf);
		return 1;
	}

	*reason = bgp_path_selection_default;
	if (debug)
		zlog_debug("%s: %s wins over %s due to nothing left to compare",
			   pfx_buf, new_buf, exist_buf);

	return 1;
}

/* Compare two bgp route entities. Return -1 if new is preferred, 1 if exist
 * is preferred, or 0 if they are the same (usually will only occur if
 * multipath is enabled).
 * This version is compatible with */
int bgp_path_info_cmp_compatible(struct bgp *bgp, struct bgp_path_info *new,
				 struct bgp_path_info *exist, char *pfx_buf,
				 afi_t afi, safi_t safi,
				 enum bgp_path_selection_reason *reason)
{
	int paths_eq;
	int ret;

	ret = bgp_path_info_cmp(bgp, new, exist, &paths_eq, NULL, 0, pfx_buf,
				afi, safi, reason);

	if (paths_eq)
		ret = 0;
	else {
		if (ret == 1)
			ret = -1;
		else
			ret = 1;
	}
	return ret;
}

static enum filter_type bgp_input_filter(struct peer *peer, struct prefix *p,
					 struct attr *attr, afi_t afi,
					 safi_t safi)
{
	struct bgp_filter *filter;

	filter = &peer->filter[afi][safi];

#define FILTER_EXIST_WARN(F, f, filter)                                        \
	if (BGP_DEBUG(update, UPDATE_IN) && !(F##_IN(filter)))                 \
		zlog_debug("%s: Could not find configured input %s-list %s!",  \
			   peer->host, #f, F##_IN_NAME(filter));

	if (DISTRIBUTE_IN_NAME(filter)) {
		FILTER_EXIST_WARN(DISTRIBUTE, distribute, filter);

		if (access_list_apply(DISTRIBUTE_IN(filter), p) == FILTER_DENY)
			return FILTER_DENY;
	}

	if (PREFIX_LIST_IN_NAME(filter)) {
		FILTER_EXIST_WARN(PREFIX_LIST, prefix, filter);

		if (prefix_list_apply(PREFIX_LIST_IN(filter), p) == PREFIX_DENY)
			return FILTER_DENY;
	}

	if (FILTER_LIST_IN_NAME(filter)) {
		FILTER_EXIST_WARN(FILTER_LIST, as, filter);

		if (as_list_apply(FILTER_LIST_IN(filter), attr->aspath)
		    == AS_FILTER_DENY)
			return FILTER_DENY;
	}

	return FILTER_PERMIT;
#undef FILTER_EXIST_WARN
}

static enum filter_type bgp_output_filter(struct peer *peer, struct prefix *p,
					  struct attr *attr, afi_t afi,
					  safi_t safi)
{
	struct bgp_filter *filter;

	filter = &peer->filter[afi][safi];

#define FILTER_EXIST_WARN(F, f, filter)                                        \
	if (BGP_DEBUG(update, UPDATE_OUT) && !(F##_OUT(filter)))               \
		zlog_debug("%s: Could not find configured output %s-list %s!", \
			   peer->host, #f, F##_OUT_NAME(filter));

	if (DISTRIBUTE_OUT_NAME(filter)) {
		FILTER_EXIST_WARN(DISTRIBUTE, distribute, filter);

		if (access_list_apply(DISTRIBUTE_OUT(filter), p) == FILTER_DENY)
			return FILTER_DENY;
	}

	if (PREFIX_LIST_OUT_NAME(filter)) {
		FILTER_EXIST_WARN(PREFIX_LIST, prefix, filter);

		if (prefix_list_apply(PREFIX_LIST_OUT(filter), p)
		    == PREFIX_DENY)
			return FILTER_DENY;
	}

	if (FILTER_LIST_OUT_NAME(filter)) {
		FILTER_EXIST_WARN(FILTER_LIST, as, filter);

		if (as_list_apply(FILTER_LIST_OUT(filter), attr->aspath)
		    == AS_FILTER_DENY)
			return FILTER_DENY;
	}

	return FILTER_PERMIT;
#undef FILTER_EXIST_WARN
}

/* If community attribute includes no_export then return 1. */
static int bgp_community_filter(struct peer *peer, struct attr *attr)
{
	if (attr->community) {
		/* NO_ADVERTISE check. */
		if (community_include(attr->community, COMMUNITY_NO_ADVERTISE))
			return 1;

		/* NO_EXPORT check. */
		if (peer->sort == BGP_PEER_EBGP
		    && community_include(attr->community, COMMUNITY_NO_EXPORT))
			return 1;

		/* NO_EXPORT_SUBCONFED check. */
		if (peer->sort == BGP_PEER_EBGP
		    || peer->sort == BGP_PEER_CONFED)
			if (community_include(attr->community,
					      COMMUNITY_NO_EXPORT_SUBCONFED))
				return 1;
	}
	return 0;
}

/* Route reflection loop check. */
static int bgp_cluster_filter(struct peer *peer, struct attr *attr)
{
	struct in_addr cluster_id;

	if (attr->cluster) {
		if (peer->bgp->config & BGP_CONFIG_CLUSTER_ID)
			cluster_id = peer->bgp->cluster_id;
		else
			cluster_id = peer->bgp->router_id;

		if (cluster_loop_check(attr->cluster, cluster_id))
			return 1;
	}
	return 0;
}

static int bgp_input_modifier(struct peer *peer, struct prefix *p,
			      struct attr *attr, afi_t afi, safi_t safi,
			      const char *rmap_name, mpls_label_t *label,
			      uint32_t num_labels, struct bgp_node *rn)
{
	struct bgp_filter *filter;
	struct bgp_path_info rmap_path = { 0 };
	struct bgp_path_info_extra extra = { 0 };
	route_map_result_t ret;
	struct route_map *rmap = NULL;

	filter = &peer->filter[afi][safi];

	/* Apply default weight value. */
	if (peer->weight[afi][safi])
		attr->weight = peer->weight[afi][safi];

	if (rmap_name) {
		rmap = route_map_lookup_by_name(rmap_name);

		if (rmap == NULL)
			return RMAP_DENY;
	} else {
		if (ROUTE_MAP_IN_NAME(filter)) {
			rmap = ROUTE_MAP_IN(filter);

			if (rmap == NULL)
				return RMAP_DENY;
		}
	}

	/* Route map apply. */
	if (rmap) {
		memset(&rmap_path, 0, sizeof(struct bgp_path_info));
		/* Duplicate current value to new structure for modification. */
		rmap_path.peer = peer;
		rmap_path.attr = attr;
		rmap_path.extra = &extra;
		rmap_path.net = rn;

		extra.num_labels = num_labels;
		if (label && num_labels && num_labels <= BGP_MAX_LABELS)
			memcpy(extra.label, label,
			       num_labels * sizeof(mpls_label_t));

		SET_FLAG(peer->rmap_type, PEER_RMAP_TYPE_IN);

		/* Apply BGP route map to the attribute. */
		ret = route_map_apply(rmap, p, RMAP_BGP, &rmap_path);

		peer->rmap_type = 0;

		if (ret == RMAP_DENYMATCH)
			return RMAP_DENY;
	}
	return RMAP_PERMIT;
}

static int bgp_output_modifier(struct peer *peer, struct prefix *p,
			       struct attr *attr, afi_t afi, safi_t safi,
			       const char *rmap_name)
{
	struct bgp_path_info rmap_path;
	route_map_result_t ret;
	struct route_map *rmap = NULL;
	uint8_t rmap_type;

	/*
	 * So if we get to this point and have no rmap_name
	 * we want to just show the output as it currently
	 * exists.
	 */
	if (!rmap_name)
		return RMAP_PERMIT;

	/* Apply default weight value. */
	if (peer->weight[afi][safi])
		attr->weight = peer->weight[afi][safi];

	rmap = route_map_lookup_by_name(rmap_name);

	/*
	 * If we have a route map name and we do not find
	 * the routemap that means we have an implicit
	 * deny.
	 */
	if (rmap == NULL)
		return RMAP_DENY;

	memset(&rmap_path, 0, sizeof(struct bgp_path_info));
	/* Route map apply. */
	/* Duplicate current value to new structure for modification. */
	rmap_path.peer = peer;
	rmap_path.attr = attr;

	rmap_type = peer->rmap_type;
	SET_FLAG(peer->rmap_type, PEER_RMAP_TYPE_OUT);

	/* Apply BGP route map to the attribute. */
	ret = route_map_apply(rmap, p, RMAP_BGP, &rmap_path);

	peer->rmap_type = rmap_type;

	if (ret == RMAP_DENYMATCH)
		/*
		 * caller has multiple error paths with bgp_attr_flush()
		 */
		return RMAP_DENY;

	return RMAP_PERMIT;
}

/* If this is an EBGP peer with remove-private-AS */
static void bgp_peer_remove_private_as(struct bgp *bgp, afi_t afi, safi_t safi,
				       struct peer *peer, struct attr *attr)
{
	if (peer->sort == BGP_PEER_EBGP
	    && (peer_af_flag_check(peer, afi, safi,
				   PEER_FLAG_REMOVE_PRIVATE_AS_ALL_REPLACE)
		|| peer_af_flag_check(peer, afi, safi,
				      PEER_FLAG_REMOVE_PRIVATE_AS_REPLACE)
		|| peer_af_flag_check(peer, afi, safi,
				      PEER_FLAG_REMOVE_PRIVATE_AS_ALL)
		|| peer_af_flag_check(peer, afi, safi,
				      PEER_FLAG_REMOVE_PRIVATE_AS))) {
		// Take action on the entire aspath
		if (peer_af_flag_check(peer, afi, safi,
				       PEER_FLAG_REMOVE_PRIVATE_AS_ALL_REPLACE)
		    || peer_af_flag_check(peer, afi, safi,
					  PEER_FLAG_REMOVE_PRIVATE_AS_ALL)) {
			if (peer_af_flag_check(
				    peer, afi, safi,
				    PEER_FLAG_REMOVE_PRIVATE_AS_ALL_REPLACE))
				attr->aspath = aspath_replace_private_asns(
					attr->aspath, bgp->as, peer->as);

			// The entire aspath consists of private ASNs so create
			// an empty aspath
			else if (aspath_private_as_check(attr->aspath))
				attr->aspath = aspath_empty_get();

			// There are some public and some private ASNs, remove
			// the private ASNs
			else
				attr->aspath = aspath_remove_private_asns(
					attr->aspath, peer->as);
		}

		// 'all' was not specified so the entire aspath must be private
		// ASNs for us to do anything
		else if (aspath_private_as_check(attr->aspath)) {
			if (peer_af_flag_check(
				    peer, afi, safi,
				    PEER_FLAG_REMOVE_PRIVATE_AS_REPLACE))
				attr->aspath = aspath_replace_private_asns(
					attr->aspath, bgp->as, peer->as);
			else
				attr->aspath = aspath_empty_get();
		}
	}
}

/* If this is an EBGP peer with as-override */
static void bgp_peer_as_override(struct bgp *bgp, afi_t afi, safi_t safi,
				 struct peer *peer, struct attr *attr)
{
	if (peer->sort == BGP_PEER_EBGP
	    && peer_af_flag_check(peer, afi, safi, PEER_FLAG_AS_OVERRIDE)) {
		if (aspath_single_asn_check(attr->aspath, peer->as))
			attr->aspath = aspath_replace_specific_asn(
				attr->aspath, peer->as, bgp->as);
	}
}

void bgp_attr_add_gshut_community(struct attr *attr)
{
	struct community *old;
	struct community *new;
	struct community *merge;
	struct community *gshut;

	old = attr->community;
	gshut = community_str2com("graceful-shutdown");

	assert(gshut);

	if (old) {
		merge = community_merge(community_dup(old), gshut);

		if (old->refcnt == 0)
			community_free(&old);

		new = community_uniq_sort(merge);
		community_free(&merge);
	} else {
		new = community_dup(gshut);
	}

	community_free(&gshut);
	attr->community = new;
	attr->flag |= ATTR_FLAG_BIT(BGP_ATTR_COMMUNITIES);

	/* When we add the graceful-shutdown community we must also
	 * lower the local-preference */
	attr->flag |= ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF);
	attr->local_pref = BGP_GSHUT_LOCAL_PREF;
}

static void subgroup_announce_reset_nhop(uint8_t family, struct attr *attr)
{
	if (family == AF_INET) {
		attr->nexthop.s_addr = 0;
		attr->mp_nexthop_global_in.s_addr = 0;
	}
	if (family == AF_INET6)
		memset(&attr->mp_nexthop_global, 0, IPV6_MAX_BYTELEN);
	if (family == AF_EVPN)
		memset(&attr->mp_nexthop_global_in, 0, BGP_ATTR_NHLEN_IPV4);
}
|
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
int subgroup_announce_check(struct bgp_node *rn, struct bgp_path_info *pi,
|
2017-07-17 14:03:14 +02:00
|
|
|
struct update_subgroup *subgrp, struct prefix *p,
|
|
|
|
struct attr *attr)
|
|
|
|
{
|
|
|
|
struct bgp_filter *filter;
|
|
|
|
struct peer *from;
|
|
|
|
struct peer *peer;
|
|
|
|
struct peer *onlypeer;
|
|
|
|
struct bgp *bgp;
|
2018-10-03 02:43:07 +02:00
|
|
|
struct attr *piattr;
|
2017-07-17 14:03:14 +02:00
|
|
|
char buf[PREFIX_STRLEN];
|
lib: Introducing a 3rd state for route-map match cmd: RMAP_NOOP
Introducing a 3rd state for route_map_apply library function: RMAP_NOOP
Traditionally route map MATCH rule apis were designed to return
a binary response, consisting of either RMAP_MATCH or RMAP_NOMATCH.
(Route-map SET rule apis return RMAP_OKAY or RMAP_ERROR).
Depending on this response, the following statemachine decided the
course of action:
State1:
If match cmd returns RMAP_MATCH then, keep existing behaviour.
If routemap type is PERMIT, execute set cmds or call cmds if applicable,
otherwise PERMIT!
Else If routemap type is DENY, we DENYMATCH right away
State2:
If match cmd returns RMAP_NOMATCH, continue on to next route-map. If there
are no other rules or if all the rules return RMAP_NOMATCH, return DENYMATCH
We require a 3rd state because of the following situation:
The issue - what if, the rule api needs to abort or ignore a rule?:
"match evpn vni xx" route-map filter can be applied to incoming routes
regardless of whether the tunnel type is vxlan or mpls.
This rule should be N/A for mpls based evpn route, but applicable to only
vxlan based evpn route.
Also, this rule should be applicable for routes with VNI label only, and
not for routes without labels. For example, type 3 and type 4 EVPN routes
do not have labels, so, this match cmd should let them through.
Today, the filter produces either a match or nomatch response regardless of
whether it is mpls/vxlan, resulting in either permitting or denying the
route.. So an mpls evpn route may get filtered out incorrectly.
Eg: "route-map RM1 permit 10 ; match evpn vni 20" or
"route-map RM2 deny 20 ; match vni 20"
With the introduction of the 3rd state, we can abort this rule check safely.
How? The rules api can now return RMAP_NOOP to indicate
that it encountered an invalid check, and needs to abort just that rule,
but continue with other rules.
As a result we have a 3rd state:
State3:
If match cmd returned RMAP_NOOP
Then, proceed to other route-map, otherwise if there are no more
rules or if all the rules return RMAP_NOOP, then, return RMAP_PERMITMATCH.
Signed-off-by: Lakshman Krishnamoorthy <lkrishnamoor@vmware.com>
2019-06-19 23:04:36 +02:00
|
|
|
route_map_result_t ret;
|
2017-07-17 14:03:14 +02:00
|
|
|
int transparent;
|
|
|
|
int reflect;
|
|
|
|
afi_t afi;
|
|
|
|
safi_t safi;
|
|
|
|
int samepeer_safe = 0; /* for synthetic mplsvpns routes */
|
|
|
|
|
|
|
|
if (DISABLE_BGP_ANNOUNCE)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
afi = SUBGRP_AFI(subgrp);
|
|
|
|
safi = SUBGRP_SAFI(subgrp);
|
|
|
|
peer = SUBGRP_PEER(subgrp);
|
|
|
|
onlypeer = NULL;
|
|
|
|
if (CHECK_FLAG(peer->flags, PEER_FLAG_LONESOUL))
|
|
|
|
onlypeer = SUBGRP_PFIRST(subgrp)->peer;
|
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
from = pi->peer;
|
2017-07-17 14:03:14 +02:00
|
|
|
filter = &peer->filter[afi][safi];
|
|
|
|
bgp = SUBGRP_INST(subgrp);
|
2018-10-03 02:43:07 +02:00
|
|
|
piattr = bgp_path_info_mpath_count(pi) ? bgp_path_info_mpath_attr(pi)
|
|
|
|
: pi->attr;
|
2015-05-20 03:03:47 +02:00
|
|
|
|
bgpd: add L3/L2VPN Virtual Network Control feature
This feature adds an L3 & L2 VPN application that makes use of the VPN
and Encap SAFIs. This code is currently used to support IETF NVO3 style
operation. In NVO3 terminology it provides the Network Virtualization
Authority (NVA) and the ability to import/export IP prefixes and MAC
addresses from Network Virtualization Edges (NVEs). The code supports
per-NVE tables.
The NVE-NVA protocol used to communicate routing and Ethernet / Layer 2
(L2) forwarding information between NVAs and NVEs is referred to as the
Remote Forwarder Protocol (RFP). OpenFlow is an example RFP. For
general background on NVO3 and RFP concepts see [1]. For information on
Openflow see [2].
RFPs are integrated with BGP via the RF API contained in the new "rfapi"
BGP sub-directory. Currently, only a simple example RFP is included in
Quagga. Developers may use this example as a starting point to integrate
Quagga with an RFP of their choosing, e.g., OpenFlow. The RFAPI code
also supports the ability import/export of routing information between
VNC and customer edge routers (CEs) operating within a virtual
network. Import/export may take place between BGP views or to the
default zebera VRF.
BGP, with IP VPNs and Tunnel Encapsulation, is used to distribute VPN
information between NVAs. BGP based IP VPN support is defined in
RFC4364, BGP/MPLS IP Virtual Private Networks (VPNs), and RFC4659,
BGP-MPLS IP Virtual Private Network (VPN) Extension for IPv6 VPN . Use
of both the Encapsulation Subsequent Address Family Identifier (SAFI)
and the Tunnel Encapsulation Attribute, RFC5512, The BGP Encapsulation
Subsequent Address Family Identifier (SAFI) and the BGP Tunnel
Encapsulation Attribute, are supported. MAC address distribution does
not follow any standard BGB encoding, although it was inspired by the
early IETF EVPN concepts.
The feature is conditionally compiled and disabled by default.
Use the --enable-bgp-vnc configure option to enable.
The majority of this code was authored by G. Paul Ziemba
<paulz@labn.net>.
[1] http://tools.ietf.org/html/draft-ietf-nvo3-nve-nva-cp-req
[2] https://www.opennetworking.org/sdn-resources/technical-library
Now includes changes needed to merge with cmaster-next.
2016-05-07 20:18:56 +02:00
|
|
|
#if ENABLE_BGP_VNC
|
2017-07-17 14:03:14 +02:00
|
|
|
if (((afi == AFI_IP) || (afi == AFI_IP6)) && (safi == SAFI_MPLS_VPN)
|
2018-10-03 02:43:07 +02:00
|
|
|
&& ((pi->type == ZEBRA_ROUTE_BGP_DIRECT)
|
|
|
|
|| (pi->type == ZEBRA_ROUTE_BGP_DIRECT_EXT))) {
|
2017-07-17 14:03:14 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* direct and direct_ext type routes originate internally even
|
|
|
|
* though they can have peer pointers that reference other
|
|
|
|
* systems
|
|
|
|
*/
|
|
|
|
prefix2str(p, buf, PREFIX_STRLEN);
|
|
|
|
zlog_debug("%s: pfx %s bgp_direct->vpn route peer safe",
|
|
|
|
__func__, buf);
|
|
|
|
samepeer_safe = 1;
|
|
|
|
}
|
bgpd: add L3/L2VPN Virtual Network Control feature
This feature adds an L3 & L2 VPN application that makes use of the VPN
and Encap SAFIs. This code is currently used to support IETF NVO3 style
operation. In NVO3 terminology it provides the Network Virtualization
Authority (NVA) and the ability to import/export IP prefixes and MAC
addresses from Network Virtualization Edges (NVEs). The code supports
per-NVE tables.
The NVE-NVA protocol used to communicate routing and Ethernet / Layer 2
(L2) forwarding information between NVAs and NVEs is referred to as the
Remote Forwarder Protocol (RFP). OpenFlow is an example RFP. For
general background on NVO3 and RFP concepts see [1]. For information on
Openflow see [2].
RFPs are integrated with BGP via the RF API contained in the new "rfapi"
BGP sub-directory. Currently, only a simple example RFP is included in
Quagga. Developers may use this example as a starting point to integrate
Quagga with an RFP of their choosing, e.g., OpenFlow. The RFAPI code
also supports the ability import/export of routing information between
VNC and customer edge routers (CEs) operating within a virtual
network. Import/export may take place between BGP views or to the
default zebera VRF.
BGP, with IP VPNs and Tunnel Encapsulation, is used to distribute VPN
information between NVAs. BGP based IP VPN support is defined in
RFC4364, BGP/MPLS IP Virtual Private Networks (VPNs), and RFC4659,
BGP-MPLS IP Virtual Private Network (VPN) Extension for IPv6 VPN . Use
of both the Encapsulation Subsequent Address Family Identifier (SAFI)
and the Tunnel Encapsulation Attribute, RFC5512, The BGP Encapsulation
Subsequent Address Family Identifier (SAFI) and the BGP Tunnel
Encapsulation Attribute, are supported. MAC address distribution does
not follow any standard BGB encoding, although it was inspired by the
early IETF EVPN concepts.
The feature is conditionally compiled and disabled by default.
Use the --enable-bgp-vnc configure option to enable.
The majority of this code was authored by G. Paul Ziemba
<paulz@labn.net>.
[1] http://tools.ietf.org/html/draft-ietf-nvo3-nve-nva-cp-req
[2] https://www.opennetworking.org/sdn-resources/technical-library
Now includes changes needed to merge with cmaster-next.
2016-05-07 20:18:56 +02:00
|
|
|
#endif
|
|
|
|
|
2018-03-09 21:52:55 +01:00
|
|
|
if (((afi == AFI_IP) || (afi == AFI_IP6))
|
|
|
|
&& ((safi == SAFI_MPLS_VPN) || (safi == SAFI_UNICAST))
|
2018-10-03 02:43:07 +02:00
|
|
|
&& (pi->type == ZEBRA_ROUTE_BGP)
|
|
|
|
&& (pi->sub_type == BGP_ROUTE_IMPORTED)) {
|
2018-03-09 21:52:55 +01:00
|
|
|
|
|
|
|
/* Applies to routes leaked vpn->vrf and vrf->vpn */
|
|
|
|
|
|
|
|
samepeer_safe = 1;
|
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* With addpath we may be asked to TX all kinds of paths so make sure
|
2018-10-03 02:43:07 +02:00
|
|
|
* pi is valid */
|
|
|
|
if (!CHECK_FLAG(pi->flags, BGP_PATH_VALID)
|
|
|
|
|| CHECK_FLAG(pi->flags, BGP_PATH_HISTORY)
|
|
|
|
|| CHECK_FLAG(pi->flags, BGP_PATH_REMOVED)) {
|
2017-07-17 14:03:14 +02:00
|
|
|
return 0;
|
|
|
|
}
|
BGP: support for addpath TX
Signed-off-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Donald Sharp <sharpd@cumulusnetworks.com>
Reviewed-by: Vivek Venkataraman <vivek@cumulusnetworks.com
Ticket: CM-8014
This implements addpath TX with the first feature to use it
being "neighbor x.x.x.x addpath-tx-all-paths".
One change to show output is 'show ip bgp x.x.x.x'. If no addpath-tx
features are configured for any peers then everything looks the same
as it is today in that "Advertised to" is at the top and refers to
which peers the bestpath was advertise to.
root@superm-redxp-05[quagga-stash5]# vtysh -c 'show ip bgp 1.1.1.1'
BGP routing table entry for 1.1.1.1/32
Paths: (6 available, best #6, table Default-IP-Routing-Table)
Advertised to non peer-group peers:
r1(10.0.0.1) r2(10.0.0.2) r3(10.0.0.3) r4(10.0.0.4) r5(10.0.0.5) r6(10.0.0.6) r8(10.0.0.8)
Local, (Received from a RR-client)
12.12.12.12 (metric 20) from r2(10.0.0.2) (10.0.0.2)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 8
Last update: Fri Oct 30 18:26:44 2015
[snip]
but once you enable an addpath feature we must display "Advertised to" on a path-by-path basis:
superm-redxp-05# show ip bgp 1.1.1.1/32
BGP routing table entry for 1.1.1.1/32
Paths: (6 available, best #6, table Default-IP-Routing-Table)
Local, (Received from a RR-client)
12.12.12.12 (metric 20) from r2(10.0.0.2) (10.0.0.2)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 8
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:44 2015
Local, (Received from a RR-client)
34.34.34.34 (metric 20) from r3(10.0.0.3) (10.0.0.3)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 7
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
56.56.56.56 (metric 20) from r6(10.0.0.6) (10.0.0.6)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 6
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
56.56.56.56 (metric 20) from r5(10.0.0.5) (10.0.0.5)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 5
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
34.34.34.34 (metric 20) from r4(10.0.0.4) (10.0.0.4)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 4
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
12.12.12.12 (metric 20) from r1(10.0.0.1) (10.0.0.1)
Origin IGP, metric 0, localpref 100, valid, internal, best
AddPath ID: RX 0, TX 3
Advertised to: r1(10.0.0.1) r2(10.0.0.2) r3(10.0.0.3) r4(10.0.0.4) r5(10.0.0.5) r6(10.0.0.6) r8(10.0.0.8)
Last update: Fri Oct 30 18:26:34 2015
superm-redxp-05#
2015-11-05 18:29:43 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* If this is not the bestpath then check to see if there is an enabled
|
|
|
|
* addpath
|
|
|
|
* feature that requires us to advertise it */
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!CHECK_FLAG(pi->flags, BGP_PATH_SELECTED)) {
|
bgpd: Re-use TX Addpath IDs where possible
The motivation for this patch is to address a concerning behavior of
tx-addpath-bestpath-per-AS. Prior to this patch, all paths' TX ID was
pre-determined as the path was received from a peer. However, this meant
that any time the path selected as best from an AS changed, bgpd had no
choice but to withdraw the previous best path, and advertise the new
best-path under a new TX ID. This could cause significant network
disruption, especially for the subset of prefixes coming from only one
AS that were also communicated over a bestpath-per-AS session.
The patch's general approach is best illustrated by
txaddpath_update_ids. After a bestpath run (required for best-per-AS to
know what will and will not be sent as addpaths) ID numbers will be
stripped from paths that no longer need to be sent, and held in a pool.
Then, paths that will be sent as addpaths and do not already have ID
numbers will allocate new ID numbers, pulling first from that pool.
Finally, anything left in the pool will be returned to the allocator.
In order for this to work, ID numbers had to be split by strategy. The
tx-addpath-All strategy would keep every ID number "in use" constantly,
preventing IDs from being transferred to different paths. Rather than
create two variables for ID, this patch create a more generic array that
will easily enable more addpath strategies to be implemented. The
previously described ID manipulations will happen per addpath strategy,
and will only be run for strategies that are enabled on at least one
peer.
Finally, the ID numbers are allocated from an allocator that tracks per
AFI/SAFI/Addpath Strategy which IDs are in use. Though it would be very
improbable, there was the possibility with the free-running counter
approach for rollover to cause two paths on the same prefix to get
assigned the same TX ID. As remote as the possibility is, we prefer to
not leave it to chance.
This ID re-use method is not perfect. In some cases you could still get
withdraw-then-add behaviors where not strictly necessary. In the case of
bestpath-per-AS this requires one AS to advertise a prefix for the first
time, then a second AS withdraws that prefix, all within the space of an
already pending MRAI timer. In those situations a withdraw-then-add is
more forgivable, and fixing it would probably require a much more
significant effort, as IDs would need to be moved to ADVs instead of
paths.
Signed-off-by Mitchell Skiba <mskiba@amazon.com>
2018-05-10 01:10:02 +02:00
|
|
|
if (!bgp_addpath_tx_path(peer->addpath_type[afi][safi], pi)) {
|
2017-07-17 14:03:14 +02:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
}
|
2015-11-06 17:34:41 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Aggregate-address suppress check. */
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi->extra && pi->extra->suppress)
|
2017-07-17 14:03:14 +02:00
|
|
|
if (!UNSUPPRESS_MAP_NAME(filter)) {
|
|
|
|
return 0;
|
|
|
|
}
|
2015-05-20 03:03:47 +02:00
|
|
|
|
2018-04-25 16:23:22 +02:00
|
|
|
/*
|
|
|
|
* If we are doing VRF 2 VRF leaking via the import
|
|
|
|
* statement, we want to prevent the route going
|
|
|
|
* off box as that the RT and RD created are localy
|
|
|
|
* significant and globaly useless.
|
|
|
|
*/
|
2018-10-03 02:43:07 +02:00
|
|
|
if (safi == SAFI_MPLS_VPN && pi->extra && pi->extra->num_labels
|
|
|
|
&& pi->extra->label[0] == BGP_PREVENT_VRF_2_VRF_LEAK)
|
2018-04-25 16:23:22 +02:00
|
|
|
return 0;
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* If it's labeled safi, make sure the route has a valid label. */
|
|
|
|
if (safi == SAFI_LABELED_UNICAST) {
|
2018-10-03 02:43:07 +02:00
|
|
|
mpls_label_t label = bgp_adv_label(rn, pi, peer, afi, safi);
|
2017-07-17 14:03:14 +02:00
|
|
|
if (!bgp_is_valid_label(&label)) {
|
|
|
|
if (bgp_debug_update(NULL, p, subgrp->update_group, 0))
|
|
|
|
zlog_debug("u%" PRIu64 ":s%" PRIu64
|
|
|
|
" %s/%d is filtered - no label (%p)",
|
|
|
|
subgrp->update_group->id, subgrp->id,
|
|
|
|
inet_ntop(p->family, &p->u.prefix,
|
|
|
|
buf, SU_ADDRSTRLEN),
|
|
|
|
p->prefixlen, &label);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
}
|
2017-03-09 15:54:20 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Do not send back route to sender. */
|
|
|
|
if (onlypeer && from == onlypeer) {
|
|
|
|
return 0;
|
|
|
|
}
|
2015-05-20 03:03:47 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Do not send the default route in the BGP table if the neighbor is
|
|
|
|
* configured for default-originate */
|
|
|
|
if (CHECK_FLAG(peer->af_flags[afi][safi],
|
|
|
|
PEER_FLAG_DEFAULT_ORIGINATE)) {
|
|
|
|
if (p->family == AF_INET && p->u.prefix4.s_addr == INADDR_ANY)
|
|
|
|
return 0;
|
|
|
|
else if (p->family == AF_INET6 && p->prefixlen == 0)
|
|
|
|
return 0;
|
|
|
|
}
|
2015-05-20 03:29:19 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Transparency check. */
|
|
|
|
if (CHECK_FLAG(peer->af_flags[afi][safi], PEER_FLAG_RSERVER_CLIENT)
|
|
|
|
&& CHECK_FLAG(from->af_flags[afi][safi], PEER_FLAG_RSERVER_CLIENT))
|
|
|
|
transparent = 1;
|
|
|
|
else
|
|
|
|
transparent = 0;
|
|
|
|
|
|
|
|
/* If community is not disabled check the no-export and local. */
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!transparent && bgp_community_filter(peer, piattr)) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (bgp_debug_update(NULL, p, subgrp->update_group, 0))
|
|
|
|
zlog_debug(
|
|
|
|
"subgrpannouncecheck: community filter check fail");
|
|
|
|
return 0;
|
|
|
|
}
|
2015-05-20 03:03:47 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* If the attribute has originator-id and it is same as remote
|
|
|
|
peer's id. */
|
2018-10-03 02:43:07 +02:00
|
|
|
if (onlypeer && piattr->flag & ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID)
|
|
|
|
&& (IPV4_ADDR_SAME(&onlypeer->remote_id, &piattr->originator_id))) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (bgp_debug_update(NULL, p, subgrp->update_group, 0))
|
|
|
|
zlog_debug(
|
|
|
|
"%s [Update:SEND] %s originator-id is same as "
|
|
|
|
"remote router-id",
|
|
|
|
onlypeer->host,
|
|
|
|
prefix2str(p, buf, sizeof(buf)));
|
|
|
|
return 0;
|
|
|
|
}
|
2015-05-20 03:03:47 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* ORF prefix-list filter check */
|
|
|
|
if (CHECK_FLAG(peer->af_cap[afi][safi], PEER_CAP_ORF_PREFIX_RM_ADV)
|
|
|
|
&& (CHECK_FLAG(peer->af_cap[afi][safi], PEER_CAP_ORF_PREFIX_SM_RCV)
|
|
|
|
|| CHECK_FLAG(peer->af_cap[afi][safi],
|
|
|
|
PEER_CAP_ORF_PREFIX_SM_OLD_RCV)))
|
|
|
|
if (peer->orf_plist[afi][safi]) {
|
|
|
|
if (prefix_list_apply(peer->orf_plist[afi][safi], p)
|
|
|
|
== PREFIX_DENY) {
|
|
|
|
if (bgp_debug_update(NULL, p,
|
|
|
|
subgrp->update_group, 0))
|
|
|
|
zlog_debug(
|
|
|
|
"%s [Update:SEND] %s is filtered via ORF",
|
|
|
|
peer->host,
|
|
|
|
prefix2str(p, buf,
|
|
|
|
sizeof(buf)));
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Output filter check. */
|
2018-10-03 02:43:07 +02:00
|
|
|
if (bgp_output_filter(peer, p, piattr, afi, safi) == FILTER_DENY) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (bgp_debug_update(NULL, p, subgrp->update_group, 0))
|
|
|
|
zlog_debug("%s [Update:SEND] %s is filtered",
|
|
|
|
peer->host, prefix2str(p, buf, sizeof(buf)));
|
|
|
|
return 0;
|
|
|
|
}
|
2015-05-20 03:03:47 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* AS path loop check. */
|
2019-10-29 20:29:09 +01:00
|
|
|
if (onlypeer && onlypeer->as_path_loop_detection
|
|
|
|
&& aspath_loop_check(piattr->aspath, onlypeer->as)) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (bgp_debug_update(NULL, p, subgrp->update_group, 0))
|
|
|
|
zlog_debug(
|
|
|
|
"%s [Update:SEND] suppress announcement to peer AS %u "
|
|
|
|
"that is part of AS path.",
|
|
|
|
onlypeer->host, onlypeer->as);
|
|
|
|
return 0;
|
|
|
|
}
|
2015-05-20 03:03:47 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* If we're a CONFED we need to loop check the CONFED ID too */
|
|
|
|
if (CHECK_FLAG(bgp->config, BGP_CONFIG_CONFEDERATION)) {
|
2018-10-03 02:43:07 +02:00
|
|
|
if (aspath_loop_check(piattr->aspath, bgp->confed_id)) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (bgp_debug_update(NULL, p, subgrp->update_group, 0))
|
|
|
|
zlog_debug(
|
|
|
|
"%s [Update:SEND] suppress announcement to peer AS %u"
|
|
|
|
" is AS path.",
|
|
|
|
peer->host, bgp->confed_id);
|
|
|
|
return 0;
|
|
|
|
}
|
2015-05-20 03:03:47 +02:00
|
|
|
}
|
|
|
|
|
	/* Route-Reflect check. */
	if (from->sort == BGP_PEER_IBGP && peer->sort == BGP_PEER_IBGP)
		reflect = 1;
	else
		reflect = 0;

	/* IBGP reflection check. */
	if (reflect && !samepeer_safe) {
		/* A route from a Client peer. */
		if (CHECK_FLAG(from->af_flags[afi][safi],
			       PEER_FLAG_REFLECTOR_CLIENT)) {
			/* Reflect to all the Non-Client peers and also to the
			 * Client peers other than the originator. The
			 * originator check is already done, so there is
			 * nothing to do. */
			/* no bgp client-to-client reflection check. */
			if (bgp_flag_check(bgp, BGP_FLAG_NO_CLIENT_TO_CLIENT))
				if (CHECK_FLAG(peer->af_flags[afi][safi],
					       PEER_FLAG_REFLECTOR_CLIENT))
					return 0;
		} else {
			/* A route from a Non-client peer. Reflect to all other
			 * clients. */
			if (!CHECK_FLAG(peer->af_flags[afi][safi],
					PEER_FLAG_REFLECTOR_CLIENT))
				return 0;
		}
	}

	/* For modify attribute, copy it to temporary structure. */
	*attr = *piattr;

	/* If local-preference is not set. */
	if ((peer->sort == BGP_PEER_IBGP || peer->sort == BGP_PEER_CONFED)
	    && (!(attr->flag & ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF)))) {
		attr->flag |= ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF);
		attr->local_pref = bgp->default_local_pref;
	}
	/* If originator-id is not set and the route is to be reflected,
	 * set the originator id */
	if (reflect
	    && (!(attr->flag & ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID)))) {
		IPV4_ADDR_COPY(&(attr->originator_id), &(from->remote_id));
		SET_FLAG(attr->flag, ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID));
	}
	/* Remove MED if it's an EBGP peer - will get overwritten by route-maps
	 */
	if (peer->sort == BGP_PEER_EBGP
	    && attr->flag & ATTR_FLAG_BIT(BGP_ATTR_MULTI_EXIT_DISC)) {
		if (from != bgp->peer_self && !transparent
		    && !CHECK_FLAG(peer->af_flags[afi][safi],
				   PEER_FLAG_MED_UNCHANGED))
			attr->flag &=
				~(ATTR_FLAG_BIT(BGP_ATTR_MULTI_EXIT_DISC));
	}

	/* Since the nexthop attribute can vary per peer, it is not explicitly
	 * set in announce check; only certain flags and length (or number of
	 * nexthops -- for IPv6/MP_REACH) are set here in order to guide the
	 * update formation code in setting the nexthop(s) on a per peer basis
	 * in reformat_peer().
	 * Typically, the source nexthop in the attribute is preserved, but in
	 * the scenarios where we know it will always be overwritten, we reset
	 * the nexthop to "0" in an attempt to achieve better Update packing.
	 * An example of this is when a prefix from each of 2 IBGP peers needs
	 * to be announced to an EBGP peer (and they have the same attributes
	 * barring their nexthop).
	 */
	if (reflect)
		SET_FLAG(attr->rmap_change_flags, BATTR_REFLECTED);

#define NEXTHOP_IS_V6                                                          \
	((safi != SAFI_ENCAP && safi != SAFI_MPLS_VPN                          \
	  && (p->family == AF_INET6 || peer_cap_enhe(peer, afi, safi)))        \
	 || ((safi == SAFI_ENCAP || safi == SAFI_MPLS_VPN)                     \
	     && attr->mp_nexthop_len >= IPV6_MAX_BYTELEN))

	/* IPv6/MP starts with 1 nexthop. The link-local address is passed only
	 * if the peer (group) is configured to receive the link-local nexthop
	 * unchanged and it is available in the prefix, OR we're not reflecting
	 * the route, the link-local nexthop address is valid, the peer (group)
	 * to whom we're going to announce is on a shared network, and this is
	 * either a self-originated route or the peer is EBGP.
	 * By checking that the nexthop LL address is valid we make sure that
	 * we do not announce an LL address as `::`.
	 */
	if (NEXTHOP_IS_V6) {
		attr->mp_nexthop_len = BGP_ATTR_NHLEN_IPV6_GLOBAL;
		if ((CHECK_FLAG(peer->af_flags[afi][safi],
				PEER_FLAG_NEXTHOP_LOCAL_UNCHANGED)
		     && IN6_IS_ADDR_LINKLOCAL(&attr->mp_nexthop_local))
		    || (!reflect
			&& IN6_IS_ADDR_LINKLOCAL(&peer->nexthop.v6_local)
			&& peer->shared_network
			&& (from == bgp->peer_self
			    || peer->sort == BGP_PEER_EBGP))) {
			attr->mp_nexthop_len =
				BGP_ATTR_NHLEN_IPV6_GLOBAL_AND_LL;
		}

		/* Clear off link-local nexthop in source, whenever it is not
		 * needed to ensure more prefixes share the same attribute for
		 * announcement.
		 */
		if (!(CHECK_FLAG(peer->af_flags[afi][safi],
				 PEER_FLAG_NEXTHOP_LOCAL_UNCHANGED)))
			memset(&attr->mp_nexthop_local, 0, IPV6_MAX_BYTELEN);
	}
	bgp_peer_remove_private_as(bgp, afi, safi, peer, attr);
	bgp_peer_as_override(bgp, afi, safi, peer, attr);

	/* Route map & unsuppress-map apply. */
	if (ROUTE_MAP_OUT_NAME(filter) || (pi->extra && pi->extra->suppress)) {
		struct bgp_path_info rmap_path = {0};
		struct bgp_path_info_extra dummy_rmap_path_extra = {0};
		struct attr dummy_attr = {0};

		memset(&rmap_path, 0, sizeof(struct bgp_path_info));
		rmap_path.peer = peer;
		rmap_path.attr = attr;
		rmap_path.net = rn;

		if (pi->extra) {
			memcpy(&dummy_rmap_path_extra, pi->extra,
			       sizeof(struct bgp_path_info_extra));
			rmap_path.extra = &dummy_rmap_path_extra;
		}

		/* don't confuse inbound and outbound setting */
		RESET_FLAG(attr->rmap_change_flags);

		/*
		 * The route reflector is not allowed to modify the attributes
		 * of the reflected IBGP routes unless explicitly allowed.
		 */
		if ((from->sort == BGP_PEER_IBGP && peer->sort == BGP_PEER_IBGP)
		    && !bgp_flag_check(bgp,
				       BGP_FLAG_RR_ALLOW_OUTBOUND_POLICY)) {
			dummy_attr = *attr;
			rmap_path.attr = &dummy_attr;
		}

		SET_FLAG(peer->rmap_type, PEER_RMAP_TYPE_OUT);

		if (pi->extra && pi->extra->suppress)
			ret = route_map_apply(UNSUPPRESS_MAP(filter), p,
					      RMAP_BGP, &rmap_path);
		else
			ret = route_map_apply(ROUTE_MAP_OUT(filter), p,
					      RMAP_BGP, &rmap_path);

		peer->rmap_type = 0;

		if (ret == RMAP_DENYMATCH) {
			if (bgp_debug_update(NULL, p, subgrp->update_group, 0))
				zlog_debug(
					"%s [Update:SEND] %s is filtered by route-map",
					peer->host,
					prefix2str(p, buf, sizeof(buf)));

			bgp_attr_flush(attr);
			return 0;
		}
	}
	/* RFC 8212 to prevent route leaks.
	 * This specification intends to improve this situation by requiring
	 * the explicit configuration of both BGP Import and Export Policies
	 * for any External BGP (EBGP) session such as customers, peers, or
	 * confederation boundaries for all enabled address families. Through
	 * codification of the aforementioned requirement, operators will
	 * benefit from consistent behavior across different BGP
	 * implementations.
	 */
	if (peer->bgp->ebgp_requires_policy == DEFAULT_EBGP_POLICY_ENABLED)
		if (!bgp_outbound_policy_exists(peer, filter))
			return 0;

	/* draft-ietf-idr-deprecate-as-set-confed-set
	 * Filter routes having AS_SET or AS_CONFED_SET in the path.
	 * Eventually, this document (if approved) updates RFC 4271
	 * and RFC 5065 by eliminating AS_SET and AS_CONFED_SET types,
	 * and obsoletes RFC 6472.
	 */
	if (peer->bgp->reject_as_sets == BGP_REJECT_AS_SETS_ENABLED)
		if (aspath_check_as_sets(attr->aspath))
			return 0;

	if (bgp_flag_check(bgp, BGP_FLAG_GRACEFUL_SHUTDOWN)) {
		if (peer->sort == BGP_PEER_IBGP
		    || peer->sort == BGP_PEER_CONFED) {
			attr->flag |= ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF);
			attr->local_pref = BGP_GSHUT_LOCAL_PREF;
		} else {
			bgp_attr_add_gshut_community(attr);
		}
	}
	/* After route-map has been applied, we check to see if the nexthop to
	 * be carried in the attribute (that is used for the announcement) can
	 * be cleared off or not. We do this in all cases where we would be
	 * setting the nexthop to "ourselves". For IPv6, we only need to
	 * consider the global nexthop here; the link-local nexthop would have
	 * been cleared already, and if not, it is required by the update
	 * formation code. Also see earlier comments in this function.
	 */
	/*
	 * If route-map has performed some operation on the nexthop or the peer
	 * configuration says to pass it unchanged, we cannot reset the nexthop
	 * here, so only attempt to do it if these aren't true. Note that the
	 * route-map handler itself might have cleared the nexthop, if for
	 * example, it is configured as 'peer-address'.
	 */
	if (!bgp_rmap_nhop_changed(attr->rmap_change_flags,
				   piattr->rmap_change_flags)
	    && !transparent
	    && !CHECK_FLAG(peer->af_flags[afi][safi],
			   PEER_FLAG_NEXTHOP_UNCHANGED)) {
		/* We can reset the nexthop, if setting (or forcing) it to
		 * 'self' */
		if (CHECK_FLAG(peer->af_flags[afi][safi],
			       PEER_FLAG_NEXTHOP_SELF)
		    || CHECK_FLAG(peer->af_flags[afi][safi],
				  PEER_FLAG_FORCE_NEXTHOP_SELF)) {
			if (!reflect
			    || CHECK_FLAG(peer->af_flags[afi][safi],
					  PEER_FLAG_FORCE_NEXTHOP_SELF))
				subgroup_announce_reset_nhop(
					(peer_cap_enhe(peer, afi, safi)
						 ? AF_INET6
						 : p->family),
					attr);
		} else if (peer->sort == BGP_PEER_EBGP) {
			/* Can also reset the nexthop if announcing to EBGP,
			 * but only if no peer in the subgroup is on a shared
			 * subnet.
			 * Note: 3rd party nexthop currently implemented for
			 * IPv4 only.
			 */
			if ((p->family == AF_INET)
			    && (!bgp_subgrp_multiaccess_check_v4(
				       piattr->nexthop, subgrp)))
				subgroup_announce_reset_nhop(
					(peer_cap_enhe(peer, afi, safi)
						 ? AF_INET6
						 : p->family),
					attr);

			if ((p->family == AF_INET6)
			    && (!bgp_subgrp_multiaccess_check_v6(
				       piattr->mp_nexthop_global, subgrp)))
				subgroup_announce_reset_nhop(
					(peer_cap_enhe(peer, afi, safi)
						 ? AF_INET6
						 : p->family),
					attr);
		} else if (CHECK_FLAG(pi->flags, BGP_PATH_ANNC_NH_SELF)) {
			/*
			 * This flag is used for leaked vpn-vrf routes
			 */
			int family = p->family;

			if (peer_cap_enhe(peer, afi, safi))
				family = AF_INET6;

			if (bgp_debug_update(NULL, p, subgrp->update_group, 0))
				zlog_debug(
					"%s: BGP_PATH_ANNC_NH_SELF, family=%s",
					__func__, family2str(family));
			subgroup_announce_reset_nhop(family, attr);
		}
	}

	/* If IPv6/MP and the nexthop does not have any override and happens
	 * to be a link-local address, reset it so that we don't pass along
	 * the source's link-local IPv6 address to recipients who may not be
	 * on the same interface.
	 */
	if (p->family == AF_INET6 || peer_cap_enhe(peer, afi, safi)) {
		if (IN6_IS_ADDR_LINKLOCAL(&attr->mp_nexthop_global))
			subgroup_announce_reset_nhop(AF_INET6, attr);
	}

	return 1;
}
void bgp_best_selection(struct bgp *bgp, struct bgp_node *rn,
			struct bgp_maxpaths_cfg *mpath_cfg,
			struct bgp_path_info_pair *result, afi_t afi,
			safi_t safi)
{
	struct bgp_path_info *new_select;
	struct bgp_path_info *old_select;
	struct bgp_path_info *pi;
	struct bgp_path_info *pi1;
	struct bgp_path_info *pi2;
	struct bgp_path_info *nextpi = NULL;
	int paths_eq, do_mpath, debug;
	struct list mp_list;
	char pfx_buf[PREFIX2STR_BUFFER];
	char path_buf[PATH_ADDPATH_STR_BUFFER];

	bgp_mp_list_init(&mp_list);
	do_mpath =
		(mpath_cfg->maxpaths_ebgp > 1 || mpath_cfg->maxpaths_ibgp > 1);

	debug = bgp_debug_bestpath(&rn->p);

	if (debug)
		prefix2str(&rn->p, pfx_buf, sizeof(pfx_buf));

	/* bgp deterministic-med */
	new_select = NULL;
	if (bgp_flag_check(bgp, BGP_FLAG_DETERMINISTIC_MED)) {

		/* Clear BGP_PATH_DMED_SELECTED for all paths */
		for (pi1 = bgp_node_get_bgp_path_info(rn); pi1;
		     pi1 = pi1->next)
			bgp_path_info_unset_flag(rn, pi1,
						 BGP_PATH_DMED_SELECTED);

		for (pi1 = bgp_node_get_bgp_path_info(rn); pi1;
		     pi1 = pi1->next) {
			if (CHECK_FLAG(pi1->flags, BGP_PATH_DMED_CHECK))
				continue;
			if (BGP_PATH_HOLDDOWN(pi1))
				continue;
			if (pi1->peer != bgp->peer_self)
				if (pi1->peer->status != Established)
					continue;

			new_select = pi1;
			if (pi1->next) {
				for (pi2 = pi1->next; pi2; pi2 = pi2->next) {
					if (CHECK_FLAG(pi2->flags,
						       BGP_PATH_DMED_CHECK))
						continue;
					if (BGP_PATH_HOLDDOWN(pi2))
						continue;
					if (pi2->peer != bgp->peer_self
					    && !CHECK_FLAG(
						       pi2->peer->sflags,
						       PEER_STATUS_NSF_WAIT))
						if (pi2->peer->status
						    != Established)
							continue;

					if (!aspath_cmp_left(pi1->attr->aspath,
							     pi2->attr->aspath)
					    && !aspath_cmp_left_confed(
						       pi1->attr->aspath,
						       pi2->attr->aspath))
						continue;

					if (bgp_path_info_cmp(
						    bgp, pi2, new_select,
						    &paths_eq, mpath_cfg,
						    debug, pfx_buf, afi, safi,
						    &rn->reason)) {
						bgp_path_info_unset_flag(
							rn, new_select,
							BGP_PATH_DMED_SELECTED);
						new_select = pi2;
					}

					bgp_path_info_set_flag(
						rn, pi2, BGP_PATH_DMED_CHECK);
				}
			}
			bgp_path_info_set_flag(rn, new_select,
					       BGP_PATH_DMED_CHECK);
			bgp_path_info_set_flag(rn, new_select,
					       BGP_PATH_DMED_SELECTED);

			if (debug) {
				bgp_path_info_path_with_addpath_rx_str(
					new_select, path_buf);
				zlog_debug("%s: %s is the bestpath from AS %u",
					   pfx_buf, path_buf,
					   aspath_get_first_as(
						   new_select->attr->aspath));
			}
		}
	}
	/* Check old selected route and new selected route. */
	old_select = NULL;
	new_select = NULL;
	for (pi = bgp_node_get_bgp_path_info(rn);
	     (pi != NULL) && (nextpi = pi->next, 1); pi = nextpi) {
		if (CHECK_FLAG(pi->flags, BGP_PATH_SELECTED))
			old_select = pi;

		if (BGP_PATH_HOLDDOWN(pi)) {
			/* reap REMOVED routes, if needs be
			 * selected route must stay for a while longer though
			 */
			if (CHECK_FLAG(pi->flags, BGP_PATH_REMOVED)
			    && (pi != old_select))
				bgp_path_info_reap(rn, pi);

			if (debug)
				zlog_debug("%s: pi %p in holddown", __func__,
					   pi);

			continue;
		}

		if (pi->peer && pi->peer != bgp->peer_self
		    && !CHECK_FLAG(pi->peer->sflags, PEER_STATUS_NSF_WAIT))
			if (pi->peer->status != Established) {
				if (debug)
					zlog_debug(
						"%s: pi %p non self peer %s not estab state",
						__func__, pi, pi->peer->host);

				continue;
			}

		if (bgp_flag_check(bgp, BGP_FLAG_DETERMINISTIC_MED)
		    && (!CHECK_FLAG(pi->flags, BGP_PATH_DMED_SELECTED))) {
			bgp_path_info_unset_flag(rn, pi, BGP_PATH_DMED_CHECK);
			if (debug)
				zlog_debug("%s: pi %p dmed", __func__, pi);
			continue;
		}

		bgp_path_info_unset_flag(rn, pi, BGP_PATH_DMED_CHECK);

		if (bgp_path_info_cmp(bgp, pi, new_select, &paths_eq, mpath_cfg,
				      debug, pfx_buf, afi, safi, &rn->reason)) {
			new_select = pi;
		}
	}
	/* Now that we know which path is the bestpath see if any of the other
	 * paths qualify as multipaths
	 */
	if (debug) {
		if (new_select)
			bgp_path_info_path_with_addpath_rx_str(new_select,
							       path_buf);
		else
			snprintf(path_buf, sizeof(path_buf), "NONE");
		zlog_debug(
			"%s: After path selection, newbest is %s oldbest was %s",
			pfx_buf, path_buf,
			old_select ? old_select->peer->host : "NONE");
	}

	if (do_mpath && new_select) {
		for (pi = bgp_node_get_bgp_path_info(rn);
		     (pi != NULL) && (nextpi = pi->next, 1); pi = nextpi) {

			if (debug)
				bgp_path_info_path_with_addpath_rx_str(
					pi, path_buf);

			if (pi == new_select) {
				if (debug)
					zlog_debug(
						"%s: %s is the bestpath, add to the multipath list",
						pfx_buf, path_buf);
				bgp_mp_list_add(&mp_list, pi);
				continue;
			}

			if (BGP_PATH_HOLDDOWN(pi))
				continue;

			if (pi->peer && pi->peer != bgp->peer_self
			    && !CHECK_FLAG(pi->peer->sflags,
					   PEER_STATUS_NSF_WAIT))
				if (pi->peer->status != Established)
					continue;

			if (!bgp_path_info_nexthop_cmp(pi, new_select)) {
				if (debug)
					zlog_debug(
						"%s: %s has the same nexthop as the bestpath, skip it",
						pfx_buf, path_buf);
				continue;
			}

			bgp_path_info_cmp(bgp, pi, new_select, &paths_eq,
					  mpath_cfg, debug, pfx_buf, afi, safi,
					  &rn->reason);

			if (paths_eq) {
				if (debug)
					zlog_debug(
						"%s: %s is equivalent to the bestpath, add to the multipath list",
						pfx_buf, path_buf);
				bgp_mp_list_add(&mp_list, pi);
			}
		}
	}
	bgp_path_info_mpath_update(rn, new_select, old_select, &mp_list,
				   mpath_cfg);
	bgp_path_info_mpath_aggregate_update(new_select, old_select);
	bgp_mp_list_clear(&mp_list);
	bgp_addpath_update_ids(bgp, rn, afi, safi);

	result->old = old_select;
	result->new = new_select;

	return;
}

/*
 * A new route/change in bestpath of an existing route. Evaluate the path
 * for advertisement to the subgroup.
 */
int subgroup_process_announce_selected(struct update_subgroup *subgrp,
				       struct bgp_path_info *selected,
				       struct bgp_node *rn,
				       uint32_t addpath_tx_id)
{
	struct prefix *p;
	struct peer *onlypeer;
	struct attr attr;
	afi_t afi;
	safi_t safi;
BGP: support for addpath TX
Signed-off-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Donald Sharp <sharpd@cumulusnetworks.com>
Reviewed-by: Vivek Venkataraman <vivek@cumulusnetworks.com>
Ticket: CM-8014
This implements addpath TX with the first feature to use it
being "neighbor x.x.x.x addpath-tx-all-paths".
One change is to the output of 'show ip bgp x.x.x.x'. If no addpath-tx
features are configured for any peers then everything looks the same
as it is today in that "Advertised to" is at the top and refers to
which peers the bestpath was advertised to.
root@superm-redxp-05[quagga-stash5]# vtysh -c 'show ip bgp 1.1.1.1'
BGP routing table entry for 1.1.1.1/32
Paths: (6 available, best #6, table Default-IP-Routing-Table)
Advertised to non peer-group peers:
r1(10.0.0.1) r2(10.0.0.2) r3(10.0.0.3) r4(10.0.0.4) r5(10.0.0.5) r6(10.0.0.6) r8(10.0.0.8)
Local, (Received from a RR-client)
12.12.12.12 (metric 20) from r2(10.0.0.2) (10.0.0.2)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 8
Last update: Fri Oct 30 18:26:44 2015
[snip]
but once an addpath feature is enabled we must display "Advertised to" on a path-by-path basis:
superm-redxp-05# show ip bgp 1.1.1.1/32
BGP routing table entry for 1.1.1.1/32
Paths: (6 available, best #6, table Default-IP-Routing-Table)
Local, (Received from a RR-client)
12.12.12.12 (metric 20) from r2(10.0.0.2) (10.0.0.2)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 8
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:44 2015
Local, (Received from a RR-client)
34.34.34.34 (metric 20) from r3(10.0.0.3) (10.0.0.3)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 7
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
56.56.56.56 (metric 20) from r6(10.0.0.6) (10.0.0.6)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 6
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
56.56.56.56 (metric 20) from r5(10.0.0.5) (10.0.0.5)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 5
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
34.34.34.34 (metric 20) from r4(10.0.0.4) (10.0.0.4)
Origin IGP, metric 0, localpref 100, valid, internal
AddPath ID: RX 0, TX 4
Advertised to: r8(10.0.0.8)
Last update: Fri Oct 30 18:26:39 2015
Local, (Received from a RR-client)
12.12.12.12 (metric 20) from r1(10.0.0.1) (10.0.0.1)
Origin IGP, metric 0, localpref 100, valid, internal, best
AddPath ID: RX 0, TX 3
Advertised to: r1(10.0.0.1) r2(10.0.0.2) r3(10.0.0.3) r4(10.0.0.4) r5(10.0.0.5) r6(10.0.0.6) r8(10.0.0.8)
Last update: Fri Oct 30 18:26:34 2015
superm-redxp-05#
2015-11-05 18:29:43 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
p = &rn->p;
|
|
|
|
afi = SUBGRP_AFI(subgrp);
|
|
|
|
safi = SUBGRP_SAFI(subgrp);
|
|
|
|
onlypeer = ((SUBGRP_PCOUNT(subgrp) == 1) ? (SUBGRP_PFIRST(subgrp))->peer
|
|
|
|
: NULL);
|
|
|
|
|
2018-02-19 23:55:30 +01:00
|
|
|
if (BGP_DEBUG(update, UPDATE_OUT)) {
|
|
|
|
char buf_prefix[PREFIX_STRLEN];
|
|
|
|
prefix2str(p, buf_prefix, sizeof(buf_prefix));
|
2018-03-09 21:52:55 +01:00
|
|
|
zlog_debug("%s: p=%s, selected=%p", __func__, buf_prefix,
|
|
|
|
selected);
|
2018-02-19 23:55:30 +01:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* First update is deferred until ORF or ROUTE-REFRESH is received */
|
2018-03-06 20:02:52 +01:00
|
|
|
if (onlypeer && CHECK_FLAG(onlypeer->af_sflags[afi][safi],
|
|
|
|
PEER_STATUS_ORF_WAIT_REFRESH))
|
2017-07-17 14:03:14 +02:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
memset(&attr, 0, sizeof(struct attr));
|
|
|
|
/* It's initialized in bgp_announce_check() */
|
|
|
|
|
|
|
|
/* Announcement to the subgroup. If the route is filtered withdraw it.
|
|
|
|
*/
|
|
|
|
if (selected) {
|
|
|
|
if (subgroup_announce_check(rn, selected, subgrp, p, &attr))
|
|
|
|
bgp_adj_out_set_subgroup(rn, subgrp, &attr, selected);
|
|
|
|
else
|
|
|
|
bgp_adj_out_unset_subgroup(rn, subgrp, 1,
|
bgpd: Re-use TX Addpath IDs where possible
The motivation for this patch is to address a concerning behavior of
tx-addpath-bestpath-per-AS. Prior to this patch, all paths' TX ID was
pre-determined as the path was received from a peer. However, this meant
that any time the path selected as best from an AS changed, bgpd had no
choice but to withdraw the previous best path, and advertise the new
best-path under a new TX ID. This could cause significant network
disruption, especially for the subset of prefixes coming from only one
AS that were also communicated over a bestpath-per-AS session.
The patch's general approach is best illustrated by
txaddpath_update_ids. After a bestpath run (required for best-per-AS to
know what will and will not be sent as addpaths) ID numbers will be
stripped from paths that no longer need to be sent, and held in a pool.
Then, paths that will be sent as addpaths and do not already have ID
numbers will allocate new ID numbers, pulling first from that pool.
Finally, anything left in the pool will be returned to the allocator.
In order for this to work, ID numbers had to be split by strategy. The
tx-addpath-All strategy would keep every ID number "in use" constantly,
preventing IDs from being transferred to different paths. Rather than
create two variables for ID, this patch creates a more generic array that
will easily enable more addpath strategies to be implemented. The
previously described ID manipulations will happen per addpath strategy,
and will only be run for strategies that are enabled on at least one
peer.
Finally, the ID numbers are allocated from an allocator that tracks per
AFI/SAFI/Addpath Strategy which IDs are in use. Though it would be very
improbable, there was the possibility with the free-running counter
approach for rollover to cause two paths on the same prefix to get
assigned the same TX ID. As remote as the possibility is, we prefer to
not leave it to chance.
This ID re-use method is not perfect. In some cases you could still get
withdraw-then-add behaviors where not strictly necessary. In the case of
bestpath-per-AS this requires one AS to advertise a prefix for the first
time, then a second AS withdraws that prefix, all within the space of an
already pending MRAI timer. In those situations a withdraw-then-add is
more forgivable, and fixing it would probably require a much more
significant effort, as IDs would need to be moved to ADVs instead of
paths.
Signed-off-by: Mitchell Skiba <mskiba@amazon.com>
2018-05-10 01:10:02 +02:00
|
|
|
addpath_tx_id);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/* If selected is NULL we must withdraw the path using addpath_tx_id */
|
|
|
|
else {
|
|
|
|
bgp_adj_out_unset_subgroup(rn, subgrp, 1, addpath_tx_id);
|
|
|
|
}
|
2012-05-07 18:53:05 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
return 0;
|
2005-06-01 Paul Jakma <paul.jakma@sun.com>
* bgpd/(general) refcount struct peer and bgp_info, hence allowing us
to add work_queues for bgp_process.
* bgpd/bgp_route.h: (struct bgp_info) Add 'lock' field for refcount.
Add bgp_info_{lock,unlock} helper functions.
Add bgp_info_{add,delete} helpers, to remove need for
users managing locking/freeing of bgp_info and bgp_node's.
* bgpd/bgp_table.h: (struct bgp_node) Add a flags field, and
BGP_NODE_PROCESS_SCHEDULED to merge redundant processing of
nodes.
* bgpd/bgp_fsm.h: Make the ON/OFF/ADD/REMOVE macros lock and unlock
peer reference as appropriate.
* bgpd/bgp_damp.c: Remove its internal prototypes for
bgp_info_delete/free. Just use bgp_info_delete.
* bgpd/bgpd.h: (struct bgp_master) Add work_queue pointers.
(struct peer) Add reference count 'lock'
(peer_lock,peer_unlock) New helpers to take/release reference
on struct peer.
* bgpd/bgp_advertise.c: (general) Add peer and bgp_info refcounting
and balance how references are taken and released.
(bgp_advertise_free) release bgp_info reference, if appropriate
(bgp_adj_out_free) unlock peer
(bgp_advertise_clean) leave the adv references alone, or else
call bgp_advertise_free can't unlock them.
(bgp_adj_out_set) lock the peer on new adj's, leave the reference
alone otherwise. lock the new bgp_info reference.
(bgp_adj_in_set) lock the peer reference
(bgp_adj_in_remove) and unlock it here
(bgp_sync_delete) make hash_free on peer conditional, just in
case.
* bgpd/bgp_fsm.c: (general) document that the timers depend on
bgp_event to release a peer reference.
(bgp_fsm_change_status) moved up the file, unchanged.
(bgp_stop) Decrement peer lock as many times as cancel_event
canceled - shouldn't be needed but just in case.
stream_fifo_clean of obuf made conditional, just in case.
(bgp_event) always unlock the peer, regardless of return value
of bgp_fsm_change_status.
* bgpd/bgp_packet.c: (general) change several bgp_stop's to BGP_EVENT's.
(bgp_read) Add a mysterious extra peer_unlock for ACCEPT_PEERs
along with a comment on it.
* bgpd/bgp_route.c: (general) Add refcounting of bgp_info, cleanup
some of the resource management around bgp_info. Refcount peer.
Add workqueues for bgp_process and clear_table.
(bgp_info_new) make static
(bgp_info_free) Ditto, and unlock the peer reference.
(bgp_info_lock,bgp_info_unlock) new exported functions
(bgp_info_add) Add a bgp_info to a bgp_node in correct fashion,
taking care of reference counts.
(bgp_info_delete) do the opposite of bgp_info_add.
(bgp_process_rsclient) Converted into a work_queue work function.
(bgp_process_main) ditto.
(bgp_processq_del) process work queue item deconstructor
(bgp_process_queue_init) process work queue init
(bgp_process) call init function if required, set up queue item
and add to queue, rather than calling process functions directly.
(bgp_rib_remove) let bgp_info_delete manage bgp_info refcounts
(bgp_rib_withdraw) ditto
(bgp_update_rsclient) let bgp_info_add manage refcounts
(bgp_update_main) ditto
(bgp_clear_route_node) clear_node_queue work function, does
per-node aspects of what bgp_clear_route_table did previously
(bgp_clear_node_queue_del) clear_node_queue item delete function
(bgp_clear_node_complete) clear_node_queue completion function,
it unplugs the process queues, which have to be blocked while
clear_node_queue is being processed to prevent a race.
(bgp_clear_node_queue_init) init function for clear_node_queue
work queues
(bgp_clear_route_table) Sets up items onto a workqueue now, rather
than clearing each node directly. Plugs both process queues to
avoid potential race.
(bgp_static_withdraw_rsclient) let bgp_info_{add,delete} manage
bgp_info refcounts.
(bgp_static_update_rsclient) ditto
(bgp_static_update_main) ditto
(bgp_static_update_vpnv4) ditto, remove unneeded cast.
(bgp_static_withdraw) see bgp_static_withdraw_rsclient
(bgp_static_withdraw_vpnv4) ditto
(bgp_aggregate_{route,add,delete}) ditto
(bgp_redistribute_{add,delete,withdraw}) ditto
* bgpd/bgp_vty.c: (peer_rsclient_set_vty) lock rsclient list peer
reference
(peer_rsclient_unset_vty) ditto, but unlock same reference
* bgpd/bgpd.c: (peer_free) handle frees of info to be kept for lifetime
of struct peer.
(peer_lock,peer_unlock) peer refcount helpers
(peer_new) add initial refcounts
(peer_create,peer_create_accept) lock peer as appropriate
(peer_delete) unlock as appropriate, move out some free's to
peer_free.
(peer_group_bind,peer_group_unbind) peer refcounting as
appropriate.
(bgp_create) check CALLOC return value.
(bgp_terminate) free workqueues too.
* lib/memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
MTYPE_BGP_CLEAR_NODE_QUEUE
2005-06-01 13:17:05 +02:00
|
|
|
}
|
2004-09-13 Jose Luis Rubio <jrubio@dit.upm.es>
(at Technical University of Madrid as part of Euro6ix Project)
Enhanced Route Server functionality and Route-Maps:
* bgpd/bgpd.h: Modified 'struct peer' and 'struct bgp_filter' to
support rs-clients. A 'struct bgp_table *rib' has been added to the
first (to maintain a separate RIB for each rs-client) and two new
route-maps have been added to the last (for import/export policies).
Added the following #defines: RMAP_{IN|OUT|IMPORT|EXPORT|MAX},
PEER_RMAP_TYPE_{IMPORT|EXPORT} and BGP_CLEAR_SOFT_RSCLIENT.
* bgpd/bgpd.c: Modified the functions that create/delete/etc peers in
order to consider the new fields included in 'struct peer' for
supporting rs-clients, i.e. the import/export route-maps and the
'struct bgp_table'.
* bgpd/bgp_route.{ch}: Modified several functions related with
receiving/sending announces in order to support the new Route Server
capabilities.
Function 'bgp_process' has been reorganized, creating an auxiliary
function for best path selection ('bgp_best_selection').
Modified 'bgp_show' and 'bgp_show_route' for displaying information
about any RIB (and not only the main bgp RIB).
Added commands for displaying information about RS-clients RIBs:
'show bgp rsclient (A.B.C.D|X:X::X:X)', 'show bgp rsclient
(A.B.C.D|X:X::X:X) X:X::X:X/M', etc
* bgpd/bgp_table.{ch}: The structure 'struct bgp_table' now has two
new fields: type (which can take the values BGP_TABLE_{MAIN|RSCLIENT})
and 'void *owner' which points to 'struct bgp' or 'struct peer' which
owns the table.
When creating a new bgp_table, 'type=BGP_TABLE_MAIN' is set by default.
* bgpd/bgp_vty.c: The commands 'neighbor ... route-server-client' and
'no neighbor ... route-server-client' now not only set/unset the flag
PEER_FLAG_RSERVER_CLIENT, but they create/destroy the 'struct
bgp_table' of the peer. Special actions are taken for peer_groups.
Command 'neighbor ... route-map WORD (in|out)' now also supports two
new kinds of route-map: 'import' and 'export'.
Added commands 'clear bgp * rsclient', etc. These commands allow a new
kind of soft_reconfig which affects only the RIB of the specified
RS-client.
Added commands 'show bgp rsclient summary', etc which display a
summary of the rs-clients configured for the corresponding address
family.
* bgpd/bgp_routemap.c: A new match statement is available,
'match peer (A.B.C.D|X:X::X:X)'. This statement can only be used in
import/export route-maps, and it matches when the peer who announces
(when used in an import route-map) or is going to receive (when used
in an export route-map) the route is the same as the one specified
in the statement.
For peer-groups the statement matches if the specified peer is a member
of the peer-group.
A special version of the command, 'match peer local', matches with
routes originated by the Route Server (defined with 'network ...',
redistributed routes and default-originate).
* lib/routemap.{ch}: Added a new clause 'call NAME' for use in
route-maps. It jumps into the specified route-map and when it returns
the first route-map ends if the called RM returns DENY_MATCH, or
continues otherwise.
2004-09-13 07:12:46 +02:00
|
|
|
|
2016-09-05 19:35:19 +02:00
|
|
|
/*
|
2016-09-05 19:49:16 +02:00
|
|
|
* Clear IGP changed flag and attribute changed flag for a route (all paths).
|
|
|
|
* This is called at the end of route processing.
|
2016-09-05 19:35:19 +02:00
|
|
|
*/
|
2017-07-17 14:03:14 +02:00
|
|
|
void bgp_zebra_clear_route_change_flags(struct bgp_node *rn)
|
2016-09-05 19:35:19 +02:00
|
|
|
{
|
2018-10-03 02:43:07 +02:00
|
|
|
struct bgp_path_info *pi;
|
2016-09-05 19:35:19 +02:00
|
|
|
|
2018-07-30 17:40:02 +02:00
|
|
|
for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next) {
|
2018-10-03 02:43:07 +02:00
|
|
|
if (BGP_PATH_HOLDDOWN(pi))
|
2017-07-17 14:03:14 +02:00
|
|
|
continue;
|
2018-10-03 02:43:07 +02:00
|
|
|
UNSET_FLAG(pi->flags, BGP_PATH_IGP_CHANGED);
|
|
|
|
UNSET_FLAG(pi->flags, BGP_PATH_ATTR_CHANGED);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2016-09-05 19:35:19 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Has the route changed from the RIB's perspective? This is invoked only
|
|
|
|
* if the route selection returns the same best route as earlier - to
|
|
|
|
* determine if we need to update zebra or not.
|
|
|
|
*/
|
2018-10-02 22:41:30 +02:00
|
|
|
int bgp_zebra_has_route_changed(struct bgp_node *rn,
|
|
|
|
struct bgp_path_info *selected)
|
2017-07-17 14:03:14 +02:00
|
|
|
{
|
2018-10-02 22:41:30 +02:00
|
|
|
struct bgp_path_info *mpinfo;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-05-16 14:17:53 +02:00
|
|
|
/* If this is multipath, check all selected paths for any nexthop
|
|
|
|
* change or attribute change. Some attribute changes (e.g., community)
|
|
|
|
* aren't of relevance to the RIB, but we'll update zebra to ensure
|
|
|
|
* we handle the case of BGP nexthop change. This is the behavior
|
|
|
|
* when the best path has an attribute change anyway.
|
2017-07-17 14:03:14 +02:00
|
|
|
*/
|
2018-09-14 02:34:42 +02:00
|
|
|
if (CHECK_FLAG(selected->flags, BGP_PATH_IGP_CHANGED)
|
|
|
|
|| CHECK_FLAG(selected->flags, BGP_PATH_MULTIPATH_CHG))
|
2017-07-17 14:03:14 +02:00
|
|
|
return 1;
|
|
|
|
|
2018-05-16 14:17:53 +02:00
|
|
|
/*
|
|
|
|
* If this is multipath, check all selected paths for any nexthop change
|
2017-07-17 14:03:14 +02:00
|
|
|
*/
|
2018-10-03 00:15:34 +02:00
|
|
|
for (mpinfo = bgp_path_info_mpath_first(selected); mpinfo;
|
|
|
|
mpinfo = bgp_path_info_mpath_next(mpinfo)) {
|
2018-09-14 02:34:42 +02:00
|
|
|
if (CHECK_FLAG(mpinfo->flags, BGP_PATH_IGP_CHANGED)
|
|
|
|
|| CHECK_FLAG(mpinfo->flags, BGP_PATH_ATTR_CHANGED))
|
2017-07-17 14:03:14 +02:00
|
|
|
return 1;
|
|
|
|
}
|
2016-09-05 19:35:19 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Nothing has changed from the RIB's perspective. */
|
|
|
|
return 0;
|
2016-09-05 19:35:19 +02:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
struct bgp_process_queue {
|
|
|
|
struct bgp *bgp;
|
2018-02-09 19:22:50 +01:00
|
|
|
STAILQ_HEAD(, bgp_node) pqueue;
|
2017-08-05 12:59:05 +02:00
|
|
|
#define BGP_PROCESS_QUEUE_EOIU_MARKER (1 << 0)
|
|
|
|
unsigned int flags;
|
|
|
|
unsigned int queued;
|
2005-06-01 13:17:05 +02:00
|
|
|
};
|
|
|
|
|
2018-03-20 14:18:01 +01:00
|
|
|
/*
|
|
|
|
* old_select = The old best path
|
|
|
|
* new_select = the new best path
|
|
|
|
*
|
|
|
|
* if (!old_select && new_select)
|
|
|
|
* We are sending new information on.
|
|
|
|
*
|
|
|
|
* if (old_select && new_select) {
|
|
|
|
* if (new_select != old_select)
|
|
|
|
* We have a new best path send a change
|
|
|
|
* else
|
|
|
|
* We've received an update with new attributes that needs
|
|
|
|
* to be passed on.
|
|
|
|
* }
|
|
|
|
*
|
|
|
|
* if (old_select && !new_select)
|
|
|
|
* We have no eligible route that we can announce or the rn
|
|
|
|
* is being removed.
|
|
|
|
*/
|
2017-08-05 12:59:05 +02:00
|
|
|
static void bgp_process_main_one(struct bgp *bgp, struct bgp_node *rn,
|
|
|
|
afi_t afi, safi_t safi)
|
2017-07-17 14:03:14 +02:00
|
|
|
{
|
2018-10-02 22:41:30 +02:00
|
|
|
struct bgp_path_info *new_select;
|
|
|
|
struct bgp_path_info *old_select;
|
|
|
|
struct bgp_path_info_pair old_and_new;
|
2018-03-09 21:52:55 +01:00
|
|
|
char pfx_buf[PREFIX2STR_BUFFER];
|
|
|
|
int debug = 0;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-06-13 22:35:57 +02:00
|
|
|
if (bgp_flag_check(bgp, BGP_FLAG_DELETE_IN_PROGRESS)) {
|
|
|
|
if (rn)
|
|
|
|
debug = bgp_debug_bestpath(&rn->p);
|
|
|
|
if (debug) {
|
|
|
|
prefix2str(&rn->p, pfx_buf, sizeof(pfx_buf));
|
|
|
|
zlog_debug(
|
|
|
|
"%s: bgp delete in progress, ignoring event, p=%s",
|
|
|
|
__func__, pfx_buf);
|
|
|
|
}
|
|
|
|
return;
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Is it end of initial update? (after startup) */
|
|
|
|
if (!rn) {
|
|
|
|
quagga_timestamp(3, bgp->update_delay_zebra_resume_time,
|
|
|
|
sizeof(bgp->update_delay_zebra_resume_time));
|
|
|
|
|
|
|
|
bgp->main_zebra_update_hold = 0;
|
2017-11-21 19:02:06 +01:00
|
|
|
FOREACH_AFI_SAFI (afi, safi) {
|
|
|
|
if (bgp_fibupd_safi(safi))
|
|
|
|
bgp_zebra_announce_table(bgp, afi, safi);
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp->main_peers_update_hold = 0;
|
|
|
|
|
|
|
|
bgp_start_routeadv(bgp);
|
2017-08-05 12:59:05 +02:00
|
|
|
return;
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
bgpd: bgpd-mrai.patch
BGP: Event-driven route announcement taking into account min route advertisement interval
ISSUE
BGP starts the routeadv timer (peer->t_routeadv) to expire in 1 sec
when a peer is established. From then on, the timer expires
periodically based on the configured MRAI value (default: 30sec for
EBGP, 5sec for IBGP). At the expiry, the write thread is triggered
that takes the routes from peer's sync FIFO (adj-rib-out) and sends
UPDATEs. This has a few drawbacks:
(1) Delay in new route announcement: Even when the last UPDATE message
was sent a while back, the next route change will necessarily have
to wait for routeadv expiry
(2) CPU usage: The timer is always armed. If the operator chooses to
configure a lower value of MRAI (zero second is a preferred choice
in many deployments) for better convergence, it leads to high CPU
usage for BGP process, even at the times of no network churn.
PATCH
Make the route advertisement event-driven - When routes are added to
peer's sync FIFO, check if the routeadv timer needs to be adjusted (or
started). Conversely, do not arm the routeadv timer unconditionally.
The patch also addresses route announcements during read-only mode
(update-delay). During read-only mode operation, the routeadv timer
is not started. When BGP comes out of read-only mode and all the
routes are processed, the timer is started for all peers with zero
expiry, so that the UPDATEs can be sent all at once. This leads to
(near-)optimal UPDATE packing.
Finally, the patch makes the "max # packets to write to peer socket at
a time" configurable. Currently it is hard-coded to 10. The command is
at the top router-bgp mode and is called "write-quanta <number>". It
is a useful convergence parameter to tweak.
Signed-off-by: Pradosh Mohapatra <pmohapat@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
2015-05-20 02:40:37 +02:00
|
|
|
|
2018-07-03 15:39:50 +02:00
|
|
|
struct prefix *p = &rn->p;
|
|
|
|
|
2018-03-09 21:52:55 +01:00
|
|
|
debug = bgp_debug_bestpath(&rn->p);
|
|
|
|
if (debug) {
|
|
|
|
prefix2str(&rn->p, pfx_buf, sizeof(pfx_buf));
|
|
|
|
zlog_debug("%s: p=%s afi=%s, safi=%s start", __func__, pfx_buf,
|
|
|
|
afi2str(afi), safi2str(safi));
|
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Best path selection. */
|
|
|
|
bgp_best_selection(bgp, rn, &bgp->maxpaths[afi][safi], &old_and_new,
|
|
|
|
afi, safi);
|
|
|
|
old_select = old_and_new.old;
|
|
|
|
new_select = old_and_new.new;
|
|
|
|
|
|
|
|
/* Do we need to allocate or free labels?
|
|
|
|
* Right now, since we only deal with per-prefix labels, it is not
|
2018-11-14 04:14:04 +01:00
|
|
|
* necessary to do this upon changes to best path. Exceptions:
|
|
|
|
* - label index has changed -> recalculate resulting label
|
|
|
|
* - path_info sub_type changed -> switch to/from implicit-null
|
|
|
|
* - no valid label (due to removed static label binding) -> get new one
|
2017-07-17 14:03:14 +02:00
|
|
|
*/
|
2017-08-22 20:14:50 +02:00
|
|
|
if (bgp->allocate_mpls_labels[afi][safi]) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (new_select) {
|
|
|
|
if (!old_select
|
|
|
|
|| bgp_label_index_differs(new_select, old_select)
|
2018-11-14 04:14:04 +01:00
|
|
|
|| new_select->sub_type != old_select->sub_type
|
|
|
|
|| !bgp_is_valid_label(&rn->local_label)) {
|
|
|
|
/* Enforced penultimate hop popping:
|
|
|
|
* implicit-null for local routes, aggregate
|
|
|
|
* and redistributed routes
|
|
|
|
*/
|
2017-07-17 14:03:14 +02:00
|
|
|
if (new_select->sub_type == BGP_ROUTE_STATIC
|
2018-11-14 04:14:04 +01:00
|
|
|
|| new_select->sub_type
|
|
|
|
== BGP_ROUTE_AGGREGATE
|
|
|
|
|| new_select->sub_type
|
|
|
|
== BGP_ROUTE_REDISTRIBUTE) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (CHECK_FLAG(
|
|
|
|
rn->flags,
|
|
|
|
BGP_NODE_REGISTERED_FOR_LABEL))
|
|
|
|
bgp_unregister_for_label(rn);
|
2018-02-01 00:24:06 +01:00
|
|
|
label_ntop(MPLS_LABEL_IMPLICIT_NULL, 1,
|
2017-07-17 14:03:14 +02:00
|
|
|
&rn->local_label);
|
|
|
|
bgp_set_valid_label(&rn->local_label);
|
|
|
|
} else
|
|
|
|
bgp_register_for_label(rn, new_select);
|
|
|
|
}
|
2018-02-09 19:22:50 +01:00
|
|
|
} else if (CHECK_FLAG(rn->flags,
|
|
|
|
BGP_NODE_REGISTERED_FOR_LABEL)) {
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_unregister_for_label(rn);
|
2017-08-22 20:14:50 +02:00
|
|
|
}
|
|
|
|
} else if (CHECK_FLAG(rn->flags, BGP_NODE_REGISTERED_FOR_LABEL)) {
|
|
|
|
bgp_unregister_for_label(rn);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2017-03-09 15:54:20 +01:00
|
|
|
|
2018-03-09 21:52:55 +01:00
|
|
|
if (debug) {
|
|
|
|
prefix2str(&rn->p, pfx_buf, sizeof(pfx_buf));
|
|
|
|
zlog_debug(
|
|
|
|
"%s: p=%s afi=%s, safi=%s, old_select=%p, new_select=%p",
|
|
|
|
__func__, pfx_buf, afi2str(afi), safi2str(safi),
|
|
|
|
old_select, new_select);
|
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* If best route remains the same and this is not due to user-initiated
|
|
|
|
* clear, see exactly what needs to be done.
|
|
|
|
*/
|
|
|
|
if (old_select && old_select == new_select
|
|
|
|
&& !CHECK_FLAG(rn->flags, BGP_NODE_USER_CLEAR)
|
2018-09-14 02:34:42 +02:00
|
|
|
&& !CHECK_FLAG(old_select->flags, BGP_PATH_ATTR_CHANGED)
|
bgpd: Re-use TX Addpath IDs where possible
The motivation for this patch is to address a concerning behavior of
tx-addpath-bestpath-per-AS. Prior to this patch, all paths' TX ID was
pre-determined as the path was received from a peer. However, this meant
that any time the path selected as best from an AS changed, bgpd had no
choice but to withdraw the previous best path, and advertise the new
best-path under a new TX ID. This could cause significant network
disruption, especially for the subset of prefixes coming from only one
AS that were also communicated over a bestpath-per-AS session.
The patch's general approach is best illustrated by
txaddpath_update_ids. After a bestpath run (required for best-per-AS to
know what will and will not be sent as addpaths) ID numbers will be
stripped from paths that no longer need to be sent, and held in a pool.
Then, paths that will be sent as addpaths and do not already have ID
numbers will allocate new ID numbers, pulling first from that pool.
Finally, anything left in the pool will be returned to the allocator.
In order for this to work, ID numbers had to be split by strategy. The
tx-addpath-All strategy would keep every ID number "in use" constantly,
preventing IDs from being transferred to different paths. Rather than
create two variables for ID, this patch create a more generic array that
will easily enable more addpath strategies to be implemented. The
previously described ID manipulations will happen per addpath strategy,
and will only be run for strategies that are enabled on at least one
peer.
Finally, the ID numbers are allocated from an allocator that tracks per
AFI/SAFI/Addpath Strategy which IDs are in use. Though it would be very
improbable, there was the possibility with the free-running counter
approach for rollover to cause two paths on the same prefix to get
assigned the same TX ID. As remote as the possibility is, we prefer to
not leave it to chance.
This ID re-use method is not perfect. In some cases you could still get
withdraw-then-add behaviors where not strictly necessary. In the case of
bestpath-per-AS this requires one AS to advertise a prefix for the first
time, then a second AS withdraws that prefix, all within the space of an
already pending MRAI timer. In those situations a withdraw-then-add is
more forgivable, and fixing it would probably require a much more
significant effort, as IDs would need to be moved to ADVs instead of
paths.
Signed-off-by: Mitchell Skiba <mskiba@amazon.com>
2018-05-10 01:10:02 +02:00
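The pool-then-allocate scheme described above (strip IDs from paths that no longer need them into a pool, draw from the pool before allocating fresh IDs, and free whatever is left) can be sketched as below. This is a minimal illustration under assumed names (`id_pool`, `pool_alloc`, `pool_release`), not FRR's actual addpath allocator:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_IDS 64

/* Hypothetical per-strategy TX ID pool: recycled IDs are preferred
 * over fresh ones so a path changing roles keeps churn minimal. */
struct id_pool {
	uint32_t free_ids[MAX_IDS];
	int nfree;
	uint32_t next_id; /* fallback free-running allocator */
};

/* Return an ID stripped from a path, so another path can reuse it. */
static void pool_release(struct id_pool *p, uint32_t id)
{
	if (p->nfree < MAX_IDS)
		p->free_ids[p->nfree++] = id;
}

/* Prefer a recycled ID; only allocate a brand-new one when the pool
 * is empty. */
static uint32_t pool_alloc(struct id_pool *p)
{
	if (p->nfree > 0)
		return p->free_ids[--p->nfree];
	return p->next_id++;
}
```

After a bestpath run, IDs for paths that stopped being addpaths would go through `pool_release`, new addpaths would call `pool_alloc`, and any IDs still pooled would be handed back to the real allocator.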
|
|
|
&& !bgp_addpath_is_addpath_used(&bgp->tx_addpath, afi, safi)) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (bgp_zebra_has_route_changed(rn, old_select)) {
|
bgpd: add L3/L2VPN Virtual Network Control feature
This feature adds an L3 & L2 VPN application that makes use of the VPN
and Encap SAFIs. This code is currently used to support IETF NVO3 style
operation. In NVO3 terminology it provides the Network Virtualization
Authority (NVA) and the ability to import/export IP prefixes and MAC
addresses from Network Virtualization Edges (NVEs). The code supports
per-NVE tables.
The NVE-NVA protocol used to communicate routing and Ethernet / Layer 2
(L2) forwarding information between NVAs and NVEs is referred to as the
Remote Forwarder Protocol (RFP). OpenFlow is an example RFP. For
general background on NVO3 and RFP concepts see [1]. For information on
OpenFlow see [2].
RFPs are integrated with BGP via the RF API contained in the new "rfapi"
BGP sub-directory. Currently, only a simple example RFP is included in
Quagga. Developers may use this example as a starting point to integrate
Quagga with an RFP of their choosing, e.g., OpenFlow. The RFAPI code
also supports the ability to import/export routing information between
VNC and customer edge routers (CEs) operating within a virtual
network. Import/export may take place between BGP views or to the
default zebra VRF.
BGP, with IP VPNs and Tunnel Encapsulation, is used to distribute VPN
information between NVAs. BGP based IP VPN support is defined in
RFC4364, BGP/MPLS IP Virtual Private Networks (VPNs), and RFC4659,
BGP-MPLS IP Virtual Private Network (VPN) Extension for IPv6 VPN. Use
of both the Encapsulation Subsequent Address Family Identifier (SAFI)
and the Tunnel Encapsulation Attribute, RFC5512, The BGP Encapsulation
Subsequent Address Family Identifier (SAFI) and the BGP Tunnel
Encapsulation Attribute, are supported. MAC address distribution does
not follow any standard BGP encoding, although it was inspired by the
early IETF EVPN concepts.
The feature is conditionally compiled and disabled by default.
Use the --enable-bgp-vnc configure option to enable.
The majority of this code was authored by G. Paul Ziemba
<paulz@labn.net>.
[1] http://tools.ietf.org/html/draft-ietf-nvo3-nve-nva-cp-req
[2] https://www.opennetworking.org/sdn-resources/technical-library
Now includes changes needed to merge with cmaster-next.
2016-05-07 20:18:56 +02:00
|
|
|
#if ENABLE_BGP_VNC
|
2017-07-17 14:03:14 +02:00
|
|
|
vnc_import_bgp_add_route(bgp, p, old_select);
|
|
|
|
vnc_import_bgp_exterior_add_route(bgp, p, old_select);
|
bgpd: add L3/L2VPN Virtual Network Control feature
2016-05-07 20:18:56 +02:00
|
|
|
#endif
|
bgpd: multipath change for VRF route is not updated in zebra
Signed-off-by: Daniel Walton <dwalton@cumulusnetworks.com>
If you are doing multipath in a VRF and bounce one of the multipaths for
a prefix, bgp is not updating the zebra entry for that prefix with the
new multipaths. We start with:
cel-redxp-10# show bgp vrf RED ipv4 unicast 6.0.0.16/32
BGP routing table entry for 6.0.0.16/32
Paths: (4 available, best #4, table RED)
Advertised to non peer-group peers:
spine-1(swp1) spine-2(swp2) spine-3(swp3) spine-4(swp4)
104 65104 65002
fe80::202:ff:fe00:2d from spine-4(swp4) (6.0.0.12)
(fe80::202:ff:fe00:2d) (used)
Origin incomplete, localpref 100, valid, external, multipath, bestpath-from-AS 104
AddPath ID: RX 0, TX 21
Last update: Tue Aug 1 18:28:33 2017
102 65104 65002
fe80::202:ff:fe00:25 from spine-2(swp2) (6.0.0.10)
(fe80::202:ff:fe00:25) (used)
Origin incomplete, localpref 100, valid, external, multipath, bestpath-from-AS 102
AddPath ID: RX 0, TX 20
Last update: Tue Aug 1 18:28:33 2017
103 65104 65002
fe80::202:ff:fe00:29 from spine-3(swp3) (6.0.0.11)
(fe80::202:ff:fe00:29) (used)
Origin incomplete, localpref 100, valid, external, multipath, bestpath-from-AS 103
AddPath ID: RX 0, TX 17
Last update: Tue Aug 1 18:28:33 2017
101 65104 65002
fe80::202:ff:fe00:21 from spine-1(swp1) (6.0.0.9)
(fe80::202:ff:fe00:21) (used)
Origin incomplete, localpref 100, valid, external, multipath, bestpath-from-AS 101, best
AddPath ID: RX 0, TX 8
Last update: Tue Aug 1 18:28:33 2017
cel-redxp-10#
cel-redxp-10# show ip route vrf RED 6.0.0.16/32
Routing entry for 6.0.0.16/32
Known via "bgp", distance 20, metric 0, vrf RED, best
Last update 00:00:25 ago
* fe80::202:ff:fe00:21, via swp1
* fe80::202:ff:fe00:25, via swp2
* fe80::202:ff:fe00:29, via swp3
* fe80::202:ff:fe00:2d, via swp4
cel-redxp-10#
And then on spine-1 we bounce all peers
spine-1# clear ip bgp *
spine-1#
On the leaf (cel-redxp-10) we remove the route from spine-1
cel-redxp-10# show ip route vrf RED 6.0.0.16/32
Routing entry for 6.0.0.16/32
Known via "bgp", distance 20, metric 0, vrf RED, best
Last update 00:00:01 ago
* fe80::202:ff:fe00:25, via swp2
* fe80::202:ff:fe00:29, via swp3
* fe80::202:ff:fe00:2d, via swp4
cel-redxp-10#
So far so good. The problem is when the session to spine-1 comes back up:
bgp will mark the path from spine-1 as `multipath` but does not update
zebra. We end up in a state where BGP has 4 paths flagged as multipath
but only 3 paths are in the RIB.
2017-08-01 20:31:56 +02:00
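The fix the commit describes boils down to one rule: zebra must be updated not only when the selected bestpath changes, but also when the multipath set changes. A minimal sketch of that decision, with illustrative names (`needs_fib_update`, `PATH_MULTIPATH_CHG` standing in for `BGP_PATH_MULTIPATH_CHG`) rather than FRR's real structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PATH_MULTIPATH_CHG (1u << 0)

/* Simplified stand-in for a bgp_path_info list entry. */
struct path {
	uint32_t flags;
	struct path *next;
};

/* True when the prefix must be (re)announced to zebra: either the
 * bestpath itself changed, or some path's multipath role changed. */
static bool needs_fib_update(bool bestpath_changed, const struct path *paths)
{
	if (bestpath_changed)
		return true;
	for (const struct path *p = paths; p; p = p->next)
		if (p->flags & PATH_MULTIPATH_CHG)
			return true;
	return false;
}
```

With only the bestpath-changed check, the returning spine-1 path would be flagged multipath yet never pushed to the RIB, which is exactly the stale state shown above.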
|
|
|
if (bgp_fibupd_safi(safi)
|
2018-03-09 21:52:55 +01:00
|
|
|
&& !bgp_option_check(BGP_OPT_NO_FIB)) {
|
|
|
|
|
|
|
|
if (new_select->type == ZEBRA_ROUTE_BGP
|
|
|
|
&& (new_select->sub_type == BGP_ROUTE_NORMAL
|
|
|
|
|| new_select->sub_type
|
|
|
|
== BGP_ROUTE_IMPORTED))
|
|
|
|
|
|
|
|
bgp_zebra_announce(rn, p, old_select,
|
|
|
|
bgp, afi, safi);
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2018-09-14 02:34:42 +02:00
|
|
|
UNSET_FLAG(old_select->flags, BGP_PATH_MULTIPATH_CHG);
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_zebra_clear_route_change_flags(rn);
|
|
|
|
|
|
|
|
/* If there is a change of interest to peers, reannounce the
|
|
|
|
* route. */
|
2018-09-14 02:34:42 +02:00
|
|
|
if (CHECK_FLAG(old_select->flags, BGP_PATH_ATTR_CHANGED)
|
2017-07-17 14:03:14 +02:00
|
|
|
|| CHECK_FLAG(rn->flags, BGP_NODE_LABEL_CHANGED)) {
|
|
|
|
group_announce_route(bgp, afi, safi, rn, new_select);
|
|
|
|
|
|
|
|
/* unicast routes must also be announced to
|
|
|
|
* labeled-unicast update-groups */
|
|
|
|
if (safi == SAFI_UNICAST)
|
|
|
|
group_announce_route(bgp, afi,
|
|
|
|
SAFI_LABELED_UNICAST, rn,
|
|
|
|
new_select);
|
|
|
|
|
2018-09-14 02:34:42 +02:00
|
|
|
UNSET_FLAG(old_select->flags, BGP_PATH_ATTR_CHANGED);
|
2017-07-17 14:03:14 +02:00
|
|
|
UNSET_FLAG(rn->flags, BGP_NODE_LABEL_CHANGED);
|
|
|
|
}
|
2004-09-13 Jose Luis Rubio <jrubio@dit.upm.es>
(at Technical University of Madrid as part of Euro6ix Project)
Enhanced Route Server functionality and Route-Maps:
* bgpd/bgpd.h: Modified 'struct peer' and 'struct bgp_filter' to
support rs-clients. A 'struct bgp_table *rib' has been added to the
first (to maintain a separate RIB for each rs-client) and two new
route-maps have been added to the last (for import/export policies).
Added the following #defines: RMAP_{IN|OUT|IMPORT|EXPORT|MAX},
PEER_RMAP_TYPE_{IMPORT|EXPORT} and BGP_CLEAR_SOFT_RSCLIENT.
* bgpd/bgpd.c: Modified the functions that create/delete/etc peers in
order to consider the new fields included in 'struct peer' for
supporting rs-clients, i.e. the import/export route-maps and the
'struct bgp_table'.
* bgpd/bgp_route.{ch}: Modified several functions related with
receiving/sending announces in order to support the new Route Server
capabilities.
Function 'bgp_process' has been reorganized, creating an auxiliary
function for best path selection ('bgp_best_selection').
Modified 'bgp_show' and 'bgp_show_route' for displaying information
about any RIB (and not only the main bgp RIB).
Added commands for displaying information about RS-clients RIBs:
'show bgp rsclient (A.B.C.D|X:X::X:X)', 'show bgp rsclient
(A.B.C.D|X:X::X:X) X:X::X:X/M', etc
* bgpd/bgp_table.{ch}: The structure 'struct bgp_table' now has two
new fields: type (which can take the values BGP_TABLE_{MAIN|RSCLIENT})
and 'void *owner' which points to 'struct bgp' or 'struct peer' which
owns the table.
When creating a new bgp_table by default 'type=BGP_TABLE_MAIN' is set.
* bgpd/bgp_vty.c: The commands 'neighbor ... route-server-client' and
'no neighbor ... route-server-client' now not only set/unset the flag
PEER_FLAG_RSERVER_CLIENT, but they create/destroy the 'struct
bgp_table' of the peer. Special actions are taken for peer_groups.
Command 'neighbor ... route-map WORD (in|out)' now also supports two
new kinds of route-map: 'import' and 'export'.
Added commands 'clear bgp * rsclient', etc. These commands allow a new
kind of soft_reconfig which affects only the RIB of the specified
RS-client.
Added commands 'show bgp rsclient summary', etc which display a
summary of the rs-clients configured for the corresponding address
family.
* bgpd/bgp_routemap.c: A new match statement is available,
'match peer (A.B.C.D|X:X::X:X)'. This statement can only be used in
import/export route-maps, and it matches when the peer who announces
(when used in an import route-map) or is going to receive (when used
in an export route-map) the route is the same as the one specified
in the statement.
For peer-groups the statement matches if the specified peer is member
of the peer-group.
A special version of the command, 'match peer local', matches with
routes originated by the Route Server (defined with 'network ...',
redistributed routes and default-originate).
* lib/routemap.{ch}: Added a new clause 'call NAME' for use in
route-maps. It jumps into the specified route-map and when it returns
the first route-map ends if the called RM returns DENY_MATCH, or
continues otherwise.
2004-09-13 07:12:46 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
UNSET_FLAG(rn->flags, BGP_NODE_PROCESS_SCHEDULED);
|
2017-08-05 12:59:05 +02:00
|
|
|
return;
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2015-05-20 02:58:10 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* If the user did "clear ip bgp prefix x.x.x.x" this flag will be set
|
|
|
|
*/
|
|
|
|
UNSET_FLAG(rn->flags, BGP_NODE_USER_CLEAR);
|
|
|
|
|
|
|
|
/* bestpath has changed; bump version */
|
|
|
|
if (old_select || new_select) {
|
|
|
|
bgp_bump_version(rn);
|
|
|
|
|
|
|
|
if (!bgp->t_rmap_def_originate_eval) {
|
|
|
|
bgp_lock(bgp);
|
|
|
|
thread_add_timer(
|
|
|
|
bm->master,
|
|
|
|
update_group_refresh_default_originate_route_map,
|
|
|
|
bgp, RMAP_DEFAULT_ORIGINATE_EVAL_TIMER,
|
|
|
|
&bgp->t_rmap_def_originate_eval);
|
|
|
|
}
|
|
|
|
}
|
2015-05-20 03:03:47 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
if (old_select)
|
2018-10-03 00:15:34 +02:00
|
|
|
bgp_path_info_unset_flag(rn, old_select, BGP_PATH_SELECTED);
|
2017-07-17 14:03:14 +02:00
|
|
|
if (new_select) {
|
2018-03-09 21:52:55 +01:00
|
|
|
if (debug)
|
|
|
|
zlog_debug("%s: setting SELECTED flag", __func__);
|
2018-10-03 00:15:34 +02:00
|
|
|
bgp_path_info_set_flag(rn, new_select, BGP_PATH_SELECTED);
|
|
|
|
bgp_path_info_unset_flag(rn, new_select, BGP_PATH_ATTR_CHANGED);
|
2018-09-14 02:34:42 +02:00
|
|
|
UNSET_FLAG(new_select->flags, BGP_PATH_MULTIPATH_CHG);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2005-02-23 15:27:24 +01:00
|
|
|
|
bgpd: add L3/L2VPN Virtual Network Control feature
2016-05-07 20:18:56 +02:00
|
|
|
#if ENABLE_BGP_VNC
|
2017-07-17 14:03:14 +02:00
|
|
|
if ((afi == AFI_IP || afi == AFI_IP6) && (safi == SAFI_UNICAST)) {
|
|
|
|
if (old_select != new_select) {
|
|
|
|
if (old_select) {
|
|
|
|
vnc_import_bgp_exterior_del_route(bgp, p,
|
|
|
|
old_select);
|
|
|
|
vnc_import_bgp_del_route(bgp, p, old_select);
|
|
|
|
}
|
|
|
|
if (new_select) {
|
|
|
|
vnc_import_bgp_exterior_add_route(bgp, p,
|
|
|
|
new_select);
|
|
|
|
vnc_import_bgp_add_route(bgp, p, new_select);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
bgpd: add L3/L2VPN Virtual Network Control feature
2016-05-07 20:18:56 +02:00
|
|
|
#endif
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
group_announce_route(bgp, afi, safi, rn, new_select);
|
|
|
|
|
|
|
|
/* unicast routes must also be announced to labeled-unicast update-groups
|
|
|
|
*/
|
|
|
|
if (safi == SAFI_UNICAST)
|
|
|
|
group_announce_route(bgp, afi, SAFI_LABELED_UNICAST, rn,
|
|
|
|
new_select);
|
|
|
|
|
|
|
|
/* FIB update. */
|
|
|
|
if (bgp_fibupd_safi(safi) && (bgp->inst_type != BGP_INSTANCE_TYPE_VIEW)
|
|
|
|
&& !bgp_option_check(BGP_OPT_NO_FIB)) {
|
|
|
|
if (new_select && new_select->type == ZEBRA_ROUTE_BGP
|
|
|
|
&& (new_select->sub_type == BGP_ROUTE_NORMAL
|
2018-03-09 21:52:55 +01:00
|
|
|
|| new_select->sub_type == BGP_ROUTE_AGGREGATE
|
2018-04-11 11:29:46 +02:00
|
|
|
|| new_select->sub_type == BGP_ROUTE_IMPORTED)) {
|
|
|
|
|
|
|
|
/* if this is an evpn imported type-5 prefix,
|
|
|
|
* we need to withdraw the route first to clear
|
|
|
|
* the nh neigh and the RMAC entry.
|
|
|
|
*/
|
|
|
|
if (old_select &&
|
|
|
|
is_route_parent_evpn(old_select))
|
|
|
|
bgp_zebra_withdraw(p, old_select, bgp, safi);
|
2018-03-09 21:52:55 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_zebra_announce(rn, p, new_select, bgp, afi, safi);
|
2018-04-11 11:29:46 +02:00
|
|
|
} else {
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Withdraw the route from the kernel. */
|
|
|
|
if (old_select && old_select->type == ZEBRA_ROUTE_BGP
|
|
|
|
&& (old_select->sub_type == BGP_ROUTE_NORMAL
|
2018-03-09 21:52:55 +01:00
|
|
|
|| old_select->sub_type == BGP_ROUTE_AGGREGATE
|
|
|
|
|| old_select->sub_type == BGP_ROUTE_IMPORTED))
|
|
|
|
|
2017-11-01 21:36:46 +01:00
|
|
|
bgp_zebra_withdraw(p, old_select, bgp, safi);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
2016-09-05 19:35:19 +02:00
|
|
|
|
2017-11-09 11:37:09 +01:00
|
|
|
/* advertise/withdraw type-5 routes */
|
|
|
|
if ((afi == AFI_IP || afi == AFI_IP6) && (safi == SAFI_UNICAST)) {
|
2019-02-28 17:01:38 +01:00
|
|
|
if (advertise_type5_routes(bgp, afi) &&
|
|
|
|
new_select &&
|
|
|
|
is_route_injectable_into_evpn(new_select)) {
|
2018-04-16 10:09:03 +02:00
|
|
|
|
|
|
|
/* apply the route-map */
|
|
|
|
if (bgp->adv_cmd_rmap[afi][safi].map) {
|
lib: Introducing a 3rd state for route-map match cmd: RMAP_NOOP
Introducing a 3rd state for route_map_apply library function: RMAP_NOOP
Traditionally, route-map MATCH rule APIs were designed to return
a binary response, consisting of either RMAP_MATCH or RMAP_NOMATCH.
(Route-map SET rule APIs return RMAP_OKAY or RMAP_ERROR.)
Depending on this response, the following state machine decides the
course of action:
State1:
If match cmd returns RMAP_MATCH then, keep existing behaviour.
If routemap type is PERMIT, execute set cmds or call cmds if applicable,
otherwise PERMIT!
Else If routemap type is DENY, we DENYMATCH right away
State2:
If match cmd returns RMAP_NOMATCH, continue on to next route-map. If there
are no other rules or if all the rules return RMAP_NOMATCH, return DENYMATCH
We require a 3rd state because of the following situation:
The issue - what if, the rule api needs to abort or ignore a rule?:
"match evpn vni xx" route-map filter can be applied to incoming routes
regardless of whether the tunnel type is vxlan or mpls.
This rule should be N/A for mpls based evpn route, but applicable to only
vxlan based evpn route.
Also, this rule should be applicable for routes with VNI label only, and
not for routes without labels. For example, type 3 and type 4 EVPN routes
do not have labels, so this match cmd should let them through.
Today, the filter produces either a match or nomatch response regardless of
whether it is mpls/vxlan, resulting in either permitting or denying the
route. So an MPLS EVPN route may get filtered out incorrectly.
Eg: "route-map RM1 permit 10 ; match evpn vni 20" or
"route-map RM2 deny 20 ; match vni 20"
With the introduction of the 3rd state, we can abort this rule check safely.
How? The rule API can now return RMAP_NOOP to indicate
that it encountered an invalid check, and needs to abort just that rule,
but continue with other rules.
As a result we have a 3rd state:
State3:
If match cmd returns RMAP_NOOP,
then proceed to the next rule; if there are no more
rules or if all the rules return RMAP_NOOP, return RMAP_PERMITMATCH.
Signed-off-by: Lakshman Krishnamoorthy <lkrishnamoor@vmware.com>
2019-06-19 23:04:36 +02:00
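The three-state decision described above can be modeled as a plain loop over precomputed match results. This is an illustrative sketch, not FRR's actual `route_map_apply`; the `rmap_apply` helper and its argument layout are hypothetical:

```c
#include <assert.h>
#include <stddef.h>

enum rmap_match { RMAP_NOMATCH, RMAP_MATCH, RMAP_NOOP };
enum rmap_result { RMAP_DENYMATCH, RMAP_PERMITMATCH };

/* permit[i] is 1 for a permit clause, 0 for a deny clause;
 * res[i] is the result of clause i's match command. */
static enum rmap_result rmap_apply(const int *permit,
				   const enum rmap_match *res, size_t n)
{
	size_t noop = 0;

	for (size_t i = 0; i < n; i++) {
		switch (res[i]) {
		case RMAP_MATCH:
			/* State1: resolve by the clause's permit/deny type */
			return permit[i] ? RMAP_PERMITMATCH : RMAP_DENYMATCH;
		case RMAP_NOOP:
			/* State3: rule is N/A; skip it but remember we did */
			noop++;
			break;
		case RMAP_NOMATCH:
			/* State2: fall through to the next clause */
			break;
		}
	}
	/* All rules were N/A: permit. Otherwise nothing matched: deny. */
	return (n > 0 && noop == n) ? RMAP_PERMITMATCH : RMAP_DENYMATCH;
}
```

Under this model an MPLS EVPN route hitting "match evpn vni 20" yields RMAP_NOOP instead of RMAP_NOMATCH, so it is no longer denied merely because a vxlan-only rule did not apply to it.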
|
|
|
route_map_result_t ret;
|
2018-04-16 10:09:03 +02:00
|
|
|
|
2018-04-29 20:35:39 +02:00
|
|
|
ret = route_map_apply(
|
|
|
|
bgp->adv_cmd_rmap[afi][safi].map,
|
|
|
|
&rn->p, RMAP_BGP, new_select);
|
lib: Introducing a 3rd state for route-map match cmd: RMAP_NOOP
2019-06-19 23:04:36 +02:00
|
|
|
if (ret == RMAP_PERMITMATCH)
|
2018-04-29 20:35:39 +02:00
|
|
|
bgp_evpn_advertise_type5_route(
|
|
|
|
bgp, &rn->p, new_select->attr,
|
|
|
|
afi, safi);
|
2019-02-27 09:19:06 +01:00
|
|
|
else
|
|
|
|
bgp_evpn_withdraw_type5_route(
|
|
|
|
bgp, &rn->p, afi, safi);
|
2018-04-16 10:09:03 +02:00
|
|
|
} else {
|
|
|
|
bgp_evpn_advertise_type5_route(bgp,
|
|
|
|
&rn->p,
|
|
|
|
new_select->attr,
|
|
|
|
afi, safi);
|
|
|
|
|
|
|
|
}
|
2019-02-28 17:01:38 +01:00
|
|
|
} else if (advertise_type5_routes(bgp, afi) &&
|
|
|
|
old_select &&
|
|
|
|
is_route_injectable_into_evpn(old_select))
|
2017-11-20 06:47:04 +01:00
|
|
|
bgp_evpn_withdraw_type5_route(bgp, &rn->p, afi, safi);
|
2017-11-09 11:37:09 +01:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Clear any route change flags. */
|
|
|
|
bgp_zebra_clear_route_change_flags(rn);
|
2016-09-05 19:35:19 +02:00
|
|
|
|
2018-10-03 00:15:34 +02:00
|
|
|
/* Reap old select bgp_path_info, if it has been removed */
|
2018-09-14 02:34:42 +02:00
|
|
|
if (old_select && CHECK_FLAG(old_select->flags, BGP_PATH_REMOVED))
|
2018-10-03 00:15:34 +02:00
|
|
|
bgp_path_info_reap(rn, old_select);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
|
|
|
UNSET_FLAG(rn->flags, BGP_NODE_PROCESS_SCHEDULED);
|
2017-08-05 12:59:05 +02:00
|
|
|
return;
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
|
2017-08-05 12:59:05 +02:00
|
|
|
static wq_item_status bgp_process_wq(struct work_queue *wq, void *data)
|
2005-06-01 Paul Jakma <paul.jakma@sun.com>
* bgpd/(general) refcount struct peer and bgp_info, hence allowing us
add work_queues for bgp_process.
* bgpd/bgp_route.h: (struct bgp_info) Add 'lock' field for refcount.
Add bgp_info_{lock,unlock} helper functions.
Add bgp_info_{add,delete} helpers, to remove need for
users managing locking/freeing of bgp_info and bgp_node's.
* bgpd/bgp_table.h: (struct bgp_node) Add a flags field, and
BGP_NODE_PROCESS_SCHEDULED to merge redundant processing of
nodes.
* bgpd/bgp_fsm.h: Make the ON/OFF/ADD/REMOVE macros lock and unlock
peer reference as appropriate.
* bgpd/bgp_damp.c: Remove its internal prototypes for
bgp_info_delete/free. Just use bgp_info_delete.
* bgpd/bgpd.h: (struct bgp_master) Add work_queue pointers.
(struct peer) Add reference count 'lock'
(peer_lock,peer_unlock) New helpers to take/release reference
on struct peer.
* bgpd/bgp_advertise.c: (general) Add peer and bgp_info refcounting
and balance how references are taken and released.
(bgp_advertise_free) release bgp_info reference, if appropriate
(bgp_adj_out_free) unlock peer
(bgp_advertise_clean) leave the adv references alone, or else
call bgp_advertise_free can't unlock them.
(bgp_adj_out_set) lock the peer on new adj's, leave the reference
alone otherwise. lock the new bgp_info reference.
(bgp_adj_in_set) lock the peer reference
(bgp_adj_in_remove) and unlock it here
(bgp_sync_delete) make hash_free on peer conditional, just in
case.
* bgpd/bgp_fsm.c: (general) document that the timers depend on
bgp_event to release a peer reference.
(bgp_fsm_change_status) moved up the file, unchanged.
(bgp_stop) Decrement peer lock as many times as cancel_event
canceled - shouldn't be needed but just in case.
stream_fifo_clean of obuf made conditional, just in case.
(bgp_event) always unlock the peer, regardless of return value
of bgp_fsm_change_status.
* bgpd/bgp_packet.c: (general) change several bgp_stop's to BGP_EVENT's.
(bgp_read) Add a mysterious extra peer_unlock for ACCEPT_PEERs
along with a comment on it.
* bgpd/bgp_route.c: (general) Add refcounting of bgp_info, cleanup
some of the resource management around bgp_info. Refcount peer.
Add workqueues for bgp_process and clear_table.
(bgp_info_new) make static
(bgp_info_free) Ditto, and unlock the peer reference.
(bgp_info_lock,bgp_info_unlock) new exported functions
(bgp_info_add) Add a bgp_info to a bgp_node in correct fashion,
taking care of reference counts.
(bgp_info_delete) do the opposite of bgp_info_add.
(bgp_process_rsclient) Converted into a work_queue work function.
(bgp_process_main) ditto.
(bgp_processq_del) process work queue item deconstructor
(bgp_process_queue_init) process work queue init
(bgp_process) call init function if required, set up queue item
and add to queue, rather than calling process functions directly.
(bgp_rib_remove) let bgp_info_delete manage bgp_info refcounts
(bgp_rib_withdraw) ditto
(bgp_update_rsclient) let bgp_info_add manage refcounts
(bgp_update_main) ditto
(bgp_clear_route_node) clear_node_queue work function, does
per-node aspects of what bgp_clear_route_table did previously
(bgp_clear_node_queue_del) clear_node_queue item delete function
(bgp_clear_node_complete) clear_node_queue completion function,
it unplugs the process queues, which have to be blocked while
clear_node_queue is being processed to prevent a race.
(bgp_clear_node_queue_init) init function for clear_node_queue
work queues
(bgp_clear_route_table) Sets up items onto a workqueue now, rather
than clearing each node directly. Plugs both process queues to
avoid potential race.
(bgp_static_withdraw_rsclient) let bgp_info_{add,delete} manage
bgp_info refcounts.
(bgp_static_update_rsclient) ditto
(bgp_static_update_main) ditto
(bgp_static_update_vpnv4) ditto, remove unneeded cast.
(bgp_static_withdraw) see bgp_static_withdraw_rsclient
(bgp_static_withdraw_vpnv4) ditto
(bgp_aggregate_{route,add,delete}) ditto
(bgp_redistribute_{add,delete,withdraw}) ditto
* bgpd/bgp_vty.c: (peer_rsclient_set_vty) lock rsclient list peer
reference
(peer_rsclient_unset_vty) ditto, but unlock same reference
* bgpd/bgpd.c: (peer_free) handle frees of info to be kept for lifetime
of struct peer.
(peer_lock,peer_unlock) peer refcount helpers
(peer_new) add initial refcounts
(peer_create,peer_create_accept) lock peer as appropriate
(peer_delete) unlock as appropriate, move out some free's to
peer_free.
(peer_group_bind,peer_group_unbind) peer refcounting as
appropriate.
(bgp_create) check CALLOC return value.
(bgp_terminate) free workqueues too.
* lib/memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
MTYPE_BGP_CLEAR_NODE_QUEUE
2005-06-01 13:17:05 +02:00
|
|
|
{
	struct bgp_process_queue *pqnode = data;
	struct bgp *bgp = pqnode->bgp;
	struct bgp_table *table;
	struct bgp_node *rn;

	/* eoiu marker */
	if (CHECK_FLAG(pqnode->flags, BGP_PROCESS_QUEUE_EOIU_MARKER)) {
		bgp_process_main_one(bgp, NULL, 0, 0);

		/* should always have dedicated wq call */
		assert(STAILQ_FIRST(&pqnode->pqueue) == NULL);
		return WQ_SUCCESS;
	}

	while (!STAILQ_EMPTY(&pqnode->pqueue)) {
		rn = STAILQ_FIRST(&pqnode->pqueue);
		STAILQ_REMOVE_HEAD(&pqnode->pqueue, pq);
		STAILQ_NEXT(rn, pq) = NULL; /* complete unlink */
		table = bgp_node_table(rn);

		/* note, new RNs may be added as part of processing */
		bgp_process_main_one(bgp, rn, table->afi, table->safi);
bgpd: bgpd-mrai.patch
BGP: Event-driven route announcement taking into account min route advertisement interval
ISSUE
BGP starts the routeadv timer (peer->t_routeadv) to expire in 1 sec
when a peer is established. From then on, the timer expires
periodically based on the configured MRAI value (default: 30sec for
EBGP, 5sec for IBGP). At the expiry, the write thread is triggered
that takes the routes from peer's sync FIFO (adj-rib-out) and sends
UPDATEs. This has a few drawbacks:
(1) Delay in new route announcement: Even when the last UPDATE message
was sent a while back, the next route change will necessarily have
to wait for routeadv expiry
(2) CPU usage: The timer is always armed. If the operator chooses to
configure a lower value of MRAI (zero second is a preferred choice
in many deployments) for better convergence, it leads to high CPU
usage for BGP process, even at the times of no network churn.
PATCH
Make the route advertisement event-driven - When routes are added to
peer's sync FIFO, check if the routeadv timer needs to be adjusted (or
started). Conversely, do not arm the routeadv timer unconditionally.
The patch also addresses route announcements during read-only mode
(update-delay). During read-only mode operation, the routeadv timer
is not started. When BGP comes out of read-only mode and all the
routes are processed, the timer is started for all peers with zero
expiry, so that the UPDATEs can be sent all at once. This leads to
(near-)optimal UPDATE packing.
Finally, the patch makes the "max # packets to write to peer socket at
a time" configurable. Currently it is hard-coded to 10. The command is
at the top router-bgp mode and is called "write-quanta <number>". It
is a useful convergence parameter to tweak.
Signed-off-by: Pradosh Mohapatra <pmohapat@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
		bgp_unlock_node(rn);
		bgp_table_unlock(table);
	}

	return WQ_SUCCESS;
}

static void bgp_processq_del(struct work_queue *wq, void *data)
{
	struct bgp_process_queue *pqnode = data;

	bgp_unlock(pqnode->bgp);

	XFREE(MTYPE_BGP_PROCESS_QUEUE, pqnode);
}

void bgp_process_queue_init(void)
{
	if (!bm->process_main_queue)
		bm->process_main_queue =
			work_queue_new(bm->master, "process_main_queue");

	bm->process_main_queue->spec.workfunc = &bgp_process_wq;
	bm->process_main_queue->spec.del_item_data = &bgp_processq_del;
	bm->process_main_queue->spec.max_retries = 0;
	bm->process_main_queue->spec.hold = 50;
	/* Use a higher yield value of 50ms for main queue processing */
	bm->process_main_queue->spec.yield = 50 * 1000L;
}

static struct bgp_process_queue *bgp_processq_alloc(struct bgp *bgp)
{
	struct bgp_process_queue *pqnode;

	pqnode = XCALLOC(MTYPE_BGP_PROCESS_QUEUE,
			 sizeof(struct bgp_process_queue));

	/* unlocked in bgp_processq_del */
	pqnode->bgp = bgp_lock(bgp);
	STAILQ_INIT(&pqnode->pqueue);

	return pqnode;
}

void bgp_process(struct bgp *bgp, struct bgp_node *rn, afi_t afi, safi_t safi)
2004-09-13 Jose Luis Rubio <jrubio@dit.upm.es>
(at Technical University of Madrid as part of Euro6ix Project)
Enhanced Route Server functionality and Route-Maps:
* bgpd/bgpd.h: Modified 'struct peer' and 'struct bgp_filter' to
support rs-clients. A 'struct bgp_table *rib' has been added to the
first (to mantain a separated RIB for each rs-client) and two new
route-maps have been added to the last (for import/export policies).
Added the following #defines: RMAP_{IN|OUT|IMPORT|EXPORT|MAX},
PEER_RMAP_TYPE_{IMPORT|EXPORT} and BGP_CLEAR_SOFT_RSCLIENT.
* bgpd/bgpd.c: Modified the functions that create/delete/etc peers in
order to consider the new fields included in 'struct peer' for
supporting rs-clients, i.e. the import/export route-maps and the
'struct bgp_table'.
* bgpd/bgp_route.{ch}: Modified several functions related with
receiving/sending announces in order to support the new Route Server
capabilities.
Function 'bgp_process' has been reorganized, creating an auxiliar
function for best path selection ('bgp_best_selection').
Modified 'bgp_show' and 'bgp_show_route' for displaying information
about any RIB (and not only the main bgp RIB).
Added commands for displaying information about RS-clients RIBs:
'show bgp rsclient (A.B.C.D|X:X::X:X)', 'show bgp rsclient
(A.B.C.D|X:X::X:X) X:X::X:X/M', etc
* bgpd/bgp_table.{ch}: The structure 'struct bgp_table' now has two
new fields: type (which can take the values BGP_TABLE_{MAIN|RSCLIENT})
and 'void *owner' which points to 'struct bgp' or 'struct peer' which
owns the table.
When creating a new bgp_table by default 'type=BGP_TABLE_MAIN' is set.
* bgpd/bgp_vty.c: The commands 'neighbor ... route-server-client' and
'no neighbor ... route-server-client' now not only set/unset the flag
PEER_FLAG_RSERVER_CLIENT, but they create/destroy the 'struct
bgp_table' of the peer. Special actions are taken for peer_groups.
Command 'neighbor ... route-map WORD (in|out)' now also supports two
new kinds of route-map: 'import' and 'export'.
Added commands 'clear bgp * rsclient', etc. These commands allow a new
kind of soft_reconfig which affects only the RIB of the specified
RS-client.
Added commands 'show bgp rsclient summary', etc which display a
summary of the rs-clients configured for the corresponding address
family.
* bgpd/bgp_routemap.c: A new match statement is available,
'match peer (A.B.C.D|X:X::X:X)'. This statement can only be used in
import/export route-maps, and it matches when the peer who announces
(when used in an import route-map) or is going to receive (when used
in an export route-map) the route is the same than the one specified
in the statement.
For peer-groups the statement matches if the specified peer is member
of the peer-group.
A special version of the command, 'match peer local', matches with
routes originated by the Route Server (defined with 'network ...',
redistributed routes and default-originate).
* lib/routemap.{ch}: Added a new clause 'call NAME' for use in
route-maps. It jumps into the specified route-map and when it returns
the first route-map ends if the called RM returns DENY_MATCH, or
continues in other case.
{
#define ARBITRARY_PROCESS_QLEN 10000
	struct work_queue *wq = bm->process_main_queue;
	struct bgp_process_queue *pqnode;
	int pqnode_reuse = 0;

	/* already scheduled for processing? */
	if (CHECK_FLAG(rn->flags, BGP_NODE_PROCESS_SCHEDULED))
		return;

	if (wq == NULL)
		return;

	/* Add route nodes to an existing work queue item until reaching the
	 * limit, but only if the item is from the same BGP view and is not
	 * an EOIU marker.
	 */
	if (work_queue_item_count(wq)) {
		struct work_queue_item *item = work_queue_last_item(wq);

		pqnode = item->data;
[bgpd] Stability fixes including bugs 397, 492
I've spent the last several weeks working on stability fixes to bgpd.
These patches fix all of the numerous crashes, assertion failures, memory
leaks and memory stomping I could find. Valgrind was used extensively.
Added new function bgp_exit() to help catch problems. If "debug bgp" is
configured and bgpd exits with status of 0, statistics on remaining
lib/memory.c allocations are printed to stderr. It is my hope that other
developers will use this to stay on top of memory issues.
Example questionable exit:
bgpd: memstats: Current memory utilization in module LIB:
bgpd: memstats: Link List : 6
bgpd: memstats: Link Node : 5
bgpd: memstats: Hash : 8
bgpd: memstats: Hash Bucket : 2
bgpd: memstats: Hash Index : 8
bgpd: memstats: Work queue : 3
bgpd: memstats: Work queue item : 2
bgpd: memstats: Work queue name string : 3
bgpd: memstats: Current memory utilization in module BGP:
bgpd: memstats: BGP instance : 1
bgpd: memstats: BGP peer : 1
bgpd: memstats: BGP peer hostname : 1
bgpd: memstats: BGP attribute : 1
bgpd: memstats: BGP extra attributes : 1
bgpd: memstats: BGP aspath : 1
bgpd: memstats: BGP aspath str : 1
bgpd: memstats: BGP table : 24
bgpd: memstats: BGP node : 1
bgpd: memstats: BGP route : 1
bgpd: memstats: BGP synchronise : 8
bgpd: memstats: BGP Process queue : 1
bgpd: memstats: BGP node clear queue : 1
bgpd: memstats: NOTE: If configuration exists, utilization may be expected.
Example clean exit:
bgpd: memstats: No remaining tracked memory utilization.
This patch fixes bug #397: "Invalid free in bgp_announce_check()".
This patch fixes bug #492: "SIGBUS in bgpd/bgp_route.c:
bgp_clear_route_node()".
My apologies for not separating out these changes into individual patches.
The complexity of doing so boggled what is left of my brain. I hope this
is all still useful to the community.
This code has been production tested, in non-route-server-client mode, on
a linux 32-bit box and a 64-bit box.
Release/reset functions, used by bgp_exit(), added to:
bgpd/bgp_attr.c,h
bgpd/bgp_community.c,h
bgpd/bgp_dump.c,h
bgpd/bgp_ecommunity.c,h
bgpd/bgp_filter.c,h
bgpd/bgp_nexthop.c,h
bgpd/bgp_route.c,h
lib/routemap.c,h
File by file analysis:
* bgpd/bgp_aspath.c: Prevent re-use of ashash after it is released.
* bgpd/bgp_attr.c: #if removed uncalled cluster_dup().
* bgpd/bgp_clist.c,h: Allow community_list_terminate() to be called from
bgp_exit().
* bgpd/bgp_filter.c: Fix aslist->name use without allocation check, and
also fix memory leak.
* bgpd/bgp_main.c: Created bgp_exit() exit routine. This function frees
allocations made as part of bgpd initialization and, to some extent,
configuration. If "debug bgp" is configured, memory stats are printed
as described above.
* bgpd/bgp_nexthop.c: zclient_new() already allocates stream for
ibuf/obuf, so bgp_scan_init() shouldn't do it too. Also, made it so
zlookup is global so bgp_exit() can use it.
* bgpd/bgp_packet.c: bgp_capability_msg_parse() call to bgp_clear_route()
adjusted to use new BGP_CLEAR_ROUTE_NORMAL flag.
* bgpd/bgp_route.h: Correct reference counter "lock" to be signed.
bgp_clear_route() now accepts a bgp_clear_route_type of either
BGP_CLEAR_ROUTE_NORMAL or BGP_CLEAR_ROUTE_MY_RSCLIENT.
* bgpd/bgp_route.c:
- bgp_process_rsclient(): attr was being zero'ed and then
bgp_attr_extra_free() was being called with it, even though it was
never filled with valid data.
- bgp_process_rsclient(): Make sure rsclient->group is not NULL before
use.
- bgp_processq_del(): Add call to bgp_table_unlock().
- bgp_process(): Add call to bgp_table_lock().
- bgp_update_rsclient(): memset clearing of new_attr not needed since
declarationw with "= { 0 }" does it. memset was already commented
out.
- bgp_update_rsclient(): Fix screwed up misleading indentation.
- bgp_withdraw_rsclient(): Fix screwed up misleading indentation.
- bgp_clear_route_node(): Support BGP_CLEAR_ROUTE_MY_RSCLIENT.
- bgp_clear_node_queue_del(): Add call to bgp_table_unlock() and also
free struct bgp_clear_node_queue used for work item.
- bgp_clear_node_complete(): Do peer_unlock() after BGP_EVENT_ADD() in
case peer is released by peer_unlock() call.
- bgp_clear_route_table(): Support BGP_CLEAR_ROUTE_MY_RSCLIENT. Use
struct bgp_clear_node_queue to supply data to worker. Add call to
bgp_table_lock().
- bgp_clear_route(): Add support for BGP_CLEAR_ROUTE_NORMAL or
BGP_CLEAR_ROUTE_MY_RSCLIENT.
- bgp_clear_route_all(): Use BGP_CLEAR_ROUTE_NORMAL.
Bug 397 fixes:
- bgp_default_originate()
- bgp_announce_table()
* bgpd/bgp_table.h:
- struct bgp_table: Added reference count. Changed type of owner to be
"struct peer *" rather than "void *".
- struct bgp_node: Correct reference counter "lock" to be signed.
* bgpd/bgp_table.c:
- Added bgp_table reference counting.
- bgp_table_free(): Fixed cleanup code. Call peer_unlock() on owner if
set.
- bgp_unlock_node(): Added assertion.
- bgp_node_get(): Added call to bgp_lock_node() to code path that it was
missing from.
* bgpd/bgp_vty.c:
- peer_rsclient_set_vty(): Call peer_lock() as part of peer assignment
to owner. Handle failure gracefully.
- peer_rsclient_unset_vty(): Add call to bgp_clear_route() with
BGP_CLEAR_ROUTE_MY_RSCLIENT purpose.
* bgpd/bgp_zebra.c: Made it so zclient is global so bgp_exit() can use it.
* bgpd/bgpd.c:
- peer_lock(): Allow to be called when status is "Deleted".
- peer_deactivate(): Supply BGP_CLEAR_ROUTE_NORMAL purpose to
bgp_clear_route() call.
- peer_delete(): Common variable listnode pn. Fix bug in which rsclient
was only dealt with if not part of a peer group. Call
bgp_clear_route() for rsclient, if appropriate, and do so with
BGP_CLEAR_ROUTE_MY_RSCLIENT purpose.
- peer_group_get(): Use XSTRDUP() instead of strdup() for conf->host.
- peer_group_bind(): Call bgp_clear_route() for rsclient, and do so with
BGP_CLEAR_ROUTE_MY_RSCLIENT purpose.
- bgp_create(): Use XSTRDUP() instead of strdup() for peer_self->host.
- bgp_delete(): Delete peers before groups, rather than after. And then
rather than deleting rsclients, verify that there are none at this
point.
- bgp_unlock(): Add assertion.
- bgp_free(): Call bgp_table_finish() rather than doing XFREE() itself.
* lib/command.c,h: Compiler warning fixes. Add cmd_terminate(). Fixed
massive leak in install_element() in which cmd_make_descvec() was being
called more than once for the same cmd->strvec/string/doc.
* lib/log.c: Make closezlog() check fp before calling fclose().
* lib/memory.c: Catch when alloc count goes negative by using signed
counts. Correct #endif comment. Add log_memstats_stderr().
* lib/memory.h: Add log_memstats_stderr().
* lib/thread.c: thread->funcname was being accessed in thread_call() after
it had been freed. Rearranged things so that thread_call() frees
funcname. Also made it so thread_master_free() cleans up cpu_record.
* lib/vty.c,h: Use global command_cr. Add vty_terminate().
* lib/zclient.c,h: Re-enable zclient_free().
		if (CHECK_FLAG(pqnode->flags, BGP_PROCESS_QUEUE_EOIU_MARKER)
		    || pqnode->bgp != bgp
		    || pqnode->queued >= ARBITRARY_PROCESS_QLEN)
			pqnode = bgp_processq_alloc(bgp);
		else
			pqnode_reuse = 1;
	} else
		pqnode = bgp_processq_alloc(bgp);

	/* all unlocked in bgp_process_wq */
	bgp_table_lock(bgp_node_table(rn));

	SET_FLAG(rn->flags, BGP_NODE_PROCESS_SCHEDULED);
	bgp_lock_node(rn);

	/* can't be enqueued twice */
	assert(STAILQ_NEXT(rn, pq) == NULL);
	STAILQ_INSERT_TAIL(&pqnode->pqueue, rn, pq);
	pqnode->queued++;

	if (!pqnode_reuse)
		work_queue_add(wq, pqnode);

	return;
}
|
void bgp_add_eoiu_mark(struct bgp *bgp)
{
	struct bgp_process_queue *pqnode;

	if (bm->process_main_queue == NULL)
		return;

	pqnode = bgp_processq_alloc(bgp);

	SET_FLAG(pqnode->flags, BGP_PROCESS_QUEUE_EOIU_MARKER);
	work_queue_add(bm->process_main_queue, pqnode);
}
static int bgp_maximum_prefix_restart_timer(struct thread *thread)
{
	struct peer *peer;

	peer = THREAD_ARG(thread);
	peer->t_pmax_restart = NULL;

	if (bgp_debug_neighbor_events(peer))
		zlog_debug(
			"%s Maximum-prefix restart timer expired, restore peering",
			peer->host);

	if ((peer_clear(peer, NULL) < 0) && bgp_debug_neighbor_events(peer))
		zlog_debug("%s: %s peer_clear failed",
			   __PRETTY_FUNCTION__, peer->host);

	return 0;
}
int bgp_maximum_prefix_overflow(struct peer *peer, afi_t afi, safi_t safi,
				int always)
{
	iana_afi_t pkt_afi;
	iana_safi_t pkt_safi;

	if (!CHECK_FLAG(peer->af_flags[afi][safi], PEER_FLAG_MAX_PREFIX))
		return 0;

	if (peer->pcount[afi][safi] > peer->pmax[afi][safi]) {
		if (CHECK_FLAG(peer->af_sflags[afi][safi],
			       PEER_STATUS_PREFIX_LIMIT)
		    && !always)
			return 0;

		zlog_info(
			"%%MAXPFXEXCEED: No. of %s prefix received from %s %" PRIu32
			" exceed, limit %" PRIu32,
			get_afi_safi_str(afi, safi, false), peer->host,
			peer->pcount[afi][safi], peer->pmax[afi][safi]);
		SET_FLAG(peer->af_sflags[afi][safi], PEER_STATUS_PREFIX_LIMIT);

		if (CHECK_FLAG(peer->af_flags[afi][safi],
			       PEER_FLAG_MAX_PREFIX_WARNING))
			return 0;

		/* Convert AFI, SAFI to values for packet. */
		pkt_afi = afi_int2iana(afi);
		pkt_safi = safi_int2iana(safi);
		{
			uint8_t ndata[7];

			ndata[0] = (pkt_afi >> 8);
			ndata[1] = pkt_afi;
			ndata[2] = pkt_safi;
			ndata[3] = (peer->pmax[afi][safi] >> 24);
			ndata[4] = (peer->pmax[afi][safi] >> 16);
			ndata[5] = (peer->pmax[afi][safi] >> 8);
			ndata[6] = (peer->pmax[afi][safi]);

			SET_FLAG(peer->sflags, PEER_STATUS_PREFIX_OVERFLOW);
			bgp_notify_send_with_data(peer, BGP_NOTIFY_CEASE,
						  BGP_NOTIFY_CEASE_MAX_PREFIX,
						  ndata, 7);
		}

		/* Dynamic peers will just close their connection. */
		if (peer_dynamic_neighbor(peer))
			return 1;

		/* restart timer start */
		if (peer->pmax_restart[afi][safi]) {
			peer->v_pmax_restart =
				peer->pmax_restart[afi][safi] * 60;

			if (bgp_debug_neighbor_events(peer))
				zlog_debug(
					"%s Maximum-prefix restart timer started for %d secs",
					peer->host, peer->v_pmax_restart);

			BGP_TIMER_ON(peer->t_pmax_restart,
				     bgp_maximum_prefix_restart_timer,
				     peer->v_pmax_restart);
		}

		return 1;
	} else
		UNSET_FLAG(peer->af_sflags[afi][safi],
			   PEER_STATUS_PREFIX_LIMIT);

	if (peer->pcount[afi][safi]
	    > (peer->pmax[afi][safi] * peer->pmax_threshold[afi][safi] / 100)) {
		if (CHECK_FLAG(peer->af_sflags[afi][safi],
			       PEER_STATUS_PREFIX_THRESHOLD)
		    && !always)
			return 0;

		zlog_info(
			"%%MAXPFX: No. of %s prefix received from %s reaches %" PRIu32
			", max %" PRIu32,
			get_afi_safi_str(afi, safi, false), peer->host,
			peer->pcount[afi][safi], peer->pmax[afi][safi]);
		SET_FLAG(peer->af_sflags[afi][safi],
			 PEER_STATUS_PREFIX_THRESHOLD);
	} else
		UNSET_FLAG(peer->af_sflags[afi][safi],
			   PEER_STATUS_PREFIX_THRESHOLD);
	return 0;
}
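A minimal, self-contained sketch (not FRR code; names here are illustrative) of the 7-octet data field the function above attaches to the CEASE / Maximum-Number-of-Prefixes-Reached NOTIFICATION, as described in RFC 4486: a 2-octet AFI, a 1-octet SAFI, and the 4-octet prefix upper bound, all in network byte order.

```c
#include <stdint.h>

/* Pack the optional NOTIFICATION data for the max-prefix cease subcode:
 * AFI (2 octets, big-endian), SAFI (1 octet), limit (4 octets, big-endian).
 */
static void encode_max_prefix_data(uint8_t ndata[7], uint16_t pkt_afi,
				   uint8_t pkt_safi, uint32_t pmax)
{
	ndata[0] = (uint8_t)(pkt_afi >> 8);
	ndata[1] = (uint8_t)pkt_afi;
	ndata[2] = pkt_safi;
	ndata[3] = (uint8_t)(pmax >> 24);
	ndata[4] = (uint8_t)(pmax >> 16);
	ndata[5] = (uint8_t)(pmax >> 8);
	ndata[6] = (uint8_t)pmax;
}
```

Shifting and masking byte-by-byte keeps the encoding independent of host endianness, which is why the FRR code above does the same rather than `memcpy()`ing the integers.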
/* Unconditionally remove the route from the RIB, without taking
 * damping into consideration (eg, because the session went down)
 */
void bgp_rib_remove(struct bgp_node *rn, struct bgp_path_info *pi,
		    struct peer *peer, afi_t afi, safi_t safi)
{
	bgp_aggregate_decrement(peer->bgp, &rn->p, pi, afi, safi);

	if (!CHECK_FLAG(pi->flags, BGP_PATH_HISTORY))
		bgp_path_info_delete(rn, pi); /* keep historical info */

	hook_call(bgp_process, peer->bgp, afi, safi, rn, peer, true);

	bgp_process(peer->bgp, rn, afi, safi);
}
static void bgp_rib_withdraw(struct bgp_node *rn, struct bgp_path_info *pi,
			     struct peer *peer, afi_t afi, safi_t safi,
			     struct prefix_rd *prd)
{
	/* apply dampening, if result is suppressed, we'll be retaining
	 * the bgp_path_info in the RIB for historical reference.
	 */
	if (CHECK_FLAG(peer->bgp->af_flags[afi][safi], BGP_CONFIG_DAMPENING)
	    && peer->sort == BGP_PEER_EBGP)
		if ((bgp_damp_withdraw(pi, rn, afi, safi, 0))
		    == BGP_DAMP_SUPPRESSED) {
			bgp_aggregate_decrement(peer->bgp, &rn->p, pi, afi,
						safi);
			return;
		}

#if ENABLE_BGP_VNC
	if (safi == SAFI_MPLS_VPN) {
		struct bgp_node *prn = NULL;
		struct bgp_table *table = NULL;

		prn = bgp_node_get(peer->bgp->rib[afi][safi],
				   (struct prefix *)prd);
		if (bgp_node_has_bgp_path_info_data(prn)) {
			table = bgp_node_get_bgp_table_info(prn);

			vnc_import_bgp_del_vnc_host_route_mode_resolve_nve(
				peer->bgp, prd, table, &rn->p, pi);
		}
		bgp_unlock_node(prn);
	}
	if ((afi == AFI_IP || afi == AFI_IP6) && (safi == SAFI_UNICAST)) {
		if (CHECK_FLAG(pi->flags, BGP_PATH_SELECTED)) {

			vnc_import_bgp_del_route(peer->bgp, &rn->p, pi);
			vnc_import_bgp_exterior_del_route(peer->bgp, &rn->p,
							  pi);
		}
	}
#endif

	/* If this is an EVPN route, process for un-import. */
	if (safi == SAFI_EVPN)
		bgp_evpn_unimport_route(peer->bgp, afi, safi, &rn->p, pi);

	bgp_rib_remove(rn, pi, peer, afi, safi);
}
struct bgp_path_info *info_make(int type, int sub_type, unsigned short instance,
				struct peer *peer, struct attr *attr,
				struct bgp_node *rn)
{
	struct bgp_path_info *new;

	/* Make new BGP info. */
	new = XCALLOC(MTYPE_BGP_ROUTE, sizeof(struct bgp_path_info));
	new->type = type;
	new->instance = instance;
	new->sub_type = sub_type;
	new->peer = peer;
	new->attr = attr;
	new->uptime = bgp_clock();
	new->net = rn;
	return new;
}
static void overlay_index_update(struct attr *attr,
				 struct eth_segment_id *eth_s_id,
				 union gw_addr *gw_ip)
{
	if (!attr)
		return;

	if (eth_s_id == NULL) {
		memset(&(attr->evpn_overlay.eth_s_id), 0,
		       sizeof(struct eth_segment_id));
	} else {
		memcpy(&(attr->evpn_overlay.eth_s_id), eth_s_id,
		       sizeof(struct eth_segment_id));
	}
	if (gw_ip == NULL) {
		memset(&(attr->evpn_overlay.gw_ip), 0, sizeof(union gw_addr));
	} else {
		memcpy(&(attr->evpn_overlay.gw_ip), gw_ip,
		       sizeof(union gw_addr));
	}
}
static bool overlay_index_equal(afi_t afi, struct bgp_path_info *path,
				struct eth_segment_id *eth_s_id,
				union gw_addr *gw_ip)
{
	struct eth_segment_id *path_eth_s_id, *path_eth_s_id_remote;
	union gw_addr *path_gw_ip, *path_gw_ip_remote;
	union {
		struct eth_segment_id esi;
		union gw_addr ip;
	} temp;

	if (afi != AFI_L2VPN)
		return true;

	path_eth_s_id = &(path->attr->evpn_overlay.eth_s_id);
	path_gw_ip = &(path->attr->evpn_overlay.gw_ip);

	if (gw_ip == NULL) {
		memset(&temp, 0, sizeof(temp));
		path_gw_ip_remote = &temp.ip;
	} else
		path_gw_ip_remote = gw_ip;

	if (eth_s_id == NULL) {
		memset(&temp, 0, sizeof(temp));
		path_eth_s_id_remote = &temp.esi;
	} else
		path_eth_s_id_remote = eth_s_id;

	/* gateway IPs must match for the overlay indexes to be equal */
	if (memcmp(path_gw_ip, path_gw_ip_remote, sizeof(union gw_addr)))
		return false;

	return !memcmp(path_eth_s_id, path_eth_s_id_remote,
		       sizeof(struct eth_segment_id));
}
/* Check if received nexthop is valid or not. */
static int bgp_update_martian_nexthop(struct bgp *bgp, afi_t afi, safi_t safi,
				      uint8_t type, uint8_t stype,
				      struct attr *attr, struct bgp_node *rn)
{
	int ret = 0;

	/* Only validated for unicast and multicast currently. */
	/* Also valid for EVPN where the nexthop is an IP address. */
	if (safi != SAFI_UNICAST && safi != SAFI_MULTICAST && safi != SAFI_EVPN)
		return 0;

	/* If NEXT_HOP is present, validate it. */
	if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_NEXT_HOP)) {
		if (attr->nexthop.s_addr == 0
		    || IPV4_CLASS_DE(ntohl(attr->nexthop.s_addr))
		    || bgp_nexthop_self(bgp, afi, type, stype, attr, rn))
			return 1;
	}

	/* If MP_NEXTHOP is present, validate it. */
	/* Note: For IPv6 nexthops, we only validate the global (1st) nexthop;
	 * there is code in bgp_attr.c to ignore the link-local (2nd) nexthop if
	 * it is not an IPv6 link-local address.
	 */
	if (attr->mp_nexthop_len) {
		switch (attr->mp_nexthop_len) {
		case BGP_ATTR_NHLEN_IPV4:
		case BGP_ATTR_NHLEN_VPNV4:
			ret = (attr->mp_nexthop_global_in.s_addr == 0
			       || IPV4_CLASS_DE(ntohl(
					  attr->mp_nexthop_global_in.s_addr))
			       || bgp_nexthop_self(bgp, afi, type, stype,
						   attr, rn));
			break;

		case BGP_ATTR_NHLEN_IPV6_GLOBAL:
		case BGP_ATTR_NHLEN_IPV6_GLOBAL_AND_LL:
		case BGP_ATTR_NHLEN_VPNV6_GLOBAL:
			ret = (IN6_IS_ADDR_UNSPECIFIED(&attr->mp_nexthop_global)
			       || IN6_IS_ADDR_LOOPBACK(&attr->mp_nexthop_global)
			       || IN6_IS_ADDR_MULTICAST(
					  &attr->mp_nexthop_global)
			       || bgp_nexthop_self(bgp, afi, type, stype,
						   attr, rn));
			break;

		default:
			ret = 1;
			break;
		}
	}

	return ret;
}
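A minimal standalone sketch (illustrative, not the FRR macro) of the IPv4 part of the martian next-hop test above: a next hop of 0.0.0.0 or any class D/E address is rejected. Class D (multicast) and class E (reserved) together are exactly the addresses at or above 224.0.0.0, i.e. those whose top three bits are all set.

```c
#include <stdbool.h>
#include <stdint.h>

/* Return true if an IPv4 next hop (host byte order) is a martian:
 * unspecified (0.0.0.0), or class D/E (top three bits 111).
 */
static bool ipv4_nexthop_martian(uint32_t nexthop_hostorder)
{
	if (nexthop_hostorder == 0)
		return true;
	/* 0xE0000000 == 224.0.0.0; covers class D (1110) and E (1111) */
	return (nexthop_hostorder & 0xE0000000u) == 0xE0000000u;
}
```

This is why the FRR code applies `ntohl()` before `IPV4_CLASS_DE()`: the class test is a comparison on the high-order bits, which only works on a host-byte-order value.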
2018-03-27 21:13:34 +02:00
|
|
|
int bgp_update(struct peer *peer, struct prefix *p, uint32_t addpath_id,
|
2017-07-17 14:03:14 +02:00
|
|
|
struct attr *attr, afi_t afi, safi_t safi, int type,
|
2018-02-09 19:22:50 +01:00
|
|
|
int sub_type, struct prefix_rd *prd, mpls_label_t *label,
|
2018-03-27 21:13:34 +02:00
|
|
|
uint32_t num_labels, int soft_reconfig,
|
2018-02-09 19:22:50 +01:00
|
|
|
struct bgp_route_evpn *evpn)
|
2017-07-17 14:03:14 +02:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
int aspath_loop_count = 0;
|
|
|
|
struct bgp_node *rn;
|
|
|
|
struct bgp *bgp;
|
|
|
|
struct attr new_attr;
|
|
|
|
struct attr *attr_new;
|
2018-10-03 02:43:07 +02:00
|
|
|
struct bgp_path_info *pi;
|
2018-10-02 22:41:30 +02:00
|
|
|
struct bgp_path_info *new;
|
|
|
|
struct bgp_path_info_extra *extra;
|
2017-07-17 14:03:14 +02:00
|
|
|
const char *reason;
|
|
|
|
char pfx_buf[BGP_PRD_PATH_STRLEN];
|
|
|
|
int connected = 0;
|
|
|
|
int do_loop_check = 1;
|
|
|
|
int has_valid_label = 0;
|
2019-11-14 01:46:56 +01:00
|
|
|
afi_t nh_afi;
|
2019-10-30 10:42:25 +01:00
|
|
|
uint8_t pi_type = 0;
|
|
|
|
uint8_t pi_sub_type = 0;
|
|
|
|
|
#if ENABLE_BGP_VNC
	int vnc_implicit_withdraw = 0;
#endif
	int same_attr = 0;

	memset(&new_attr, 0, sizeof(struct attr));
	new_attr.label_index = BGP_INVALID_LABEL_INDEX;
	new_attr.label = MPLS_INVALID_LABEL;

	bgp = peer->bgp;
	rn = bgp_afi_node_get(bgp->rib[afi][safi], afi, safi, p, prd);
	/* TODO: Check to see if we can get rid of "is_valid_label" */
	if (afi == AFI_L2VPN && safi == SAFI_EVPN)
		has_valid_label = (num_labels > 0) ? 1 : 0;
	else
		has_valid_label = bgp_is_valid_label(label);

	/* When the peer's soft reconfiguration is enabled, record the input
	 * packet in Adj-RIBs-In. */
	if (!soft_reconfig
	    && CHECK_FLAG(peer->af_flags[afi][safi], PEER_FLAG_SOFT_RECONFIG)
	    && peer != bgp->peer_self)
		bgp_adj_in_set(rn, peer, attr, addpath_id);

	/* Check previously received route. */
	for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next)
		if (pi->peer == peer && pi->type == type
		    && pi->sub_type == sub_type
		    && pi->addpath_rx_id == addpath_id)
			break;

	/* AS path local-as loop check. */
	if (peer->change_local_as) {
		if (peer->allowas_in[afi][safi])
			aspath_loop_count = peer->allowas_in[afi][safi];
		else if (!CHECK_FLAG(peer->flags,
				     PEER_FLAG_LOCAL_AS_NO_PREPEND))
			aspath_loop_count = 1;

		if (aspath_loop_check(attr->aspath, peer->change_local_as)
		    > aspath_loop_count) {
			peer->stat_pfx_aspath_loop++;
			reason = "as-path contains our own AS;";
			goto filtered;
		}
	}

	/* If the peer is configured for "allowas-in origin" and the last ASN
	 * in the as-path is our ASN then we do not need to call
	 * aspath_loop_check
	 */
	if (CHECK_FLAG(peer->af_flags[afi][safi], PEER_FLAG_ALLOWAS_IN_ORIGIN))
		if (aspath_get_last_as(attr->aspath) == bgp->as)
			do_loop_check = 0;

	/* AS path loop check. */
	if (do_loop_check) {
		if (aspath_loop_check(attr->aspath, bgp->as)
			    > peer->allowas_in[afi][safi]
		    || (CHECK_FLAG(bgp->config, BGP_CONFIG_CONFEDERATION)
			&& aspath_loop_check(attr->aspath, bgp->confed_id)
				   > peer->allowas_in[afi][safi])) {
			peer->stat_pfx_aspath_loop++;
			reason = "as-path contains our own AS;";
			goto filtered;
		}
	}

	/* Route reflector originator ID check. */
	if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID)
	    && IPV4_ADDR_SAME(&bgp->router_id, &attr->originator_id)) {
		peer->stat_pfx_originator_loop++;
		reason = "originator is us;";
		goto filtered;
	}

	/* Route reflector cluster ID check. */
	if (bgp_cluster_filter(peer, attr)) {
		peer->stat_pfx_cluster_loop++;
		reason = "reflected from the same cluster;";
		goto filtered;
	}

	/* Apply incoming filter. */
	if (bgp_input_filter(peer, p, attr, afi, safi) == FILTER_DENY) {
		peer->stat_pfx_filter++;
		reason = "filter;";
		goto filtered;
	}

	/* RFC 8212 to prevent route leaks.
	 * This specification intends to improve this situation by requiring
	 * the explicit configuration of both BGP Import and Export Policies
	 * for any External BGP (EBGP) session such as customers, peers, or
	 * confederation boundaries for all enabled address families. Through
	 * codification of the aforementioned requirement, operators will
	 * benefit from consistent behavior across different BGP
	 * implementations.
	 */
	if (peer->bgp->ebgp_requires_policy == DEFAULT_EBGP_POLICY_ENABLED)
		if (!bgp_inbound_policy_exists(peer,
					       &peer->filter[afi][safi])) {
			reason = "inbound policy missing";
			goto filtered;
		}

	/* draft-ietf-idr-deprecate-as-set-confed-set
	 * Filter routes having AS_SET or AS_CONFED_SET in the path.
	 * Eventually, this document (if approved) updates RFC 4271
	 * and RFC 5065 by eliminating AS_SET and AS_CONFED_SET types,
	 * and obsoletes RFC 6472.
	 */
	if (peer->bgp->reject_as_sets == BGP_REJECT_AS_SETS_ENABLED)
		if (aspath_check_as_sets(attr->aspath)) {
			reason =
				"as-path contains AS_SET or AS_CONFED_SET type;";
			goto filtered;
		}

	new_attr = *attr;

	/* Apply incoming route-map.
	 * NB: new_attr may now contain newly allocated values from route-map
	 * "set" commands, so we need bgp_attr_flush in the error paths, until
	 * we intern the attr (which takes over the memory references) */
	if (bgp_input_modifier(peer, p, &new_attr, afi, safi, NULL,
			       label, num_labels, rn) == RMAP_DENY) {
		peer->stat_pfx_filter++;
		reason = "route-map;";
		bgp_attr_flush(&new_attr);
		goto filtered;
	}

	if (pi && pi->attr->rmap_table_id != new_attr.rmap_table_id) {
		if (CHECK_FLAG(pi->flags, BGP_PATH_SELECTED))
			/* remove from RIB previous entry */
			bgp_zebra_withdraw(p, pi, bgp, safi);
	}

	if (peer->sort == BGP_PEER_EBGP) {

		/* If we receive the graceful-shutdown community from an eBGP
		 * peer we must lower local-preference */
		if (new_attr.community
		    && community_include(new_attr.community, COMMUNITY_GSHUT)) {
			new_attr.flag |= ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF);
			new_attr.local_pref = BGP_GSHUT_LOCAL_PREF;

			/* If graceful-shutdown is configured then add the
			 * GSHUT community to all paths received from eBGP
			 * peers */
		} else if (bgp_flag_check(peer->bgp,
					  BGP_FLAG_GRACEFUL_SHUTDOWN)) {
			bgp_attr_add_gshut_community(&new_attr);
		}
	}

	if (pi) {
		pi_type = pi->type;
		pi_sub_type = pi->sub_type;
	}

	/* next hop check. */
	if (!CHECK_FLAG(peer->flags, PEER_FLAG_IS_RFAPI_HD)
	    && bgp_update_martian_nexthop(bgp, afi, safi, pi_type,
					  pi_sub_type, &new_attr, rn)) {
		peer->stat_pfx_nh_invalid++;
		reason = "martian or self next-hop;";
		bgp_attr_flush(&new_attr);
		goto filtered;
	}

	if (bgp_mac_entry_exists(p) || bgp_mac_exist(&attr->rmac)) {
		peer->stat_pfx_nh_invalid++;
		reason = "self mac;";
		goto filtered;
	}

	attr_new = bgp_attr_intern(&new_attr);

	/* If the update is implicit withdraw. */
	if (pi) {
		pi->uptime = bgp_clock();
		same_attr = attrhash_cmp(pi->attr, attr_new);

		hook_call(bgp_process, bgp, afi, safi, rn, peer, true);

		/* Same attribute comes in. */
		if (!CHECK_FLAG(pi->flags, BGP_PATH_REMOVED)
		    && attrhash_cmp(pi->attr, attr_new)
		    && (!has_valid_label
			|| memcmp(&(bgp_path_info_extra_get(pi))->label, label,
				  num_labels * sizeof(mpls_label_t))
				   == 0)
		    && (overlay_index_equal(
			       afi, pi, evpn == NULL ? NULL : &evpn->eth_s_id,
			       evpn == NULL ? NULL : &evpn->gw_ip))) {
			if (CHECK_FLAG(bgp->af_flags[afi][safi],
				       BGP_CONFIG_DAMPENING)
			    && peer->sort == BGP_PEER_EBGP
			    && CHECK_FLAG(pi->flags, BGP_PATH_HISTORY)) {
				if (bgp_debug_update(peer, p, NULL, 1)) {
					bgp_debug_rdpfxpath2str(
						afi, safi, prd, p, label,
						num_labels, addpath_id ? 1 : 0,
						addpath_id, pfx_buf,
						sizeof(pfx_buf));
					zlog_debug("%s rcvd %s", peer->host,
						   pfx_buf);
				}

				if (bgp_damp_update(pi, rn, afi, safi)
				    != BGP_DAMP_SUPPRESSED) {
					bgp_aggregate_increment(bgp, p, pi, afi,
								safi);
					bgp_process(bgp, rn, afi, safi);
				}
			} else /* Duplicate - odd */
			{
				if (bgp_debug_update(peer, p, NULL, 1)) {
					if (!peer->rcvd_attr_printed) {
						zlog_debug(
							"%s rcvd UPDATE w/ attr: %s",
							peer->host,
							peer->rcvd_attr_str);
						peer->rcvd_attr_printed = 1;
					}

					bgp_debug_rdpfxpath2str(
						afi, safi, prd, p, label,
						num_labels, addpath_id ? 1 : 0,
						addpath_id, pfx_buf,
						sizeof(pfx_buf));
					zlog_debug(
						"%s rcvd %s...duplicate ignored",
						peer->host, pfx_buf);
				}

				/* graceful restart STALE flag unset. */
				if (CHECK_FLAG(pi->flags, BGP_PATH_STALE)) {
					bgp_path_info_unset_flag(
						rn, pi, BGP_PATH_STALE);
					bgp_process(bgp, rn, afi, safi);
				}
			}

			bgp_unlock_node(rn);
			bgp_attr_unintern(&attr_new);

			return 0;
		}

		/* Withdraw/Announce before we fully processed the withdraw */
		if (CHECK_FLAG(pi->flags, BGP_PATH_REMOVED)) {
			if (bgp_debug_update(peer, p, NULL, 1)) {
				bgp_debug_rdpfxpath2str(
					afi, safi, prd, p, label, num_labels,
					addpath_id ? 1 : 0, addpath_id, pfx_buf,
					sizeof(pfx_buf));
				zlog_debug(
					"%s rcvd %s, flapped quicker than processing",
					peer->host, pfx_buf);
			}

			bgp_path_info_restore(rn, pi);
		}

		/* Received Logging. */
		if (bgp_debug_update(peer, p, NULL, 1)) {
			bgp_debug_rdpfxpath2str(afi, safi, prd, p, label,
						num_labels, addpath_id ? 1 : 0,
						addpath_id, pfx_buf,
						sizeof(pfx_buf));
			zlog_debug("%s rcvd %s", peer->host, pfx_buf);
		}

		/* graceful restart STALE flag unset. */
		if (CHECK_FLAG(pi->flags, BGP_PATH_STALE))
			bgp_path_info_unset_flag(rn, pi, BGP_PATH_STALE);

		/* The attribute is changed. */
		bgp_path_info_set_flag(rn, pi, BGP_PATH_ATTR_CHANGED);

		/* implicit withdraw, decrement aggregate and pcount here.
		 * only if update is accepted, they'll increment below.
		 */
		bgp_aggregate_decrement(bgp, p, pi, afi, safi);

		/* Update bgp route dampening information. */
		if (CHECK_FLAG(bgp->af_flags[afi][safi], BGP_CONFIG_DAMPENING)
		    && peer->sort == BGP_PEER_EBGP) {
			/* This is implicit withdraw so we should update
			 * dampening information. */
			if (!CHECK_FLAG(pi->flags, BGP_PATH_HISTORY))
				bgp_damp_withdraw(pi, rn, afi, safi, 1);
		}
#if ENABLE_BGP_VNC
		if (safi == SAFI_MPLS_VPN) {
			struct bgp_node *prn = NULL;
			struct bgp_table *table = NULL;

			prn = bgp_node_get(bgp->rib[afi][safi],
					   (struct prefix *)prd);
			if (bgp_node_has_bgp_path_info_data(prn)) {
				table = bgp_node_get_bgp_table_info(prn);

				vnc_import_bgp_del_vnc_host_route_mode_resolve_nve(
					bgp, prd, table, p, pi);
			}
			bgp_unlock_node(prn);
		}
		if ((afi == AFI_IP || afi == AFI_IP6)
		    && (safi == SAFI_UNICAST)) {
			if (CHECK_FLAG(pi->flags, BGP_PATH_SELECTED)) {
				/*
				 * Implicit withdraw case.
				 */
				++vnc_implicit_withdraw;
				vnc_import_bgp_del_route(bgp, p, pi);
				vnc_import_bgp_exterior_del_route(bgp, p, pi);
			}
		}
#endif

		/* Special handling for EVPN update of an existing route. If
		 * the extended community attribute has changed, we need to
		 * un-import the route using its existing extended community.
		 * It will be subsequently processed for import with the new
		 * extended community.
		 */
		if (safi == SAFI_EVPN && !same_attr) {
			if ((pi->attr->flag
			     & ATTR_FLAG_BIT(BGP_ATTR_EXT_COMMUNITIES))
			    && (attr_new->flag
				& ATTR_FLAG_BIT(BGP_ATTR_EXT_COMMUNITIES))) {
				int cmp;

				cmp = ecommunity_cmp(pi->attr->ecommunity,
						     attr_new->ecommunity);
				if (!cmp) {
					if (bgp_debug_update(peer, p, NULL, 1))
						zlog_debug(
							"Change in EXT-COMM, existing %s new %s",
							ecommunity_str(
								pi->attr->ecommunity),
							ecommunity_str(
								attr_new->ecommunity));
					bgp_evpn_unimport_route(bgp, afi, safi,
								p, pi);
				}
			}
		}

		/* Update to new attribute.  */
		bgp_attr_unintern(&pi->attr);
		pi->attr = attr_new;

		/* Update MPLS label */
		if (has_valid_label) {
			extra = bgp_path_info_extra_get(pi);
			if (extra->label != label) {
				memcpy(&extra->label, label,
				       num_labels * sizeof(mpls_label_t));
				extra->num_labels = num_labels;
			}
			if (!(afi == AFI_L2VPN && safi == SAFI_EVPN))
				bgp_set_valid_label(&extra->label[0]);
		}

bgpd: add L3/L2VPN Virtual Network Control feature
This feature adds an L3 & L2 VPN application that makes use of the VPN
and Encap SAFIs. This code is currently used to support IETF NVO3 style
operation. In NVO3 terminology it provides the Network Virtualization
Authority (NVA) and the ability to import/export IP prefixes and MAC
addresses from Network Virtualization Edges (NVEs). The code supports
per-NVE tables.
The NVE-NVA protocol used to communicate routing and Ethernet / Layer 2
(L2) forwarding information between NVAs and NVEs is referred to as the
Remote Forwarder Protocol (RFP). OpenFlow is an example RFP. For
general background on NVO3 and RFP concepts see [1]. For information on
Openflow see [2].
RFPs are integrated with BGP via the RF API contained in the new "rfapi"
BGP sub-directory. Currently, only a simple example RFP is included in
Quagga. Developers may use this example as a starting point to integrate
Quagga with an RFP of their choosing, e.g., OpenFlow. The RFAPI code
also supports the ability import/export of routing information between
VNC and customer edge routers (CEs) operating within a virtual
network. Import/export may take place between BGP views or to the
default zebera VRF.
BGP, with IP VPNs and Tunnel Encapsulation, is used to distribute VPN
information between NVAs. BGP based IP VPN support is defined in
RFC4364, BGP/MPLS IP Virtual Private Networks (VPNs), and RFC4659,
BGP-MPLS IP Virtual Private Network (VPN) Extension for IPv6 VPN . Use
of both the Encapsulation Subsequent Address Family Identifier (SAFI)
and the Tunnel Encapsulation Attribute, RFC5512, The BGP Encapsulation
Subsequent Address Family Identifier (SAFI) and the BGP Tunnel
Encapsulation Attribute, are supported. MAC address distribution does
not follow any standard BGB encoding, although it was inspired by the
early IETF EVPN concepts.
The feature is conditionally compiled and disabled by default.
Use the --enable-bgp-vnc configure option to enable.
The majority of this code was authored by G. Paul Ziemba
<paulz@labn.net>.
[1] http://tools.ietf.org/html/draft-ietf-nvo3-nve-nva-cp-req
[2] https://www.opennetworking.org/sdn-resources/technical-library
Now includes changes needed to merge with cmaster-next.
2016-05-07 20:18:56 +02:00
#if ENABLE_BGP_VNC
			if ((afi == AFI_IP || afi == AFI_IP6)
			    && (safi == SAFI_UNICAST)) {
				if (vnc_implicit_withdraw) {
					/*
					 * Add back the route with its new
					 * attributes (e.g., nexthop).  The
					 * route is still selected until the
					 * route selection queued by
					 * bgp_process actually runs.  We have
					 * to make this update to the VNC side
					 * immediately to avoid racing against
					 * configuration changes (e.g.,
					 * route-map changes) which trigger
					 * re-importation of the entire RIB.
					 */
					vnc_import_bgp_add_route(bgp, p, pi);
					vnc_import_bgp_exterior_add_route(
						bgp, p, pi);
				}
			}
#endif

		/* Update Overlay Index */
		if (afi == AFI_L2VPN) {
			overlay_index_update(
				pi->attr,
				evpn == NULL ? NULL : &evpn->eth_s_id,
				evpn == NULL ? NULL : &evpn->gw_ip);
		}

		/* Update BGP route dampening information. */
		if (CHECK_FLAG(bgp->af_flags[afi][safi], BGP_CONFIG_DAMPENING)
		    && peer->sort == BGP_PEER_EBGP) {
			/* Now we do normal update dampening. */
			ret = bgp_damp_update(pi, rn, afi, safi);
			if (ret == BGP_DAMP_SUPPRESSED) {
				bgp_unlock_node(rn);
				return 0;
			}
		}

		/* Nexthop reachability check - for unicast and
		 * labeled-unicast.
		 */
		if (((afi == AFI_IP || afi == AFI_IP6)
		     && (safi == SAFI_UNICAST || safi == SAFI_LABELED_UNICAST))
		    || (safi == SAFI_EVPN
			&& bgp_evpn_is_prefix_nht_supported(p))) {
			if (safi != SAFI_EVPN && peer->sort == BGP_PEER_EBGP
			    && peer->ttl == BGP_DEFAULT_TTL
			    && !CHECK_FLAG(peer->flags,
					   PEER_FLAG_DISABLE_CONNECTED_CHECK)
			    && !bgp_flag_check(
				    bgp, BGP_FLAG_DISABLE_NH_CONNECTED_CHK))
				connected = 1;
			else
				connected = 0;

			struct bgp *bgp_nexthop = bgp;

			if (pi->extra && pi->extra->bgp_orig)
				bgp_nexthop = pi->extra->bgp_orig;

			nh_afi = BGP_ATTR_NH_AFI(afi, pi->attr);

			if (bgp_find_or_add_nexthop(bgp, bgp_nexthop, nh_afi,
						    pi, NULL, connected)
			    || CHECK_FLAG(peer->flags, PEER_FLAG_IS_RFAPI_HD))
				bgp_path_info_set_flag(rn, pi, BGP_PATH_VALID);
			else {
				if (BGP_DEBUG(nht, NHT)) {
					char buf1[INET6_ADDRSTRLEN];

					inet_ntop(AF_INET,
						  (const void *)&attr_new
							  ->nexthop,
						  buf1, INET6_ADDRSTRLEN);
					zlog_debug("%s(%s): NH unresolved",
						   __FUNCTION__, buf1);
				}
				bgp_path_info_unset_flag(rn, pi,
							 BGP_PATH_VALID);
			}
		} else
			bgp_path_info_set_flag(rn, pi, BGP_PATH_VALID);

#if ENABLE_BGP_VNC
		if (safi == SAFI_MPLS_VPN) {
			struct bgp_node *prn = NULL;
			struct bgp_table *table = NULL;

			prn = bgp_node_get(bgp->rib[afi][safi],
					   (struct prefix *)prd);
			if (bgp_node_has_bgp_path_info_data(prn)) {
				table = bgp_node_get_bgp_table_info(prn);

				vnc_import_bgp_add_vnc_host_route_mode_resolve_nve(
					bgp, prd, table, p, pi);
			}
			bgp_unlock_node(prn);
		}
#endif

		/* If this is an EVPN route and some attribute has changed,
		 * process the route for import.  If the extended community
		 * has changed, we would have done the un-import earlier and
		 * the import would result in the route getting injected into
		 * the appropriate L2 VNIs.  If it is just some other
		 * attribute change, the import will result in updating the
		 * attributes for the route in the VNI(s).
		 */
		if (safi == SAFI_EVPN && !same_attr &&
		    CHECK_FLAG(pi->flags, BGP_PATH_VALID))
			bgp_evpn_import_route(bgp, afi, safi, p, pi);

		/* Process change. */
		bgp_aggregate_increment(bgp, p, pi, afi, safi);

		bgp_process(bgp, rn, afi, safi);
		bgp_unlock_node(rn);

		if (SAFI_UNICAST == safi
		    && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF
			|| bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {

			vpn_leak_from_vrf_update(bgp_get_default(), bgp, pi);
		}
		if ((SAFI_MPLS_VPN == safi)
		    && (bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {

			vpn_leak_to_vrf_update(bgp, pi);
		}

#if ENABLE_BGP_VNC
		if (SAFI_MPLS_VPN == safi) {
			mpls_label_t label_decoded = decode_label(label);

			rfapiProcessUpdate(peer, NULL, p, prd, attr, afi, safi,
					   type, sub_type, &label_decoded);
		}
		if (SAFI_ENCAP == safi) {
			rfapiProcessUpdate(peer, NULL, p, prd, attr, afi, safi,
					   type, sub_type, NULL);
		}
#endif

		return 0;
	} /* End of implicit withdraw */

	/* Received Logging. */
	if (bgp_debug_update(peer, p, NULL, 1)) {
		if (!peer->rcvd_attr_printed) {
			zlog_debug("%s rcvd UPDATE w/ attr: %s", peer->host,
				   peer->rcvd_attr_str);
			peer->rcvd_attr_printed = 1;
		}

		bgp_debug_rdpfxpath2str(afi, safi, prd, p, label, num_labels,
					addpath_id ? 1 : 0, addpath_id, pfx_buf,
					sizeof(pfx_buf));
		zlog_debug("%s rcvd %s", peer->host, pfx_buf);
	}

	/* Make new BGP info. */
	new = info_make(type, sub_type, 0, peer, attr_new, rn);

	/* Update MPLS label */
	if (has_valid_label) {
		extra = bgp_path_info_extra_get(new);
		if (extra->label != label) {
			memcpy(&extra->label, label,
			       num_labels * sizeof(mpls_label_t));
			extra->num_labels = num_labels;
		}
		if (!(afi == AFI_L2VPN && safi == SAFI_EVPN))
			bgp_set_valid_label(&extra->label[0]);
	}

	/* Update Overlay Index */
	if (afi == AFI_L2VPN) {
		overlay_index_update(new->attr,
				     evpn == NULL ? NULL : &evpn->eth_s_id,
				     evpn == NULL ? NULL : &evpn->gw_ip);
	}

	/* Nexthop reachability check. */
	if (((afi == AFI_IP || afi == AFI_IP6)
	     && (safi == SAFI_UNICAST || safi == SAFI_LABELED_UNICAST))
	    || (safi == SAFI_EVPN && bgp_evpn_is_prefix_nht_supported(p))) {
		if (safi != SAFI_EVPN && peer->sort == BGP_PEER_EBGP
		    && peer->ttl == BGP_DEFAULT_TTL
		    && !CHECK_FLAG(peer->flags,
				   PEER_FLAG_DISABLE_CONNECTED_CHECK)
		    && !bgp_flag_check(bgp, BGP_FLAG_DISABLE_NH_CONNECTED_CHK))
			connected = 1;
		else
			connected = 0;

		nh_afi = BGP_ATTR_NH_AFI(afi, new->attr);

		if (bgp_find_or_add_nexthop(bgp, bgp, nh_afi, new, NULL,
					    connected)
		    || CHECK_FLAG(peer->flags, PEER_FLAG_IS_RFAPI_HD))
			bgp_path_info_set_flag(rn, new, BGP_PATH_VALID);
		else {
			if (BGP_DEBUG(nht, NHT)) {
				char buf1[INET6_ADDRSTRLEN];

				inet_ntop(AF_INET,
					  (const void *)&attr_new->nexthop,
					  buf1, INET6_ADDRSTRLEN);
				zlog_debug("%s(%s): NH unresolved",
					   __FUNCTION__, buf1);
			}
			bgp_path_info_unset_flag(rn, new, BGP_PATH_VALID);
		}
	} else
		bgp_path_info_set_flag(rn, new, BGP_PATH_VALID);

	/* Addpath ID */
	new->addpath_rx_id = addpath_id;

	/* Increment prefix */
	bgp_aggregate_increment(bgp, p, new, afi, safi);

	/* Register new BGP information. */
	bgp_path_info_add(rn, new);

	/* route_node_get lock */
	bgp_unlock_node(rn);

#if ENABLE_BGP_VNC
	if (safi == SAFI_MPLS_VPN) {
		struct bgp_node *prn = NULL;
		struct bgp_table *table = NULL;

		prn = bgp_node_get(bgp->rib[afi][safi], (struct prefix *)prd);
		if (bgp_node_has_bgp_path_info_data(prn)) {
			table = bgp_node_get_bgp_table_info(prn);

			vnc_import_bgp_add_vnc_host_route_mode_resolve_nve(
				bgp, prd, table, p, new);
		}
		bgp_unlock_node(prn);
	}
#endif

	/* If a maximum prefix count is configured and the current prefix
	 * count exceeds it, handle the overflow.
	 */
	if (bgp_maximum_prefix_overflow(peer, afi, safi, 0))
		return -1;

	/* If this is an EVPN route, process for import. */
	if (safi == SAFI_EVPN && CHECK_FLAG(new->flags, BGP_PATH_VALID))
		bgp_evpn_import_route(bgp, afi, safi, p, new);

	hook_call(bgp_process, bgp, afi, safi, rn, peer, false);

	/* Process change. */
	bgp_process(bgp, rn, afi, safi);

	if (SAFI_UNICAST == safi
	    && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF
		|| bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {
		vpn_leak_from_vrf_update(bgp_get_default(), bgp, new);
	}
	if ((SAFI_MPLS_VPN == safi)
	    && (bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {

		vpn_leak_to_vrf_update(bgp, new);
	}
#if ENABLE_BGP_VNC
	if (SAFI_MPLS_VPN == safi) {
		mpls_label_t label_decoded = decode_label(label);

		rfapiProcessUpdate(peer, NULL, p, prd, attr, afi, safi, type,
				   sub_type, &label_decoded);
	}
	if (SAFI_ENCAP == safi) {
		rfapiProcessUpdate(peer, NULL, p, prd, attr, afi, safi, type,
				   sub_type, NULL);
	}
#endif

	return 0;

	/* This BGP update is filtered.  Log the reason, then update the BGP
	 * entry.
	 */
filtered:
	hook_call(bgp_process, bgp, afi, safi, rn, peer, true);

	if (bgp_debug_update(peer, p, NULL, 1)) {
		if (!peer->rcvd_attr_printed) {
			zlog_debug("%s rcvd UPDATE w/ attr: %s", peer->host,
				   peer->rcvd_attr_str);
			peer->rcvd_attr_printed = 1;
		}

		bgp_debug_rdpfxpath2str(afi, safi, prd, p, label, num_labels,
					addpath_id ? 1 : 0, addpath_id, pfx_buf,
					sizeof(pfx_buf));
		zlog_debug("%s rcvd UPDATE about %s -- DENIED due to: %s",
			   peer->host, pfx_buf, reason);
	}

	if (pi) {
		/* If this is an EVPN route, un-import it as it is now
		 * filtered.
		 */
		if (safi == SAFI_EVPN)
			bgp_evpn_unimport_route(bgp, afi, safi, p, pi);

		if (SAFI_UNICAST == safi
		    && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF
			|| bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {

			vpn_leak_from_vrf_withdraw(bgp_get_default(), bgp, pi);
		}
		if ((SAFI_MPLS_VPN == safi)
		    && (bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {

			vpn_leak_to_vrf_withdraw(bgp, pi);
		}

		bgp_rib_remove(rn, pi, peer, afi, safi);
	}

	bgp_unlock_node(rn);

#if ENABLE_BGP_VNC
	/*
	 * Filtered update is treated as an implicit withdrawal (see
	 * bgp_rib_remove() a few lines above).
	 */
	if ((SAFI_MPLS_VPN == safi) || (SAFI_ENCAP == safi)) {
		rfapiProcessWithdraw(peer, NULL, p, prd, NULL, afi, safi, type,
				     0);
	}
#endif

	return 0;
}

int bgp_withdraw(struct peer *peer, struct prefix *p, uint32_t addpath_id,
		 struct attr *attr, afi_t afi, safi_t safi, int type,
		 int sub_type, struct prefix_rd *prd, mpls_label_t *label,
		 uint32_t num_labels, struct bgp_route_evpn *evpn)
{
	struct bgp *bgp;
	char pfx_buf[BGP_PRD_PATH_STRLEN];
	struct bgp_node *rn;
	struct bgp_path_info *pi;

#if ENABLE_BGP_VNC
	if ((SAFI_MPLS_VPN == safi) || (SAFI_ENCAP == safi)) {
		rfapiProcessWithdraw(peer, NULL, p, prd, NULL, afi, safi, type,
				     0);
	}
#endif

	bgp = peer->bgp;

	/* Lookup node. */
	rn = bgp_afi_node_get(bgp->rib[afi][safi], afi, safi, p, prd);

	/* If the peer is soft-reconfiguration enabled, record the input
	 * packet for further calculation.
	 *
	 * Cisco IOS 12.4(24)T4 on session establishment sends withdraws for
	 * all routes that are filtered.  This tanks out Quagga RS pretty
	 * badly due to the iteration over all RS clients.
	 * Since we need to remove the entry from adj_in anyway, do that first
	 * and if there was no entry, we don't need to do anything more.
	 */
	if (CHECK_FLAG(peer->af_flags[afi][safi], PEER_FLAG_SOFT_RECONFIG)
	    && peer != bgp->peer_self)
		if (!bgp_adj_in_unset(rn, peer, addpath_id)) {
			peer->stat_pfx_dup_withdraw++;

			if (bgp_debug_update(peer, p, NULL, 1)) {
				bgp_debug_rdpfxpath2str(
					afi, safi, prd, p, label, num_labels,
					addpath_id ? 1 : 0, addpath_id, pfx_buf,
					sizeof(pfx_buf));
				zlog_debug(
					"%s withdrawing route %s not in adj-in",
					peer->host, pfx_buf);
			}
			bgp_unlock_node(rn);
			return 0;
		}

	/* Lookup withdrawn route. */
	for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next)
		if (pi->peer == peer && pi->type == type
		    && pi->sub_type == sub_type
		    && pi->addpath_rx_id == addpath_id)
			break;

	/* Logging. */
	if (bgp_debug_update(peer, p, NULL, 1)) {
		bgp_debug_rdpfxpath2str(afi, safi, prd, p, label, num_labels,
					addpath_id ? 1 : 0, addpath_id, pfx_buf,
					sizeof(pfx_buf));
		zlog_debug("%s rcvd UPDATE about %s -- withdrawn", peer->host,
			   pfx_buf);
	}

	/* Withdraw specified route from routing table. */
	if (pi && !CHECK_FLAG(pi->flags, BGP_PATH_HISTORY)) {
		bgp_rib_withdraw(rn, pi, peer, afi, safi, prd);
		if (SAFI_UNICAST == safi
		    && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF
			|| bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {
			vpn_leak_from_vrf_withdraw(bgp_get_default(), bgp, pi);
		}
		if ((SAFI_MPLS_VPN == safi)
		    && (bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {

			vpn_leak_to_vrf_withdraw(bgp, pi);
		}
	} else if (bgp_debug_update(peer, p, NULL, 1)) {
		bgp_debug_rdpfxpath2str(afi, safi, prd, p, label, num_labels,
					addpath_id ? 1 : 0, addpath_id, pfx_buf,
					sizeof(pfx_buf));
		zlog_debug("%s Can't find the route %s", peer->host, pfx_buf);
	}

	/* Unlock bgp_node_get() lock. */
	bgp_unlock_node(rn);

	return 0;
}

void bgp_default_originate(struct peer *peer, afi_t afi, safi_t safi,
			   int withdraw)
{
	struct update_subgroup *subgrp;

	subgrp = peer_subgroup(peer, afi, safi);
	subgroup_default_originate(subgrp, withdraw);
}

/*
 * bgp_stop_announce_route_timer
 */
void bgp_stop_announce_route_timer(struct peer_af *paf)
{
	if (!paf->t_announce_route)
		return;

	THREAD_TIMER_OFF(paf->t_announce_route);
}

/*
 * bgp_announce_route_timer_expired
 *
 * Callback that is invoked when the route announcement timer for a
 * peer_af expires.
 */
static int bgp_announce_route_timer_expired(struct thread *t)
{
	struct peer_af *paf;
	struct peer *peer;

	paf = THREAD_ARG(t);
	peer = paf->peer;

	if (peer->status != Established)
		return 0;

	if (!peer->afc_nego[paf->afi][paf->safi])
		return 0;

	peer_af_announce_route(paf, 1);
	return 0;
}

/*
 * bgp_announce_route
 *
 * *Triggers* announcement of routes of a given AFI/SAFI to a peer.
 */
void bgp_announce_route(struct peer *peer, afi_t afi, safi_t safi)
{
	struct peer_af *paf;
	struct update_subgroup *subgrp;

	paf = peer_af_find(peer, afi, safi);
	if (!paf)
		return;
	subgrp = PAF_SUBGRP(paf);

	/*
	 * Ignore if subgroup doesn't exist (implies AF is not negotiated)
	 * or a refresh has already been triggered.
	 */
	if (!subgrp || paf->t_announce_route)
		return;

	/*
	 * Start a timer to stagger/delay the announce.  This serves
	 * two purposes - announcement can potentially be combined for
	 * multiple peers, and the announcement doesn't happen in the
	 * vty context.
	 */
	thread_add_timer_msec(bm->master, bgp_announce_route_timer_expired, paf,
			      (subgrp->peer_count == 1)
				      ? BGP_ANNOUNCE_ROUTE_SHORT_DELAY_MS
				      : BGP_ANNOUNCE_ROUTE_DELAY_MS,
			      &paf->t_announce_route);
}

/*
 * Announce routes from all AF tables to a peer.
 *
 * This should ONLY be called when there is a need to refresh the
 * routes to the peer based on a policy change for this peer alone
 * or a route refresh request received from the peer.
 * The operation will result in splitting the peer from its existing
 * subgroups and putting it in new subgroups.
 */
void bgp_announce_route_all(struct peer *peer)
{
	afi_t afi;
	safi_t safi;

	FOREACH_AFI_SAFI (afi, safi)
		bgp_announce_route(peer, afi, safi);
}
2014-06-04 06:53:35 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static void bgp_soft_reconfig_table(struct peer *peer, afi_t afi, safi_t safi,
|
|
|
|
struct bgp_table *table,
|
|
|
|
struct prefix_rd *prd)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
	int ret;
	struct bgp_node *rn;
	struct bgp_adj_in *ain;

	if (!table)
		table = peer->bgp->rib[afi][safi];

	for (rn = bgp_table_top(table); rn; rn = bgp_route_next(rn))
		for (ain = rn->adj_in; ain; ain = ain->next) {
			if (ain->peer != peer)
				continue;

			struct bgp_path_info *pi;
			uint32_t num_labels = 0;
			mpls_label_t *label_pnt = NULL;
			struct bgp_route_evpn evpn;

			for (pi = bgp_node_get_bgp_path_info(rn); pi;
			     pi = pi->next)
				if (pi->peer == peer)
					break;

			if (pi && pi->extra)
				num_labels = pi->extra->num_labels;
			if (num_labels)
				label_pnt = &pi->extra->label[0];
			if (pi)
				memcpy(&evpn, &pi->attr->evpn_overlay,
				       sizeof(evpn));
			else
				memset(&evpn, 0, sizeof(evpn));

			ret = bgp_update(peer, &rn->p, ain->addpath_rx_id,
					 ain->attr, afi, safi, ZEBRA_ROUTE_BGP,
					 BGP_ROUTE_NORMAL, prd, label_pnt,
					 num_labels, 1, &evpn);

			if (ret < 0) {
				bgp_unlock_node(rn);
				return;
			}
		}
}
void bgp_soft_reconfig_in(struct peer *peer, afi_t afi, safi_t safi)
{
	struct bgp_node *rn;
	struct bgp_table *table;

	if (peer->status != Established)
		return;

	if ((safi != SAFI_MPLS_VPN) && (safi != SAFI_ENCAP)
	    && (safi != SAFI_EVPN))
		bgp_soft_reconfig_table(peer, afi, safi, NULL, NULL);
	else
		for (rn = bgp_table_top(peer->bgp->rib[afi][safi]); rn;
		     rn = bgp_route_next(rn)) {
			table = bgp_node_get_bgp_table_info(rn);
			if (table != NULL) {
				struct prefix_rd prd;

				prd.family = AF_UNSPEC;
				prd.prefixlen = 64;
				memcpy(&prd.val, rn->p.u.val, 8);

				bgp_soft_reconfig_table(peer, afi, safi, table,
							&prd);
			}
		}
}
[bgpd] Stability fixes including bugs 397, 492
I've spent the last several weeks working on stability fixes to bgpd.
These patches fix all of the numerous crashes, assertion failures, memory
leaks and memory stomping I could find. Valgrind was used extensively.
Added new function bgp_exit() to help catch problems. If "debug bgp" is
configured and bgpd exits with status of 0, statistics on remaining
lib/memory.c allocations are printed to stderr. It is my hope that other
developers will use this to stay on top of memory issues.
Example questionable exit:
bgpd: memstats: Current memory utilization in module LIB:
bgpd: memstats: Link List : 6
bgpd: memstats: Link Node : 5
bgpd: memstats: Hash : 8
bgpd: memstats: Hash Bucket : 2
bgpd: memstats: Hash Index : 8
bgpd: memstats: Work queue : 3
bgpd: memstats: Work queue item : 2
bgpd: memstats: Work queue name string : 3
bgpd: memstats: Current memory utilization in module BGP:
bgpd: memstats: BGP instance : 1
bgpd: memstats: BGP peer : 1
bgpd: memstats: BGP peer hostname : 1
bgpd: memstats: BGP attribute : 1
bgpd: memstats: BGP extra attributes : 1
bgpd: memstats: BGP aspath : 1
bgpd: memstats: BGP aspath str : 1
bgpd: memstats: BGP table : 24
bgpd: memstats: BGP node : 1
bgpd: memstats: BGP route : 1
bgpd: memstats: BGP synchronise : 8
bgpd: memstats: BGP Process queue : 1
bgpd: memstats: BGP node clear queue : 1
bgpd: memstats: NOTE: If configuration exists, utilization may be expected.
Example clean exit:
bgpd: memstats: No remaining tracked memory utilization.
This patch fixes bug #397: "Invalid free in bgp_announce_check()".
This patch fixes bug #492: "SIGBUS in bgpd/bgp_route.c:
bgp_clear_route_node()".
My apologies for not separating out these changes into individual patches.
The complexity of doing so boggled what is left of my brain. I hope this
is all still useful to the community.
This code has been production tested, in non-route-server-client mode, on
a linux 32-bit box and a 64-bit box.
Release/reset functions, used by bgp_exit(), added to:
bgpd/bgp_attr.c,h
bgpd/bgp_community.c,h
bgpd/bgp_dump.c,h
bgpd/bgp_ecommunity.c,h
bgpd/bgp_filter.c,h
bgpd/bgp_nexthop.c,h
bgpd/bgp_route.c,h
lib/routemap.c,h
File by file analysis:
* bgpd/bgp_aspath.c: Prevent re-use of ashash after it is released.
* bgpd/bgp_attr.c: #if removed uncalled cluster_dup().
* bgpd/bgp_clist.c,h: Allow community_list_terminate() to be called from
bgp_exit().
* bgpd/bgp_filter.c: Fix aslist->name use without allocation check, and
also fix memory leak.
* bgpd/bgp_main.c: Created bgp_exit() exit routine. This function frees
allocations made as part of bgpd initialization and, to some extent,
configuration. If "debug bgp" is configured, memory stats are printed
as described above.
* bgpd/bgp_nexthop.c: zclient_new() already allocates stream for
ibuf/obuf, so bgp_scan_init() shouldn't do it too. Also, made it so
zlookup is global so bgp_exit() can use it.
* bgpd/bgp_packet.c: bgp_capability_msg_parse() call to bgp_clear_route()
adjusted to use new BGP_CLEAR_ROUTE_NORMAL flag.
* bgpd/bgp_route.h: Correct reference counter "lock" to be signed.
bgp_clear_route() now accepts a bgp_clear_route_type of either
BGP_CLEAR_ROUTE_NORMAL or BGP_CLEAR_ROUTE_MY_RSCLIENT.
* bgpd/bgp_route.c:
- bgp_process_rsclient(): attr was being zero'ed and then
bgp_attr_extra_free() was being called with it, even though it was
never filled with valid data.
- bgp_process_rsclient(): Make sure rsclient->group is not NULL before
use.
- bgp_processq_del(): Add call to bgp_table_unlock().
- bgp_process(): Add call to bgp_table_lock().
- bgp_update_rsclient(): memset clearing of new_attr not needed since
declarationw with "= { 0 }" does it. memset was already commented
out.
- bgp_update_rsclient(): Fix screwed up misleading indentation.
- bgp_withdraw_rsclient(): Fix screwed up misleading indentation.
- bgp_clear_route_node(): Support BGP_CLEAR_ROUTE_MY_RSCLIENT.
- bgp_clear_node_queue_del(): Add call to bgp_table_unlock() and also
free struct bgp_clear_node_queue used for work item.
- bgp_clear_node_complete(): Do peer_unlock() after BGP_EVENT_ADD() in
case peer is released by peer_unlock() call.
- bgp_clear_route_table(): Support BGP_CLEAR_ROUTE_MY_RSCLIENT. Use
struct bgp_clear_node_queue to supply data to worker. Add call to
bgp_table_lock().
- bgp_clear_route(): Add support for BGP_CLEAR_ROUTE_NORMAL or
BGP_CLEAR_ROUTE_MY_RSCLIENT.
- bgp_clear_route_all(): Use BGP_CLEAR_ROUTE_NORMAL.
Bug 397 fixes:
- bgp_default_originate()
- bgp_announce_table()
* bgpd/bgp_table.h:
- struct bgp_table: Added reference count. Changed type of owner to be
"struct peer *" rather than "void *".
- struct bgp_node: Correct reference counter "lock" to be signed.
* bgpd/bgp_table.c:
- Added bgp_table reference counting.
- bgp_table_free(): Fixed cleanup code. Call peer_unlock() on owner if
set.
- bgp_unlock_node(): Added assertion.
- bgp_node_get(): Added call to bgp_lock_node() to code path that it was
missing from.
* bgpd/bgp_vty.c:
- peer_rsclient_set_vty(): Call peer_lock() as part of peer assignment
to owner. Handle failure gracefully.
- peer_rsclient_unset_vty(): Add call to bgp_clear_route() with
BGP_CLEAR_ROUTE_MY_RSCLIENT purpose.
* bgpd/bgp_zebra.c: Made it so zclient is global so bgp_exit() can use it.
* bgpd/bgpd.c:
- peer_lock(): Allow to be called when status is "Deleted".
- peer_deactivate(): Supply BGP_CLEAR_ROUTE_NORMAL purpose to
bgp_clear_route() call.
- peer_delete(): Common variable listnode pn. Fix bug in which rsclient
was only dealt with if not part of a peer group. Call
bgp_clear_route() for rsclient, if appropriate, and do so with
BGP_CLEAR_ROUTE_MY_RSCLIENT purpose.
- peer_group_get(): Use XSTRDUP() instead of strdup() for conf->host.
- peer_group_bind(): Call bgp_clear_route() for rsclient, and do so with
BGP_CLEAR_ROUTE_MY_RSCLIENT purpose.
- bgp_create(): Use XSTRDUP() instead of strdup() for peer_self->host.
- bgp_delete(): Delete peers before groups, rather than after. And then
rather than deleting rsclients, verify that there are none at this
point.
- bgp_unlock(): Add assertion.
- bgp_free(): Call bgp_table_finish() rather than doing XFREE() itself.
* lib/command.c,h: Compiler warning fixes. Add cmd_terminate(). Fixed
massive leak in install_element() in which cmd_make_descvec() was being
called more than once for the same cmd->strvec/string/doc.
* lib/log.c: Make closezlog() check fp before calling fclose().
* lib/memory.c: Catch when alloc count goes negative by using signed
counts. Correct #endif comment. Add log_memstats_stderr().
* lib/memory.h: Add log_memstats_stderr().
* lib/thread.c: thread->funcname was being accessed in thread_call() after
it had been freed. Rearranged things so that thread_call() frees
funcname. Also made it so thread_master_free() cleans up cpu_record.
* lib/vty.c,h: Use global command_cr. Add vty_terminate().
* lib/zclient.c,h: Re-enable zclient_free().
2009-07-18 07:44:03 +02:00
struct bgp_clear_node_queue {
	struct bgp_node *rn;
};
static wq_item_status bgp_clear_route_node(struct work_queue *wq, void *data)
{
	struct bgp_clear_node_queue *cnq = data;
	struct bgp_node *rn = cnq->rn;
	struct peer *peer = wq->spec.data;
	struct bgp_path_info *pi;
	struct bgp *bgp;
	afi_t afi = bgp_node_table(rn)->afi;
	safi_t safi = bgp_node_table(rn)->safi;

	assert(rn && peer);
	bgp = peer->bgp;

	/* It is possible that we have multiple paths for a prefix from a peer
	 * if that peer is using AddPath.
	 */
	for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next) {
		if (pi->peer != peer)
			continue;

		/* graceful restart STALE flag set. */
		if (CHECK_FLAG(peer->sflags, PEER_STATUS_NSF_WAIT)
		    && peer->nsf[afi][safi]
		    && !CHECK_FLAG(pi->flags, BGP_PATH_STALE)
		    && !CHECK_FLAG(pi->flags, BGP_PATH_UNUSEABLE))
			bgp_path_info_set_flag(rn, pi, BGP_PATH_STALE);
		else {
			/* If this is an EVPN route, process for
			 * un-import. */
			if (safi == SAFI_EVPN)
				bgp_evpn_unimport_route(bgp, afi, safi, &rn->p,
							pi);
			/* Handle withdraw for VRF route-leaking and L3VPN */
			if (SAFI_UNICAST == safi
			    && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF ||
				bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {
				vpn_leak_from_vrf_withdraw(bgp_get_default(),
							   bgp, pi);
			}
			if (SAFI_MPLS_VPN == safi &&
			    bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT) {
				vpn_leak_to_vrf_withdraw(bgp, pi);
			}

			bgp_rib_remove(rn, pi, peer, afi, safi);
		}
	}
	return WQ_SUCCESS;
}
static void bgp_clear_node_queue_del(struct work_queue *wq, void *data)
|
2005-06-01 Paul Jakma <paul.jakma@sun.com>
* bgpd/(general) refcount struct peer and bgp_info, hence allowing us
add work_queues for bgp_process.
* bgpd/bgp_route.h: (struct bgp_info) Add 'lock' field for refcount.
Add bgp_info_{lock,unlock} helper functions.
Add bgp_info_{add,delete} helpers, to remove need for
users managing locking/freeing of bgp_info and bgp_node's.
* bgpd/bgp_table.h: (struct bgp_node) Add a flags field, and
BGP_NODE_PROCESS_SCHEDULED to merge redundant processing of
nodes.
* bgpd/bgp_fsm.h: Make the ON/OFF/ADD/REMOVE macros lock and unlock
peer reference as appropriate.
* bgpd/bgp_damp.c: Remove its internal prototypes for
bgp_info_delete/free. Just use bgp_info_delete.
* bgpd/bgpd.h: (struct bgp_master) Add work_queue pointers.
(struct peer) Add reference count 'lock'
(peer_lock,peer_unlock) New helpers to take/release reference
on struct peer.
* bgpd/bgp_advertise.c: (general) Add peer and bgp_info refcounting
and balance how references are taken and released.
(bgp_advertise_free) release bgp_info reference, if appropriate
(bgp_adj_out_free) unlock peer
(bgp_advertise_clean) leave the adv references alone, or else
call bgp_advertise_free cant unlock them.
(bgp_adj_out_set) lock the peer on new adj's, leave the reference
alone otherwise. lock the new bgp_info reference.
(bgp_adj_in_set) lock the peer reference
(bgp_adj_in_remove) and unlock it here
(bgp_sync_delete) make hash_free on peer conditional, just in
case.
* bgpd/bgp_fsm.c: (general) document that the timers depend on
bgp_event to release a peer reference.
(bgp_fsm_change_status) moved up the file, unchanged.
(bgp_stop) Decrement peer lock as many times as cancel_event
canceled - shouldnt be needed but just in case.
stream_fifo_clean of obuf made conditional, just in case.
(bgp_event) always unlock the peer, regardless of return value
of bgp_fsm_change_status.
* bgpd/bgp_packet.c: (general) change several bgp_stop's to BGP_EVENT's.
(bgp_read) Add a mysterious extra peer_unlock for ACCEPT_PEERs
along with a comment on it.
* bgpd/bgp_route.c: (general) Add refcounting of bgp_info, cleanup
some of the resource management around bgp_info. Refcount peer.
Add workqueues for bgp_process and clear_table.
(bgp_info_new) make static
(bgp_info_free) Ditto, and unlock the peer reference.
(bgp_info_lock,bgp_info_unlock) new exported functions
(bgp_info_add) Add a bgp_info to a bgp_node in correct fashion,
taking care of reference counts.
(bgp_info_delete) do the opposite of bgp_info_add.
(bgp_process_rsclient) Converted into a work_queue work function.
(bgp_process_main) ditto.
(bgp_processq_del) process work queue item deconstructor
(bgp_process_queue_init) process work queue init
(bgp_process) call init function if required, set up queue item
and add to queue, rather than calling process functions directly.
(bgp_rib_remove) let bgp_info_delete manage bgp_info refcounts
(bgp_rib_withdraw) ditto
(bgp_update_rsclient) let bgp_info_add manage refcounts
(bgp_update_main) ditto
(bgp_clear_route_node) clear_node_queue work function, does
per-node aspects of what bgp_clear_route_table did previously
(bgp_clear_node_queue_del) clear_node_queue item delete function
(bgp_clear_node_complete) clear_node_queue completion function,
it unplugs the process queues, which have to be blocked while
clear_node_queue is being processed to prevent a race.
(bgp_clear_node_queue_init) init function for clear_node_queue
work queues
(bgp_clear_route_table) Sets up items onto a workqueue now, rather
than clearing each node directly. Plugs both process queues to
avoid potential race.
(bgp_static_withdraw_rsclient) let bgp_info_{add,delete} manage
bgp_info refcounts.
(bgp_static_update_rsclient) ditto
(bgp_static_update_main) ditto
(bgp_static_update_vpnv4) ditto, remove unneeded cast.
(bgp_static_withdraw) see bgp_static_withdraw_rsclient
(bgp_static_withdraw_vpnv4) ditto
(bgp_aggregate_{route,add,delete}) ditto
(bgp_redistribute_{add,delete,withdraw}) ditto
* bgpd/bgp_vty.c: (peer_rsclient_set_vty) lock rsclient list peer
reference
(peer_rsclient_unset_vty) ditto, but unlock same reference
* bgpd/bgpd.c: (peer_free) handle frees of info to be kept for lifetime
of struct peer.
(peer_lock,peer_unlock) peer refcount helpers
(peer_new) add initial refcounts
(peer_create,peer_create_accept) lock peer as appropriate
(peer_delete) unlock as appropriate, move out some free's to
peer_free.
(peer_group_bind,peer_group_unbind) peer refcounting as
appropriate.
(bgp_create) check CALLOC return value.
(bgp_terminate) free workqueues too.
* lib/memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
MTYPE_BGP_CLEAR_NODE_QUEUE
{
	struct bgp_clear_node_queue *cnq = data;
	struct bgp_node *rn = cnq->rn;
	struct bgp_table *table = bgp_node_table(rn);
[bgpd] Stability fixes including bugs 397, 492
I've spent the last several weeks working on stability fixes to bgpd.
These patches fix all of the numerous crashes, assertion failures, memory
leaks and memory stomping I could find. Valgrind was used extensively.
Added new function bgp_exit() to help catch problems. If "debug bgp" is
configured and bgpd exits with status of 0, statistics on remaining
lib/memory.c allocations are printed to stderr. It is my hope that other
developers will use this to stay on top of memory issues.
Example questionable exit:
bgpd: memstats: Current memory utilization in module LIB:
bgpd: memstats: Link List : 6
bgpd: memstats: Link Node : 5
bgpd: memstats: Hash : 8
bgpd: memstats: Hash Bucket : 2
bgpd: memstats: Hash Index : 8
bgpd: memstats: Work queue : 3
bgpd: memstats: Work queue item : 2
bgpd: memstats: Work queue name string : 3
bgpd: memstats: Current memory utilization in module BGP:
bgpd: memstats: BGP instance : 1
bgpd: memstats: BGP peer : 1
bgpd: memstats: BGP peer hostname : 1
bgpd: memstats: BGP attribute : 1
bgpd: memstats: BGP extra attributes : 1
bgpd: memstats: BGP aspath : 1
bgpd: memstats: BGP aspath str : 1
bgpd: memstats: BGP table : 24
bgpd: memstats: BGP node : 1
bgpd: memstats: BGP route : 1
bgpd: memstats: BGP synchronise : 8
bgpd: memstats: BGP Process queue : 1
bgpd: memstats: BGP node clear queue : 1
bgpd: memstats: NOTE: If configuration exists, utilization may be expected.
Example clean exit:
bgpd: memstats: No remaining tracked memory utilization.
This patch fixes bug #397: "Invalid free in bgp_announce_check()".
This patch fixes bug #492: "SIGBUS in bgpd/bgp_route.c:
bgp_clear_route_node()".
My apologies for not separating out these changes into individual patches.
The complexity of doing so boggled what is left of my brain. I hope this
is all still useful to the community.
This code has been production tested, in non-route-server-client mode, on
a linux 32-bit box and a 64-bit box.
Release/reset functions, used by bgp_exit(), added to:
bgpd/bgp_attr.c,h
bgpd/bgp_community.c,h
bgpd/bgp_dump.c,h
bgpd/bgp_ecommunity.c,h
bgpd/bgp_filter.c,h
bgpd/bgp_nexthop.c,h
bgpd/bgp_route.c,h
lib/routemap.c,h
File by file analysis:
* bgpd/bgp_aspath.c: Prevent re-use of ashash after it is released.
* bgpd/bgp_attr.c: #if removed uncalled cluster_dup().
* bgpd/bgp_clist.c,h: Allow community_list_terminate() to be called from
bgp_exit().
* bgpd/bgp_filter.c: Fix aslist->name use without allocation check, and
also fix memory leak.
* bgpd/bgp_main.c: Created bgp_exit() exit routine. This function frees
allocations made as part of bgpd initialization and, to some extent,
configuration. If "debug bgp" is configured, memory stats are printed
as described above.
* bgpd/bgp_nexthop.c: zclient_new() already allocates stream for
ibuf/obuf, so bgp_scan_init() shouldn't do it too. Also, made it so
zlookup is global so bgp_exit() can use it.
* bgpd/bgp_packet.c: bgp_capability_msg_parse() call to bgp_clear_route()
adjusted to use new BGP_CLEAR_ROUTE_NORMAL flag.
* bgpd/bgp_route.h: Correct reference counter "lock" to be signed.
bgp_clear_route() now accepts a bgp_clear_route_type of either
BGP_CLEAR_ROUTE_NORMAL or BGP_CLEAR_ROUTE_MY_RSCLIENT.
* bgpd/bgp_route.c:
- bgp_process_rsclient(): attr was being zero'ed and then
bgp_attr_extra_free() was being called with it, even though it was
never filled with valid data.
- bgp_process_rsclient(): Make sure rsclient->group is not NULL before
use.
- bgp_processq_del(): Add call to bgp_table_unlock().
- bgp_process(): Add call to bgp_table_lock().
- bgp_update_rsclient(): memset clearing of new_attr not needed since
declaration with "= { 0 }" does it. memset was already commented
out.
- bgp_update_rsclient(): Fix screwed up misleading indentation.
- bgp_withdraw_rsclient(): Fix screwed up misleading indentation.
- bgp_clear_route_node(): Support BGP_CLEAR_ROUTE_MY_RSCLIENT.
- bgp_clear_node_queue_del(): Add call to bgp_table_unlock() and also
free struct bgp_clear_node_queue used for work item.
- bgp_clear_node_complete(): Do peer_unlock() after BGP_EVENT_ADD() in
case peer is released by peer_unlock() call.
- bgp_clear_route_table(): Support BGP_CLEAR_ROUTE_MY_RSCLIENT. Use
struct bgp_clear_node_queue to supply data to worker. Add call to
bgp_table_lock().
- bgp_clear_route(): Add support for BGP_CLEAR_ROUTE_NORMAL or
BGP_CLEAR_ROUTE_MY_RSCLIENT.
- bgp_clear_route_all(): Use BGP_CLEAR_ROUTE_NORMAL.
Bug 397 fixes:
- bgp_default_originate()
- bgp_announce_table()
* bgpd/bgp_table.h:
- struct bgp_table: Added reference count. Changed type of owner to be
"struct peer *" rather than "void *".
- struct bgp_node: Correct reference counter "lock" to be signed.
* bgpd/bgp_table.c:
- Added bgp_table reference counting.
- bgp_table_free(): Fixed cleanup code. Call peer_unlock() on owner if
set.
- bgp_unlock_node(): Added assertion.
- bgp_node_get(): Added call to bgp_lock_node() to code path that it was
missing from.
* bgpd/bgp_vty.c:
- peer_rsclient_set_vty(): Call peer_lock() as part of peer assignment
to owner. Handle failure gracefully.
- peer_rsclient_unset_vty(): Add call to bgp_clear_route() with
BGP_CLEAR_ROUTE_MY_RSCLIENT purpose.
* bgpd/bgp_zebra.c: Made it so zclient is global so bgp_exit() can use it.
* bgpd/bgpd.c:
- peer_lock(): Allow to be called when status is "Deleted".
- peer_deactivate(): Supply BGP_CLEAR_ROUTE_NORMAL purpose to
bgp_clear_route() call.
- peer_delete(): Common variable listnode pn. Fix bug in which rsclient
was only dealt with if not part of a peer group. Call
bgp_clear_route() for rsclient, if appropriate, and do so with
BGP_CLEAR_ROUTE_MY_RSCLIENT purpose.
- peer_group_get(): Use XSTRDUP() instead of strdup() for conf->host.
- peer_group_bind(): Call bgp_clear_route() for rsclient, and do so with
BGP_CLEAR_ROUTE_MY_RSCLIENT purpose.
- bgp_create(): Use XSTRDUP() instead of strdup() for peer_self->host.
- bgp_delete(): Delete peers before groups, rather than after. And then
rather than deleting rsclients, verify that there are none at this
point.
- bgp_unlock(): Add assertion.
- bgp_free(): Call bgp_table_finish() rather than doing XFREE() itself.
* lib/command.c,h: Compiler warning fixes. Add cmd_terminate(). Fixed
massive leak in install_element() in which cmd_make_descvec() was being
called more than once for the same cmd->strvec/string/doc.
* lib/log.c: Make closezlog() check fp before calling fclose().
* lib/memory.c: Catch when alloc count goes negative by using signed
counts. Correct #endif comment. Add log_memstats_stderr().
* lib/memory.h: Add log_memstats_stderr().
* lib/thread.c: thread->funcname was being accessed in thread_call() after
it had been freed. Rearranged things so that thread_call() frees
funcname. Also made it so thread_master_free() cleans up cpu_record.
* lib/vty.c,h: Use global command_cr. Add vty_terminate().
* lib/zclient.c,h: Re-enable zclient_free().
	bgp_unlock_node(rn);
	bgp_table_unlock(table);
	XFREE(MTYPE_BGP_CLEAR_NODE_QUEUE, cnq);
}

static void bgp_clear_node_complete(struct work_queue *wq)
{
	struct peer *peer = wq->spec.data;

	/* Tickle FSM to start moving again */
	BGP_EVENT_ADD(peer, Clearing_Completed);

	peer_unlock(peer); /* bgp_clear_route */
}

static void bgp_clear_node_queue_init(struct peer *peer)
{
	char wname[sizeof("clear xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx")];

	snprintf(wname, sizeof(wname), "clear %s", peer->host);
#undef CLEAR_QUEUE_NAME_LEN

	peer->clear_node_queue = work_queue_new(bm->master, wname);
	peer->clear_node_queue->spec.hold = 10;
	peer->clear_node_queue->spec.workfunc = &bgp_clear_route_node;
	peer->clear_node_queue->spec.del_item_data = &bgp_clear_node_queue_del;
	peer->clear_node_queue->spec.completion_func = &bgp_clear_node_complete;
	peer->clear_node_queue->spec.max_retries = 0;

	/* we only 'lock' this peer reference when the queue is actually
	 * active */
	peer->clear_node_queue->spec.data = peer;
[bgpd] Fix bug where FSM can stay hung forever in Idle/Clrng
2006-05-04 Paul Jakma <paul.jakma@sun.com>
* bgp_route.c: (general) Fix logical bug in clearing, noted
by Chris Caputo in [quagga-users 6728] - clearing depended on
at least one route being added to workqueue, in order for
workqueue completion function to restart FSM. However, if no
routes are cleared, then the completion function never is
called, it needs to be called manually if the workqueue
didn't get scheduled.
Finally, clearing is per-peer-session, not per AFI/SAFI, so
the FSM synchronisation should be in bgp_clear_route_table.
(bgp_clear_route_table) Wrong place for FSM/clearing
synchronisation, move to..
(bgp_clear_route) FSM/clearing synchronisation should be
here.
If no routes were cleared, no workqueue scheduled, call
the completion func to ensure FSM kicks off again.
}

static void bgp_clear_route_table(struct peer *peer, afi_t afi, safi_t safi,
				  struct bgp_table *table)
|
|
|
{
	struct bgp_node *rn;
	int force = bm->process_main_queue ? 0 : 1;

	if (!table)
		table = peer->bgp->rib[afi][safi];

	/* If there is still no table, the afi/safi isn't configured at all. */
	if (!table)
		return;

	for (rn = bgp_table_top(table); rn; rn = bgp_route_next(rn)) {
		struct bgp_path_info *pi, *next;
		struct bgp_adj_in *ain;
		struct bgp_adj_in *ain_next;

		/* XXX:TODO: This is suboptimal, every non-empty route_node is
		 * queued for every clearing peer, regardless of whether it is
		 * relevant to the peer at hand.
		 *
		 * Overview: There are 3 different indices which need to be
		 * scrubbed, potentially, when a peer is removed:
		 *
		 * 1 peer's routes visible via the RIB (ie accepted routes)
		 * 2 peer's routes visible by the (optional) peer's adj-in index
		 * 3 other routes visible by the peer's adj-out index
		 *
		 * 3 there is no hurry in scrubbing, once the struct peer is
		 * removed from bgp->peer, we could just GC such deleted peer's
		 * adj-outs at our leisure.
		 *
		 * 1 and 2 must be 'scrubbed' in some way, at least made
		 * invisible via RIB index before peer session is allowed to be
		 * brought back up. So one needs to know when such a 'search' is
		 * complete.
		 *
		 * Ideally:
		 *
		 * - there'd be a single global queue or a single RIB walker
		 * - rather than tracking which route_nodes still need to be
		 *   examined on a peer basis, we'd track which peers still
		 *   aren't cleared
		 *
		 * Given that our per-peer prefix-counts now should be reliable,
		 * this may actually be achievable. It doesn't seem to be a huge
		 * problem at this time.
		 *
		 * It is possible that we have multiple paths for a prefix from
		 * a peer if that peer is using AddPath.
		 */
		ain = rn->adj_in;
		while (ain) {
			ain_next = ain->next;

			if (ain->peer == peer) {
				bgp_adj_in_remove(rn, ain);
				bgp_unlock_node(rn);
			}

			ain = ain_next;
		}

		for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = next) {
			next = pi->next;
			if (pi->peer != peer)
				continue;

			if (force)
				bgp_path_info_reap(rn, pi);
			else {
				struct bgp_clear_node_queue *cnq;

				/* both unlocked in bgp_clear_node_queue_del */
				bgp_table_lock(bgp_node_table(rn));
				bgp_lock_node(rn);
				cnq = XCALLOC(
					MTYPE_BGP_CLEAR_NODE_QUEUE,
					sizeof(struct bgp_clear_node_queue));
				cnq->rn = rn;
				work_queue_add(peer->clear_node_queue, cnq);
				break;
			}
		}
	}
	return;
}

void bgp_clear_route(struct peer *peer, afi_t afi, safi_t safi)
{
	struct bgp_node *rn;
	struct bgp_table *table;

	if (peer->clear_node_queue == NULL)
		bgp_clear_node_queue_init(peer);

	/* bgp_fsm.c keeps sessions in state Clearing, not transitioning to
	 * Idle until it receives a Clearing_Completed event. This protects
	 * against peers which flap faster than we can clear, which could
	 * lead to:
	 *
	 * a) race with routes from the new session being installed before
	 *    clear_route_node visits the node (to delete the route of that
	 *    peer)
	 * b) resource exhaustion, clear_route_node likely leads to an entry
	 *    on the process_main queue. Fast-flapping could cause that queue
	 *    to grow and grow.
	 */

	/* lock peer in assumption that clear-node-queue will get nodes; if so,
	 * the unlock will happen upon work-queue completion; otherwise, the
	 * unlock happens at the end of this function.
	 */
	if (!peer->clear_node_queue->thread)
		peer_lock(peer);

	if (safi != SAFI_MPLS_VPN && safi != SAFI_ENCAP && safi != SAFI_EVPN)
		bgp_clear_route_table(peer, afi, safi, NULL);
	else
		for (rn = bgp_table_top(peer->bgp->rib[afi][safi]); rn;
		     rn = bgp_route_next(rn)) {
			table = bgp_node_get_bgp_table_info(rn);
			if (!table)
				continue;

			bgp_clear_route_table(peer, afi, safi, table);
		}

	/* unlock if no nodes got added to the clear-node-queue. */
	if (!peer->clear_node_queue->thread)
		peer_unlock(peer);
}

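The conditional `peer_lock()`/`peer_unlock()` pair above implements a common reference-counting idiom: take a reference before work might be queued, and release it immediately only if nothing was actually queued; otherwise the work queue's completion handler owns the final release. A minimal standalone sketch of that pattern (all names here are hypothetical stand-ins, not FRR API):

```c
#include <assert.h>

/* Hypothetical miniature of the peer_lock()/peer_unlock() pattern:
 * the reference keeps the object alive while async work is pending. */
struct refobj {
	int refcnt;
};

static void obj_lock(struct refobj *o)
{
	o->refcnt++;
}

static void obj_unlock(struct refobj *o)
{
	o->refcnt--;
}

static void clear_with_queue(struct refobj *peer, int work_queued)
{
	obj_lock(peer);
	/* ... walk tables, possibly adding work items ... */
	if (!work_queued)
		obj_unlock(peer); /* nothing queued: release right away */
	/* else: the queue's completion callback calls obj_unlock() later */
}
```

If work was queued, the reference count stays elevated until the (here elided) completion callback drops it, which is exactly why `bgp_clear_route()` re-checks `clear_node_queue->thread` before its final unlock.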
void bgp_clear_route_all(struct peer *peer)
{
	afi_t afi;
	safi_t safi;

	FOREACH_AFI_SAFI (afi, safi)
		bgp_clear_route(peer, afi, safi);

#if ENABLE_BGP_VNC
	rfapiProcessPeerDown(peer);
#endif
}

void bgp_clear_adj_in(struct peer *peer, afi_t afi, safi_t safi)
{
	struct bgp_table *table;
	struct bgp_node *rn;
	struct bgp_adj_in *ain;
	struct bgp_adj_in *ain_next;

	table = peer->bgp->rib[afi][safi];

	/* It is possible that we have multiple paths for a prefix from a peer
	 * if that peer is using AddPath.
	 */
	for (rn = bgp_table_top(table); rn; rn = bgp_route_next(rn)) {
		ain = rn->adj_in;

		while (ain) {
			ain_next = ain->next;

			if (ain->peer == peer) {
				bgp_adj_in_remove(rn, ain);
				bgp_unlock_node(rn);
			}

			ain = ain_next;
		}
	}
}

void bgp_clear_stale_route(struct peer *peer, afi_t afi, safi_t safi)
{
	struct bgp_node *rn;
	struct bgp_path_info *pi;
	struct bgp_table *table;

	if (safi == SAFI_MPLS_VPN) {
		for (rn = bgp_table_top(peer->bgp->rib[afi][safi]); rn;
		     rn = bgp_route_next(rn)) {
			struct bgp_node *rm;

			/* look for neighbor in tables */
			table = bgp_node_get_bgp_table_info(rn);
			if (!table)
				continue;

			for (rm = bgp_table_top(table); rm;
			     rm = bgp_route_next(rm))
				for (pi = bgp_node_get_bgp_path_info(rm); pi;
				     pi = pi->next) {
					if (pi->peer != peer)
						continue;
					if (!CHECK_FLAG(pi->flags,
							BGP_PATH_STALE))
						break;

					bgp_rib_remove(rm, pi, peer, afi, safi);
					break;
				}
		}
	} else {
		for (rn = bgp_table_top(peer->bgp->rib[afi][safi]); rn;
		     rn = bgp_route_next(rn))
			for (pi = bgp_node_get_bgp_path_info(rn); pi;
			     pi = pi->next) {
				if (pi->peer != peer)
					continue;
				if (!CHECK_FLAG(pi->flags, BGP_PATH_STALE))
					break;
				bgp_rib_remove(rn, pi, peer, afi, safi);
				break;
			}
	}
}

int bgp_outbound_policy_exists(struct peer *peer, struct bgp_filter *filter)
{
	if (peer->sort == BGP_PEER_IBGP)
		return 1;

	if (peer->sort == BGP_PEER_EBGP
	    && (ROUTE_MAP_OUT_NAME(filter) || PREFIX_LIST_OUT_NAME(filter)
		|| FILTER_LIST_OUT_NAME(filter)
		|| DISTRIBUTE_OUT_NAME(filter)))
		return 1;
	return 0;
}

int bgp_inbound_policy_exists(struct peer *peer, struct bgp_filter *filter)
{
	if (peer->sort == BGP_PEER_IBGP)
		return 1;

	if (peer->sort == BGP_PEER_EBGP
	    && (ROUTE_MAP_IN_NAME(filter) || PREFIX_LIST_IN_NAME(filter)
		|| FILTER_LIST_IN_NAME(filter)
		|| DISTRIBUTE_IN_NAME(filter)))
		return 1;
	return 0;
}

static void bgp_cleanup_table(struct bgp *bgp, struct bgp_table *table,
			      safi_t safi)
{
	struct bgp_node *rn;
	struct bgp_path_info *pi;
	struct bgp_path_info *next;

	for (rn = bgp_table_top(table); rn; rn = bgp_route_next(rn))
		for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = next) {
			next = pi->next;

			/* Unimport EVPN routes from VRFs */
			if (safi == SAFI_EVPN)
				bgp_evpn_unimport_route(bgp, AFI_L2VPN,
							SAFI_EVPN,
							&rn->p, pi);

			if (CHECK_FLAG(pi->flags, BGP_PATH_SELECTED)
			    && pi->type == ZEBRA_ROUTE_BGP
			    && (pi->sub_type == BGP_ROUTE_NORMAL
				|| pi->sub_type == BGP_ROUTE_AGGREGATE
				|| pi->sub_type == BGP_ROUTE_IMPORTED)) {

				if (bgp_fibupd_safi(safi))
					bgp_zebra_withdraw(&rn->p, pi, bgp,
							   safi);
				bgp_path_info_reap(rn, pi);
			}
		}
}

/* Delete all kernel routes. */
void bgp_cleanup_routes(struct bgp *bgp)
{
	afi_t afi;
	struct bgp_node *rn;
	struct bgp_table *table;

	for (afi = AFI_IP; afi < AFI_MAX; ++afi) {
		if (afi == AFI_L2VPN)
			continue;
		bgp_cleanup_table(bgp, bgp->rib[afi][SAFI_UNICAST],
				  SAFI_UNICAST);
		/*
		 * VPN and ENCAP and EVPN tables are two-level (RD is top level)
		 */
		if (afi != AFI_L2VPN) {
			safi_t safi;
			safi = SAFI_MPLS_VPN;
			for (rn = bgp_table_top(bgp->rib[afi][safi]); rn;
			     rn = bgp_route_next(rn)) {
				table = bgp_node_get_bgp_table_info(rn);
				if (table != NULL) {
					bgp_cleanup_table(bgp, table, safi);
					bgp_table_finish(&table);
					bgp_node_set_bgp_table_info(rn, NULL);
					bgp_unlock_node(rn);
				}
			}
			safi = SAFI_ENCAP;
			for (rn = bgp_table_top(bgp->rib[afi][safi]); rn;
			     rn = bgp_route_next(rn)) {
				table = bgp_node_get_bgp_table_info(rn);
				if (table != NULL) {
					bgp_cleanup_table(bgp, table, safi);
					bgp_table_finish(&table);
					bgp_node_set_bgp_table_info(rn, NULL);
					bgp_unlock_node(rn);
				}
			}
		}
	}
	for (rn = bgp_table_top(bgp->rib[AFI_L2VPN][SAFI_EVPN]); rn;
	     rn = bgp_route_next(rn)) {
		table = bgp_node_get_bgp_table_info(rn);
		if (table != NULL) {
			bgp_cleanup_table(bgp, table, SAFI_EVPN);
			bgp_table_finish(&table);
			bgp_node_set_bgp_table_info(rn, NULL);
			bgp_unlock_node(rn);
		}
	}
}

void bgp_reset(void)
{
	vty_reset();
	bgp_zclient_reset();
	access_list_reset();
	prefix_list_reset();
}

static int bgp_addpath_encode_rx(struct peer *peer, afi_t afi, safi_t safi)
{
	return (CHECK_FLAG(peer->af_cap[afi][safi], PEER_CAP_ADDPATH_AF_RX_ADV)
		&& CHECK_FLAG(peer->af_cap[afi][safi],
			      PEER_CAP_ADDPATH_AF_TX_RCV));
}

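The function above treats AddPath as negotiated for receive only when both halves of the capability exchange agree: we advertised willingness to receive (RX_ADV) and the peer advertised willingness to send (TX_RCV). A standalone sketch of that two-flag check; the flag values below are hypothetical stand-ins for the real `PEER_CAP_ADDPATH_AF_*` bits in `bgpd/bgpd.h`:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the per-AFI/SAFI capability bits. */
#define CAP_ADDPATH_RX_ADV (1u << 0) /* we advertised: willing to receive */
#define CAP_ADDPATH_TX_RCV (1u << 1) /* peer advertised: willing to send */

/* AddPath applies on the receive path only when both directions of the
 * negotiation line up, mirroring bgp_addpath_encode_rx(). */
static int addpath_rx_negotiated(uint32_t af_cap)
{
	return (af_cap & CAP_ADDPATH_RX_ADV)
	       && (af_cap & CAP_ADDPATH_TX_RCV) ? 1 : 0;
}
```

Either flag alone is not enough: advertising RX without the peer advertising TX (or vice versa) leaves the NLRI in its classic, non-path-ID encoding.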
/* Parse NLRI stream.  Withdraw NLRI is recognized by NULL attr
   value. */
int bgp_nlri_parse_ip(struct peer *peer, struct attr *attr,
		      struct bgp_nlri *packet)
{
	uint8_t *pnt;
	uint8_t *lim;
	struct prefix p;
	int psize;
	int ret;
	afi_t afi;
	safi_t safi;
	int addpath_encoded;
	uint32_t addpath_id;

	pnt = packet->nlri;
	lim = pnt + packet->length;
	afi = packet->afi;
	safi = packet->safi;
	addpath_id = 0;
	addpath_encoded = bgp_addpath_encode_rx(peer, afi, safi);

	/* RFC4271 6.3: The NLRI field in the UPDATE message is checked for
	   syntactic validity.  If the field is syntactically incorrect,
	   then the Error Subcode is set to Invalid Network Field. */
	for (; pnt < lim; pnt += psize) {
		/* Clear prefix structure. */
		memset(&p, 0, sizeof(struct prefix));

		if (addpath_encoded) {

			/* When packet overflow occurs return immediately. */
			if (pnt + BGP_ADDPATH_ID_LEN >= lim)
				return BGP_NLRI_PARSE_ERROR_PACKET_OVERFLOW;

			memcpy(&addpath_id, pnt, BGP_ADDPATH_ID_LEN);
			addpath_id = ntohl(addpath_id);
			pnt += BGP_ADDPATH_ID_LEN;
		}

		/* Fetch prefix length. */
		p.prefixlen = *pnt++;
		/* afi/safi validity already verified by caller,
		 * bgp_update_receive */
		p.family = afi2family(afi);

		/* Prefix length check. */
		if (p.prefixlen > prefix_blen(&p) * 8) {
			flog_err(
				EC_BGP_UPDATE_RCV,
				"%s [Error] Update packet error (wrong prefix length %d for afi %u)",
				peer->host, p.prefixlen, packet->afi);
			return BGP_NLRI_PARSE_ERROR_PREFIX_LENGTH;
		}

		/* Packet size overflow check. */
		psize = PSIZE(p.prefixlen);

		/* When packet overflow occurs return immediately. */
		if (pnt + psize > lim) {
			flog_err(
				EC_BGP_UPDATE_RCV,
				"%s [Error] Update packet error (prefix length %d overflows packet)",
				peer->host, p.prefixlen);
			return BGP_NLRI_PARSE_ERROR_PACKET_OVERFLOW;
		}

		/* Defensive coding, double-check the psize fits in a struct
		 * prefix */
		if (psize > (ssize_t)sizeof(p.u)) {
			flog_err(
				EC_BGP_UPDATE_RCV,
				"%s [Error] Update packet error (prefix length %d too large for prefix storage %zu)",
				peer->host, p.prefixlen, sizeof(p.u));
			return BGP_NLRI_PARSE_ERROR_PACKET_LENGTH;
		}

		/* Fetch prefix from NLRI packet. */
		memcpy(p.u.val, pnt, psize);

		/* Check address. */
		if (afi == AFI_IP && safi == SAFI_UNICAST) {
			if (IN_CLASSD(ntohl(p.u.prefix4.s_addr))) {
				/* From RFC4271 Section 6.3:
				 *
				 * If a prefix in the NLRI field is semantically
				 * incorrect (e.g., an unexpected multicast IP
				 * address), an error SHOULD be logged locally,
				 * and the prefix SHOULD be ignored.
				 */
				flog_err(
					EC_BGP_UPDATE_RCV,
					"%s: IPv4 unicast NLRI is multicast address %s, ignoring",
					peer->host, inet_ntoa(p.u.prefix4));
				continue;
			}
		}

		/* Check address. */
		if (afi == AFI_IP6 && safi == SAFI_UNICAST) {
			if (IN6_IS_ADDR_LINKLOCAL(&p.u.prefix6)) {
				char buf[BUFSIZ];

				flog_err(
					EC_BGP_UPDATE_RCV,
					"%s: IPv6 unicast NLRI is link-local address %s, ignoring",
					peer->host,
					inet_ntop(AF_INET6, &p.u.prefix6, buf,
						  BUFSIZ));

				continue;
			}
			if (IN6_IS_ADDR_MULTICAST(&p.u.prefix6)) {
				char buf[BUFSIZ];

				flog_err(
					EC_BGP_UPDATE_RCV,
					"%s: IPv6 unicast NLRI is multicast address %s, ignoring",
					peer->host,
					inet_ntop(AF_INET6, &p.u.prefix6, buf,
						  BUFSIZ));

				continue;
			}
		}

		/* Normal process. */
		if (attr)
			ret = bgp_update(peer, &p, addpath_id, attr, afi, safi,
					 ZEBRA_ROUTE_BGP, BGP_ROUTE_NORMAL,
					 NULL, NULL, 0, 0, NULL);
		else
			ret = bgp_withdraw(peer, &p, addpath_id, attr, afi,
					   safi, ZEBRA_ROUTE_BGP,
					   BGP_ROUTE_NORMAL, NULL, NULL, 0,
					   NULL);

		/* Do not send BGP notification twice when maximum-prefix count
		 * overflow. */
		if (CHECK_FLAG(peer->sflags, PEER_STATUS_PREFIX_OVERFLOW))
			return BGP_NLRI_PARSE_ERROR_PREFIX_OVERFLOW;

		/* Address family configuration mismatch. */
		if (ret < 0)
			return BGP_NLRI_PARSE_ERROR_ADDRESS_FAMILY;
	}

	/* Packet length consistency check. */
	if (pnt != lim) {
		flog_err(
			EC_BGP_UPDATE_RCV,
			"%s [Error] Update packet error (prefix length mismatch with total length)",
			peer->host);
		return BGP_NLRI_PARSE_ERROR_PACKET_LENGTH;
	}

	return BGP_NLRI_PARSE_OK;
}

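The parser above walks entries packed as a one-byte prefix length followed by ceil(prefixlen/8) bytes of prefix, rejecting lengths that exceed the address family's maximum or that would read past the packet. A standalone sketch of that arithmetic and bounds checking for a single non-AddPath entry (the helper names here are illustrative, not FRR API):

```c
#include <stddef.h>
#include <stdint.h>

/* Bytes needed for a prefix of the given bit length, like PSIZE(). */
static int nlri_psize(uint8_t prefixlen)
{
	return (prefixlen + 7) / 8;
}

/* Validate one NLRI entry against the packet bounds.  Returns the
 * entry's total size in bytes, or -1 if it is malformed.  max_bits
 * would be 32 for IPv4 and 128 for IPv6. */
static int nlri_entry_len(const uint8_t *pnt, const uint8_t *lim,
			  int max_bits)
{
	if (pnt >= lim)
		return -1;
	uint8_t plen = *pnt++;
	if (plen > max_bits) /* wrong prefix length for the AFI */
		return -1;
	int psize = nlri_psize(plen);
	if (pnt + psize > lim) /* prefix body overflows the packet */
		return -1;
	return 1 + psize;
}
```

For example, a /24 IPv4 entry occupies 1 + 3 = 4 bytes, while a /25 needs 4 prefix bytes, which is why the real parser can only detect truncation after computing `PSIZE(p.prefixlen)`.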
static struct bgp_static *bgp_static_new(void)
{
	return XCALLOC(MTYPE_BGP_STATIC, sizeof(struct bgp_static));
}

static void bgp_static_free(struct bgp_static *bgp_static)
|
2004-09-13 Jose Luis Rubio <jrubio@dit.upm.es>
(at Technical University of Madrid as part of Euro6ix Project)
Enhanced Route Server functionality and Route-Maps:
* bgpd/bgpd.h: Modified 'struct peer' and 'struct bgp_filter' to
support rs-clients. A 'struct bgp_table *rib' has been added to the
first (to mantain a separated RIB for each rs-client) and two new
route-maps have been added to the last (for import/export policies).
Added the following #defines: RMAP_{IN|OUT|IMPORT|EXPORT|MAX},
PEER_RMAP_TYPE_{IMPORT|EXPORT} and BGP_CLEAR_SOFT_RSCLIENT.
* bgpd/bgpd.c: Modified the functions that create/delete/etc peers in
order to consider the new fields included in 'struct peer' for
supporting rs-clients, i.e. the import/export route-maps and the
'struct bgp_table'.
* bgpd/bgp_route.{ch}: Modified several functions related with
receiving/sending announces in order to support the new Route Server
capabilities.
Function 'bgp_process' has been reorganized, creating an auxiliar
function for best path selection ('bgp_best_selection').
Modified 'bgp_show' and 'bgp_show_route' for displaying information
about any RIB (and not only the main bgp RIB).
Added commands for displaying information about RS-clients RIBs:
'show bgp rsclient (A.B.C.D|X:X::X:X)', 'show bgp rsclient
(A.B.C.D|X:X::X:X) X:X::X:X/M', etc
* bgpd/bgp_table.{ch}: The structure 'struct bgp_table' now has two
new fields: type (which can take the values BGP_TABLE_{MAIN|RSCLIENT})
and 'void *owner' which points to 'struct bgp' or 'struct peer' which
owns the table.
When creating a new bgp_table by default 'type=BGP_TABLE_MAIN' is set.
* bgpd/bgp_vty.c: The commands 'neighbor ... route-server-client' and
'no neighbor ... route-server-client' now not only set/unset the flag
PEER_FLAG_RSERVER_CLIENT, but they create/destroy the 'struct
bgp_table' of the peer. Special actions are taken for peer_groups.
Command 'neighbor ... route-map WORD (in|out)' now also supports two
new kinds of route-map: 'import' and 'export'.
Added commands 'clear bgp * rsclient', etc. These commands allow a new
kind of soft_reconfig which affects only the RIB of the specified
RS-client.
Added commands 'show bgp rsclient summary', etc which display a
summary of the rs-clients configured for the corresponding address
family.
* bgpd/bgp_routemap.c: A new match statement is available,
'match peer (A.B.C.D|X:X::X:X)'. This statement can only be used in
import/export route-maps, and it matches when the peer who announces
(when used in an import route-map) or is going to receive (when used
in an export route-map) the route is the same than the one specified
in the statement.
For peer-groups the statement matches if the specified peer is member
of the peer-group.
A special version of the command, 'match peer local', matches with
routes originated by the Route Server (defined with 'network ...',
redistributed routes and default-originate).
* lib/routemap.{ch}: Added a new clause 'call NAME' for use in
route-maps. It jumps into the specified route-map; when the called
route-map returns, the calling route-map ends if the result was
DENY_MATCH, and continues otherwise.
2004-09-13 07:12:46 +02:00
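The control flow of the 'call NAME' clause described in the changelog above can be sketched as a tiny helper. This is a toy model only; the enum values and function are invented for illustration and are not the lib/routemap API:

```c
#include <assert.h>

/* Hypothetical verdicts standing in for the library's result codes. */
enum call_res { RES_PERMIT, RES_DENY };

/* 'call NAME' control flow: a deny from the called route-map ends the
 * calling route-map immediately; any other result lets the caller
 * continue and keep the verdict of its remaining clauses. */
static enum call_res apply_with_call(enum call_res called_result,
				     enum call_res caller_rest)
{
	if (called_result == RES_DENY)
		return RES_DENY;
	return caller_rest;
}
```

So the called map can only veto; it never overrides a later deny in the caller.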
{
	XFREE(MTYPE_ROUTE_MAP_NAME, bgp_static->rmap.name);
	route_map_counter_decrement(bgp_static->rmap.map);
	XFREE(MTYPE_ATTR, bgp_static->eth_s_id);
	XFREE(MTYPE_BGP_STATIC, bgp_static);
}

void bgp_static_update(struct bgp *bgp, struct prefix *p,
		       struct bgp_static *bgp_static, afi_t afi, safi_t safi)
{
	struct bgp_node *rn;
	struct bgp_path_info *pi;
	struct bgp_path_info *new;
	struct bgp_path_info rmap_path;
	struct attr attr;
	struct attr *attr_new;
lib: Introducing a 3rd state for route-map match cmd: RMAP_NOOP
Introducing a 3rd state for route_map_apply library function: RMAP_NOOP
Traditionally route map MATCH rule apis were designed to return
a binary response, consisting of either RMAP_MATCH or RMAP_NOMATCH.
(Route-map SET rule apis return RMAP_OKAY or RMAP_ERROR).
Depending on this response, the following state machine decides the
course of action:
State1:
If match cmd returns RMAP_MATCH then, keep existing behaviour.
If routemap type is PERMIT, execute set cmds or call cmds if applicable,
otherwise PERMIT!
Else If routemap type is DENY, we DENYMATCH right away
State2:
If match cmd returns RMAP_NOMATCH, continue on to next route-map. If there
are no other rules or if all the rules return RMAP_NOMATCH, return DENYMATCH
We require a 3rd state because of the following situation:
The issue - what if, the rule api needs to abort or ignore a rule?:
"match evpn vni xx" route-map filter can be applied to incoming routes
regardless of whether the tunnel type is vxlan or mpls.
This rule should be N/A for mpls based evpn route, but applicable to only
vxlan based evpn route.
Also, this rule should be applicable for routes with VNI label only, and
not for routes without labels. For example, type 3 and type 4 EVPN routes
do not have labels, so, this match cmd should let them through.
Today, the filter produces either a match or nomatch response regardless of
whether it is mpls/vxlan, resulting in either permitting or denying the
route. So an mpls evpn route may get filtered out incorrectly.
Eg: "route-map RM1 permit 10 ; match evpn vni 20" or
"route-map RM2 deny 20 ; match vni 20"
With the introduction of the 3rd state, we can abort this rule check safely.
How? The rules api can now return RMAP_NOOP to indicate
that it encountered an invalid check, and needs to abort just that rule,
but continue with other rules.
As a result we have a 3rd state:
State3:
If the match cmd returns RMAP_NOOP, proceed to the next rule; if there
are no more rules, or if all the rules return RMAP_NOOP, then return
RMAP_PERMITMATCH.
Signed-off-by: Lakshman Krishnamoorthy <lkrishnamoor@vmware.com>
2019-06-19 23:04:36 +02:00
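The three-state folding described above can be illustrated with a toy model. The real logic lives in lib/routemap.c; the enum and function names here are invented for the sketch:

```c
#include <assert.h>

/* Invented stand-ins for the three per-rule match results. */
enum rule_res { R_MATCH, R_NOMATCH, R_NOOP };

/* One route-map entry matches if no rule returns R_NOMATCH; rules that
 * return R_NOOP are treated as neutral rather than failing the entry,
 * so an entry whose rules all return R_NOOP still lets the route
 * through (the RMAP_PERMITMATCH case described above). */
static int entry_matches(const enum rule_res *rules, int n)
{
	for (int i = 0; i < n; i++)
		if (rules[i] == R_NOMATCH)
			return 0; /* one NOMATCH fails the whole entry */
	return 1; /* only MATCH/NOOP results: the entry applies */
}
```

With this model, an N/A "match evpn vni" check on an mpls route reports R_NOOP and no longer forces a deny.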
	route_map_result_t ret;
bgpd: add L3/L2VPN Virtual Network Control feature
This feature adds an L3 & L2 VPN application that makes use of the VPN
and Encap SAFIs. This code is currently used to support IETF NVO3 style
operation. In NVO3 terminology it provides the Network Virtualization
Authority (NVA) and the ability to import/export IP prefixes and MAC
addresses from Network Virtualization Edges (NVEs). The code supports
per-NVE tables.
The NVE-NVA protocol used to communicate routing and Ethernet / Layer 2
(L2) forwarding information between NVAs and NVEs is referred to as the
Remote Forwarder Protocol (RFP). OpenFlow is an example RFP. For
general background on NVO3 and RFP concepts see [1]. For information on
Openflow see [2].
RFPs are integrated with BGP via the RF API contained in the new "rfapi"
BGP sub-directory. Currently, only a simple example RFP is included in
Quagga. Developers may use this example as a starting point to integrate
Quagga with an RFP of their choosing, e.g., OpenFlow. The RFAPI code
also supports the ability to import/export routing information between
VNC and customer edge routers (CEs) operating within a virtual
network. Import/export may take place between BGP views or to the
default zebra VRF.
BGP, with IP VPNs and Tunnel Encapsulation, is used to distribute VPN
information between NVAs. BGP based IP VPN support is defined in
RFC4364, BGP/MPLS IP Virtual Private Networks (VPNs), and RFC4659,
BGP-MPLS IP Virtual Private Network (VPN) Extension for IPv6 VPN. Use
of both the Encapsulation Subsequent Address Family Identifier (SAFI)
and the Tunnel Encapsulation Attribute, RFC5512, The BGP Encapsulation
Subsequent Address Family Identifier (SAFI) and the BGP Tunnel
Encapsulation Attribute, are supported. MAC address distribution does
not follow any standard BGP encoding, although it was inspired by the
early IETF EVPN concepts.
The feature is conditionally compiled and disabled by default.
Use the --enable-bgp-vnc configure option to enable.
The majority of this code was authored by G. Paul Ziemba
<paulz@labn.net>.
[1] http://tools.ietf.org/html/draft-ietf-nvo3-nve-nva-cp-req
[2] https://www.opennetworking.org/sdn-resources/technical-library
Now includes changes needed to merge with cmaster-next.
2016-05-07 20:18:56 +02:00
#if ENABLE_BGP_VNC
	int vnc_implicit_withdraw = 0;
#endif

	assert(bgp_static);
	if (!bgp_static)
		return;

	rn = bgp_afi_node_get(bgp->rib[afi][safi], afi, safi, p, NULL);

	bgp_attr_default_set(&attr, BGP_ORIGIN_IGP);

	attr.nexthop = bgp_static->igpnexthop;
	attr.med = bgp_static->igpmetric;
	attr.flag |= ATTR_FLAG_BIT(BGP_ATTR_MULTI_EXIT_DISC);

	if (bgp_static->atomic)
		attr.flag |= ATTR_FLAG_BIT(BGP_ATTR_ATOMIC_AGGREGATE);

	/* Store label index, if required. */
	if (bgp_static->label_index != BGP_INVALID_LABEL_INDEX) {
		attr.label_index = bgp_static->label_index;
		attr.flag |= ATTR_FLAG_BIT(BGP_ATTR_PREFIX_SID);
	}

	/* Apply route-map. */
	if (bgp_static->rmap.name) {
		struct attr attr_tmp = attr;

		memset(&rmap_path, 0, sizeof(struct bgp_path_info));
		rmap_path.peer = bgp->peer_self;
		rmap_path.attr = &attr_tmp;

		SET_FLAG(bgp->peer_self->rmap_type, PEER_RMAP_TYPE_NETWORK);

		ret = route_map_apply(bgp_static->rmap.map, p, RMAP_BGP,
				      &rmap_path);

		bgp->peer_self->rmap_type = 0;

		if (ret == RMAP_DENYMATCH) {
			/* Free uninterned attribute. */
			bgp_attr_flush(&attr_tmp);

			/* Unintern original. */
			aspath_unintern(&attr.aspath);
			bgp_static_withdraw(bgp, p, afi, safi);
			return;
		}

		if (bgp_flag_check(bgp, BGP_FLAG_GRACEFUL_SHUTDOWN))
			bgp_attr_add_gshut_community(&attr_tmp);

		attr_new = bgp_attr_intern(&attr_tmp);
	} else {

		if (bgp_flag_check(bgp, BGP_FLAG_GRACEFUL_SHUTDOWN))
			bgp_attr_add_gshut_community(&attr);

		attr_new = bgp_attr_intern(&attr);
	}

	for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next)
		if (pi->peer == bgp->peer_self && pi->type == ZEBRA_ROUTE_BGP
		    && pi->sub_type == BGP_ROUTE_STATIC)
			break;

	if (pi) {
		if (attrhash_cmp(pi->attr, attr_new)
		    && !CHECK_FLAG(pi->flags, BGP_PATH_REMOVED)
		    && !bgp_flag_check(bgp, BGP_FLAG_FORCE_STATIC_PROCESS)) {
			bgp_unlock_node(rn);
			bgp_attr_unintern(&attr_new);
			aspath_unintern(&attr.aspath);
			return;
		} else {
			/* The attribute is changed. */
			bgp_path_info_set_flag(rn, pi, BGP_PATH_ATTR_CHANGED);

			/* Rewrite BGP route information. */
			if (CHECK_FLAG(pi->flags, BGP_PATH_REMOVED))
				bgp_path_info_restore(rn, pi);
			else
				bgp_aggregate_decrement(bgp, p, pi, afi, safi);
#if ENABLE_BGP_VNC
			if ((afi == AFI_IP || afi == AFI_IP6)
			    && (safi == SAFI_UNICAST)) {
				if (CHECK_FLAG(pi->flags, BGP_PATH_SELECTED)) {
					/*
					 * Implicit withdraw case.
					 * We have to do this before pi is
					 * changed.
					 */
					++vnc_implicit_withdraw;
					vnc_import_bgp_del_route(bgp, p, pi);
					vnc_import_bgp_exterior_del_route(
						bgp, p, pi);
				}
			}
#endif
			bgp_attr_unintern(&pi->attr);
			pi->attr = attr_new;
			pi->uptime = bgp_clock();
#if ENABLE_BGP_VNC
			if ((afi == AFI_IP || afi == AFI_IP6)
			    && (safi == SAFI_UNICAST)) {
				if (vnc_implicit_withdraw) {
					vnc_import_bgp_add_route(bgp, p, pi);
					vnc_import_bgp_exterior_add_route(
						bgp, p, pi);
				}
			}
#endif

			/* Nexthop reachability check. */
			if (bgp_flag_check(bgp, BGP_FLAG_IMPORT_CHECK)
			    && (safi == SAFI_UNICAST
				|| safi == SAFI_LABELED_UNICAST)) {

				struct bgp *bgp_nexthop = bgp;

				if (pi->extra && pi->extra->bgp_orig)
					bgp_nexthop = pi->extra->bgp_orig;

				if (bgp_find_or_add_nexthop(bgp, bgp_nexthop,
							    afi, pi, NULL, 0))
					bgp_path_info_set_flag(rn, pi,
							       BGP_PATH_VALID);
				else {
					if (BGP_DEBUG(nht, NHT)) {
						char buf1[INET6_ADDRSTRLEN];
						inet_ntop(p->family,
							  &p->u.prefix, buf1,
							  INET6_ADDRSTRLEN);
						zlog_debug(
							"%s(%s): Route not in table, not advertising",
							__FUNCTION__, buf1);
					}
					bgp_path_info_unset_flag(
						rn, pi, BGP_PATH_VALID);
				}
			} else {
				/* Delete the NHT structure if any, if we're
				 * toggling between enabling/disabling import
				 * check. We deregister the route from NHT to
				 * avoid overloading NHT and the process
				 * interaction.
				 */
				bgp_unlink_nexthop(pi);
				bgp_path_info_set_flag(rn, pi, BGP_PATH_VALID);
			}

			/* Process change. */
			bgp_aggregate_increment(bgp, p, pi, afi, safi);
			bgp_process(bgp, rn, afi, safi);

			if (SAFI_UNICAST == safi
			    && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF
				|| bgp->inst_type
					   == BGP_INSTANCE_TYPE_DEFAULT)) {
				vpn_leak_from_vrf_update(bgp_get_default(), bgp,
							 pi);
			}

			bgp_unlock_node(rn);
			aspath_unintern(&attr.aspath);
			return;
		}
	}
|
	/* Make new BGP info. */
	new = info_make(ZEBRA_ROUTE_BGP, BGP_ROUTE_STATIC, 0, bgp->peer_self,
			attr_new, rn);
	/* Nexthop reachability check. */
	if (bgp_flag_check(bgp, BGP_FLAG_IMPORT_CHECK)
	    && (safi == SAFI_UNICAST || safi == SAFI_LABELED_UNICAST)) {
		if (bgp_find_or_add_nexthop(bgp, bgp, afi, new, NULL, 0))
			bgp_path_info_set_flag(rn, new, BGP_PATH_VALID);
		else {
			if (BGP_DEBUG(nht, NHT)) {
				char buf1[INET6_ADDRSTRLEN];
				inet_ntop(p->family, &p->u.prefix, buf1,
					  INET6_ADDRSTRLEN);
				zlog_debug(
					"%s(%s): Route not in table, not advertising",
					__FUNCTION__, buf1);
			}
			bgp_path_info_unset_flag(rn, new, BGP_PATH_VALID);
		}
	} else {
		/* Delete the NHT structure if any, if we're toggling between
		 * enabling/disabling import check. We deregister the route
		 * from NHT to avoid overloading NHT and the process
		 * interaction.
		 */
		bgp_unlink_nexthop(new);

		bgp_path_info_set_flag(rn, new, BGP_PATH_VALID);
	}

	/* Aggregate address increment. */
	bgp_aggregate_increment(bgp, p, new, afi, safi);

	/* Register new BGP information. */
	bgp_path_info_add(rn, new);

	/* route_node_get lock */
	bgp_unlock_node(rn);

	/* Process change. */
	bgp_process(bgp, rn, afi, safi);

	if (SAFI_UNICAST == safi
	    && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF
		|| bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {
		vpn_leak_from_vrf_update(bgp_get_default(), bgp, new);
	}

	/* Unintern original. */
	aspath_unintern(&attr.aspath);
}
void bgp_static_withdraw(struct bgp *bgp, struct prefix *p, afi_t afi,
			 safi_t safi)
{
	struct bgp_node *rn;
	struct bgp_path_info *pi;

	rn = bgp_afi_node_get(bgp->rib[afi][safi], afi, safi, p, NULL);

	/* Check selected route and self inserted route. */
	for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next)
		if (pi->peer == bgp->peer_self && pi->type == ZEBRA_ROUTE_BGP
		    && pi->sub_type == BGP_ROUTE_STATIC)
			break;

	/* Withdraw static BGP route from routing table. */
	if (pi) {
		if (SAFI_UNICAST == safi
		    && (bgp->inst_type == BGP_INSTANCE_TYPE_VRF
			|| bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {
			vpn_leak_from_vrf_withdraw(bgp_get_default(), bgp, pi);
		}
		bgp_aggregate_decrement(bgp, p, pi, afi, safi);
		bgp_unlink_nexthop(pi);
		bgp_path_info_delete(rn, pi);
		bgp_process(bgp, rn, afi, safi);
	}

	/* Unlock bgp_node_lookup. */
	bgp_unlock_node(rn);
}

/*
 * Used for SAFI_MPLS_VPN and SAFI_ENCAP
 */
static void bgp_static_withdraw_safi(struct bgp *bgp, struct prefix *p,
				     afi_t afi, safi_t safi,
				     struct prefix_rd *prd)
{
	struct bgp_node *rn;
	struct bgp_path_info *pi;

	rn = bgp_afi_node_get(bgp->rib[afi][safi], afi, safi, p, prd);

	/* Check selected route and self inserted route. */
	for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next)
		if (pi->peer == bgp->peer_self && pi->type == ZEBRA_ROUTE_BGP
		    && pi->sub_type == BGP_ROUTE_STATIC)
			break;

	/* Withdraw static BGP route from routing table. */
	if (pi) {
#if ENABLE_BGP_VNC
		rfapiProcessWithdraw(
			pi->peer, NULL, p, prd, pi->attr, afi, safi, pi->type,
			1); /* Kill, since it is an administrative change */
#endif
		if (SAFI_MPLS_VPN == safi
		    && bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT) {
			vpn_leak_to_vrf_withdraw(bgp, pi);
		}
		bgp_aggregate_decrement(bgp, p, pi, afi, safi);
		bgp_path_info_delete(rn, pi);
		bgp_process(bgp, rn, afi, safi);
	}

	/* Unlock bgp_node_lookup. */
	bgp_unlock_node(rn);
}

static void bgp_static_update_safi(struct bgp *bgp, struct prefix *p,
				   struct bgp_static *bgp_static, afi_t afi,
				   safi_t safi)
{
	struct bgp_node *rn;
	struct bgp_path_info *new;
	struct attr *attr_new;
	struct attr attr = {0};
	struct bgp_path_info *pi;
#if ENABLE_BGP_VNC
	mpls_label_t label = 0;
#endif
	uint32_t num_labels = 0;
	union gw_addr add;

	assert(bgp_static);

	if (bgp_static->label != MPLS_INVALID_LABEL)
		num_labels = 1;
	rn = bgp_afi_node_get(bgp->rib[afi][safi], afi, safi, p,
			      &bgp_static->prd);

	bgp_attr_default_set(&attr, BGP_ORIGIN_IGP);

	attr.nexthop = bgp_static->igpnexthop;
	attr.med = bgp_static->igpmetric;
	attr.flag |= ATTR_FLAG_BIT(BGP_ATTR_MULTI_EXIT_DISC);

	if ((safi == SAFI_EVPN) || (safi == SAFI_MPLS_VPN)
	    || (safi == SAFI_ENCAP)) {
		if (afi == AFI_IP) {
			attr.mp_nexthop_global_in = bgp_static->igpnexthop;
			attr.mp_nexthop_len = IPV4_MAX_BYTELEN;
		}
	}
	if (afi == AFI_L2VPN) {
		if (bgp_static->gatewayIp.family == AF_INET)
			add.ipv4.s_addr =
				bgp_static->gatewayIp.u.prefix4.s_addr;
		else if (bgp_static->gatewayIp.family == AF_INET6)
			memcpy(&(add.ipv6), &(bgp_static->gatewayIp.u.prefix6),
			       sizeof(struct in6_addr));
		overlay_index_update(&attr, bgp_static->eth_s_id, &add);
		if (bgp_static->encap_tunneltype == BGP_ENCAP_TYPE_VXLAN) {
			struct bgp_encap_type_vxlan bet;

			memset(&bet, 0, sizeof(struct bgp_encap_type_vxlan));
			bet.vnid = p->u.prefix_evpn.prefix_addr.eth_tag;
			bgp_encap_type_vxlan_to_tlv(&bet, &attr);
		}
		if (bgp_static->router_mac) {
			bgp_add_routermac_ecom(&attr, bgp_static->router_mac);
		}
	}

	/* Apply route-map. */
	if (bgp_static->rmap.name) {
		struct attr attr_tmp = attr;
		struct bgp_path_info rmap_path;
lib: Introducing a 3rd state for route-map match cmd: RMAP_NOOP
Introducing a 3rd state for route_map_apply library function: RMAP_NOOP
Traditionally route map MATCH rule apis were designed to return
a binary response, consisting of either RMAP_MATCH or RMAP_NOMATCH.
(Route-map SET rule apis return RMAP_OKAY or RMAP_ERROR).
Depending on this response, the following statemachine decided the
course of action:
State1:
If match cmd returns RMAP_MATCH then, keep existing behaviour.
If routemap type is PERMIT, execute set cmds or call cmds if applicable,
otherwise PERMIT!
Else If routemap type is DENY, we DENYMATCH right away
State2:
If match cmd returns RMAP_NOMATCH, continue on to next route-map. If there
are no other rules or if all the rules return RMAP_NOMATCH, return DENYMATCH
We require a 3rd state because of the following situation:
The issue - what if, the rule api needs to abort or ignore a rule?:
"match evpn vni xx" route-map filter can be applied to incoming routes
regardless of whether the tunnel type is vxlan or mpls.
This rule should be N/A for mpls based evpn route, but applicable to only
vxlan based evpn route.
Also, this rule should be applicable for routes with VNI label only, and
not for routes without labels. For example, type 3 and type 4 EVPN routes
do not have labels, so, this match cmd should let them through.
Today, the filter produces either a match or nomatch response regardless of
whether it is mpls/vxlan, resulting in either permitting or denying the
route.. So an mpls evpn route may get filtered out incorrectly.
Eg: "route-map RM1 permit 10 ; match evpn vni 20" or
"route-map RM2 deny 20 ; match vni 20"
With the introduction of the 3rd state, we can abort this rule check safely.
How? The rules api can now return RMAP_NOOP to indicate
that it encountered an invalid check, and needs to abort just that rule,
but continue with other rules.
As a result we have a 3rd state:
State3:
If match cmd returned RMAP_NOOP
Then, proceed to other route-map, otherwise if there are no more
rules or if all the rules return RMAP_NOOP, then, return RMAP_PERMITMATCH.
Signed-off-by: Lakshman Krishnamoorthy <lkrishnamoor@vmware.com>
2019-06-19 23:04:36 +02:00
		route_map_result_t ret;

		rmap_path.peer = bgp->peer_self;
		rmap_path.attr = &attr_tmp;

		SET_FLAG(bgp->peer_self->rmap_type, PEER_RMAP_TYPE_NETWORK);

		ret = route_map_apply(bgp_static->rmap.map, p, RMAP_BGP,
				      &rmap_path);

		bgp->peer_self->rmap_type = 0;

		if (ret == RMAP_DENYMATCH) {
			/* Free uninterned attribute. */
			bgp_attr_flush(&attr_tmp);

			/* Unintern original. */
			aspath_unintern(&attr.aspath);
			bgp_static_withdraw_safi(bgp, p, afi, safi,
						 &bgp_static->prd);
			return;
		}

		attr_new = bgp_attr_intern(&attr_tmp);
	} else {
		attr_new = bgp_attr_intern(&attr);
	}

	for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next)
		if (pi->peer == bgp->peer_self && pi->type == ZEBRA_ROUTE_BGP
		    && pi->sub_type == BGP_ROUTE_STATIC)
			break;

	if (pi) {
		memset(&add, 0, sizeof(union gw_addr));
		if (attrhash_cmp(pi->attr, attr_new)
		    && overlay_index_equal(afi, pi, bgp_static->eth_s_id, &add)
		    && !CHECK_FLAG(pi->flags, BGP_PATH_REMOVED)) {
			bgp_unlock_node(rn);
			bgp_attr_unintern(&attr_new);
			aspath_unintern(&attr.aspath);
			return;
		} else {
			/* The attribute is changed. */
			bgp_path_info_set_flag(rn, pi, BGP_PATH_ATTR_CHANGED);

			/* Rewrite BGP route information. */
			if (CHECK_FLAG(pi->flags, BGP_PATH_REMOVED))
				bgp_path_info_restore(rn, pi);
			else
				bgp_aggregate_decrement(bgp, p, pi, afi, safi);
			bgp_attr_unintern(&pi->attr);
			pi->attr = attr_new;
			pi->uptime = bgp_clock();
#if ENABLE_BGP_VNC
			if (pi->extra)
				label = decode_label(&pi->extra->label[0]);
#endif
|
2016-01-12 19:41:53 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Process change. */
|
2018-10-03 02:43:07 +02:00
|
|
|
bgp_aggregate_increment(bgp, p, pi, afi, safi);
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_process(bgp, rn, afi, safi);
|
2018-03-09 21:52:55 +01:00
|
|
|
|
|
|
|
if (SAFI_MPLS_VPN == safi
|
|
|
|
&& bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT) {
|
2018-10-03 02:43:07 +02:00
|
|
|
vpn_leak_to_vrf_update(bgp, pi);
|
2018-03-09 21:52:55 +01:00
|
|
|
}
|
bgpd: add L3/L2VPN Virtual Network Control feature
This feature adds an L3 & L2 VPN application that makes use of the VPN
and Encap SAFIs. This code is currently used to support IETF NVO3 style
operation. In NVO3 terminology it provides the Network Virtualization
Authority (NVA) and the ability to import/export IP prefixes and MAC
addresses from Network Virtualization Edges (NVEs). The code supports
per-NVE tables.
The NVE-NVA protocol used to communicate routing and Ethernet / Layer 2
(L2) forwarding information between NVAs and NVEs is referred to as the
Remote Forwarder Protocol (RFP). OpenFlow is an example RFP. For
general background on NVO3 and RFP concepts see [1]. For information on
Openflow see [2].
RFPs are integrated with BGP via the RF API contained in the new "rfapi"
BGP sub-directory. Currently, only a simple example RFP is included in
Quagga. Developers may use this example as a starting point to integrate
Quagga with an RFP of their choosing, e.g., OpenFlow. The RFAPI code
also supports the ability import/export of routing information between
VNC and customer edge routers (CEs) operating within a virtual
network. Import/export may take place between BGP views or to the
default zebera VRF.
BGP, with IP VPNs and Tunnel Encapsulation, is used to distribute VPN
information between NVAs. BGP based IP VPN support is defined in
RFC4364, BGP/MPLS IP Virtual Private Networks (VPNs), and RFC4659,
BGP-MPLS IP Virtual Private Network (VPN) Extension for IPv6 VPN . Use
of both the Encapsulation Subsequent Address Family Identifier (SAFI)
and the Tunnel Encapsulation Attribute, RFC5512, The BGP Encapsulation
Subsequent Address Family Identifier (SAFI) and the BGP Tunnel
Encapsulation Attribute, are supported. MAC address distribution does
not follow any standard BGB encoding, although it was inspired by the
early IETF EVPN concepts.
The feature is conditionally compiled and disabled by default.
Use the --enable-bgp-vnc configure option to enable.
The majority of this code was authored by G. Paul Ziemba
<paulz@labn.net>.
[1] http://tools.ietf.org/html/draft-ietf-nvo3-nve-nva-cp-req
[2] https://www.opennetworking.org/sdn-resources/technical-library
Now includes changes needed to merge with cmaster-next.
2016-05-07 20:18:56 +02:00
|
|
|
#if ENABLE_BGP_VNC
|
2018-10-03 02:43:07 +02:00
|
|
|
rfapiProcessUpdate(pi->peer, NULL, p, &bgp_static->prd,
|
|
|
|
pi->attr, afi, safi, pi->type,
|
|
|
|
pi->sub_type, &label);
|
2016-05-07 20:18:56 +02:00
|
|
|
#endif
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_unlock_node(rn);
|
|
|
|
aspath_unintern(&attr.aspath);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
2016-01-12 19:41:53 +01:00
|
|
|
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Make new BGP info. */
|
|
|
|
new = info_make(ZEBRA_ROUTE_BGP, BGP_ROUTE_STATIC, 0, bgp->peer_self,
|
|
|
|
attr_new, rn);
|
2018-09-14 02:34:42 +02:00
|
|
|
SET_FLAG(new->flags, BGP_PATH_VALID);
|
2018-10-03 00:15:34 +02:00
|
|
|
new->extra = bgp_path_info_extra_new();
|
2017-11-21 11:42:05 +01:00
|
|
|
if (num_labels) {
|
|
|
|
new->extra->label[0] = bgp_static->label;
|
|
|
|
new->extra->num_labels = num_labels;
|
|
|
|
}
|
2016-05-07 20:18:56 +02:00
|
|
|
#if ENABLE_BGP_VNC
|
2017-07-17 14:03:14 +02:00
|
|
|
label = decode_label(&bgp_static->label);
|
2016-05-07 20:18:56 +02:00
|
|
|
#endif
|
2016-01-12 19:41:53 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Aggregate address increment. */
|
|
|
|
bgp_aggregate_increment(bgp, p, new, afi, safi);
|
2016-01-12 19:41:53 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Register new BGP information. */
|
2018-10-03 00:15:34 +02:00
|
|
|
bgp_path_info_add(rn, new);
|
2017-07-17 14:03:14 +02:00
|
|
|
/* route_node_get lock */
|
|
|
|
bgp_unlock_node(rn);
|
2016-01-12 19:41:53 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Process change. */
|
|
|
|
bgp_process(bgp, rn, afi, safi);
|
2016-01-12 19:41:53 +01:00
|
|
|
|
2018-03-09 21:52:55 +01:00
|
|
|
if (SAFI_MPLS_VPN == safi
|
|
|
|
&& bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT) {
|
|
|
|
vpn_leak_to_vrf_update(bgp, new);
|
|
|
|
}
|
2016-05-07 20:18:56 +02:00
|
|
|
#if ENABLE_BGP_VNC
|
2017-07-17 14:03:14 +02:00
|
|
|
rfapiProcessUpdate(new->peer, NULL, p, &bgp_static->prd, new->attr, afi,
|
|
|
|
safi, new->type, new->sub_type, &label);
|
2016-05-07 20:18:56 +02:00
|
|
|
#endif
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Unintern original. */
|
|
|
|
aspath_unintern(&attr.aspath);
|
2016-01-12 19:41:53 +01:00
|
|
|
}
|
|
|
|
|
2002-12-13 21:15:29 +01:00
|
|
|
/* Configure static BGP network. When the user doesn't run zebra, static
|
|
|
|
routes should be installed as valid. */
|
2017-12-18 16:40:56 +01:00
|
|
|
static int bgp_static_set(struct vty *vty, const char *negate,
|
|
|
|
const char *ip_str, afi_t afi, safi_t safi,
|
2018-03-27 21:13:34 +02:00
|
|
|
const char *rmap, int backdoor, uint32_t label_index)
|
2017-07-17 14:03:14 +02:00
|
|
|
{
|
|
|
|
VTY_DECLVAR_CONTEXT(bgp, bgp);
|
|
|
|
int ret;
|
|
|
|
struct prefix p;
|
|
|
|
struct bgp_static *bgp_static;
|
|
|
|
struct bgp_node *rn;
|
2018-03-27 21:13:34 +02:00
|
|
|
uint8_t need_update = 0;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
|
|
|
/* Convert IP prefix string to struct prefix. */
|
|
|
|
ret = str2prefix(ip_str, &p);
|
|
|
|
if (!ret) {
|
|
|
|
vty_out(vty, "%% Malformed prefix\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
|
|
|
if (afi == AFI_IP6 && IN6_IS_ADDR_LINKLOCAL(&p.u.prefix6)) {
|
|
|
|
vty_out(vty, "%% Malformed prefix (link-local address)\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
apply_mask(&p);
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
if (negate) {
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
/* Look up existing BGP static route configuration. */
|
|
|
|
rn = bgp_node_lookup(bgp->route[afi][safi], &p);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
if (!rn) {
|
2018-02-09 19:22:50 +01:00
|
|
|
vty_out(vty, "%% Can't find static route specified\n");
|
2017-07-17 14:03:14 +02:00
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
|
|
|
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_static = bgp_node_get_bgp_static_info(rn);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
if ((label_index != BGP_INVALID_LABEL_INDEX)
|
|
|
|
&& (label_index != bgp_static->label_index)) {
|
|
|
|
vty_out(vty,
|
|
|
|
"%% label-index doesn't match static route\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
if ((rmap && bgp_static->rmap.name)
|
|
|
|
&& strcmp(rmap, bgp_static->rmap.name)) {
|
|
|
|
vty_out(vty,
|
|
|
|
"%% route-map name doesn't match static route\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
/* Update BGP RIB. */
|
|
|
|
if (!bgp_static->backdoor)
|
|
|
|
bgp_static_withdraw(bgp, &p, afi, safi);
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
/* Clear configuration. */
|
|
|
|
bgp_static_free(bgp_static);
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_node_set_bgp_static_info(rn, NULL);
|
2017-12-18 16:40:56 +01:00
|
|
|
bgp_unlock_node(rn);
|
|
|
|
bgp_unlock_node(rn);
|
|
|
|
} else {
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
/* Set BGP static route configuration. */
|
|
|
|
rn = bgp_node_get(bgp->route[afi][safi], &p);
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_static = bgp_node_get_bgp_static_info(rn);
|
2018-07-30 16:30:41 +02:00
|
|
|
if (bgp_static) {
|
2017-12-18 16:40:56 +01:00
|
|
|
/* Configuration change. */
|
|
|
|
/* Label index cannot be changed. */
|
|
|
|
if (bgp_static->label_index != label_index) {
|
|
|
|
vty_out(vty, "%% cannot change label-index\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
/* Check previous routes are installed into BGP. */
|
2018-02-09 19:22:50 +01:00
|
|
|
if (bgp_static->valid
|
|
|
|
&& bgp_static->backdoor != backdoor)
|
2017-12-18 16:40:56 +01:00
|
|
|
need_update = 1;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
bgp_static->backdoor = backdoor;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
if (rmap) {
|
2019-02-25 21:18:13 +01:00
|
|
|
XFREE(MTYPE_ROUTE_MAP_NAME,
|
|
|
|
bgp_static->rmap.name);
|
2019-02-04 14:27:56 +01:00
|
|
|
route_map_counter_decrement(
|
|
|
|
bgp_static->rmap.map);
|
2017-12-18 16:40:56 +01:00
|
|
|
bgp_static->rmap.name =
|
|
|
|
XSTRDUP(MTYPE_ROUTE_MAP_NAME, rmap);
|
|
|
|
bgp_static->rmap.map =
|
|
|
|
route_map_lookup_by_name(rmap);
|
2019-02-04 14:27:56 +01:00
|
|
|
route_map_counter_increment(
|
|
|
|
bgp_static->rmap.map);
|
2017-12-18 16:40:56 +01:00
|
|
|
} else {
|
2019-02-25 21:18:13 +01:00
|
|
|
XFREE(MTYPE_ROUTE_MAP_NAME,
|
|
|
|
bgp_static->rmap.name);
|
2019-02-04 14:27:56 +01:00
|
|
|
route_map_counter_decrement(
|
|
|
|
bgp_static->rmap.map);
|
2017-12-18 16:40:56 +01:00
|
|
|
bgp_static->rmap.name = NULL;
|
|
|
|
bgp_static->rmap.map = NULL;
|
|
|
|
bgp_static->valid = 0;
|
|
|
|
}
|
|
|
|
bgp_unlock_node(rn);
|
|
|
|
} else {
|
|
|
|
/* New configuration. */
|
|
|
|
bgp_static = bgp_static_new();
|
|
|
|
bgp_static->backdoor = backdoor;
|
|
|
|
bgp_static->valid = 0;
|
|
|
|
bgp_static->igpmetric = 0;
|
|
|
|
bgp_static->igpnexthop.s_addr = 0;
|
|
|
|
bgp_static->label_index = label_index;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
if (rmap) {
|
2019-02-25 21:18:13 +01:00
|
|
|
XFREE(MTYPE_ROUTE_MAP_NAME,
|
|
|
|
bgp_static->rmap.name);
|
2019-02-04 14:27:56 +01:00
|
|
|
route_map_counter_decrement(
|
|
|
|
bgp_static->rmap.map);
|
2017-12-18 16:40:56 +01:00
|
|
|
bgp_static->rmap.name =
|
|
|
|
XSTRDUP(MTYPE_ROUTE_MAP_NAME, rmap);
|
|
|
|
bgp_static->rmap.map =
|
|
|
|
route_map_lookup_by_name(rmap);
|
2019-02-04 14:27:56 +01:00
|
|
|
route_map_counter_increment(
|
|
|
|
bgp_static->rmap.map);
|
2017-12-18 16:40:56 +01:00
|
|
|
}
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_node_set_bgp_static_info(rn, bgp_static);
|
2017-12-18 16:40:56 +01:00
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
bgp_static->valid = 1;
|
|
|
|
if (need_update)
|
|
|
|
bgp_static_withdraw(bgp, &p, afi, safi);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
if (!bgp_static->backdoor)
|
|
|
|
bgp_static_update(bgp, &p, bgp_static, afi, safi);
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
void bgp_static_add(struct bgp *bgp)
|
|
|
|
{
|
|
|
|
afi_t afi;
|
|
|
|
safi_t safi;
|
|
|
|
struct bgp_node *rn;
|
|
|
|
struct bgp_node *rm;
|
|
|
|
struct bgp_table *table;
|
|
|
|
struct bgp_static *bgp_static;
|
|
|
|
|
2017-11-21 19:02:06 +01:00
|
|
|
FOREACH_AFI_SAFI (afi, safi)
|
|
|
|
for (rn = bgp_table_top(bgp->route[afi][safi]); rn;
|
|
|
|
rn = bgp_route_next(rn)) {
|
2018-09-26 02:37:16 +02:00
|
|
|
if (!bgp_node_has_bgp_path_info_data(rn))
|
2017-11-21 19:02:06 +01:00
|
|
|
continue;
|
2017-08-27 22:51:35 +02:00
|
|
|
|
2017-11-21 19:02:06 +01:00
|
|
|
if ((safi == SAFI_MPLS_VPN) || (safi == SAFI_ENCAP)
|
|
|
|
|| (safi == SAFI_EVPN)) {
|
2018-09-26 02:37:16 +02:00
|
|
|
table = bgp_node_get_bgp_table_info(rn);
|
2017-11-21 19:02:06 +01:00
|
|
|
|
|
|
|
for (rm = bgp_table_top(table); rm;
|
|
|
|
rm = bgp_route_next(rm)) {
|
2018-07-30 16:30:41 +02:00
|
|
|
bgp_static =
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_node_get_bgp_static_info(
|
|
|
|
rm);
|
2017-11-21 19:02:06 +01:00
|
|
|
bgp_static_update_safi(bgp, &rm->p,
|
|
|
|
bgp_static, afi,
|
|
|
|
safi);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2017-11-21 19:02:06 +01:00
|
|
|
} else {
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_static_update(
|
|
|
|
bgp, &rn->p,
|
|
|
|
bgp_node_get_bgp_static_info(rn), afi,
|
|
|
|
safi);
|
2017-08-27 22:51:35 +02:00
|
|
|
}
|
2017-11-21 19:02:06 +01:00
|
|
|
}
|
2016-02-02 13:36:20 +01:00
|
|
|
}
|
|
|
|
|
2002-12-13 21:15:29 +01:00
|
|
|
/* Called from bgp_delete(). Delete all static routes from the BGP
|
|
|
|
instance. */
|
2017-07-17 14:03:14 +02:00
|
|
|
void bgp_static_delete(struct bgp *bgp)
|
|
|
|
{
|
|
|
|
afi_t afi;
|
|
|
|
safi_t safi;
|
|
|
|
struct bgp_node *rn;
|
|
|
|
struct bgp_node *rm;
|
|
|
|
struct bgp_table *table;
|
|
|
|
struct bgp_static *bgp_static;
|
|
|
|
|
2017-11-21 19:02:06 +01:00
|
|
|
FOREACH_AFI_SAFI (afi, safi)
|
|
|
|
for (rn = bgp_table_top(bgp->route[afi][safi]); rn;
|
|
|
|
rn = bgp_route_next(rn)) {
|
2018-09-26 02:37:16 +02:00
|
|
|
if (!bgp_node_has_bgp_path_info_data(rn))
|
2017-11-21 19:02:06 +01:00
|
|
|
continue;
|
2017-08-27 22:51:35 +02:00
|
|
|
|
2017-11-21 19:02:06 +01:00
|
|
|
if ((safi == SAFI_MPLS_VPN) || (safi == SAFI_ENCAP)
|
|
|
|
|| (safi == SAFI_EVPN)) {
|
2018-09-26 02:37:16 +02:00
|
|
|
table = bgp_node_get_bgp_table_info(rn);
|
2017-11-21 19:02:06 +01:00
|
|
|
|
|
|
|
for (rm = bgp_table_top(table); rm;
|
|
|
|
rm = bgp_route_next(rm)) {
|
2018-07-30 16:30:41 +02:00
|
|
|
bgp_static =
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_node_get_bgp_static_info(
|
|
|
|
rm);
|
2019-01-21 17:19:53 +01:00
|
|
|
if (!bgp_static)
|
|
|
|
continue;
|
|
|
|
|
2017-11-21 19:02:06 +01:00
|
|
|
bgp_static_withdraw_safi(
|
|
|
|
bgp, &rm->p, AFI_IP, safi,
|
|
|
|
(struct prefix_rd *)&rn->p);
|
2017-08-27 22:51:35 +02:00
|
|
|
bgp_static_free(bgp_static);
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_node_set_bgp_static_info(rn, NULL);
|
2017-08-27 22:51:35 +02:00
|
|
|
bgp_unlock_node(rn);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2017-11-21 19:02:06 +01:00
|
|
|
} else {
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_static = bgp_node_get_bgp_static_info(rn);
|
2017-11-21 19:02:06 +01:00
|
|
|
bgp_static_withdraw(bgp, &rn->p, afi, safi);
|
|
|
|
bgp_static_free(bgp_static);
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_node_set_bgp_static_info(rn, NULL);
|
2017-11-21 19:02:06 +01:00
|
|
|
bgp_unlock_node(rn);
|
2017-08-27 22:51:35 +02:00
|
|
|
}
|
2017-11-21 19:02:06 +01:00
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
void bgp_static_redo_import_check(struct bgp *bgp)
|
|
|
|
{
|
|
|
|
afi_t afi;
|
|
|
|
safi_t safi;
|
|
|
|
struct bgp_node *rn;
|
|
|
|
struct bgp_node *rm;
|
|
|
|
struct bgp_table *table;
|
|
|
|
struct bgp_static *bgp_static;
|
|
|
|
|
|
|
|
/* Use this flag to force reprocessing of the route */
|
|
|
|
bgp_flag_set(bgp, BGP_FLAG_FORCE_STATIC_PROCESS);
|
2017-11-21 19:02:06 +01:00
|
|
|
FOREACH_AFI_SAFI (afi, safi) {
|
|
|
|
for (rn = bgp_table_top(bgp->route[afi][safi]); rn;
|
|
|
|
rn = bgp_route_next(rn)) {
|
2018-09-26 02:37:16 +02:00
|
|
|
if (!bgp_node_has_bgp_path_info_data(rn))
|
2017-11-21 19:02:06 +01:00
|
|
|
continue;
|
2017-08-27 22:51:35 +02:00
|
|
|
|
2017-11-21 19:02:06 +01:00
|
|
|
if ((safi == SAFI_MPLS_VPN) || (safi == SAFI_ENCAP)
|
|
|
|
|| (safi == SAFI_EVPN)) {
|
2018-09-26 02:37:16 +02:00
|
|
|
table = bgp_node_get_bgp_table_info(rn);
|
2017-11-21 19:02:06 +01:00
|
|
|
|
|
|
|
for (rm = bgp_table_top(table); rm;
|
|
|
|
rm = bgp_route_next(rm)) {
|
2018-07-30 16:30:41 +02:00
|
|
|
bgp_static =
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_node_get_bgp_static_info(
|
|
|
|
rm);
|
2017-11-21 19:02:06 +01:00
|
|
|
bgp_static_update_safi(bgp, &rm->p,
|
|
|
|
bgp_static, afi,
|
|
|
|
safi);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2017-11-21 19:02:06 +01:00
|
|
|
} else {
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_static = bgp_node_get_bgp_static_info(rn);
|
2017-11-21 19:02:06 +01:00
|
|
|
bgp_static_update(bgp, &rn->p, bgp_static, afi,
|
|
|
|
safi);
|
2017-08-27 22:51:35 +02:00
|
|
|
}
|
2017-11-21 19:02:06 +01:00
|
|
|
}
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_flag_unset(bgp, BGP_FLAG_FORCE_STATIC_PROCESS);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void bgp_purge_af_static_redist_routes(struct bgp *bgp, afi_t afi,
|
|
|
|
safi_t safi)
|
|
|
|
{
|
|
|
|
struct bgp_table *table;
|
|
|
|
struct bgp_node *rn;
|
2018-10-03 02:43:07 +02:00
|
|
|
struct bgp_path_info *pi;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-08-11 12:56:12 +02:00
|
|
|
/* Do not purge routes if BGP is in the
|
|
|
|
* process of termination.
|
|
|
|
*/
|
|
|
|
if (bgp_flag_check(bgp, BGP_FLAG_DELETE_IN_PROGRESS) ||
|
|
|
|
(bgp->peer_self == NULL))
|
|
|
|
return;
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
table = bgp->rib[afi][safi];
|
|
|
|
for (rn = bgp_table_top(table); rn; rn = bgp_route_next(rn)) {
|
2018-07-30 17:40:02 +02:00
|
|
|
for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next) {
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi->peer == bgp->peer_self
|
|
|
|
&& ((pi->type == ZEBRA_ROUTE_BGP
|
|
|
|
&& pi->sub_type == BGP_ROUTE_STATIC)
|
|
|
|
|| (pi->type != ZEBRA_ROUTE_BGP
|
|
|
|
&& pi->sub_type
|
2017-07-17 14:03:14 +02:00
|
|
|
== BGP_ROUTE_REDISTRIBUTE))) {
|
2018-10-03 02:43:07 +02:00
|
|
|
bgp_aggregate_decrement(bgp, &rn->p, pi, afi,
|
2017-07-17 14:03:14 +02:00
|
|
|
safi);
|
2018-10-03 02:43:07 +02:00
|
|
|
bgp_unlink_nexthop(pi);
|
|
|
|
bgp_path_info_delete(rn, pi);
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_process(bgp, rn, afi, safi);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2016-02-12 21:18:28 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Purge all networks and redistributed routes from routing table.
|
|
|
|
* Invoked upon the instance going down.
|
|
|
|
*/
|
2017-07-17 14:03:14 +02:00
|
|
|
void bgp_purge_static_redist_routes(struct bgp *bgp)
|
2016-02-12 21:18:28 +01:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
afi_t afi;
|
|
|
|
safi_t safi;
|
2016-02-12 21:18:28 +01:00
|
|
|
|
2017-11-21 19:02:06 +01:00
|
|
|
FOREACH_AFI_SAFI (afi, safi)
|
|
|
|
bgp_purge_af_static_redist_routes(bgp, afi, safi);
|
2016-02-12 21:18:28 +01:00
|
|
|
}
|
|
|
|
|
2016-01-12 19:41:53 +01:00
|
|
|
/*
|
|
|
|
* gpz 110624
|
|
|
|
* Currently this is used to set static routes for VPN and ENCAP.
|
|
|
|
* I think it can probably be factored with bgp_static_set.
|
|
|
|
*/
|
2017-07-17 14:03:14 +02:00
|
|
|
int bgp_static_set_safi(afi_t afi, safi_t safi, struct vty *vty,
|
|
|
|
const char *ip_str, const char *rd_str,
|
|
|
|
const char *label_str, const char *rmap_str,
|
|
|
|
int evpn_type, const char *esi, const char *gwip,
|
|
|
|
const char *ethtag, const char *routermac)
|
|
|
|
{
|
|
|
|
VTY_DECLVAR_CONTEXT(bgp, bgp);
|
|
|
|
int ret;
|
|
|
|
struct prefix p;
|
|
|
|
struct prefix_rd prd;
|
|
|
|
struct bgp_node *prn;
|
|
|
|
struct bgp_node *rn;
|
|
|
|
struct bgp_table *table;
|
|
|
|
struct bgp_static *bgp_static;
|
|
|
|
mpls_label_t label = MPLS_INVALID_LABEL;
|
|
|
|
struct prefix gw_ip;
|
|
|
|
|
|
|
|
/* validate ip prefix */
|
|
|
|
ret = str2prefix(ip_str, &p);
|
|
|
|
if (!ret) {
|
|
|
|
vty_out(vty, "%% Malformed prefix\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
|
|
|
apply_mask(&p);
|
|
|
|
if ((afi == AFI_L2VPN)
|
|
|
|
&& (bgp_build_evpn_prefix(evpn_type,
|
|
|
|
ethtag != NULL ? atol(ethtag) : 0, &p))) {
|
|
|
|
vty_out(vty, "%% L2VPN prefix could not be forged\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
ret = str2prefix_rd(rd_str, &prd);
|
|
|
|
if (!ret) {
|
|
|
|
vty_out(vty, "%% Malformed rd\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
if (label_str) {
|
|
|
|
unsigned long label_val;
|
|
|
|
label_val = strtoul(label_str, NULL, 10);
|
|
|
|
encode_label(label_val, &label);
|
|
|
|
}
|
2017-06-16 21:12:57 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
if (safi == SAFI_EVPN) {
|
|
|
|
if (esi && str2esi(esi, NULL) == 0) {
|
|
|
|
vty_out(vty, "%% Malformed ESI\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
|
|
|
if (routermac && prefix_str2mac(routermac, NULL) == 0) {
|
|
|
|
vty_out(vty, "%% Malformed Router MAC\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
|
|
|
if (gwip) {
|
|
|
|
memset(&gw_ip, 0, sizeof(struct prefix));
|
|
|
|
ret = str2prefix(gwip, &gw_ip);
|
|
|
|
if (!ret) {
|
|
|
|
vty_out(vty, "%% Malformed GatewayIp\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
|
|
|
if ((gw_ip.family == AF_INET
|
2018-04-14 00:37:30 +02:00
|
|
|
&& is_evpn_prefix_ipaddr_v6(
|
2017-07-17 14:03:14 +02:00
|
|
|
(struct prefix_evpn *)&p))
|
|
|
|
|| (gw_ip.family == AF_INET6
|
2018-04-14 00:37:30 +02:00
|
|
|
&& is_evpn_prefix_ipaddr_v4(
|
2017-07-17 14:03:14 +02:00
|
|
|
(struct prefix_evpn *)&p))) {
|
|
|
|
vty_out(vty,
|
|
|
|
"%% GatewayIp family differs with IP prefix\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
prn = bgp_node_get(bgp->route[afi][safi], (struct prefix *)&prd);
|
2018-09-26 02:37:16 +02:00
|
|
|
if (!bgp_node_has_bgp_path_info_data(prn))
|
|
|
|
bgp_node_set_bgp_table_info(prn,
|
|
|
|
bgp_table_init(bgp, afi, safi));
|
|
|
|
table = bgp_node_get_bgp_table_info(prn);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
|
|
|
rn = bgp_node_get(table, &p);
|
|
|
|
|
2018-09-26 02:37:16 +02:00
|
|
|
if (bgp_node_has_bgp_path_info_data(rn)) {
|
2017-07-17 14:03:14 +02:00
|
|
|
vty_out(vty, "%% Same network configuration exists\n");
|
|
|
|
bgp_unlock_node(rn);
|
|
|
|
} else {
|
|
|
|
/* New configuration. */
|
|
|
|
bgp_static = bgp_static_new();
|
|
|
|
bgp_static->backdoor = 0;
|
|
|
|
bgp_static->valid = 0;
|
|
|
|
bgp_static->igpmetric = 0;
|
|
|
|
bgp_static->igpnexthop.s_addr = 0;
|
|
|
|
bgp_static->label = label;
|
|
|
|
bgp_static->prd = prd;
|
|
|
|
|
|
|
|
if (rmap_str) {
|
2019-02-25 21:18:13 +01:00
|
|
|
XFREE(MTYPE_ROUTE_MAP_NAME, bgp_static->rmap.name);
|
2019-02-04 14:27:56 +01:00
|
|
|
route_map_counter_decrement(bgp_static->rmap.map);
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_static->rmap.name =
|
|
|
|
XSTRDUP(MTYPE_ROUTE_MAP_NAME, rmap_str);
|
|
|
|
bgp_static->rmap.map =
|
|
|
|
route_map_lookup_by_name(rmap_str);
|
2019-02-04 14:27:56 +01:00
|
|
|
route_map_counter_increment(bgp_static->rmap.map);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
if (safi == SAFI_EVPN) {
|
|
|
|
if (esi) {
|
|
|
|
bgp_static->eth_s_id =
|
|
|
|
XCALLOC(MTYPE_ATTR,
|
|
|
|
sizeof(struct eth_segment_id));
|
|
|
|
str2esi(esi, bgp_static->eth_s_id);
|
|
|
|
}
|
|
|
|
if (routermac) {
|
|
|
|
bgp_static->router_mac =
|
2017-08-03 14:45:27 +02:00
|
|
|
XCALLOC(MTYPE_ATTR, ETH_ALEN + 1);
|
2018-08-06 18:17:39 +02:00
|
|
|
(void)prefix_str2mac(routermac,
|
|
|
|
bgp_static->router_mac);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
if (gwip)
|
|
|
|
prefix_copy(&bgp_static->gatewayIp, &gw_ip);
|
|
|
|
}
|
2018-11-16 14:46:19 +01:00
|
|
|
bgp_node_set_bgp_static_info(rn, bgp_static);
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_static->valid = 1;
|
|
|
|
bgp_static_update_safi(bgp, &p, bgp_static, afi, safi);
|
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
return CMD_SUCCESS;
|
2002-12-13 21:15:29 +01:00
|
|
|
}

/* Unconfigure static BGP network. */
int bgp_static_unset_safi(afi_t afi, safi_t safi, struct vty *vty,
			  const char *ip_str, const char *rd_str,
			  const char *label_str, int evpn_type, const char *esi,
			  const char *gwip, const char *ethtag)
{
	VTY_DECLVAR_CONTEXT(bgp, bgp);
	int ret;
	struct prefix p;
	struct prefix_rd prd;
	struct bgp_node *prn;
	struct bgp_node *rn;
	struct bgp_table *table;
	struct bgp_static *bgp_static;
	mpls_label_t label = MPLS_INVALID_LABEL;

	/* Convert IP prefix string to struct prefix. */
	ret = str2prefix(ip_str, &p);
	if (!ret) {
		vty_out(vty, "%% Malformed prefix\n");
		return CMD_WARNING_CONFIG_FAILED;
	}
	apply_mask(&p);
	if ((afi == AFI_L2VPN)
	    && (bgp_build_evpn_prefix(evpn_type,
				      ethtag != NULL ? atol(ethtag) : 0, &p))) {
		vty_out(vty, "%% L2VPN prefix could not be forged\n");
		return CMD_WARNING_CONFIG_FAILED;
	}
	ret = str2prefix_rd(rd_str, &prd);
	if (!ret) {
		vty_out(vty, "%% Malformed rd\n");
		return CMD_WARNING_CONFIG_FAILED;
	}

	if (label_str) {
		unsigned long label_val;
		label_val = strtoul(label_str, NULL, 10);
		encode_label(label_val, &label);
	}

	prn = bgp_node_get(bgp->route[afi][safi], (struct prefix *)&prd);
	if (!bgp_node_has_bgp_path_info_data(prn))
		bgp_node_set_bgp_table_info(prn,
					    bgp_table_init(bgp, afi, safi));
	else
		bgp_unlock_node(prn);
	table = bgp_node_get_bgp_table_info(prn);

	rn = bgp_node_lookup(table, &p);

	if (rn) {
		bgp_static_withdraw_safi(bgp, &p, afi, safi, &prd);

		bgp_static = bgp_node_get_bgp_static_info(rn);
		bgp_static_free(bgp_static);
		bgp_node_set_bgp_static_info(rn, NULL);
		bgp_unlock_node(rn);
		bgp_unlock_node(rn);
	} else
		vty_out(vty, "%% Can't find the route\n");

	return CMD_SUCCESS;
}

/*
 * table-map <route-map-name>
 *
 * Applies a route-map to route updates sent from BGP to zebra. All the
 * applicable match operations are allowed (prefix, next-hop, communities,
 * etc.); set operations at this attach-point are limited to metric and
 * next-hop. The feature does not affect BGP's internal RIB. It is
 * supported for the IPv4 and IPv6 address families and works with
 * multipath as well, although the metric setting is based on the best
 * path only.
 *
 * The route-map application must not modify any of the BGP route's
 * attributes (anything in bgp_info), which is why a copy of the bgp_attr
 * is made before applying it. A route whose next-hops are all denied by
 * the route-map may cause a redundant withdraw on each announcement; even
 * so, the total number of updates to zebra is still capped by the number
 * of routes in the table.
 */
static int bgp_table_map_set(struct vty *vty, afi_t afi, safi_t safi,
			     const char *rmap_name)
{
	VTY_DECLVAR_CONTEXT(bgp, bgp);
	struct bgp_rmap *rmap;

	rmap = &bgp->table_map[afi][safi];
	if (rmap_name) {
		XFREE(MTYPE_ROUTE_MAP_NAME, rmap->name);
		route_map_counter_decrement(rmap->map);
		rmap->name = XSTRDUP(MTYPE_ROUTE_MAP_NAME, rmap_name);
		rmap->map = route_map_lookup_by_name(rmap_name);
		route_map_counter_increment(rmap->map);
	} else {
		XFREE(MTYPE_ROUTE_MAP_NAME, rmap->name);
		route_map_counter_decrement(rmap->map);
		rmap->name = NULL;
		rmap->map = NULL;
	}

	if (bgp_fibupd_safi(safi))
		bgp_zebra_announce_table(bgp, afi, safi);

	return CMD_SUCCESS;
}

static int bgp_table_map_unset(struct vty *vty, afi_t afi, safi_t safi,
			       const char *rmap_name)
{
	VTY_DECLVAR_CONTEXT(bgp, bgp);
	struct bgp_rmap *rmap;

	rmap = &bgp->table_map[afi][safi];
	XFREE(MTYPE_ROUTE_MAP_NAME, rmap->name);
	route_map_counter_decrement(rmap->map);
	rmap->name = NULL;
	rmap->map = NULL;

	if (bgp_fibupd_safi(safi))
		bgp_zebra_announce_table(bgp, afi, safi);

	return CMD_SUCCESS;
}

void bgp_config_write_table_map(struct vty *vty, struct bgp *bgp, afi_t afi,
				safi_t safi)
{
	if (bgp->table_map[afi][safi].name) {
		vty_out(vty, " table-map %s\n",
			bgp->table_map[afi][safi].name);
	}
}

DEFUN (bgp_table_map,
       bgp_table_map_cmd,
       "table-map WORD",
       "BGP table to RIB route download filter\n"
       "Name of the route map\n")
{
	int idx_word = 1;
	return bgp_table_map_set(vty, bgp_node_afi(vty), bgp_node_safi(vty),
				 argv[idx_word]->arg);
}

DEFUN (no_bgp_table_map,
       no_bgp_table_map_cmd,
       "no table-map WORD",
       NO_STR
       "BGP table to RIB route download filter\n"
       "Name of the route map\n")
{
	int idx_word = 2;
	return bgp_table_map_unset(vty, bgp_node_afi(vty), bgp_node_safi(vty),
				   argv[idx_word]->arg);
}
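
/*
 * Illustrative usage sketch (not part of the build): the pair of commands
 * above installs a table-map filter under an address-family so that only
 * routes permitted by the named route-map reach zebra/the FIB. The names
 * FIB-FILTER and CUSTOMER below are hypothetical examples:
 *
 *   route-map FIB-FILTER permit 10
 *    match ip address prefix-list CUSTOMER
 *   !
 *   router bgp 65000
 *    address-family ipv4 unicast
 *     table-map FIB-FILTER
 *
 * "no table-map FIB-FILTER" removes the filter and re-announces the table.
 */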

DEFPY(bgp_network,
      bgp_network_cmd,
      "[no] network \
      <A.B.C.D/M$prefix|A.B.C.D$address [mask A.B.C.D$netmask]> \
      [{route-map WORD$map_name|label-index (0-1048560)$label_index| \
      backdoor$backdoor}]",
      NO_STR
      "Specify a network to announce via BGP\n"
      "IPv4 prefix\n"
      "Network number\n"
      "Network mask\n"
      "Network mask\n"
      "Route-map to modify the attributes\n"
      "Name of the route map\n"
      "Label index to associate with the prefix\n"
      "Label index value\n"
      "Specify a BGP backdoor route\n")
{
	char addr_prefix_str[BUFSIZ];

	if (address_str) {
		int ret;

		ret = netmask_str2prefix_str(address_str, netmask_str,
					     addr_prefix_str);
		if (!ret) {
			vty_out(vty, "%% Inconsistent address and mask\n");
			return CMD_WARNING_CONFIG_FAILED;
		}
	}

	return bgp_static_set(
		vty, no, address_str ? addr_prefix_str : prefix_str, AFI_IP,
		bgp_node_safi(vty), map_name, backdoor ? 1 : 0,
		label_index ? (uint32_t)label_index : BGP_INVALID_LABEL_INDEX);
}
|
|
|
|
|
2017-12-18 16:40:56 +01:00
|
|
|
DEFPY(ipv6_bgp_network,
|
|
|
|
ipv6_bgp_network_cmd,
|
|
|
|
"[no] network X:X::X:X/M$prefix \
|
|
|
|
[{route-map WORD$map_name|label-index (0-1048560)$label_index}]",
|
|
|
|
NO_STR
|
|
|
|
"Specify a network to announce via BGP\n"
|
|
|
|
"IPv6 prefix\n"
|
|
|
|
"Route-map to modify the attributes\n"
|
|
|
|
"Name of the route map\n"
|
|
|
|
"Label index to associate with the prefix\n"
|
|
|
|
"Label index value\n")
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2018-02-09 19:22:50 +01:00
|
|
|
return bgp_static_set(
|
|
|
|
vty, no, prefix_str, AFI_IP6, bgp_node_safi(vty), map_name, 0,
|
|
|
|
label_index ? (uint32_t)label_index : BGP_INVALID_LABEL_INDEX);
|
2017-03-09 17:43:59 +01:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static struct bgp_aggregate *bgp_aggregate_new(void)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
return XCALLOC(MTYPE_BGP_AGGREGATE, sizeof(struct bgp_aggregate));
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static void bgp_aggregate_free(struct bgp_aggregate *aggregate)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2019-08-21 17:16:05 +02:00
|
|
|
XFREE(MTYPE_ROUTE_MAP_NAME, aggregate->rmap.name);
|
|
|
|
route_map_counter_decrement(aggregate->rmap.map);
|
2017-07-17 14:03:14 +02:00
|
|
|
XFREE(MTYPE_BGP_AGGREGATE, aggregate);
|
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
static int bgp_aggregate_info_same(struct bgp_path_info *pi, uint8_t origin,
|
2018-09-10 16:19:03 +02:00
|
|
|
struct aspath *aspath,
|
2018-10-16 14:13:03 +02:00
|
|
|
struct community *comm,
|
2018-10-16 14:24:01 +02:00
|
|
|
struct ecommunity *ecomm,
|
|
|
|
struct lcommunity *lcomm)
|
2018-06-06 19:13:00 +02:00
|
|
|
{
|
|
|
|
static struct aspath *ae = NULL;
|
|
|
|
|
|
|
|
if (!ae)
|
|
|
|
ae = aspath_empty();
|
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!pi)
|
2018-06-06 19:13:00 +02:00
|
|
|
return 0;
|
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (origin != pi->attr->origin)
|
2018-06-06 19:13:00 +02:00
|
|
|
return 0;
|
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!aspath_cmp(pi->attr->aspath, (aspath) ? aspath : ae))
|
2018-09-10 16:19:03 +02:00
|
|
|
return 0;
|
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!community_cmp(pi->attr->community, comm))
|
2018-06-06 19:13:00 +02:00
|
|
|
return 0;
|
|
|
|
|
2018-10-16 14:13:03 +02:00
|
|
|
if (!ecommunity_cmp(pi->attr->ecommunity, ecomm))
|
2018-06-06 19:13:00 +02:00
|
|
|
return 0;
|
|
|
|
|
2018-10-16 14:24:01 +02:00
|
|
|
if (!lcommunity_cmp(pi->attr->lcommunity, lcomm))
|
|
|
|
return 0;
|
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!CHECK_FLAG(pi->flags, BGP_PATH_VALID))
|
2018-09-28 17:55:39 +02:00
|
|
|
return 0;
|
|
|
|
|
2018-06-06 19:13:00 +02:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2018-06-06 18:31:17 +02:00
|
|
|
static void bgp_aggregate_install(struct bgp *bgp, afi_t afi, safi_t safi,
|
|
|
|
struct prefix *p, uint8_t origin,
|
|
|
|
struct aspath *aspath,
|
|
|
|
struct community *community,
|
2018-10-16 14:13:03 +02:00
|
|
|
struct ecommunity *ecommunity,
|
2018-10-16 14:24:01 +02:00
|
|
|
struct lcommunity *lcommunity,
|
2018-06-06 18:31:17 +02:00
|
|
|
uint8_t atomic_aggregate,
|
|
|
|
struct bgp_aggregate *aggregate)
|
|
|
|
{
|
|
|
|
struct bgp_node *rn;
|
|
|
|
struct bgp_table *table;
|
2018-07-30 17:40:02 +02:00
|
|
|
struct bgp_path_info *pi, *orig, *new;
|
2019-08-21 17:16:05 +02:00
|
|
|
struct attr *attr;
|
2018-06-06 18:31:17 +02:00
|
|
|
|
|
|
|
table = bgp->rib[afi][safi];
|
|
|
|
|
|
|
|
rn = bgp_node_get(table, p);
|
2018-06-06 19:13:00 +02:00
|
|
|
|
2018-07-30 17:40:02 +02:00
|
|
|
for (orig = pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next)
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi->peer == bgp->peer_self && pi->type == ZEBRA_ROUTE_BGP
|
|
|
|
&& pi->sub_type == BGP_ROUTE_AGGREGATE)
|
2018-06-06 19:13:00 +02:00
|
|
|
break;
|
|
|
|
|
2018-06-06 18:31:17 +02:00
|
|
|
if (aggregate->count > 0) {
|
2018-06-06 19:13:00 +02:00
|
|
|
/*
|
|
|
|
* If the aggregate information has not changed
|
|
|
|
* no need to re-install it again.
|
|
|
|
*/
|
2018-07-30 17:40:02 +02:00
|
|
|
if (bgp_aggregate_info_same(orig, origin, aspath, community,
|
2018-10-16 14:24:01 +02:00
|
|
|
ecommunity, lcommunity)) {
|
2018-06-06 19:13:00 +02:00
|
|
|
bgp_unlock_node(rn);
|
|
|
|
|
|
|
|
if (aspath)
|
|
|
|
aspath_free(aspath);
|
|
|
|
if (community)
|
2018-10-22 21:58:39 +02:00
|
|
|
community_free(&community);
|
2018-10-16 14:13:03 +02:00
|
|
|
if (ecommunity)
|
|
|
|
ecommunity_free(&ecommunity);
|
2018-10-16 14:24:01 +02:00
|
|
|
if (lcommunity)
|
|
|
|
lcommunity_free(&lcommunity);
|
2018-06-06 19:13:00 +02:00
|
|
|
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Mark the old as unusable
|
|
|
|
*/
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi)
|
|
|
|
bgp_path_info_delete(rn, pi);
|
2018-06-06 19:13:00 +02:00
|
|
|
|
2019-08-21 17:16:05 +02:00
|
|
|
attr = bgp_attr_aggregate_intern(
|
|
|
|
bgp, origin, aspath, community, ecommunity, lcommunity,
|
|
|
|
aggregate, atomic_aggregate, p);
|
|
|
|
|
|
|
|
if (!attr) {
|
|
|
|
bgp_aggregate_delete(bgp, p, afi, safi, aggregate);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2018-10-16 14:13:03 +02:00
|
|
|
new = info_make(ZEBRA_ROUTE_BGP, BGP_ROUTE_AGGREGATE, 0,
|
2019-08-21 17:16:05 +02:00
|
|
|
bgp->peer_self, attr, rn);
|
|
|
|
|
2018-09-14 02:34:42 +02:00
|
|
|
SET_FLAG(new->flags, BGP_PATH_VALID);
|
2018-06-06 18:31:17 +02:00
|
|
|
|
2018-10-03 00:15:34 +02:00
|
|
|
bgp_path_info_add(rn, new);
|
2018-06-06 18:31:17 +02:00
|
|
|
bgp_process(bgp, rn, afi, safi);
|
|
|
|
} else {
|
2018-07-30 17:40:02 +02:00
|
|
|
for (pi = orig; pi; pi = pi->next)
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi->peer == bgp->peer_self
|
|
|
|
&& pi->type == ZEBRA_ROUTE_BGP
|
|
|
|
&& pi->sub_type == BGP_ROUTE_AGGREGATE)
|
2018-06-06 18:31:17 +02:00
|
|
|
break;
|
|
|
|
|
|
|
|
/* Withdraw static BGP route from routing table. */
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi) {
|
|
|
|
bgp_path_info_delete(rn, pi);
|
2018-06-06 18:31:17 +02:00
|
|
|
bgp_process(bgp, rn, afi, safi);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
bgp_unlock_node(rn);
|
|
|
|
}
|
|
|
|
|
2015-05-20 02:47:24 +02:00
|
|
|
/* Update an aggregate as routes are added/removed from the BGP table */
|
2019-08-21 17:16:05 +02:00
|
|
|
void bgp_aggregate_route(struct bgp *bgp, struct prefix *p,
|
2019-02-06 15:39:03 +01:00
|
|
|
afi_t afi, safi_t safi,
|
2017-07-17 14:03:14 +02:00
|
|
|
struct bgp_aggregate *aggregate)
|
|
|
|
{
|
|
|
|
struct bgp_table *table;
|
|
|
|
struct bgp_node *top;
|
|
|
|
struct bgp_node *rn;
|
2018-03-27 21:13:34 +02:00
|
|
|
uint8_t origin;
|
2017-07-17 14:03:14 +02:00
|
|
|
struct aspath *aspath = NULL;
|
|
|
|
struct community *community = NULL;
|
2018-10-16 14:13:03 +02:00
|
|
|
struct ecommunity *ecommunity = NULL;
|
2018-10-16 14:24:01 +02:00
|
|
|
struct lcommunity *lcommunity = NULL;
|
2018-10-03 02:43:07 +02:00
|
|
|
struct bgp_path_info *pi;
|
2017-07-17 14:03:14 +02:00
|
|
|
unsigned long match = 0;
|
2018-03-27 21:13:34 +02:00
|
|
|
uint8_t atomic_aggregate = 0;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-06-11 15:20:09 +02:00
|
|
|
/* If the bgp instance is being deleted or self peer is deleted
|
|
|
|
* then do not create aggregate route
|
|
|
|
*/
|
|
|
|
if (bgp_flag_check(bgp, BGP_FLAG_DELETE_IN_PROGRESS) ||
|
|
|
|
(bgp->peer_self == NULL))
|
|
|
|
return;
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* ORIGIN attribute: If at least one route among routes that are
|
|
|
|
aggregated has ORIGIN with the value INCOMPLETE, then the
|
|
|
|
aggregated route must have the ORIGIN attribute with the value
|
|
|
|
INCOMPLETE. Otherwise, if at least one route among routes that
|
|
|
|
are aggregated has ORIGIN with the value EGP, then the aggregated
|
|
|
|
route must have the origin attribute with the value EGP. In all
|
|
|
|
other case the value of the ORIGIN attribute of the aggregated
|
|
|
|
route is INTERNAL. */
|
|
|
|
origin = BGP_ORIGIN_IGP;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
table = bgp->rib[afi][safi];
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
top = bgp_node_get(table, p);
|
|
|
|
for (rn = bgp_node_get(table, p); rn;
|
2018-06-01 20:33:28 +02:00
|
|
|
rn = bgp_route_next_until(rn, top)) {
|
|
|
|
if (rn->p.prefixlen <= p->prefixlen)
|
|
|
|
continue;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-06-01 20:33:28 +02:00
|
|
|
match = 0;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-07-30 17:40:02 +02:00
|
|
|
for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next) {
|
2018-10-03 02:43:07 +02:00
|
|
|
if (BGP_PATH_HOLDDOWN(pi))
|
2018-06-01 20:33:28 +02:00
|
|
|
continue;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi->attr->flag
|
2018-06-01 20:33:28 +02:00
|
|
|
& ATTR_FLAG_BIT(BGP_ATTR_ATOMIC_AGGREGATE))
|
|
|
|
atomic_aggregate = 1;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi->sub_type == BGP_ROUTE_AGGREGATE)
|
2018-06-01 20:33:28 +02:00
|
|
|
continue;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-06-06 19:33:19 +02:00
|
|
|
/*
|
|
|
|
* A summary-only aggregate route suppresses
|
|
|
|
* aggregated route announcements.
|
|
|
|
*/
|
2018-06-01 20:33:28 +02:00
|
|
|
if (aggregate->summary_only) {
|
2018-10-03 02:43:07 +02:00
|
|
|
(bgp_path_info_extra_get(pi))->suppress++;
|
|
|
|
bgp_path_info_set_flag(rn, pi,
|
2018-10-03 00:15:34 +02:00
|
|
|
BGP_PATH_ATTR_CHANGED);
|
2018-06-01 20:33:28 +02:00
|
|
|
match++;
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2018-06-01 20:33:28 +02:00
|
|
|
|
|
|
|
aggregate->count++;
|
|
|
|
|
2018-06-06 19:33:19 +02:00
|
|
|
/*
|
|
|
|
* If at least one route among routes that are
|
|
|
|
* aggregated has ORIGIN with the value INCOMPLETE,
|
|
|
|
* then the aggregated route MUST have the ORIGIN
|
|
|
|
* attribute with the value INCOMPLETE. Otherwise, if
|
|
|
|
* at least one route among routes that are aggregated
|
|
|
|
* has ORIGIN with the value EGP, then the aggregated
|
|
|
|
* route MUST have the ORIGIN attribute with the value
|
|
|
|
* EGP.
|
|
|
|
*/
|
2019-02-06 15:39:03 +01:00
|
|
|
switch (pi->attr->origin) {
|
|
|
|
case BGP_ORIGIN_INCOMPLETE:
|
|
|
|
aggregate->incomplete_origin_count++;
|
|
|
|
break;
|
|
|
|
case BGP_ORIGIN_EGP:
|
|
|
|
aggregate->egp_origin_count++;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
/* Do nothing. */
|
|
|
|
break;
|
|
|
|
}
|
2018-06-01 20:33:28 +02:00
|
|
|
|
|
|
|
if (!aggregate->as_set)
|
|
|
|
continue;
|
|
|
|
|
2018-06-06 19:33:19 +02:00
|
|
|
/*
|
|
|
|
* An as-set aggregate route generates origin, as-path,
|
|
|
|
* and community aggregation.
|
|
|
|
*/
|
2019-02-06 15:39:03 +01:00
|
|
|
/* Compute aggregate route's as-path.
|
|
|
|
*/
|
2019-08-19 09:50:56 +02:00
|
|
|
bgp_compute_aggregate_aspath_hash(aggregate,
|
|
|
|
pi->attr->aspath);
|
2018-06-01 20:33:28 +02:00
|
|
|
|
2019-02-06 15:39:03 +01:00
|
|
|
/* Compute aggregate route's community.
|
|
|
|
*/
|
|
|
|
if (pi->attr->community)
|
2019-08-19 09:47:50 +02:00
|
|
|
bgp_compute_aggregate_community_hash(
|
2019-02-06 15:39:03 +01:00
|
|
|
aggregate,
|
|
|
|
pi->attr->community);
|
2018-10-16 14:24:01 +02:00
|
|
|
|
2019-02-06 15:39:03 +01:00
|
|
|
/* Compute aggregate route's extended community.
|
|
|
|
*/
|
|
|
|
if (pi->attr->ecommunity)
|
2019-08-19 09:50:15 +02:00
|
|
|
bgp_compute_aggregate_ecommunity_hash(
|
2019-02-06 15:39:03 +01:00
|
|
|
aggregate,
|
|
|
|
pi->attr->ecommunity);
|
|
|
|
|
|
|
|
/* Compute aggregate route's large community.
|
|
|
|
*/
|
|
|
|
if (pi->attr->lcommunity)
|
2019-08-12 14:13:14 +02:00
|
|
|
bgp_compute_aggregate_lcommunity_hash(
|
2019-02-06 15:39:03 +01:00
|
|
|
aggregate,
|
|
|
|
pi->attr->lcommunity);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2018-06-01 20:33:28 +02:00
|
|
|
if (match)
|
|
|
|
bgp_process(bgp, rn, afi, safi);
|
|
|
|
}
|
2019-08-19 09:47:50 +02:00
|
|
|
if (aggregate->as_set) {
|
2019-08-19 09:50:56 +02:00
|
|
|
bgp_compute_aggregate_aspath_val(aggregate);
|
2019-08-19 09:47:50 +02:00
|
|
|
bgp_compute_aggregate_community_val(aggregate);
|
2019-08-19 09:50:15 +02:00
|
|
|
bgp_compute_aggregate_ecommunity_val(aggregate);
|
2019-08-12 14:13:14 +02:00
|
|
|
bgp_compute_aggregate_lcommunity_val(aggregate);
|
2019-08-19 09:47:50 +02:00
|
|
|
}
|
|
|
|
|
2019-08-12 14:13:14 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_unlock_node(top);
|
2002-12-13 21:15:29 +01:00
|
|
|
|
|
|
|
|
2019-02-06 15:39:03 +01:00
|
|
|
if (aggregate->incomplete_origin_count > 0)
|
|
|
|
origin = BGP_ORIGIN_INCOMPLETE;
|
|
|
|
else if (aggregate->egp_origin_count > 0)
|
|
|
|
origin = BGP_ORIGIN_EGP;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-02-06 15:39:03 +01:00
|
|
|
if (aggregate->as_set) {
|
|
|
|
if (aggregate->aspath)
|
|
|
|
/* Retrieve aggregate route's as-path.
|
|
|
|
*/
|
|
|
|
aspath = aspath_dup(aggregate->aspath);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-02-06 15:39:03 +01:00
|
|
|
if (aggregate->community)
|
|
|
|
/* Retrieve aggregate route's community.
|
|
|
|
*/
|
|
|
|
community = community_dup(aggregate->community);
|
2018-10-16 14:13:03 +02:00
|
|
|
|
2019-02-06 15:39:03 +01:00
|
|
|
if (aggregate->ecommunity)
|
|
|
|
/* Retrieve aggregate route's ecommunity.
|
|
|
|
*/
|
|
|
|
ecommunity = ecommunity_dup(aggregate->ecommunity);
|
2018-10-16 14:24:01 +02:00
|
|
|
|
2019-02-06 15:39:03 +01:00
|
|
|
if (aggregate->lcommunity)
|
|
|
|
/* Retrieve aggregate route's lcommunity.
|
|
|
|
*/
|
|
|
|
lcommunity = lcommunity_dup(aggregate->lcommunity);
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
|
2018-06-06 18:31:17 +02:00
|
|
|
bgp_aggregate_install(bgp, afi, safi, p, origin, aspath, community,
|
2018-10-16 14:24:01 +02:00
|
|
|
ecommunity, lcommunity, atomic_aggregate,
|
|
|
|
aggregate);
|
2002-12-13 21:15:29 +01:00
|
|
|
}
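The origin counters maintained above implement the "worst origin wins" rule
from the ORIGIN comment (RFC 4271): INCOMPLETE beats EGP, which beats IGP.
As a standalone sketch, with enum values mirroring bgpd's BGP_ORIGIN_*
defines (the function name here is illustrative):

```c
/* Origin codes in BGP precedence order for aggregation. */
enum bgp_origin_val { ORIGIN_IGP = 0, ORIGIN_EGP = 1, ORIGIN_INCOMPLETE = 2 };

/* If any aggregated route has ORIGIN INCOMPLETE, the aggregate is
 * INCOMPLETE; else if any has ORIGIN EGP, the aggregate is EGP;
 * in all other cases the aggregate's ORIGIN is IGP. */
static enum bgp_origin_val aggregate_origin(unsigned int incomplete_count,
					    unsigned int egp_count)
{
	if (incomplete_count > 0)
		return ORIGIN_INCOMPLETE;
	if (egp_count > 0)
		return ORIGIN_EGP;
	return ORIGIN_IGP;
}
```

Tracking counts rather than a single flag lets the delete path decrement on
withdrawal and recompute the origin without rescanning the whole table.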
|
|
|
|
|
2019-08-21 17:16:05 +02:00
|
|
|
void bgp_aggregate_delete(struct bgp *bgp, struct prefix *p, afi_t afi,
|
2018-06-06 18:46:14 +02:00
|
|
|
safi_t safi, struct bgp_aggregate *aggregate)
|
|
|
|
{
|
|
|
|
struct bgp_table *table;
|
|
|
|
struct bgp_node *top;
|
|
|
|
struct bgp_node *rn;
|
2018-10-03 02:43:07 +02:00
|
|
|
struct bgp_path_info *pi;
|
2018-06-06 18:46:14 +02:00
|
|
|
unsigned long match;
|
|
|
|
|
|
|
|
table = bgp->rib[afi][safi];
|
|
|
|
|
|
|
|
/* If routes exists below this node, generate aggregate routes. */
|
|
|
|
top = bgp_node_get(table, p);
|
|
|
|
for (rn = bgp_node_get(table, p); rn;
|
|
|
|
rn = bgp_route_next_until(rn, top)) {
|
|
|
|
if (rn->p.prefixlen <= p->prefixlen)
|
|
|
|
continue;
|
|
|
|
match = 0;
|
|
|
|
|
2018-07-30 17:40:02 +02:00
|
|
|
for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next) {
|
2018-10-03 02:43:07 +02:00
|
|
|
if (BGP_PATH_HOLDDOWN(pi))
|
2018-06-06 18:46:14 +02:00
|
|
|
continue;
|
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi->sub_type == BGP_ROUTE_AGGREGATE)
|
2018-06-06 18:46:14 +02:00
|
|
|
continue;
|
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (aggregate->summary_only && pi->extra) {
|
|
|
|
pi->extra->suppress--;
|
2018-06-06 18:46:14 +02:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi->extra->suppress == 0) {
|
2018-10-03 00:15:34 +02:00
|
|
|
bgp_path_info_set_flag(
|
2018-10-03 02:43:07 +02:00
|
|
|
rn, pi, BGP_PATH_ATTR_CHANGED);
|
2018-06-06 18:46:14 +02:00
|
|
|
match++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
aggregate->count--;
|
2019-02-06 15:39:03 +01:00
|
|
|
|
|
|
|
if (pi->attr->origin == BGP_ORIGIN_INCOMPLETE)
|
|
|
|
aggregate->incomplete_origin_count--;
|
|
|
|
else if (pi->attr->origin == BGP_ORIGIN_EGP)
|
|
|
|
aggregate->egp_origin_count--;
|
|
|
|
|
|
|
|
if (aggregate->as_set) {
|
|
|
|
/* Remove as-path from aggregate.
|
|
|
|
*/
|
2019-08-19 09:50:56 +02:00
|
|
|
bgp_remove_aspath_from_aggregate_hash(
|
2019-02-06 15:39:03 +01:00
|
|
|
aggregate,
|
|
|
|
pi->attr->aspath);
|
|
|
|
|
|
|
|
if (pi->attr->community)
|
|
|
|
/* Remove community from aggregate.
|
|
|
|
*/
|
2019-08-19 09:47:50 +02:00
|
|
|
bgp_remove_comm_from_aggregate_hash(
|
2019-02-06 15:39:03 +01:00
|
|
|
aggregate,
|
|
|
|
pi->attr->community);
|
|
|
|
|
|
|
|
if (pi->attr->ecommunity)
|
|
|
|
/* Remove ecommunity from aggregate.
|
|
|
|
*/
|
2019-08-19 09:50:15 +02:00
|
|
|
bgp_remove_ecomm_from_aggregate_hash(
|
2019-02-06 15:39:03 +01:00
|
|
|
aggregate,
|
|
|
|
pi->attr->ecommunity);
|
|
|
|
|
|
|
|
if (pi->attr->lcommunity)
|
|
|
|
/* Remove lcommunity from aggregate.
|
|
|
|
*/
|
2019-08-12 14:13:14 +02:00
|
|
|
bgp_remove_lcomm_from_aggregate_hash(
|
2019-02-06 15:39:03 +01:00
|
|
|
aggregate,
|
|
|
|
pi->attr->lcommunity);
|
|
|
|
}
|
|
|
|
|
2018-06-06 18:46:14 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/* If this node was suppressed, process the change. */
|
|
|
|
if (match)
|
|
|
|
bgp_process(bgp, rn, afi, safi);
|
|
|
|
}
|
2019-08-12 14:13:14 +02:00
|
|
|
if (aggregate->as_set) {
|
2019-08-19 09:50:56 +02:00
|
|
|
aspath_free(aggregate->aspath);
|
|
|
|
aggregate->aspath = NULL;
|
2019-08-19 09:47:50 +02:00
|
|
|
if (aggregate->community)
|
|
|
|
community_free(&aggregate->community);
|
2019-08-19 09:50:15 +02:00
|
|
|
if (aggregate->ecommunity)
|
|
|
|
ecommunity_free(&aggregate->ecommunity);
|
2019-08-12 14:13:14 +02:00
|
|
|
if (aggregate->lcommunity)
|
|
|
|
lcommunity_free(&aggregate->lcommunity);
|
|
|
|
}
|
|
|
|
|
2018-06-06 18:46:14 +02:00
|
|
|
bgp_unlock_node(top);
|
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2019-02-06 15:39:03 +01:00
|
|
|
static void bgp_add_route_to_aggregate(struct bgp *bgp, struct prefix *aggr_p,
|
|
|
|
struct bgp_path_info *pinew, afi_t afi,
|
|
|
|
safi_t safi,
|
|
|
|
struct bgp_aggregate *aggregate)
|
|
|
|
{
|
|
|
|
uint8_t origin;
|
|
|
|
struct aspath *aspath = NULL;
|
|
|
|
uint8_t atomic_aggregate = 0;
|
|
|
|
struct community *community = NULL;
|
|
|
|
struct ecommunity *ecommunity = NULL;
|
|
|
|
struct lcommunity *lcommunity = NULL;
|
|
|
|
|
|
|
|
/* ORIGIN attribute: If at least one route among routes that are
|
|
|
|
* aggregated has ORIGIN with the value INCOMPLETE, then the
|
|
|
|
* aggregated route must have the ORIGIN attribute with the value
|
|
|
|
* INCOMPLETE. Otherwise, if at least one route among routes that
|
|
|
|
* are aggregated has ORIGIN with the value EGP, then the aggregated
|
|
|
|
* route must have the origin attribute with the value EGP. In all
|
|
|
|
* other case the value of the ORIGIN attribute of the aggregated
|
|
|
|
* route is INTERNAL.
|
|
|
|
*/
|
|
|
|
origin = BGP_ORIGIN_IGP;
|
|
|
|
|
|
|
|
aggregate->count++;
|
|
|
|
|
|
|
|
if (aggregate->summary_only)
|
|
|
|
(bgp_path_info_extra_get(pinew))->suppress++;
|
|
|
|
|
|
|
|
switch (pinew->attr->origin) {
|
|
|
|
case BGP_ORIGIN_INCOMPLETE:
|
|
|
|
aggregate->incomplete_origin_count++;
|
|
|
|
break;
|
|
|
|
case BGP_ORIGIN_EGP:
|
|
|
|
aggregate->egp_origin_count++;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
/* Do nothing. */
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (aggregate->incomplete_origin_count > 0)
|
|
|
|
origin = BGP_ORIGIN_INCOMPLETE;
|
|
|
|
else if (aggregate->egp_origin_count > 0)
|
|
|
|
origin = BGP_ORIGIN_EGP;
|
|
|
|
|
|
|
|
if (aggregate->as_set) {
|
|
|
|
/* Compute aggregate route's as-path.
|
|
|
|
*/
|
|
|
|
bgp_compute_aggregate_aspath(aggregate,
|
|
|
|
pinew->attr->aspath);
|
|
|
|
|
|
|
|
/* Compute aggregate route's community.
|
|
|
|
*/
|
|
|
|
if (pinew->attr->community)
|
|
|
|
bgp_compute_aggregate_community(
|
|
|
|
aggregate,
|
|
|
|
pinew->attr->community);
|
|
|
|
|
|
|
|
/* Compute aggregate route's extended community.
|
|
|
|
*/
|
|
|
|
if (pinew->attr->ecommunity)
|
|
|
|
bgp_compute_aggregate_ecommunity(
|
|
|
|
aggregate,
|
|
|
|
pinew->attr->ecommunity);
|
|
|
|
|
|
|
|
/* Compute aggregate route's large community.
|
|
|
|
*/
|
|
|
|
if (pinew->attr->lcommunity)
|
|
|
|
bgp_compute_aggregate_lcommunity(
|
|
|
|
aggregate,
|
|
|
|
pinew->attr->lcommunity);
|
|
|
|
|
|
|
|
/* Retrieve aggregate route's as-path.
|
|
|
|
*/
|
|
|
|
if (aggregate->aspath)
|
|
|
|
aspath = aspath_dup(aggregate->aspath);
|
|
|
|
|
|
|
|
/* Retrieve aggregate route's community.
|
|
|
|
*/
|
|
|
|
if (aggregate->community)
|
|
|
|
community = community_dup(aggregate->community);
|
|
|
|
|
|
|
|
/* Retrieve aggregate route's ecommunity.
|
|
|
|
*/
|
|
|
|
if (aggregate->ecommunity)
|
|
|
|
ecommunity = ecommunity_dup(aggregate->ecommunity);
|
|
|
|
|
|
|
|
/* Retrieve aggregate route's lcommunity.
|
|
|
|
*/
|
|
|
|
if (aggregate->lcommunity)
|
|
|
|
lcommunity = lcommunity_dup(aggregate->lcommunity);
|
|
|
|
}
|
|
|
|
|
|
|
|
bgp_aggregate_install(bgp, afi, safi, aggr_p, origin,
|
|
|
|
aspath, community, ecommunity,
|
|
|
|
lcommunity, atomic_aggregate, aggregate);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void bgp_remove_route_from_aggregate(struct bgp *bgp, afi_t afi,
|
|
|
|
safi_t safi,
|
|
|
|
struct bgp_path_info *pi,
|
|
|
|
struct bgp_aggregate *aggregate,
|
|
|
|
struct prefix *aggr_p)
|
|
|
|
{
|
|
|
|
uint8_t origin;
|
|
|
|
struct aspath *aspath = NULL;
|
|
|
|
uint8_t atomic_aggregate = 0;
|
|
|
|
struct community *community = NULL;
|
|
|
|
struct ecommunity *ecommunity = NULL;
|
|
|
|
struct lcommunity *lcommunity = NULL;
|
|
|
|
unsigned long match = 0;
|
|
|
|
|
|
|
|
if (BGP_PATH_HOLDDOWN(pi))
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (pi->sub_type == BGP_ROUTE_AGGREGATE)
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (aggregate->summary_only
|
|
|
|
&& pi->extra
|
|
|
|
&& pi->extra->suppress > 0) {
|
|
|
|
pi->extra->suppress--;
|
|
|
|
|
|
|
|
if (pi->extra->suppress == 0) {
|
|
|
|
bgp_path_info_set_flag(pi->net, pi,
|
|
|
|
BGP_PATH_ATTR_CHANGED);
|
|
|
|
match++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (aggregate->count > 0)
|
|
|
|
aggregate->count--;
|
|
|
|
|
|
|
|
if (pi->attr->origin == BGP_ORIGIN_INCOMPLETE)
|
|
|
|
aggregate->incomplete_origin_count--;
|
|
|
|
else if (pi->attr->origin == BGP_ORIGIN_EGP)
|
|
|
|
aggregate->egp_origin_count--;
|
|
|
|
|
|
|
|
if (aggregate->as_set) {
|
|
|
|
/* Remove as-path from aggregate.
|
|
|
|
*/
|
|
|
|
bgp_remove_aspath_from_aggregate(aggregate,
|
|
|
|
pi->attr->aspath);
|
|
|
|
|
|
|
|
if (pi->attr->community)
|
|
|
|
/* Remove community from aggregate.
|
|
|
|
*/
|
|
|
|
bgp_remove_community_from_aggregate(
|
|
|
|
aggregate,
|
|
|
|
pi->attr->community);
|
|
|
|
|
|
|
|
if (pi->attr->ecommunity)
|
|
|
|
/* Remove ecommunity from aggregate.
|
|
|
|
*/
|
|
|
|
bgp_remove_ecommunity_from_aggregate(
|
|
|
|
aggregate,
|
|
|
|
pi->attr->ecommunity);
|
|
|
|
|
|
|
|
if (pi->attr->lcommunity)
|
|
|
|
/* Remove lcommunity from aggregate.
|
|
|
|
*/
|
|
|
|
bgp_remove_lcommunity_from_aggregate(
|
|
|
|
aggregate,
|
|
|
|
pi->attr->lcommunity);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* If this node was suppressed, process the change. */
|
|
|
|
if (match)
|
|
|
|
bgp_process(bgp, pi->net, afi, safi);
|
|
|
|
|
|
|
|
origin = BGP_ORIGIN_IGP;
|
|
|
|
if (aggregate->incomplete_origin_count > 0)
|
|
|
|
origin = BGP_ORIGIN_INCOMPLETE;
|
|
|
|
else if (aggregate->egp_origin_count > 0)
|
|
|
|
origin = BGP_ORIGIN_EGP;
|
|
|
|
|
|
|
|
if (aggregate->as_set) {
|
|
|
|
/* Retrieve aggregate route's as-path.
|
|
|
|
*/
|
|
|
|
if (aggregate->aspath)
|
|
|
|
aspath = aspath_dup(aggregate->aspath);
|
|
|
|
|
|
|
|
/* Retrieve aggregate route's community.
|
|
|
|
*/
|
|
|
|
if (aggregate->community)
|
|
|
|
community = community_dup(aggregate->community);
|
|
|
|
|
|
|
|
/* Retrieve aggregate route's ecommunity.
|
|
|
|
*/
|
|
|
|
if (aggregate->ecommunity)
|
|
|
|
ecommunity = ecommunity_dup(aggregate->ecommunity);
|
|
|
|
|
|
|
|
/* Retrieve aggregate route's lcommunity.
|
|
|
|
*/
|
|
|
|
if (aggregate->lcommunity)
|
|
|
|
lcommunity = lcommunity_dup(aggregate->lcommunity);
|
|
|
|
}
|
|
|
|
|
|
|
|
bgp_aggregate_install(bgp, afi, safi, aggr_p, origin,
|
|
|
|
aspath, community, ecommunity,
|
|
|
|
lcommunity, atomic_aggregate, aggregate);
|
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
void bgp_aggregate_increment(struct bgp *bgp, struct prefix *p,
|
2018-10-03 02:43:07 +02:00
|
|
|
struct bgp_path_info *pi, afi_t afi, safi_t safi)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
struct bgp_node *child;
|
|
|
|
struct bgp_node *rn;
|
|
|
|
struct bgp_aggregate *aggregate;
|
|
|
|
struct bgp_table *table;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
table = bgp->aggregate[afi][safi];
|
2012-05-07 18:53:10 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* No aggregates configured. */
|
|
|
|
if (bgp_table_top_nolock(table) == NULL)
|
|
|
|
return;
|
2012-05-07 18:53:10 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
if (p->prefixlen == 0)
|
|
|
|
return;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (BGP_PATH_HOLDDOWN(pi))
|
2017-07-17 14:03:14 +02:00
|
|
|
return;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
child = bgp_node_get(table, p);
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Aggregate address configuration check. */
|
2018-07-30 14:50:47 +02:00
|
|
|
for (rn = child; rn; rn = bgp_node_parent_nolock(rn)) {
|
2018-11-02 13:31:22 +01:00
|
|
|
aggregate = bgp_node_get_bgp_aggregate_info(rn);
|
2018-07-30 14:50:47 +02:00
|
|
|
if (aggregate != NULL && rn->p.prefixlen < p->prefixlen) {
|
2019-02-06 15:39:03 +01:00
|
|
|
bgp_add_route_to_aggregate(bgp, &rn->p, pi, afi,
|
|
|
|
safi, aggregate);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2018-07-30 14:50:47 +02:00
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_unlock_node(child);
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
void bgp_aggregate_decrement(struct bgp *bgp, struct prefix *p,
|
2018-10-02 22:41:30 +02:00
|
|
|
struct bgp_path_info *del, afi_t afi, safi_t safi)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
struct bgp_node *child;
|
|
|
|
struct bgp_node *rn;
|
|
|
|
struct bgp_aggregate *aggregate;
|
|
|
|
struct bgp_table *table;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
table = bgp->aggregate[afi][safi];
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* No aggregates configured. */
|
|
|
|
if (bgp_table_top_nolock(table) == NULL)
|
|
|
|
return;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
if (p->prefixlen == 0)
|
|
|
|
return;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
child = bgp_node_get(table, p);
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Aggregate address configuration check. */
|
2018-07-30 14:50:47 +02:00
|
|
|
for (rn = child; rn; rn = bgp_node_parent_nolock(rn)) {
|
2018-11-02 13:31:22 +01:00
|
|
|
aggregate = bgp_node_get_bgp_aggregate_info(rn);
|
2018-07-30 14:50:47 +02:00
|
|
|
if (aggregate != NULL && rn->p.prefixlen < p->prefixlen) {
|
2019-02-06 15:39:03 +01:00
|
|
|
bgp_remove_route_from_aggregate(bgp, afi, safi,
|
|
|
|
del, aggregate, &rn->p);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2018-07-30 14:50:47 +02:00
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_unlock_node(child);
|
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
|
|
|
/* Aggregate route attribute. */
|
|
|
|
#define AGGREGATE_SUMMARY_ONLY 1
|
|
|
|
#define AGGREGATE_AS_SET 1
|
2019-11-09 19:24:34 +01:00
|
|
|
#define AGGREGATE_AS_UNSET 0
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static int bgp_aggregate_unset(struct vty *vty, const char *prefix_str,
|
|
|
|
afi_t afi, safi_t safi)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
VTY_DECLVAR_CONTEXT(bgp, bgp);
|
|
|
|
int ret;
|
|
|
|
struct prefix p;
|
|
|
|
struct bgp_node *rn;
|
|
|
|
struct bgp_aggregate *aggregate;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Convert string to prefix structure. */
|
|
|
|
ret = str2prefix(prefix_str, &p);
|
|
|
|
if (!ret) {
|
|
|
|
vty_out(vty, "Malformed prefix\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
|
|
|
apply_mask(&p);
|
|
|
|
|
|
|
|
/* Old configuration check. */
|
|
|
|
rn = bgp_node_lookup(bgp->aggregate[afi][safi], &p);
|
|
|
|
if (!rn) {
|
|
|
|
vty_out(vty,
|
|
|
|
"%% There is no aggregate-address configuration.\n");
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
2010-08-05 19:26:28 +02:00
|
|
|
|
2018-11-02 13:31:22 +01:00
|
|
|
aggregate = bgp_node_get_bgp_aggregate_info(rn);
|
2018-06-01 20:13:58 +02:00
|
|
|
bgp_aggregate_delete(bgp, &p, afi, safi, aggregate);
|
2018-10-16 14:24:01 +02:00
|
|
|
bgp_aggregate_install(bgp, afi, safi, &p, 0, NULL, NULL,
|
|
|
|
NULL, NULL, 0, aggregate);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
|
|
|
/* Unlock aggregate address configuration. */
|
2018-11-02 13:31:22 +01:00
|
|
|
bgp_node_set_bgp_aggregate_info(rn, NULL);
|
2019-02-06 15:39:03 +01:00
|
|
|
|
|
|
|
if (aggregate->community)
|
|
|
|
community_free(&aggregate->community);
|
|
|
|
|
|
|
|
if (aggregate->community_hash) {
|
|
|
|
/* Delete all communities in the hash.
|
|
|
|
*/
|
|
|
|
hash_clean(aggregate->community_hash,
|
|
|
|
bgp_aggr_community_remove);
|
|
|
|
/* Free up the community_hash.
|
|
|
|
*/
|
|
|
|
hash_free(aggregate->community_hash);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (aggregate->ecommunity)
|
|
|
|
ecommunity_free(&aggregate->ecommunity);
|
|
|
|
|
|
|
|
if (aggregate->ecommunity_hash) {
|
|
|
|
/* Delete all ecommunities in the hash.
|
|
|
|
*/
|
|
|
|
hash_clean(aggregate->ecommunity_hash,
|
|
|
|
bgp_aggr_ecommunity_remove);
|
|
|
|
/* Free up the ecommunity_hash.
|
|
|
|
*/
|
|
|
|
hash_free(aggregate->ecommunity_hash);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (aggregate->lcommunity)
|
|
|
|
lcommunity_free(&aggregate->lcommunity);
|
|
|
|
|
|
|
|
if (aggregate->lcommunity_hash) {
|
|
|
|
/* Delete all lcommunities in the hash.
|
|
|
|
*/
|
|
|
|
hash_clean(aggregate->lcommunity_hash,
|
|
|
|
bgp_aggr_lcommunity_remove);
|
|
|
|
/* Free up the lcommunity_hash.
|
|
|
|
*/
|
|
|
|
hash_free(aggregate->lcommunity_hash);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (aggregate->aspath)
|
|
|
|
aspath_free(aggregate->aspath);
|
|
|
|
|
|
|
|
if (aggregate->aspath_hash) {
|
|
|
|
/* Delete all as-paths in the hash.
|
|
|
|
*/
|
|
|
|
hash_clean(aggregate->aspath_hash,
|
|
|
|
bgp_aggr_aspath_remove);
|
|
|
|
/* Free up the aspath_hash.
|
|
|
|
*/
|
|
|
|
hash_free(aggregate->aspath_hash);
|
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_aggregate_free(aggregate);
|
|
|
|
bgp_unlock_node(rn);
|
|
|
|
bgp_unlock_node(rn);
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int bgp_aggregate_set(struct vty *vty, const char *prefix_str, afi_t afi,
			     safi_t safi, const char *rmap, uint8_t summary_only,
			     uint8_t as_set)
{
	VTY_DECLVAR_CONTEXT(bgp, bgp);
	int ret;
	struct prefix p;
	struct bgp_node *rn;
	struct bgp_aggregate *aggregate;
	uint8_t as_set_new = as_set;

	/* Convert string to prefix structure. */
	ret = str2prefix(prefix_str, &p);
	if (!ret) {
		vty_out(vty, "Malformed prefix\n");
		return CMD_WARNING_CONFIG_FAILED;
	}
	apply_mask(&p);

	if ((afi == AFI_IP && p.prefixlen == IPV4_MAX_BITLEN) ||
	    (afi == AFI_IP6 && p.prefixlen == IPV6_MAX_BITLEN)) {
		vty_out(vty,
			"Specified prefix: %s will not result in any useful aggregation, disallowing\n",
			prefix_str);
		return CMD_WARNING_CONFIG_FAILED;
	}

	/* Old configuration check. */
	rn = bgp_node_get(bgp->aggregate[afi][safi], &p);
	aggregate = bgp_node_get_bgp_aggregate_info(rn);

	if (aggregate) {
		vty_out(vty, "There is already same aggregate network.\n");
		/* try to remove the old entry */
		ret = bgp_aggregate_unset(vty, prefix_str, afi, safi);
		if (ret) {
			vty_out(vty, "Error deleting aggregate.\n");
			bgp_unlock_node(rn);
			return CMD_WARNING_CONFIG_FAILED;
		}
	}

	/* Make aggregate address structure. */
	aggregate = bgp_aggregate_new();
	aggregate->summary_only = summary_only;

	/* Network operators MUST NOT locally generate any new
	 * announcements containing AS_SET or AS_CONFED_SET. If they have
	 * announced routes with AS_SET or AS_CONFED_SET in them, then they
	 * SHOULD withdraw those routes and re-announce routes for the
	 * aggregate or component prefixes (i.e., the more-specific routes
	 * subsumed by the previously aggregated route) without AS_SET
	 * or AS_CONFED_SET in the updates.
	 */
	if (bgp->reject_as_sets == BGP_REJECT_AS_SETS_ENABLED) {
		if (as_set == AGGREGATE_AS_SET) {
			as_set_new = AGGREGATE_AS_UNSET;
			zlog_warn(
				"%s: Ignoring as-set because `bgp reject-as-sets` is enabled.",
				__func__);
			vty_out(vty,
				"Ignoring as-set because `bgp reject-as-sets` is enabled.\n");
		}
	}

	aggregate->as_set = as_set_new;
	aggregate->safi = safi;

	if (rmap) {
		XFREE(MTYPE_ROUTE_MAP_NAME, aggregate->rmap.name);
		route_map_counter_decrement(aggregate->rmap.map);
		aggregate->rmap.name =
			XSTRDUP(MTYPE_ROUTE_MAP_NAME, rmap);
		aggregate->rmap.map = route_map_lookup_by_name(rmap);
		route_map_counter_increment(aggregate->rmap.map);
	}
	bgp_node_set_bgp_aggregate_info(rn, aggregate);

	/* Aggregate address insert into BGP routing table. */
	bgp_aggregate_route(bgp, &p, afi, safi, aggregate);

	return CMD_SUCCESS;
}

DEFUN (aggregate_address,
       aggregate_address_cmd,
       "aggregate-address A.B.C.D/M [<as-set [summary-only]|summary-only [as-set]>] [route-map WORD]",
       "Configure BGP aggregate entries\n"
       "Aggregate prefix\n"
       "Generate AS set path information\n"
       "Filter more specific routes from updates\n"
       "Filter more specific routes from updates\n"
       "Generate AS set path information\n"
       "Apply route map to aggregate network\n"
       "Name of route map\n")
{
	int idx = 0;
	argv_find(argv, argc, "A.B.C.D/M", &idx);
	char *prefix = argv[idx]->arg;
	char *rmap = NULL;
	int as_set = argv_find(argv, argc, "as-set", &idx) ? AGGREGATE_AS_SET
							   : AGGREGATE_AS_UNSET;
	idx = 0;
	int summary_only = argv_find(argv, argc, "summary-only", &idx)
				   ? AGGREGATE_SUMMARY_ONLY
				   : 0;

	idx = 0;
	argv_find(argv, argc, "WORD", &idx);
	if (idx)
		rmap = argv[idx]->arg;

	return bgp_aggregate_set(vty, prefix, AFI_IP, bgp_node_safi(vty),
				 rmap, summary_only, as_set);
}

DEFUN (aggregate_address_mask,
       aggregate_address_mask_cmd,
       "aggregate-address A.B.C.D A.B.C.D [<as-set [summary-only]|summary-only [as-set]>] [route-map WORD]",
       "Configure BGP aggregate entries\n"
       "Aggregate address\n"
       "Aggregate mask\n"
       "Generate AS set path information\n"
       "Filter more specific routes from updates\n"
       "Filter more specific routes from updates\n"
       "Generate AS set path information\n"
       "Apply route map to aggregate network\n"
       "Name of route map\n")
{
	int idx = 0;
	argv_find(argv, argc, "A.B.C.D", &idx);
	char *prefix = argv[idx]->arg;
	char *mask = argv[idx + 1]->arg;
	bool rmap_found;
	char *rmap = NULL;
	int as_set = argv_find(argv, argc, "as-set", &idx) ? AGGREGATE_AS_SET
							   : AGGREGATE_AS_UNSET;
	idx = 0;
	int summary_only = argv_find(argv, argc, "summary-only", &idx)
				   ? AGGREGATE_SUMMARY_ONLY
				   : 0;

	rmap_found = argv_find(argv, argc, "WORD", &idx);
	if (rmap_found)
		rmap = argv[idx]->arg;

	char prefix_str[BUFSIZ];
	int ret = netmask_str2prefix_str(prefix, mask, prefix_str);

	if (!ret) {
		vty_out(vty, "%% Inconsistent address and mask\n");
		return CMD_WARNING_CONFIG_FAILED;
	}

	return bgp_aggregate_set(vty, prefix_str, AFI_IP, bgp_node_safi(vty),
				 rmap, summary_only, as_set);
}

DEFUN (no_aggregate_address,
       no_aggregate_address_cmd,
       "no aggregate-address A.B.C.D/M [<as-set [summary-only]|summary-only [as-set]>] [route-map WORD]",
       NO_STR
       "Configure BGP aggregate entries\n"
       "Aggregate prefix\n"
       "Generate AS set path information\n"
       "Filter more specific routes from updates\n"
       "Filter more specific routes from updates\n"
       "Generate AS set path information\n"
       "Apply route map to aggregate network\n"
       "Name of route map\n")
{
	int idx = 0;
	argv_find(argv, argc, "A.B.C.D/M", &idx);
	char *prefix = argv[idx]->arg;
	return bgp_aggregate_unset(vty, prefix, AFI_IP, bgp_node_safi(vty));
}

DEFUN (no_aggregate_address_mask,
       no_aggregate_address_mask_cmd,
       "no aggregate-address A.B.C.D A.B.C.D [<as-set [summary-only]|summary-only [as-set]>] [route-map WORD]",
       NO_STR
       "Configure BGP aggregate entries\n"
       "Aggregate address\n"
       "Aggregate mask\n"
       "Generate AS set path information\n"
       "Filter more specific routes from updates\n"
       "Filter more specific routes from updates\n"
       "Generate AS set path information\n"
       "Apply route map to aggregate network\n"
       "Name of route map\n")
{
	int idx = 0;
	argv_find(argv, argc, "A.B.C.D", &idx);
	char *prefix = argv[idx]->arg;
	char *mask = argv[idx + 1]->arg;

	char prefix_str[BUFSIZ];
	int ret = netmask_str2prefix_str(prefix, mask, prefix_str);

	if (!ret) {
		vty_out(vty, "%% Inconsistent address and mask\n");
		return CMD_WARNING_CONFIG_FAILED;
	}

	return bgp_aggregate_unset(vty, prefix_str, AFI_IP, bgp_node_safi(vty));
}

DEFUN (ipv6_aggregate_address,
       ipv6_aggregate_address_cmd,
       "aggregate-address X:X::X:X/M [<as-set [summary-only]|summary-only [as-set]>] [route-map WORD]",
       "Configure BGP aggregate entries\n"
       "Aggregate prefix\n"
       "Generate AS set path information\n"
       "Filter more specific routes from updates\n"
       "Filter more specific routes from updates\n"
       "Generate AS set path information\n"
       "Apply route map to aggregate network\n"
       "Name of route map\n")
{
	int idx = 0;
	argv_find(argv, argc, "X:X::X:X/M", &idx);
	char *prefix = argv[idx]->arg;
	char *rmap = NULL;
	bool rmap_found;
	int as_set = argv_find(argv, argc, "as-set", &idx) ? AGGREGATE_AS_SET
							   : AGGREGATE_AS_UNSET;

	idx = 0;
	int sum_only = argv_find(argv, argc, "summary-only", &idx)
			       ? AGGREGATE_SUMMARY_ONLY
			       : 0;

	rmap_found = argv_find(argv, argc, "WORD", &idx);
	if (rmap_found)
		rmap = argv[idx]->arg;

	return bgp_aggregate_set(vty, prefix, AFI_IP6, SAFI_UNICAST, rmap,
				 sum_only, as_set);
}

DEFUN (no_ipv6_aggregate_address,
       no_ipv6_aggregate_address_cmd,
       "no aggregate-address X:X::X:X/M [<as-set [summary-only]|summary-only [as-set]>] [route-map WORD]",
       NO_STR
       "Configure BGP aggregate entries\n"
       "Aggregate prefix\n"
       "Generate AS set path information\n"
       "Filter more specific routes from updates\n"
       "Filter more specific routes from updates\n"
       "Generate AS set path information\n"
       "Apply route map to aggregate network\n"
       "Name of route map\n")
{
	int idx = 0;
	argv_find(argv, argc, "X:X::X:X/M", &idx);
	char *prefix = argv[idx]->arg;
	return bgp_aggregate_unset(vty, prefix, AFI_IP6, SAFI_UNICAST);
}

/* Redistribute route treatment. */
void bgp_redistribute_add(struct bgp *bgp, struct prefix *p,
			  const union g_addr *nexthop, ifindex_t ifindex,
			  enum nexthop_types_t nhtype, uint32_t metric,
			  uint8_t type, unsigned short instance,
			  route_tag_t tag)
{
	struct bgp_path_info *new;
	struct bgp_path_info *bpi;
	struct bgp_path_info rmap_path;
	struct bgp_node *bn;
	struct attr attr;
	struct attr *new_attr;
	afi_t afi;
	route_map_result_t ret;
	struct bgp_redist *red;

	/* Make default attribute. */
	bgp_attr_default_set(&attr, BGP_ORIGIN_INCOMPLETE);
	/*
	 * This must not be NULL to satisfy Coverity SA
	 */
	assert(attr.aspath);

	switch (nhtype) {
	case NEXTHOP_TYPE_IFINDEX:
		break;
	case NEXTHOP_TYPE_IPV4:
	case NEXTHOP_TYPE_IPV4_IFINDEX:
		attr.nexthop = nexthop->ipv4;
		break;
	case NEXTHOP_TYPE_IPV6:
	case NEXTHOP_TYPE_IPV6_IFINDEX:
		attr.mp_nexthop_global = nexthop->ipv6;
		attr.mp_nexthop_len = BGP_ATTR_NHLEN_IPV6_GLOBAL;
		break;
	case NEXTHOP_TYPE_BLACKHOLE:
		switch (p->family) {
		case AF_INET:
			attr.nexthop.s_addr = INADDR_ANY;
			break;
		case AF_INET6:
			memset(&attr.mp_nexthop_global, 0,
			       sizeof(attr.mp_nexthop_global));
			attr.mp_nexthop_len = BGP_ATTR_NHLEN_IPV6_GLOBAL;
			break;
		}
		break;
	}
	attr.nh_ifindex = ifindex;

	attr.med = metric;
	attr.flag |= ATTR_FLAG_BIT(BGP_ATTR_MULTI_EXIT_DISC);
	attr.tag = tag;

	afi = family2afi(p->family);

	red = bgp_redist_lookup(bgp, afi, type, instance);
	if (red) {
		struct attr attr_new;

		/* Copy attribute for modification. */
		attr_new = attr;

		if (red->redist_metric_flag)
			attr_new.med = red->redist_metric;

		/* Apply route-map. */
		if (red->rmap.name) {
			memset(&rmap_path, 0, sizeof(struct bgp_path_info));
			rmap_path.peer = bgp->peer_self;
			rmap_path.attr = &attr_new;

			SET_FLAG(bgp->peer_self->rmap_type,
				 PEER_RMAP_TYPE_REDISTRIBUTE);

			ret = route_map_apply(red->rmap.map, p, RMAP_BGP,
					      &rmap_path);

			bgp->peer_self->rmap_type = 0;

			if (ret == RMAP_DENYMATCH) {
				/* Free uninterned attribute. */
				bgp_attr_flush(&attr_new);

				/* Unintern original. */
				aspath_unintern(&attr.aspath);
				bgp_redistribute_delete(bgp, p, type, instance);
				return;
			}
		}

		if (bgp_flag_check(bgp, BGP_FLAG_GRACEFUL_SHUTDOWN))
			bgp_attr_add_gshut_community(&attr_new);

		bn = bgp_afi_node_get(bgp->rib[afi][SAFI_UNICAST], afi,
				      SAFI_UNICAST, p, NULL);

		new_attr = bgp_attr_intern(&attr_new);

		for (bpi = bgp_node_get_bgp_path_info(bn); bpi;
		     bpi = bpi->next)
			if (bpi->peer == bgp->peer_self
			    && bpi->sub_type == BGP_ROUTE_REDISTRIBUTE)
				break;

		if (bpi) {
			/* Ensure the (source route) type is updated. */
			bpi->type = type;
			if (attrhash_cmp(bpi->attr, new_attr)
			    && !CHECK_FLAG(bpi->flags, BGP_PATH_REMOVED)) {
				bgp_attr_unintern(&new_attr);
				aspath_unintern(&attr.aspath);
				bgp_unlock_node(bn);
				return;
			} else {
				/* The attribute is changed. */
				bgp_path_info_set_flag(bn, bpi,
						       BGP_PATH_ATTR_CHANGED);

				/* Rewrite BGP route information. */
				if (CHECK_FLAG(bpi->flags, BGP_PATH_REMOVED))
					bgp_path_info_restore(bn, bpi);
				else
					bgp_aggregate_decrement(
						bgp, p, bpi, afi, SAFI_UNICAST);
				bgp_attr_unintern(&bpi->attr);
				bpi->attr = new_attr;
				bpi->uptime = bgp_clock();

				/* Process change. */
				bgp_aggregate_increment(bgp, p, bpi, afi,
							SAFI_UNICAST);
				bgp_process(bgp, bn, afi, SAFI_UNICAST);
				bgp_unlock_node(bn);
				aspath_unintern(&attr.aspath);

				if ((bgp->inst_type == BGP_INSTANCE_TYPE_VRF)
				    || (bgp->inst_type
					== BGP_INSTANCE_TYPE_DEFAULT)) {

					vpn_leak_from_vrf_update(
						bgp_get_default(), bgp, bpi);
				}
				return;
			}
		}

		new = info_make(type, BGP_ROUTE_REDISTRIBUTE, instance,
				bgp->peer_self, new_attr, bn);
		SET_FLAG(new->flags, BGP_PATH_VALID);

		bgp_aggregate_increment(bgp, p, new, afi, SAFI_UNICAST);
		bgp_path_info_add(bn, new);
		bgp_unlock_node(bn);
		bgp_process(bgp, bn, afi, SAFI_UNICAST);

		if ((bgp->inst_type == BGP_INSTANCE_TYPE_VRF)
		    || (bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {

			vpn_leak_from_vrf_update(bgp_get_default(), bgp, new);
		}
	}

	/* Unintern original. */
	aspath_unintern(&attr.aspath);
}

void bgp_redistribute_delete(struct bgp *bgp, struct prefix *p, uint8_t type,
			     unsigned short instance)
{
	afi_t afi;
	struct bgp_node *rn;
	struct bgp_path_info *pi;
	struct bgp_redist *red;

	afi = family2afi(p->family);

	red = bgp_redist_lookup(bgp, afi, type, instance);
	if (red) {
		rn = bgp_afi_node_get(bgp->rib[afi][SAFI_UNICAST], afi,
				      SAFI_UNICAST, p, NULL);

		for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next)
			if (pi->peer == bgp->peer_self && pi->type == type)
				break;

		if (pi) {
			if ((bgp->inst_type == BGP_INSTANCE_TYPE_VRF)
			    || (bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {

				vpn_leak_from_vrf_withdraw(bgp_get_default(),
							   bgp, pi);
			}
			bgp_aggregate_decrement(bgp, p, pi, afi, SAFI_UNICAST);
			bgp_path_info_delete(rn, pi);
			bgp_process(bgp, rn, afi, SAFI_UNICAST);
		}
		bgp_unlock_node(rn);
	}
}

/* Withdraw specified route type's route. */
void bgp_redistribute_withdraw(struct bgp *bgp, afi_t afi, int type,
			       unsigned short instance)
{
	struct bgp_node *rn;
	struct bgp_path_info *pi;
	struct bgp_table *table;

	table = bgp->rib[afi][SAFI_UNICAST];

	for (rn = bgp_table_top(table); rn; rn = bgp_route_next(rn)) {
		for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next)
			if (pi->peer == bgp->peer_self && pi->type == type
			    && pi->instance == instance)
				break;

		if (pi) {
			if ((bgp->inst_type == BGP_INSTANCE_TYPE_VRF)
			    || (bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)) {

				vpn_leak_from_vrf_withdraw(bgp_get_default(),
							   bgp, pi);
			}
			bgp_aggregate_decrement(bgp, &rn->p, pi, afi,
						SAFI_UNICAST);
			bgp_path_info_delete(rn, pi);
			bgp_process(bgp, rn, afi, SAFI_UNICAST);
		}
	}
}

/* Static function to display route. */
static void route_vty_out_route(struct prefix *p, struct vty *vty,
				json_object *json)
{
	int len = 0;
	char buf[BUFSIZ];
	char buf2[BUFSIZ];

	if (p->family == AF_INET) {
		if (!json) {
			len = vty_out(
				vty, "%s/%d",
				inet_ntop(p->family, &p->u.prefix, buf, BUFSIZ),
				p->prefixlen);
		} else {
			json_object_string_add(json, "prefix",
					       inet_ntop(p->family,
							 &p->u.prefix, buf,
							 BUFSIZ));
			json_object_int_add(json, "prefixLen", p->prefixlen);
			prefix2str(p, buf2, PREFIX_STRLEN);
			json_object_string_add(json, "network", buf2);
		}
	} else if (p->family == AF_ETHERNET) {
		prefix2str(p, buf, PREFIX_STRLEN);
		len = vty_out(vty, "%s", buf);
	} else if (p->family == AF_EVPN) {
		if (!json)
			len = vty_out(
				vty, "%s",
				bgp_evpn_route2str((struct prefix_evpn *)p, buf,
						   BUFSIZ));
		else
			bgp_evpn_route2json((struct prefix_evpn *)p, json);
	} else if (p->family == AF_FLOWSPEC) {
		route_vty_out_flowspec(vty, p, NULL,
				       json ?
				       NLRI_STRING_FORMAT_JSON_SIMPLE :
				       NLRI_STRING_FORMAT_MIN, json);
	} else {
		if (!json)
			len = vty_out(
				vty, "%s/%d",
				inet_ntop(p->family, &p->u.prefix, buf, BUFSIZ),
				p->prefixlen);
		else {
			json_object_string_add(json, "prefix",
					       inet_ntop(p->family,
							 &p->u.prefix, buf,
							 BUFSIZ));
			json_object_int_add(json, "prefixLen", p->prefixlen);
			prefix2str(p, buf2, PREFIX_STRLEN);
			json_object_string_add(json, "network", buf2);
		}
	}

	if (!json) {
		len = 17 - len;
		if (len < 1)
			vty_out(vty, "\n%*s", 20, " ");
		else
			vty_out(vty, "%*s", len, " ");
	}
}

enum bgp_display_type {
	normal_list,
};

/* Print the short form route status for a bgp_path_info */
|
2018-10-02 22:41:30 +02:00
|
|
|
static void route_vty_short_status_out(struct vty *vty,
|
2018-10-03 00:34:03 +02:00
|
|
|
struct bgp_path_info *path,
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object *json_path)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
if (json_path) {
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Route status display. */
|
2018-10-03 00:34:03 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_REMOVED))
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_boolean_true_add(json_path, "removed");
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_STALE))
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_boolean_true_add(json_path, "stale");
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
if (path->extra && path->extra->suppress)
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_boolean_true_add(json_path, "suppressed");
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_VALID)
|
|
|
|
&& !CHECK_FLAG(path->flags, BGP_PATH_HISTORY))
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_boolean_true_add(json_path, "valid");
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Selected */
|
2018-10-03 00:34:03 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_HISTORY))
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_boolean_true_add(json_path, "history");
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_DAMPED))
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_boolean_true_add(json_path, "damped");
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_SELECTED))
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_boolean_true_add(json_path, "bestpath");
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_MULTIPATH))
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_boolean_true_add(json_path, "multipath");
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Internal route. */
|
2018-10-03 00:34:03 +02:00
|
|
|
if ((path->peer->as)
|
|
|
|
&& (path->peer->as == path->peer->local_as))
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_string_add(json_path, "pathFrom",
|
|
|
|
"internal");
|
|
|
|
else
|
|
|
|
json_object_string_add(json_path, "pathFrom",
|
|
|
|
"external");
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
return;
|
|
|
|
}
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Route status display. */
|
2018-10-03 00:34:03 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_REMOVED))
|
2017-07-17 14:03:14 +02:00
|
|
|
vty_out(vty, "R");
|
2018-10-03 00:34:03 +02:00
|
|
|
else if (CHECK_FLAG(path->flags, BGP_PATH_STALE))
|
2017-07-17 14:03:14 +02:00
|
|
|
vty_out(vty, "S");
|
2018-10-03 00:34:03 +02:00
|
|
|
	else if (path->extra && path->extra->suppress)
		vty_out(vty, "s");
	else if (CHECK_FLAG(path->flags, BGP_PATH_VALID)
		 && !CHECK_FLAG(path->flags, BGP_PATH_HISTORY))
		vty_out(vty, "*");
	else
		vty_out(vty, " ");

	/* Selected */
	if (CHECK_FLAG(path->flags, BGP_PATH_HISTORY))
		vty_out(vty, "h");
	else if (CHECK_FLAG(path->flags, BGP_PATH_DAMPED))
		vty_out(vty, "d");
	else if (CHECK_FLAG(path->flags, BGP_PATH_SELECTED))
		vty_out(vty, ">");
	else if (CHECK_FLAG(path->flags, BGP_PATH_MULTIPATH))
		vty_out(vty, "=");
	else
		vty_out(vty, " ");

	/* Internal route. */
	if (path->peer && (path->peer->as)
	    && (path->peer->as == path->peer->local_as))
		vty_out(vty, "i");
	else
		vty_out(vty, " ");
}

static char *bgp_nexthop_hostname(struct peer *peer, struct attr *attr)
{
	if (peer->hostname && bgp_flag_check(peer->bgp, BGP_FLAG_SHOW_HOSTNAME)
	    && !(attr->flag & ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID)))
		return peer->hostname;

	return NULL;
}

/* called from terminal list command */
void route_vty_out(struct vty *vty, struct prefix *p,
		   struct bgp_path_info *path, int display, safi_t safi,
		   json_object *json_paths)
{
	struct attr *attr = path->attr;
	json_object *json_path = NULL;
	json_object *json_nexthops = NULL;
	json_object *json_nexthop_global = NULL;
	json_object *json_nexthop_ll = NULL;
	json_object *json_ext_community = NULL;
	char vrf_id_str[VRF_NAMSIZ] = {0};
	bool nexthop_self =
		CHECK_FLAG(path->flags, BGP_PATH_ANNC_NH_SELF) ? true : false;
	bool nexthop_othervrf = false;
	vrf_id_t nexthop_vrfid = VRF_DEFAULT;
	const char *nexthop_vrfname = VRF_DEFAULT_NAME;
	char *nexthop_hostname = bgp_nexthop_hostname(path->peer, attr);

	if (json_paths)
		json_path = json_object_new_object();

	/* short status lead text */
	route_vty_short_status_out(vty, path, json_path);

	if (!json_paths) {
		/* print prefix and mask */
		if (!display)
			route_vty_out_route(p, vty, json_path);
		else
			vty_out(vty, "%*s", 17, " ");
	} else {
		route_vty_out_route(p, vty, json_path);
	}

	/*
	 * If vrf id of nexthop is different from that of prefix,
	 * set up printable string to append
	 */
	if (path->extra && path->extra->bgp_orig) {
		const char *self = "";

		if (nexthop_self)
			self = "<";

		nexthop_othervrf = true;
		nexthop_vrfid = path->extra->bgp_orig->vrf_id;

		if (path->extra->bgp_orig->vrf_id == VRF_UNKNOWN)
			snprintf(vrf_id_str, sizeof(vrf_id_str),
				 "@%s%s", VRFID_NONE_STR, self);
		else
			snprintf(vrf_id_str, sizeof(vrf_id_str), "@%u%s",
				 path->extra->bgp_orig->vrf_id, self);

		if (path->extra->bgp_orig->inst_type
		    != BGP_INSTANCE_TYPE_DEFAULT)
			nexthop_vrfname = path->extra->bgp_orig->name;
	} else {
		const char *self = "";

		if (nexthop_self)
			self = "<";

		snprintf(vrf_id_str, sizeof(vrf_id_str), "%s", self);
	}

	/*
	 * For ENCAP and EVPN routes, nexthop address family is not
	 * necessarily the same as the prefix address family.
	 * Both SAFI_MPLS_VPN and SAFI_ENCAP use the MP nexthop field.
	 * EVPN routes are also exchanged with a MP nexthop. Currently,
	 * this is only IPv4, the value will be present in either
	 * attr->nexthop or attr->mp_nexthop_global_in
	 */
	if ((safi == SAFI_ENCAP) || (safi == SAFI_MPLS_VPN)) {
		char buf[BUFSIZ];
		char nexthop[128];
		int af = NEXTHOP_FAMILY(attr->mp_nexthop_len);

		switch (af) {
		case AF_INET:
			snprintf(nexthop, sizeof(nexthop), "%s",
				 inet_ntop(af, &attr->mp_nexthop_global_in,
					   buf, BUFSIZ));
			break;
		case AF_INET6:
			snprintf(nexthop, sizeof(nexthop), "%s",
				 inet_ntop(af, &attr->mp_nexthop_global, buf,
					   BUFSIZ));
			break;
		default:
			snprintf(nexthop, sizeof(nexthop), "?");
			break;
		}

		if (json_paths) {
			json_nexthop_global = json_object_new_object();

			json_object_string_add(json_nexthop_global, "ip",
					       nexthop);

			if (nexthop_hostname)
				json_object_string_add(json_nexthop_global,
						       "hostname",
						       nexthop_hostname);

			json_object_string_add(json_nexthop_global, "afi",
					       (af == AF_INET) ? "ipv4"
							       : "ipv6");
			json_object_boolean_true_add(json_nexthop_global,
						     "used");
		} else
			vty_out(vty, "%s%s",
				nexthop_hostname ? nexthop_hostname : nexthop,
				vrf_id_str);
	} else if (safi == SAFI_EVPN) {
		if (json_paths) {
			json_nexthop_global = json_object_new_object();

			json_object_string_add(json_nexthop_global, "ip",
					       inet_ntoa(attr->nexthop));

			if (nexthop_hostname)
				json_object_string_add(json_nexthop_global,
						       "hostname",
						       nexthop_hostname);

			json_object_string_add(json_nexthop_global, "afi",
					       "ipv4");
			json_object_boolean_true_add(json_nexthop_global,
						     "used");
		} else
			vty_out(vty, "%-16s%s",
				nexthop_hostname ? nexthop_hostname
						 : inet_ntoa(attr->nexthop),
				vrf_id_str);
	} else if (safi == SAFI_FLOWSPEC) {
		if (attr->nexthop.s_addr != 0) {
			if (json_paths) {
				json_nexthop_global = json_object_new_object();

				json_object_string_add(json_nexthop_global,
						       "afi", "ipv4");
				json_object_string_add(
					json_nexthop_global, "ip",
					inet_ntoa(attr->nexthop));

				if (nexthop_hostname)
					json_object_string_add(
						json_nexthop_global, "hostname",
						nexthop_hostname);

				json_object_boolean_true_add(
					json_nexthop_global,
					"used");
			} else {
				vty_out(vty, "%-16s",
					nexthop_hostname
						? nexthop_hostname
						: inet_ntoa(attr->nexthop));
			}
		}
	} else if (p->family == AF_INET && !BGP_ATTR_NEXTHOP_AFI_IP6(attr)) {
		if (json_paths) {
			json_nexthop_global = json_object_new_object();

			json_object_string_add(json_nexthop_global, "ip",
					       inet_ntoa(attr->nexthop));

			if (nexthop_hostname)
				json_object_string_add(json_nexthop_global,
						       "hostname",
						       nexthop_hostname);

			json_object_string_add(json_nexthop_global, "afi",
					       "ipv4");
			json_object_boolean_true_add(json_nexthop_global,
						     "used");
		} else {
			char buf[BUFSIZ];

			snprintf(buf, sizeof(buf), "%s%s",
				 nexthop_hostname ? nexthop_hostname
						  : inet_ntoa(attr->nexthop),
				 vrf_id_str);
			vty_out(vty, "%-16s", buf);
		}
	}

	/* IPv6 Next Hop */
	else if (p->family == AF_INET6 || BGP_ATTR_NEXTHOP_AFI_IP6(attr)) {
		int len;
		char buf[BUFSIZ];

		if (json_paths) {
			json_nexthop_global = json_object_new_object();
			json_object_string_add(
				json_nexthop_global, "ip",
				inet_ntop(AF_INET6, &attr->mp_nexthop_global,
					  buf, BUFSIZ));

			if (nexthop_hostname)
				json_object_string_add(json_nexthop_global,
						       "hostname",
						       nexthop_hostname);

			json_object_string_add(json_nexthop_global, "afi",
					       "ipv6");
			json_object_string_add(json_nexthop_global, "scope",
					       "global");

			/* We display both LL & GL if both have been
			 * received */
			if ((attr->mp_nexthop_len
			     == BGP_ATTR_NHLEN_IPV6_GLOBAL_AND_LL)
			    || (path->peer->conf_if)) {
				json_nexthop_ll = json_object_new_object();
				json_object_string_add(
					json_nexthop_ll, "ip",
					inet_ntop(AF_INET6,
						  &attr->mp_nexthop_local, buf,
						  BUFSIZ));

				if (nexthop_hostname)
					json_object_string_add(
						json_nexthop_ll, "hostname",
						nexthop_hostname);

				json_object_string_add(json_nexthop_ll, "afi",
						       "ipv6");
				json_object_string_add(json_nexthop_ll, "scope",
						       "link-local");

				if ((IPV6_ADDR_CMP(&attr->mp_nexthop_global,
						   &attr->mp_nexthop_local)
				     != 0)
				    && !attr->mp_nexthop_prefer_global)
					json_object_boolean_true_add(
						json_nexthop_ll, "used");
				else
					json_object_boolean_true_add(
						json_nexthop_global, "used");
			} else
				json_object_boolean_true_add(
					json_nexthop_global, "used");
		} else {
			/* Display LL if LL/Global both in table unless
			 * prefer-global is set */
			if (((attr->mp_nexthop_len
			      == BGP_ATTR_NHLEN_IPV6_GLOBAL_AND_LL)
			     && !attr->mp_nexthop_prefer_global)
			    || (path->peer->conf_if)) {
				if (path->peer->conf_if) {
					len = vty_out(vty, "%s",
						      path->peer->conf_if);
					/* len of IPv6 addr + max len of def
					 * ifname */
					len = 16 - len;

					if (len < 1)
						vty_out(vty, "\n%*s", 36, " ");
					else
						vty_out(vty, "%*s", len, " ");
				} else {
					len = vty_out(
						vty, "%s%s",
						nexthop_hostname
							? nexthop_hostname
							: inet_ntop(
								AF_INET6,
								&attr->mp_nexthop_local,
								buf, BUFSIZ),
						vrf_id_str);
					len = 16 - len;

					if (len < 1)
						vty_out(vty, "\n%*s", 36, " ");
					else
						vty_out(vty, "%*s", len, " ");
				}
			} else {
				len = vty_out(
					vty, "%s%s",
					nexthop_hostname
						? nexthop_hostname
						: inet_ntop(
							AF_INET6,
							&attr->mp_nexthop_global,
							buf, BUFSIZ),
					vrf_id_str);
				len = 16 - len;

				if (len < 1)
					vty_out(vty, "\n%*s", 36, " ");
				else
					vty_out(vty, "%*s", len, " ");
			}
		}
	}

	/* MED/Metric */
	if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_MULTI_EXIT_DISC))
		if (json_paths) {

			/*
			 * Adding "metric" field to match with corresponding
			 * CLI. "med" will be deprecated in future.
			 */
			json_object_int_add(json_path, "med", attr->med);
			json_object_int_add(json_path, "metric", attr->med);
		} else
			vty_out(vty, "%10u", attr->med);
	else if (!json_paths)
		vty_out(vty, "          ");

	/* Local Pref */
	if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF))
		if (json_paths) {

			/*
			 * Adding "locPrf" field to match with corresponding
			 * CLI. "localPref" will be deprecated in future.
			 */
			json_object_int_add(json_path, "localpref",
					    attr->local_pref);
			json_object_int_add(json_path, "locPrf",
					    attr->local_pref);
		} else
			vty_out(vty, "%7u", attr->local_pref);
	else if (!json_paths)
		vty_out(vty, "       ");

	if (json_paths)
		json_object_int_add(json_path, "weight", attr->weight);
	else
		vty_out(vty, "%7u ", attr->weight);

	if (json_paths) {
		char buf[BUFSIZ];
		json_object_string_add(
			json_path, "peerId",
			sockunion2str(&path->peer->su, buf, SU_ADDRSTRLEN));
	}

	/* Print aspath */
	if (attr->aspath) {
		if (json_paths) {

			/*
			 * Adding "path" field to match with corresponding
			 * CLI. "aspath" will be deprecated in future.
			 */
			json_object_string_add(json_path, "aspath",
					       attr->aspath->str);
			json_object_string_add(json_path, "path",
					       attr->aspath->str);
		} else
			aspath_print_vty(vty, "%s", attr->aspath, " ");
	}

	/* Print origin */
	if (json_paths)
		json_object_string_add(json_path, "origin",
				       bgp_origin_long_str[attr->origin]);
	else
		vty_out(vty, "%s", bgp_origin_str[attr->origin]);

	if (json_paths) {
		if (safi == SAFI_EVPN &&
		    attr->flag & ATTR_FLAG_BIT(BGP_ATTR_EXT_COMMUNITIES)) {
			json_ext_community = json_object_new_object();
			json_object_string_add(json_ext_community,
					       "string",
					       attr->ecommunity->str);
			json_object_object_add(json_path,
					       "extendedCommunity",
					       json_ext_community);
		}

		if (nexthop_self)
			json_object_boolean_true_add(json_path,
						     "announceNexthopSelf");
		if (nexthop_othervrf) {
			json_object_string_add(json_path, "nhVrfName",
					       nexthop_vrfname);

			json_object_int_add(json_path, "nhVrfId",
					    ((nexthop_vrfid == VRF_UNKNOWN)
						     ? -1
						     : (int)nexthop_vrfid));
		}
	}

	if (json_paths) {
		if (json_nexthop_global || json_nexthop_ll) {
			json_nexthops = json_object_new_array();
Last update: Fri May 8 21:23:41 2015
100 200 300 400 500 40
40.1.1.2 from 40.1.1.2 (40.0.0.8)
Origin IGP, metric 0, localpref 100, valid, external, best
Community: 1:1 2:2 3:3 4:4 10:10 20:20
Extended Community: RT:100:100 RT:200:200 RT:300:300 RT:400:400 SoO:44:44 SoO:55:55 SoO:66:66
Last update: Fri May 8 21:23:41 2015
{
"advertised-to": {
"10.0.0.2": {
"hostname": "r2"
},
"10.0.0.3": {
"hostname": "r3"
},
"10.0.0.4": {
"hostname": "r4"
},
"20.1.1.6": {
"hostname": "r6"
},
"20.1.1.7": {
"hostname": "r7"
},
"40.1.1.10": {
"hostname": "r10"
},
"40.1.1.2": {
"hostname": "r8"
},
"40.1.1.6": {
"hostname": "r9"
}
},
"paths": [
{
"aspath": {
"length": 6,
"segments": [
{
"list": [
100,
200,
300,
400,
500,
40
],
"type": "as-sequence"
}
],
"string": "100 200 300 400 500 40"
},
"community": {
"list": [
"1:1",
"2:2",
"3:3",
"4:4",
"10:10",
"20:20"
],
"string": "1:1 2:2 3:3 4:4 10:10 20:20"
},
"extended-community": {
"string": "RT:100:100 RT:200:200 RT:300:300 RT:400:400 SoO:44:44 SoO:55:55 SoO:66:66"
},
"last-update": {
"epoch": 1431120222,
"string": "Fri May 8 21:23:42 2015\n"
},
"localpref": 100,
"med": 0,
"nexthops": [
{
"accessible": true,
"afi": "ipv4",
"ip": "40.1.1.6",
"metric": 0,
"used": true
}
],
"origin": "IGP",
"peer": {
"hostname": "r9",
"peer-id": "40.1.1.6",
"router-id": "40.0.0.9",
"type": "external"
},
"valid": true
},
{
"aspath": {
"length": 6,
"segments": [
{
"list": [
100,
200,
300,
400,
500,
40
],
"type": "as-sequence"
}
],
"string": "100 200 300 400 500 40"
},
"community": {
"list": [
"1:1",
"2:2",
"3:3",
"4:4",
"10:10",
"20:20"
],
"string": "1:1 2:2 3:3 4:4 10:10 20:20"
},
"extended-community": {
"string": "RT:100:100 RT:200:200 RT:300:300 RT:400:400 SoO:44:44 SoO:55:55 SoO:66:66"
},
"last-update": {
"epoch": 1431120222,
"string": "Fri May 8 21:23:42 2015\n"
},
"localpref": 100,
"med": 0,
"nexthops": [
{
"accessible": true,
"afi": "ipv4",
"ip": "40.1.1.10",
"metric": 0,
"used": true
}
],
"origin": "IGP",
"peer": {
"hostname": "r10",
"peer-id": "40.1.1.10",
"router-id": "40.0.0.10",
"type": "external"
},
"valid": true
},
{
"aspath": {
"length": 6,
"segments": [
{
"list": [
100,
200,
300,
400,
500,
40
],
"type": "as-sequence"
}
],
"string": "100 200 300 400 500 40"
},
"bestpath": {
"overall": true
},
"community": {
"list": [
"1:1",
"2:2",
"3:3",
"4:4",
"10:10",
"20:20"
],
"string": "1:1 2:2 3:3 4:4 10:10 20:20"
},
"extended-community": {
"string": "RT:100:100 RT:200:200 RT:300:300 RT:400:400 SoO:44:44 SoO:55:55 SoO:66:66"
},
"last-update": {
"epoch": 1431120222,
"string": "Fri May 8 21:23:42 2015\n"
},
"localpref": 100,
"med": 0,
"nexthops": [
{
"accessible": true,
"afi": "ipv4",
"ip": "40.1.1.2",
"metric": 0,
"used": true
}
],
"origin": "IGP",
"peer": {
"hostname": "r8",
"peer-id": "40.1.1.2",
"router-id": "40.0.0.8",
"type": "external"
},
"valid": true
}
],
"prefix": "40.3.86.0",
"prefixlen": 24
}
			if (json_nexthop_global)
				json_object_array_add(json_nexthops,
						      json_nexthop_global);
			if (json_nexthop_ll)
				json_object_array_add(json_nexthops,
						      json_nexthop_ll);
			json_object_object_add(json_path, "nexthops",
					       json_nexthops);
		}

		json_object_array_add(json_paths, json_path);
	} else {
		vty_out(vty, "\n");

		if (safi == SAFI_EVPN &&
		    attr->flag & ATTR_FLAG_BIT(BGP_ATTR_EXT_COMMUNITIES)) {
			vty_out(vty, "%*s", 20, " ");
			vty_out(vty, "%s\n", attr->ecommunity->str);
		}
bgpd: add L3/L2VPN Virtual Network Control feature
This feature adds an L3 & L2 VPN application that makes use of the VPN
and Encap SAFIs. This code is currently used to support IETF NVO3 style
operation. In NVO3 terminology it provides the Network Virtualization
Authority (NVA) and the ability to import/export IP prefixes and MAC
addresses from Network Virtualization Edges (NVEs). The code supports
per-NVE tables.
The NVE-NVA protocol used to communicate routing and Ethernet / Layer 2
(L2) forwarding information between NVAs and NVEs is referred to as the
Remote Forwarder Protocol (RFP). OpenFlow is an example RFP. For
general background on NVO3 and RFP concepts see [1]. For information on
Openflow see [2].
RFPs are integrated with BGP via the RF API contained in the new "rfapi"
BGP sub-directory. Currently, only a simple example RFP is included in
Quagga. Developers may use this example as a starting point to integrate
Quagga with an RFP of their choosing, e.g., OpenFlow. The RFAPI code
also supports the ability to import/export routing information between
VNC and customer edge routers (CEs) operating within a virtual
network. Import/export may take place between BGP views or to the
default zebra VRF.
BGP, with IP VPNs and Tunnel Encapsulation, is used to distribute VPN
information between NVAs. BGP based IP VPN support is defined in
RFC4364, BGP/MPLS IP Virtual Private Networks (VPNs), and RFC4659,
BGP-MPLS IP Virtual Private Network (VPN) Extension for IPv6 VPN. Use
of both the Encapsulation Subsequent Address Family Identifier (SAFI)
and the Tunnel Encapsulation Attribute, RFC5512, The BGP Encapsulation
Subsequent Address Family Identifier (SAFI) and the BGP Tunnel
Encapsulation Attribute, are supported. MAC address distribution does
not follow any standard BGP encoding, although it was inspired by the
early IETF EVPN concepts.
The feature is conditionally compiled and disabled by default.
Use the --enable-bgp-vnc configure option to enable.
The majority of this code was authored by G. Paul Ziemba
<paulz@labn.net>.
[1] http://tools.ietf.org/html/draft-ietf-nvo3-nve-nva-cp-req
[2] https://www.opennetworking.org/sdn-resources/technical-library
Now includes changes needed to merge with cmaster-next.
2016-05-07 20:18:56 +02:00
#if ENABLE_BGP_VNC
		/* prints an additional line, indented, with VNC info, if
		 * present */
		if ((safi == SAFI_MPLS_VPN) || (safi == SAFI_ENCAP))
			rfapi_vty_out_vncinfo(vty, p, path, safi);
#endif
	}
}
2002-12-13 21:15:29 +01:00
|
|
|
|
|
|
|
/* called from terminal list command */
|
2017-07-17 14:03:14 +02:00
|
|
|
void route_vty_out_tmp(struct vty *vty, struct prefix *p, struct attr *attr,
|
2018-08-29 14:19:54 +02:00
|
|
|
safi_t safi, bool use_json, json_object *json_ar)
|
2017-07-17 14:03:14 +02:00
|
|
|
{
|
|
|
|
json_object *json_status = NULL;
|
|
|
|
json_object *json_net = NULL;
|
|
|
|
char buff[BUFSIZ];
|
2019-09-27 20:45:38 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Route status display. */
|
|
|
|
if (use_json) {
|
|
|
|
json_status = json_object_new_object();
|
|
|
|
json_net = json_object_new_object();
|
|
|
|
} else {
|
|
|
|
vty_out(vty, "*");
|
|
|
|
vty_out(vty, ">");
|
|
|
|
vty_out(vty, " ");
|
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* print prefix and mask */
|
2018-10-05 23:30:59 +02:00
|
|
|
if (use_json) {
|
2019-09-27 20:45:38 +02:00
|
|
|
if (safi == SAFI_EVPN)
|
|
|
|
bgp_evpn_route2json((struct prefix_evpn *)p, json_net);
|
|
|
|
else if (p->family == AF_INET || p->family == AF_INET6) {
|
|
|
|
json_object_string_add(
|
|
|
|
json_net, "addrPrefix",
|
|
|
|
inet_ntop(p->family, &p->u.prefix, buff,
|
|
|
|
BUFSIZ));
|
|
|
|
json_object_int_add(json_net, "prefixLen",
|
|
|
|
p->prefixlen);
|
|
|
|
prefix2str(p, buff, PREFIX_STRLEN);
|
|
|
|
json_object_string_add(json_net, "network", buff);
|
|
|
|
}
|
2018-10-05 23:30:59 +02:00
|
|
|
} else
|
2017-07-21 02:42:20 +02:00
|
|
|
route_vty_out_route(p, vty, NULL);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
|
|
|
/* Print attribute */
|
|
|
|
if (attr) {
|
|
|
|
if (use_json) {
|
|
|
|
if (p->family == AF_INET
|
|
|
|
&& (safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP
|
|
|
|
|| !BGP_ATTR_NEXTHOP_AFI_IP6(attr))) {
|
2019-09-27 20:45:38 +02:00
|
|
|
if (safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP)
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_string_add(
|
|
|
|
json_net, "nextHop",
|
|
|
|
inet_ntoa(
|
|
|
|
attr->mp_nexthop_global_in));
|
|
|
|
else
|
|
|
|
					json_object_string_add(
						json_net, "nextHop",
						inet_ntoa(attr->nexthop));
			} else if (p->family == AF_INET6
				   || BGP_ATTR_NEXTHOP_AFI_IP6(attr)) {
				char buf[BUFSIZ];

				json_object_string_add(
					json_net, "nextHopGlobal",
					inet_ntop(AF_INET6,
						  &attr->mp_nexthop_global, buf,
						  BUFSIZ));
			} else if (p->family == AF_EVPN &&
				   !BGP_ATTR_NEXTHOP_AFI_IP6(attr))
				json_object_string_add(json_net,
					"nextHop", inet_ntoa(
					attr->mp_nexthop_global_in));

			if (attr->flag
			    & ATTR_FLAG_BIT(BGP_ATTR_MULTI_EXIT_DISC))
				json_object_int_add(json_net, "metric",
						    attr->med);

			if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF)) {
				/*
				 * Adding "locPrf" field to match with
				 * corresponding CLI. "localPref" will be
				 * deprecated in future.
				 */
				json_object_int_add(json_net, "localPref",
						    attr->local_pref);
				json_object_int_add(json_net, "locPrf",
						    attr->local_pref);
			}

			json_object_int_add(json_net, "weight", attr->weight);

			/* Print aspath */
			if (attr->aspath) {
				/*
				 * Adding "path" field to match with
				 * corresponding CLI. "asPath" will be
				 * deprecated in future.
				 */
				json_object_string_add(json_net, "asPath",
						       attr->aspath->str);
				json_object_string_add(json_net, "path",
						       attr->aspath->str);
			}

			/* Print origin */
			json_object_string_add(json_net, "bgpOriginCode",
					       bgp_origin_str[attr->origin]);
		} else {
			if (p->family == AF_INET
			    && (safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP
				|| safi == SAFI_EVPN
				|| !BGP_ATTR_NEXTHOP_AFI_IP6(attr))) {
				if (safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP
				    || safi == SAFI_EVPN)
					vty_out(vty, "%-16s",
						inet_ntoa(
							attr->mp_nexthop_global_in));
				else
					vty_out(vty, "%-16s",
						inet_ntoa(attr->nexthop));
			} else if (p->family == AF_INET6
				   || BGP_ATTR_NEXTHOP_AFI_IP6(attr)) {
				int len;
				char buf[BUFSIZ];

				len = vty_out(
					vty, "%s",
					inet_ntop(AF_INET6,
						  &attr->mp_nexthop_global, buf,
						  BUFSIZ));
				len = 16 - len;
				if (len < 1)
					vty_out(vty, "\n%*s", 36, " ");
				else
					vty_out(vty, "%*s", len, " ");
			}
			if (attr->flag
			    & ATTR_FLAG_BIT(BGP_ATTR_MULTI_EXIT_DISC))
				vty_out(vty, "%10u", attr->med);
			else
				vty_out(vty, "          ");

			if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF))
				vty_out(vty, "%7u", attr->local_pref);
			else
				vty_out(vty, "       ");

			vty_out(vty, "%7u ", attr->weight);

			/* Print aspath */
			if (attr->aspath)
				aspath_print_vty(vty, "%s", attr->aspath, " ");

			/* Print origin */
			vty_out(vty, "%s", bgp_origin_str[attr->origin]);
		}
	}
	if (use_json) {
		json_object_boolean_true_add(json_status, "*");
		json_object_boolean_true_add(json_status, ">");
		json_object_object_add(json_net, "appliedStatusSymbols",
				       json_status);

		prefix2str(p, buff, PREFIX_STRLEN);
		json_object_object_add(json_ar, buff, json_net);
	} else
		vty_out(vty, "\n");
}

void route_vty_out_tag(struct vty *vty, struct prefix *p,
		       struct bgp_path_info *path, int display, safi_t safi,
		       json_object *json)
{
	json_object *json_out = NULL;
	struct attr *attr;
	mpls_label_t label = MPLS_INVALID_LABEL;

	if (!path->extra)
		return;

	if (json)
		json_out = json_object_new_object();

	/* short status lead text */
	route_vty_short_status_out(vty, path, json_out);

	/* print prefix and mask */
	if (json == NULL) {
		if (!display)
			route_vty_out_route(p, vty, NULL);
		else
			vty_out(vty, "%*s", 17, " ");
	}

	/* Print attribute */
	attr = path->attr;
	if (((p->family == AF_INET)
	     && ((safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP)))
	    || (safi == SAFI_EVPN && !BGP_ATTR_NEXTHOP_AFI_IP6(attr))
	    || (!BGP_ATTR_NEXTHOP_AFI_IP6(attr))) {
		if (safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP
		    || safi == SAFI_EVPN) {
			if (json)
				json_object_string_add(
					json_out, "mpNexthopGlobalIn",
					inet_ntoa(attr->mp_nexthop_global_in));
			else
				vty_out(vty, "%-16s",
					inet_ntoa(attr->mp_nexthop_global_in));
		} else {
			if (json)
				json_object_string_add(
					json_out, "nexthop",
					inet_ntoa(attr->nexthop));
			else
				vty_out(vty, "%-16s", inet_ntoa(attr->nexthop));
		}
	} else if (((p->family == AF_INET6)
		    && ((safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP)))
		   || (safi == SAFI_EVPN && BGP_ATTR_NEXTHOP_AFI_IP6(attr))
		   || (BGP_ATTR_NEXTHOP_AFI_IP6(attr))) {
		char buf_a[512];

		if (attr->mp_nexthop_len == BGP_ATTR_NHLEN_IPV6_GLOBAL) {
			if (json)
				json_object_string_add(
					json_out, "mpNexthopGlobalIn",
					inet_ntop(AF_INET6,
						  &attr->mp_nexthop_global,
						  buf_a, sizeof(buf_a)));
			else
				vty_out(vty, "%s",
					inet_ntop(AF_INET6,
						  &attr->mp_nexthop_global,
						  buf_a, sizeof(buf_a)));
		} else if (attr->mp_nexthop_len
			   == BGP_ATTR_NHLEN_IPV6_GLOBAL_AND_LL) {
			snprintfrr(buf_a, sizeof(buf_a), "%pI6(%pI6)",
				   &attr->mp_nexthop_global,
				   &attr->mp_nexthop_local);
			if (json)
				json_object_string_add(json_out,
						       "mpNexthopGlobalLocal",
						       buf_a);
			else
				vty_out(vty, "%s", buf_a);
		}
	}

	label = decode_label(&path->extra->label[0]);

	if (bgp_is_valid_label(&label)) {
		if (json) {
			json_object_int_add(json_out, "notag", label);
			json_object_array_add(json, json_out);
		} else {
			vty_out(vty, "notag/%d", label);
			vty_out(vty, "\n");
		}
	}
}

void route_vty_out_overlay(struct vty *vty, struct prefix *p,
			   struct bgp_path_info *path, int display,
			   json_object *json_paths)
{
	struct attr *attr;
	char buf[BUFSIZ] = {0};
	json_object *json_path = NULL;
	json_object *json_nexthop = NULL;
	json_object *json_overlay = NULL;

	if (!path->extra)
		return;

	if (json_paths) {
		json_path = json_object_new_object();
		json_overlay = json_object_new_object();
		json_nexthop = json_object_new_object();
	}

	/* short status lead text */
	route_vty_short_status_out(vty, path, json_path);

	/* print prefix and mask */
	if (!display)
		route_vty_out_route(p, vty, json_path);
	else
		vty_out(vty, "%*s", 17, " ");

	/* Print attribute */
	attr = path->attr;
	char buf1[BUFSIZ];
	int af = NEXTHOP_FAMILY(attr->mp_nexthop_len);

	switch (af) {
	case AF_INET:
		inet_ntop(af, &attr->mp_nexthop_global_in, buf, BUFSIZ);
		if (!json_path) {
			vty_out(vty, "%-16s", buf);
		} else {
			json_object_string_add(json_nexthop, "ip", buf);

			json_object_string_add(json_nexthop, "afi", "ipv4");

			json_object_object_add(json_path, "nexthop",
					       json_nexthop);
		}
		break;
	case AF_INET6:
		inet_ntop(af, &attr->mp_nexthop_global, buf, BUFSIZ);
		inet_ntop(af, &attr->mp_nexthop_local, buf1, BUFSIZ);
		if (!json_path) {
			vty_out(vty, "%s(%s)", buf, buf1);
		} else {
			json_object_string_add(json_nexthop, "ipv6Global", buf);

			json_object_string_add(json_nexthop, "ipv6LinkLocal",
					       buf1);

			json_object_string_add(json_nexthop, "afi", "ipv6");

			json_object_object_add(json_path, "nexthop",
					       json_nexthop);
		}
		break;
	default:
		if (!json_path) {
			vty_out(vty, "?");
		} else {
			json_object_string_add(json_nexthop, "Error",
					       "Unsupported address-family");
		}
	}

	char *str = esi2str(&(attr->evpn_overlay.eth_s_id));

	if (!json_path)
		vty_out(vty, "%s", str);
	else
		json_object_string_add(json_overlay, "esi", str);

	XFREE(MTYPE_TMP, str);

	if (is_evpn_prefix_ipaddr_v4((struct prefix_evpn *)p)) {
		inet_ntop(AF_INET, &(attr->evpn_overlay.gw_ip.ipv4), buf,
			  BUFSIZ);
	} else if (is_evpn_prefix_ipaddr_v6((struct prefix_evpn *)p)) {
		inet_ntop(AF_INET6, &(attr->evpn_overlay.gw_ip.ipv6), buf,
			  BUFSIZ);
	}

	if (!json_path)
		vty_out(vty, "/%s", buf);
	else
		json_object_string_add(json_overlay, "gw", buf);

	if (attr->ecommunity) {
		char *mac = NULL;
		struct ecommunity_val *routermac = ecommunity_lookup(
			attr->ecommunity, ECOMMUNITY_ENCODE_EVPN,
			ECOMMUNITY_EVPN_SUBTYPE_ROUTERMAC);

		if (routermac)
			mac = ecom_mac2str((char *)routermac->val);
		if (mac) {
			if (!json_path) {
				vty_out(vty, "/%s", (char *)mac);
			} else {
				json_object_string_add(json_overlay, "rmac",
						       mac);
			}
			XFREE(MTYPE_TMP, mac);
		}
	}

	if (!json_path) {
		vty_out(vty, "\n");
	} else {
		json_object_object_add(json_path, "overlay", json_overlay);

		json_object_array_add(json_paths, json_path);
	}
}

/* dampening route */
static void damp_route_vty_out(struct vty *vty, struct prefix *p,
			       struct bgp_path_info *path, int display,
			       afi_t afi, safi_t safi, bool use_json,
			       json_object *json)
{
	struct attr *attr;
	int len;
	char timebuf[BGP_UPTIME_LEN];

	/* short status lead text */
	route_vty_short_status_out(vty, path, json);

	/* print prefix and mask */
	if (!use_json) {
		if (!display)
			route_vty_out_route(p, vty, NULL);
		else
			vty_out(vty, "%*s", 17, " ");
	}

	len = vty_out(vty, "%s", path->peer->host);
	len = 17 - len;
	if (len < 1) {
		if (!use_json)
			vty_out(vty, "\n%*s", 34, " ");
	} else {
		if (use_json)
			json_object_int_add(json, "peerHost", len);
		else
			vty_out(vty, "%*s", len, " ");
	}

	if (use_json)
		bgp_damp_reuse_time_vty(vty, path, timebuf, BGP_UPTIME_LEN, afi,
					safi, use_json, json);
	else
		vty_out(vty, "%s ",
			bgp_damp_reuse_time_vty(vty, path, timebuf,
						BGP_UPTIME_LEN, afi, safi,
						use_json, json));

	/* Print attribute */
	attr = path->attr;

	/* Print aspath */
	if (attr->aspath) {
		if (use_json)
			json_object_string_add(json, "asPath",
					       attr->aspath->str);
		else
			aspath_print_vty(vty, "%s", attr->aspath, " ");
	}

	/* Print origin */
	if (use_json)
		json_object_string_add(json, "origin",
				       bgp_origin_str[attr->origin]);
	else
		vty_out(vty, "%s", bgp_origin_str[attr->origin]);

	if (!use_json)
		vty_out(vty, "\n");
}

/* flap route */
static void flap_route_vty_out(struct vty *vty, struct prefix *p,
			       struct bgp_path_info *path, int display,
			       afi_t afi, safi_t safi, bool use_json,
			       json_object *json)
{
	struct attr *attr;
	struct bgp_damp_info *bdi;
	char timebuf[BGP_UPTIME_LEN];
	int len;

	if (!path->extra)
		return;

	bdi = path->extra->damp_info;

	/* short status lead text */
	route_vty_short_status_out(vty, path, json);

	/* print prefix and mask */
	if (!use_json) {
		if (!display)
			route_vty_out_route(p, vty, NULL);
		else
			vty_out(vty, "%*s", 17, " ");
	}

	len = vty_out(vty, "%s", path->peer->host);
	len = 16 - len;
	if (len < 1) {
		if (!use_json)
			vty_out(vty, "\n%*s", 33, " ");
	} else {
		if (use_json)
			json_object_int_add(json, "peerHost", len);
		else
			vty_out(vty, "%*s", len, " ");
	}

	len = vty_out(vty, "%d", bdi->flap);
	len = 5 - len;
	if (len < 1) {
		if (!use_json)
			vty_out(vty, " ");
	} else {
		if (use_json)
			json_object_int_add(json, "bdiFlap", len);
		else
			vty_out(vty, "%*s", len, " ");
	}

	if (use_json)
		peer_uptime(bdi->start_time, timebuf, BGP_UPTIME_LEN, use_json,
			    json);
	else
		vty_out(vty, "%s ", peer_uptime(bdi->start_time, timebuf,
						BGP_UPTIME_LEN, 0, NULL));

	if (CHECK_FLAG(path->flags, BGP_PATH_DAMPED)
	    && !CHECK_FLAG(path->flags, BGP_PATH_HISTORY)) {
		if (use_json)
			bgp_damp_reuse_time_vty(vty, path, timebuf,
						BGP_UPTIME_LEN, afi, safi,
						use_json, json);
		else
			vty_out(vty, "%s ",
				bgp_damp_reuse_time_vty(vty, path, timebuf,
							BGP_UPTIME_LEN, afi,
							safi, use_json, json));
	} else {
		if (!use_json)
			vty_out(vty, "%*s ", 8, " ");
	}

	/* Print attribute */
	attr = path->attr;

	/* Print aspath */
	if (attr->aspath) {
		if (use_json)
			json_object_string_add(json, "asPath",
					       attr->aspath->str);
		else
			aspath_print_vty(vty, "%s", attr->aspath, " ");
	}

	/* Print origin */
	if (use_json)
		json_object_string_add(json, "origin",
				       bgp_origin_str[attr->origin]);
	else
		vty_out(vty, "%s", bgp_origin_str[attr->origin]);

	if (!use_json)
		vty_out(vty, "\n");
}

static void route_vty_out_advertised_to(struct vty *vty, struct peer *peer,
					int *first, const char *header,
					json_object *json_adv_to)
{
	char buf1[INET6_ADDRSTRLEN];
	json_object *json_peer = NULL;

	if (json_adv_to) {
		/* 'advertised-to' is a dictionary of peers we have advertised
		 * this prefix to.  The key is the peer's IP or swpX, the
		 * value is the hostname if we know it and "" if not.
		 */
		json_peer = json_object_new_object();

		if (peer->hostname)
			json_object_string_add(json_peer, "hostname",
					       peer->hostname);

		if (peer->conf_if)
			json_object_object_add(json_adv_to, peer->conf_if,
					       json_peer);
		else
			json_object_object_add(
				json_adv_to,
				sockunion2str(&peer->su, buf1, SU_ADDRSTRLEN),
				json_peer);
	} else {
		if (*first) {
			vty_out(vty, "%s", header);
			*first = 0;
		}

		if (peer->hostname
		    && bgp_flag_check(peer->bgp, BGP_FLAG_SHOW_HOSTNAME)) {
			if (peer->conf_if)
				vty_out(vty, " %s(%s)", peer->hostname,
					peer->conf_if);
			else
				vty_out(vty, " %s(%s)", peer->hostname,
					sockunion2str(&peer->su, buf1,
						      SU_ADDRSTRLEN));
		} else {
			if (peer->conf_if)
				vty_out(vty, " %s", peer->conf_if);
			else
				vty_out(vty, " %s",
					sockunion2str(&peer->su, buf1,
						      SU_ADDRSTRLEN));
		}
	}
}
static void route_vty_out_tx_ids(struct vty *vty,
				 struct bgp_addpath_info_data *d)
{
	int i;

	for (i = 0; i < BGP_ADDPATH_MAX; i++) {
		vty_out(vty, "TX-%s %u%s", bgp_addpath_names(i)->human_name,
			d->addpath_tx_id[i],
			i < BGP_ADDPATH_MAX - 1 ? " " : "\n");
	}
}
|
|
|
|
|
2019-05-16 03:11:02 +02:00
|
|
|
static const char *bgp_path_selection_reason2str(
|
|
|
|
enum bgp_path_selection_reason reason)
|
|
|
|
{
|
|
|
|
switch (reason) {
|
|
|
|
case bgp_path_selection_none:
|
|
|
|
return "Nothing to Select";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_first:
|
|
|
|
return "First path received";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_evpn_sticky_mac:
|
|
|
|
return "EVPN Sticky Mac";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_evpn_seq:
|
|
|
|
return "EVPN sequence number";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_evpn_lower_ip:
|
|
|
|
return "EVPN lower IP";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_weight:
|
|
|
|
return "Weight";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_local_pref:
|
|
|
|
return "Local Pref";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_local_route:
|
|
|
|
return "Local Route";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_confed_as_path:
|
|
|
|
return "Confederation based AS Path";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_as_path:
|
|
|
|
return "AS Path";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_origin:
|
|
|
|
return "Origin";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_med:
|
|
|
|
return "MED";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_peer:
|
|
|
|
return "Peer Type";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_confed:
|
|
|
|
return "Confed Peer Type";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_igp_metric:
|
|
|
|
return "IGP Metric";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_older:
|
|
|
|
return "Older Path";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_router_id:
|
|
|
|
return "Router ID";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_cluster_length:
|
|
|
|
return "Cluster length";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_stale:
|
|
|
|
return "Path Staleness";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_local_configured:
|
|
|
|
return "Locally configured route";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_neighbor_ip:
|
|
|
|
return "Neighbor IP";
|
|
|
|
break;
|
|
|
|
case bgp_path_selection_default:
|
|
|
|
return "Nothing left to compare";
|
|
|
|
break;
|
|
|
|
}
|
2019-05-20 23:45:34 +02:00
|
|
|
return "Invalid (internal error)";
|
2019-05-16 03:11:02 +02:00
|
|
|
}
|
|
|
|
|
2019-05-16 02:54:34 +02:00
|
|
|
void route_vty_out_detail(struct vty *vty, struct bgp *bgp,
|
|
|
|
struct bgp_node *bn, struct bgp_path_info *path,
|
|
|
|
afi_t afi, safi_t safi, json_object *json_paths)
|
2017-07-17 14:03:14 +02:00
|
|
|
{
|
|
|
|
char buf[INET6_ADDRSTRLEN];
|
|
|
|
char buf1[BUFSIZ];
|
|
|
|
char buf2[EVPN_ROUTE_STRLEN];
|
2019-12-06 21:03:50 +01:00
|
|
|
struct attr *attr = path->attr;
|
2017-07-17 14:03:14 +02:00
|
|
|
int sockunion_vty_out(struct vty *, union sockunion *);
|
|
|
|
time_t tbuf;
|
|
|
|
json_object *json_bestpath = NULL;
|
|
|
|
json_object *json_cluster_list = NULL;
|
|
|
|
json_object *json_cluster_list_list = NULL;
|
|
|
|
json_object *json_ext_community = NULL;
|
|
|
|
json_object *json_last_update = NULL;
|
2018-03-04 04:28:50 +01:00
|
|
|
json_object *json_pmsi = NULL;
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object *json_nexthop_global = NULL;
|
|
|
|
json_object *json_nexthop_ll = NULL;
|
|
|
|
json_object *json_nexthops = NULL;
|
|
|
|
json_object *json_path = NULL;
|
|
|
|
json_object *json_peer = NULL;
|
|
|
|
json_object *json_string = NULL;
|
|
|
|
json_object *json_adv_to = NULL;
|
|
|
|
int first = 0;
|
|
|
|
struct listnode *node, *nnode;
|
|
|
|
struct peer *peer;
|
|
|
|
int addpath_capable;
|
|
|
|
int has_adj;
|
|
|
|
unsigned int first_as;
|
2018-09-14 02:34:42 +02:00
|
|
|
bool nexthop_self =
|
2018-10-03 00:34:03 +02:00
|
|
|
CHECK_FLAG(path->flags, BGP_PATH_ANNC_NH_SELF) ? true : false;
|
bgpd: Re-use TX Addpath IDs where possible
The motivation for this patch is to address a concerning behavior of
tx-addpath-bestpath-per-AS. Prior to this patch, all paths' TX ID was
pre-determined as the path was received from a peer. However, this meant
that any time the path selected as best from an AS changed, bgpd had no
choice but to withdraw the previous best path, and advertise the new
best-path under a new TX ID. This could cause significant network
disruption, especially for the subset of prefixes coming from only one
AS that were also communicated over a bestpath-per-AS session.
The patch's general approach is best illustrated by
txaddpath_update_ids. After a bestpath run (required for best-per-AS to
know what will and will not be sent as addpaths) ID numbers will be
stripped from paths that no longer need to be sent, and held in a pool.
Then, paths that will be sent as addpaths and do not already have ID
numbers will allocate new ID numbers, pulling first from that pool.
Finally, anything left in the pool will be returned to the allocator.
In order for this to work, ID numbers had to be split by strategy. The
tx-addpath-All strategy would keep every ID number "in use" constantly,
preventing IDs from being transferred to different paths. Rather than
create two variables for ID, this patch creates a more generic array that
will easily enable more addpath strategies to be implemented. The
previously described ID manipulations will happen per addpath strategy,
and will only be run for strategies that are enabled on at least one
peer.
Finally, the ID numbers are allocated from an allocator that tracks per
AFI/SAFI/Addpath Strategy which IDs are in use. Though it would be very
improbable, there was the possibility with the free-running counter
approach for rollover to cause two paths on the same prefix to get
assigned the same TX ID. As remote as the possibility is, we prefer to
not leave it to chance.
This ID re-use method is not perfect. In some cases you could still get
withdraw-then-add behaviors where not strictly necessary. In the case of
bestpath-per-AS this requires one AS to advertise a prefix for the first
time, then a second AS withdraws that prefix, all within the space of an
already pending MRAI timer. In those situations a withdraw-then-add is
more forgivable, and fixing it would probably require a much more
significant effort, as IDs would need to be moved to ADVs instead of
paths.
Signed-off-by: Mitchell Skiba <mskiba@amazon.com>
2018-05-10 01:10:02 +02:00
|
|
|
int i;
|
2019-12-06 21:03:50 +01:00
|
|
|
char *nexthop_hostname = bgp_nexthop_hostname(path->peer, attr);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
|
|
|
if (json_paths) {
|
|
|
|
json_path = json_object_new_object();
|
|
|
|
json_peer = json_object_new_object();
|
|
|
|
json_nexthop_global = json_object_new_object();
|
|
|
|
}
|
|
|
|
|
2019-09-11 09:01:39 +02:00
|
|
|
if (path->extra) {
|
2017-11-21 11:42:05 +01:00
|
|
|
char tag_buf[30];
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-08-12 15:20:18 +02:00
|
|
|
buf2[0] = '\0';
|
2017-07-17 14:03:14 +02:00
|
|
|
tag_buf[0] = '\0';
|
2018-10-03 00:34:03 +02:00
|
|
|
if (path->extra && path->extra->num_labels) {
|
|
|
|
bgp_evpn_label2str(path->extra->label,
|
|
|
|
path->extra->num_labels, tag_buf,
|
2018-02-09 19:22:50 +01:00
|
|
|
sizeof(tag_buf));
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-08-12 15:20:18 +02:00
|
|
|
if (safi == SAFI_EVPN) {
|
2019-09-11 09:01:39 +02:00
|
|
|
if (!json_paths) {
|
|
|
|
bgp_evpn_route2str((struct prefix_evpn *)&bn->p,
|
|
|
|
buf2, sizeof(buf2));
|
|
|
|
vty_out(vty, " Route %s", buf2);
|
|
|
|
if (tag_buf[0] != '\0')
|
|
|
|
vty_out(vty, " VNI %s", tag_buf);
|
|
|
|
vty_out(vty, "\n");
|
|
|
|
} else {
|
|
|
|
if (tag_buf[0])
|
|
|
|
json_object_string_add(json_path, "VNI",
|
|
|
|
tag_buf);
|
|
|
|
}
|
2019-08-12 15:20:18 +02:00
|
|
|
}
|
|
|
|
|
2019-09-11 09:01:39 +02:00
|
|
|
if (path->extra && path->extra->parent && !json_paths) {
|
2018-10-02 22:41:30 +02:00
|
|
|
struct bgp_path_info *parent_ri;
|
2017-07-17 14:03:14 +02:00
|
|
|
struct bgp_node *rn, *prn;
|
|
|
|
|
2018-10-03 00:34:03 +02:00
|
|
|
parent_ri = (struct bgp_path_info *)path->extra->parent;
|
2017-07-17 14:03:14 +02:00
|
|
|
rn = parent_ri->net;
|
|
|
|
if (rn && rn->prn) {
|
|
|
|
prn = rn->prn;
|
2019-08-12 15:20:18 +02:00
|
|
|
prefix_rd2str((struct prefix_rd *)&prn->p,
|
|
|
|
buf1, sizeof(buf1));
|
|
|
|
if (is_pi_family_evpn(parent_ri)) {
|
|
|
|
bgp_evpn_route2str((struct prefix_evpn *)&rn->p,
|
|
|
|
buf2, sizeof(buf2));
|
|
|
|
vty_out(vty, " Imported from %s:%s, VNI %s\n", buf1, buf2, tag_buf);
|
|
|
|
} else {
|
|
|
|
prefix2str(&rn->p, buf2, sizeof(buf2));
vty_out(vty, " Imported from %s:%s\n", buf1, buf2);
}
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* Line1 display AS-path, Aggregator */
|
|
|
|
if (attr->aspath) {
|
|
|
|
if (json_paths) {
|
|
|
|
if (!attr->aspath->json)
|
|
|
|
aspath_str_update(attr->aspath, true);
|
|
|
|
json_object_lock(attr->aspath->json);
|
|
|
|
json_object_object_add(json_path, "aspath",
|
|
|
|
attr->aspath->json);
|
|
|
|
} else {
|
|
|
|
if (attr->aspath->segments)
|
|
|
|
aspath_print_vty(vty, " %s", attr->aspath, "");
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, " Local");
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_REMOVED)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_path, "removed");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", (removed)");
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_STALE)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_path, "stale");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", (stale)");
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (CHECK_FLAG(attr->flag, ATTR_FLAG_BIT(BGP_ATTR_AGGREGATOR))) {
|
|
|
|
if (json_paths) {
|
|
|
|
json_object_int_add(json_path, "aggregatorAs",
|
|
|
|
attr->aggregator_as);
|
|
|
|
json_object_string_add(
|
|
|
|
json_path, "aggregatorId",
|
|
|
|
inet_ntoa(attr->aggregator_addr));
|
|
|
|
} else {
|
|
|
|
vty_out(vty, ", (aggregated by %u %s)",
|
|
|
|
attr->aggregator_as,
|
|
|
|
inet_ntoa(attr->aggregator_addr));
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (CHECK_FLAG(path->peer->af_flags[afi][safi],
|
|
|
|
PEER_FLAG_REFLECTOR_CLIENT)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_path,
|
|
|
|
"rxedFromRrClient");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", (Received from a RR-client)");
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (CHECK_FLAG(path->peer->af_flags[afi][safi],
|
|
|
|
PEER_FLAG_RSERVER_CLIENT)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_path,
|
|
|
|
"rxedFromRsClient");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", (Received from a RS-client)");
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_HISTORY)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_path,
|
|
|
|
"dampeningHistoryEntry");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", (history entry)");
|
|
|
|
} else if (CHECK_FLAG(path->flags, BGP_PATH_DAMPED)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_path,
|
|
|
|
"dampeningSuppressed");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", (suppressed due to dampening)");
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (!json_paths)
|
|
|
|
vty_out(vty, "\n");
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* Line2 display Next-hop, Neighbor, Router-id */
|
|
|
|
/* Display the nexthop */
|
|
|
|
if ((bn->p.family == AF_INET || bn->p.family == AF_ETHERNET
|
|
|
|
|| bn->p.family == AF_EVPN)
|
|
|
|
&& (safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP || safi == SAFI_EVPN
|
|
|
|
|| !BGP_ATTR_NEXTHOP_AFI_IP6(attr))) {
|
|
|
|
if (safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP
|
|
|
|
|| safi == SAFI_EVPN) {
|
2019-12-06 21:03:50 +01:00
|
|
|
if (json_paths) {
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_string_add(
|
2019-12-06 21:03:50 +01:00
|
|
|
json_nexthop_global, "ip",
|
|
|
|
inet_ntoa(attr->mp_nexthop_global_in));
|
|
|
|
|
|
|
|
if (nexthop_hostname)
|
|
|
|
json_object_string_add(
|
|
|
|
json_nexthop_global, "hostname",
|
|
|
|
nexthop_hostname);
|
|
|
|
} else
|
2017-07-17 14:03:14 +02:00
|
|
|
vty_out(vty, " %s",
|
2019-12-06 21:03:50 +01:00
|
|
|
nexthop_hostname
|
|
|
|
? nexthop_hostname
|
2019-10-16 16:25:19 +02:00
|
|
|
: inet_ntoa(
|
2019-12-06 21:03:50 +01:00
|
|
|
attr->mp_nexthop_global_in));
|
2017-07-17 14:03:14 +02:00
|
|
|
} else {
|
2019-12-06 21:03:50 +01:00
|
|
|
if (json_paths) {
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_string_add(
|
2019-12-06 21:03:50 +01:00
|
|
|
json_nexthop_global, "ip",
|
|
|
|
inet_ntoa(attr->nexthop));
|
|
|
|
|
|
|
|
if (nexthop_hostname)
|
|
|
|
json_object_string_add(
|
|
|
|
json_nexthop_global, "hostname",
|
|
|
|
nexthop_hostname);
|
|
|
|
} else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, " %s",
|
2019-12-06 21:03:50 +01:00
|
|
|
nexthop_hostname
|
|
|
|
? nexthop_hostname
|
2019-10-16 16:25:19 +02:00
|
|
|
: inet_ntoa(attr->nexthop));
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (json_paths)
|
|
|
|
json_object_string_add(json_nexthop_global, "afi",
|
|
|
|
"ipv4");
|
|
|
|
} else {
|
|
|
|
if (json_paths) {
|
|
|
|
json_object_string_add(
|
2019-12-06 21:03:50 +01:00
|
|
|
json_nexthop_global, "ip",
|
|
|
|
inet_ntop(AF_INET6, &attr->mp_nexthop_global,
|
|
|
|
buf, INET6_ADDRSTRLEN));
|
|
|
|
|
|
|
|
if (nexthop_hostname)
|
|
|
|
json_object_string_add(json_nexthop_global,
|
|
|
|
"hostname",
|
|
|
|
nexthop_hostname);
|
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_string_add(json_nexthop_global, "afi",
|
|
|
|
"ipv6");
|
|
|
|
json_object_string_add(json_nexthop_global, "scope",
|
|
|
|
"global");
|
|
|
|
} else {
|
|
|
|
vty_out(vty, " %s",
|
2019-12-06 21:03:50 +01:00
|
|
|
nexthop_hostname
|
|
|
|
? nexthop_hostname
|
2019-10-16 16:25:19 +02:00
|
|
|
: inet_ntop(AF_INET6,
|
|
|
|
&attr->mp_nexthop_global,
|
|
|
|
buf, INET6_ADDRSTRLEN));
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* Display the IGP cost or 'inaccessible' */
|
|
|
|
if (!CHECK_FLAG(path->flags, BGP_PATH_VALID)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_false_add(json_nexthop_global,
|
|
|
|
"accessible");
|
|
|
|
else
|
|
|
|
vty_out(vty, " (inaccessible)");
|
|
|
|
} else {
|
|
|
|
if (path->extra && path->extra->igpmetric) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (json_paths)
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_int_add(json_nexthop_global,
|
|
|
|
"metric",
|
|
|
|
path->extra->igpmetric);
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, " (metric %u)",
|
|
|
|
path->extra->igpmetric);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* IGP cost is 0, display this only for json */
|
2017-07-17 14:03:14 +02:00
|
|
|
else {
|
|
|
|
if (json_paths)
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_int_add(json_nexthop_global,
|
|
|
|
"metric", 0);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_nexthop_global,
|
|
|
|
"accessible");
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* Display peer "from" output */
|
|
|
|
/* This path was originated locally */
|
|
|
|
if (path->peer == bgp->peer_self) {
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (safi == SAFI_EVPN
|
|
|
|
|| (bn->p.family == AF_INET
|
|
|
|
&& !BGP_ATTR_NEXTHOP_AFI_IP6(attr))) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (json_paths)
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_string_add(json_peer, "peerId",
|
|
|
|
"0.0.0.0");
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, " from 0.0.0.0 ");
|
|
|
|
} else {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (json_paths)
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_string_add(json_peer, "peerId",
|
|
|
|
"::");
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, " from :: ");
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (json_paths)
|
|
|
|
json_object_string_add(json_peer, "routerId",
|
|
|
|
inet_ntoa(bgp->router_id));
|
|
|
|
else
|
|
|
|
vty_out(vty, "(%s)", inet_ntoa(bgp->router_id));
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* We RXed this path from one of our peers */
|
|
|
|
else {
|
|
|
|
|
|
|
|
if (json_paths) {
|
|
|
|
json_object_string_add(json_peer, "peerId",
|
|
|
|
sockunion2str(&path->peer->su,
|
|
|
|
buf,
|
|
|
|
SU_ADDRSTRLEN));
|
|
|
|
json_object_string_add(json_peer, "routerId",
|
|
|
|
inet_ntop(AF_INET,
|
|
|
|
&path->peer->remote_id,
|
|
|
|
buf1, sizeof(buf1)));
|
|
|
|
|
|
|
|
if (path->peer->hostname)
|
|
|
|
json_object_string_add(json_peer, "hostname",
|
|
|
|
path->peer->hostname);
|
|
|
|
|
|
|
|
if (path->peer->domainname)
|
|
|
|
json_object_string_add(json_peer, "domainname",
|
|
|
|
path->peer->domainname);
|
|
|
|
|
|
|
|
if (path->peer->conf_if)
|
|
|
|
json_object_string_add(json_peer, "interface",
|
|
|
|
path->peer->conf_if);
|
|
|
|
} else {
|
|
|
|
if (path->peer->conf_if) {
|
|
|
|
if (path->peer->hostname
|
|
|
|
&& bgp_flag_check(path->peer->bgp,
|
|
|
|
BGP_FLAG_SHOW_HOSTNAME))
|
|
|
|
vty_out(vty, " from %s(%s)",
|
|
|
|
path->peer->hostname,
|
|
|
|
path->peer->conf_if);
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, " from %s",
|
2018-10-03 00:34:03 +02:00
|
|
|
path->peer->conf_if);
|
2017-07-17 14:03:14 +02:00
|
|
|
} else {
|
2019-10-16 16:25:19 +02:00
|
|
|
if (path->peer->hostname
|
|
|
|
&& bgp_flag_check(path->peer->bgp,
|
|
|
|
BGP_FLAG_SHOW_HOSTNAME))
|
|
|
|
vty_out(vty, " from %s(%s)",
|
|
|
|
path->peer->hostname,
|
|
|
|
path->peer->host);
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, " from %s",
|
|
|
|
sockunion2str(&path->peer->su,
|
|
|
|
buf,
|
|
|
|
SU_ADDRSTRLEN));
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID))
|
|
|
|
vty_out(vty, " (%s)",
|
|
|
|
inet_ntoa(attr->originator_id));
|
|
|
|
else
|
|
|
|
vty_out(vty, " (%s)",
|
|
|
|
inet_ntop(AF_INET,
|
|
|
|
&path->peer->remote_id, buf1,
|
|
|
|
sizeof(buf1)));
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/*
|
|
|
|
* Note when vrfid of nexthop is different from that of prefix
|
|
|
|
*/
|
|
|
|
if (path->extra && path->extra->bgp_orig) {
|
|
|
|
vrf_id_t nexthop_vrfid = path->extra->bgp_orig->vrf_id;
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (json_paths) {
|
|
|
|
const char *vn;
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (path->extra->bgp_orig->inst_type
|
|
|
|
== BGP_INSTANCE_TYPE_DEFAULT)
|
|
|
|
vn = VRF_DEFAULT_NAME;
|
|
|
|
else
|
|
|
|
vn = path->extra->bgp_orig->name;
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_string_add(json_path, "nhVrfName", vn);
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (nexthop_vrfid == VRF_UNKNOWN) {
|
|
|
|
json_object_int_add(json_path, "nhVrfId", -1);
|
2018-04-09 22:28:11 +02:00
|
|
|
} else {
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_int_add(json_path, "nhVrfId",
|
|
|
|
(int)nexthop_vrfid);
|
2018-04-09 22:28:11 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
} else {
|
|
|
|
if (nexthop_vrfid == VRF_UNKNOWN)
|
|
|
|
vty_out(vty, " vrf ?");
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, " vrf %u", nexthop_vrfid);
|
2018-04-09 22:28:11 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (nexthop_self) {
|
|
|
|
if (json_paths) {
|
|
|
|
json_object_boolean_true_add(json_path,
|
|
|
|
"announceNexthopSelf");
|
|
|
|
} else {
|
|
|
|
vty_out(vty, " announce-nh-self");
|
2018-04-09 22:28:11 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (!json_paths)
|
|
|
|
vty_out(vty, "\n");
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* display the link-local nexthop */
|
|
|
|
if (attr->mp_nexthop_len == BGP_ATTR_NHLEN_IPV6_GLOBAL_AND_LL) {
|
|
|
|
if (json_paths) {
|
|
|
|
json_nexthop_ll = json_object_new_object();
|
|
|
|
json_object_string_add(
|
2019-12-06 21:03:50 +01:00
|
|
|
json_nexthop_ll, "ip",
|
|
|
|
inet_ntop(AF_INET6, &attr->mp_nexthop_local,
|
|
|
|
buf, INET6_ADDRSTRLEN));
|
|
|
|
|
|
|
|
if (nexthop_hostname)
|
|
|
|
json_object_string_add(json_nexthop_ll,
|
|
|
|
"hostname",
|
|
|
|
nexthop_hostname);
|
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_string_add(json_nexthop_ll, "afi", "ipv6");
|
|
|
|
json_object_string_add(json_nexthop_ll, "scope",
|
|
|
|
"link-local");
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_boolean_true_add(json_nexthop_ll,
|
|
|
|
"accessible");
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (!attr->mp_nexthop_prefer_global)
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_boolean_true_add(json_nexthop_ll,
|
2019-10-16 16:25:19 +02:00
|
|
|
"used");
|
|
|
|
else
|
|
|
|
json_object_boolean_true_add(
|
|
|
|
json_nexthop_global, "used");
|
|
|
|
} else {
|
|
|
|
vty_out(vty, " (%s) %s\n",
|
|
|
|
inet_ntop(AF_INET6, &attr->mp_nexthop_local,
|
|
|
|
buf, INET6_ADDRSTRLEN),
|
|
|
|
attr->mp_nexthop_prefer_global
|
|
|
|
? "(prefer-global)"
|
|
|
|
: "(used)");
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
|
|
|
/* If we do not have a link-local nexthop then we must flag the
|
|
|
|
global as "used" */
|
|
|
|
else {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_nexthop_global,
|
|
|
|
"used");
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* Line 3 display Origin, Med, Locpref, Weight, Tag, valid,
|
|
|
|
* Int/Ext/Local, Atomic, best */
|
|
|
|
if (json_paths)
|
|
|
|
json_object_string_add(json_path, "origin",
|
|
|
|
bgp_origin_long_str[attr->origin]);
|
|
|
|
else
|
|
|
|
vty_out(vty, " Origin %s",
|
|
|
|
bgp_origin_long_str[attr->origin]);
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_MULTI_EXIT_DISC)) {
|
|
|
|
if (json_paths) {
|
|
|
|
/*
|
|
|
|
* Adding "metric" field to match with
|
|
|
|
* corresponding CLI. "med" will be
|
|
|
|
* deprecated in future.
|
|
|
|
*/
|
|
|
|
json_object_int_add(json_path, "med", attr->med);
|
|
|
|
json_object_int_add(json_path, "metric", attr->med);
|
|
|
|
} else
|
|
|
|
vty_out(vty, ", metric %u", attr->med);
|
|
|
|
}
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_LOCAL_PREF)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_int_add(json_path, "localpref",
|
|
|
|
attr->local_pref);
|
|
|
|
else
|
|
|
|
vty_out(vty, ", localpref %u", attr->local_pref);
|
|
|
|
}
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (attr->weight != 0) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_int_add(json_path, "weight", attr->weight);
|
|
|
|
else
|
|
|
|
vty_out(vty, ", weight %u", attr->weight);
|
|
|
|
}
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (attr->tag != 0) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_int_add(json_path, "tag", attr->tag);
|
|
|
|
else
|
|
|
|
vty_out(vty, ", tag %" ROUTE_TAG_PRI, attr->tag);
|
|
|
|
}
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (!CHECK_FLAG(path->flags, BGP_PATH_VALID)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_false_add(json_path, "valid");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", invalid");
|
|
|
|
} else if (!CHECK_FLAG(path->flags, BGP_PATH_HISTORY)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_path, "valid");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", valid");
|
|
|
|
}
|
2018-04-09 22:28:11 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (path->peer != bgp->peer_self) {
|
|
|
|
if (path->peer->as == path->peer->local_as) {
|
|
|
|
if (CHECK_FLAG(bgp->config, BGP_CONFIG_CONFEDERATION)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_string_add(
|
|
|
|
json_peer, "type",
|
|
|
|
"confed-internal");
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, ", confed-internal");
|
2017-07-17 14:03:14 +02:00
|
|
|
} else {
|
2019-10-16 16:25:19 +02:00
|
|
|
if (json_paths)
|
|
|
|
json_object_string_add(
|
|
|
|
json_peer, "type", "internal");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", internal");
|
2018-04-09 22:28:11 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
} else {
|
|
|
|
if (bgp_confederation_peers_check(bgp,
|
|
|
|
path->peer->as)) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_string_add(
|
|
|
|
json_peer, "type",
|
|
|
|
"confed-external");
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, ", confed-external");
|
2017-07-17 14:03:14 +02:00
|
|
|
} else {
|
2019-10-16 16:25:19 +02:00
|
|
|
if (json_paths)
|
|
|
|
json_object_string_add(
|
|
|
|
json_peer, "type", "external");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", external");
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
} else if (path->sub_type == BGP_ROUTE_AGGREGATE) {
|
|
|
|
if (json_paths) {
|
|
|
|
json_object_boolean_true_add(json_path, "aggregated");
|
|
|
|
json_object_boolean_true_add(json_path, "local");
|
|
|
|
} else {
|
|
|
|
vty_out(vty, ", aggregated, local");
|
|
|
|
}
|
|
|
|
} else if (path->type != ZEBRA_ROUTE_BGP) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_path, "sourced");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", sourced");
|
|
|
|
} else {
|
|
|
|
if (json_paths) {
|
|
|
|
json_object_boolean_true_add(json_path, "sourced");
|
|
|
|
json_object_boolean_true_add(json_path, "local");
|
|
|
|
} else {
|
|
|
|
vty_out(vty, ", sourced, local");
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_ATOMIC_AGGREGATE)) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (json_paths)
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_boolean_true_add(json_path,
|
|
|
|
"atomicAggregate");
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, ", atomic-aggregate");
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_MULTIPATH)
|
|
|
|
|| (CHECK_FLAG(path->flags, BGP_PATH_SELECTED)
|
|
|
|
&& bgp_path_info_mpath_count(path))) {
|
|
|
|
if (json_paths)
|
|
|
|
json_object_boolean_true_add(json_path, "multipath");
|
|
|
|
else
|
|
|
|
vty_out(vty, ", multipath");
|
|
|
|
}
|
2018-10-05 23:30:59 +02:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* Mark the bestpath(s) */
|
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_DMED_SELECTED)) {
|
|
|
|
first_as = aspath_get_first_as(attr->aspath);
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (json_paths) {
|
|
|
|
if (!json_bestpath)
|
|
|
|
json_bestpath = json_object_new_object();
|
|
|
|
json_object_int_add(json_bestpath, "bestpathFromAs",
|
|
|
|
first_as);
|
|
|
|
} else {
|
|
|
|
if (first_as)
|
|
|
|
vty_out(vty, ", bestpath-from-AS %u", first_as);
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, ", bestpath-from-AS Local");
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (CHECK_FLAG(path->flags, BGP_PATH_SELECTED)) {
|
|
|
|
if (json_paths) {
|
|
|
|
if (!json_bestpath)
|
|
|
|
json_bestpath = json_object_new_object();
|
|
|
|
json_object_boolean_true_add(json_bestpath, "overall");
|
|
|
|
json_object_string_add(
|
|
|
|
json_bestpath, "selectionReason",
|
|
|
|
bgp_path_selection_reason2str(bn->reason));
|
|
|
|
} else {
|
|
|
|
vty_out(vty, ", best");
|
|
|
|
vty_out(vty, " (%s)",
|
|
|
|
bgp_path_selection_reason2str(bn->reason));
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (json_bestpath)
|
|
|
|
json_object_object_add(json_path, "bestpath", json_bestpath);
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (!json_paths)
|
|
|
|
vty_out(vty, "\n");
|
|
|
|
|
|
|
|
/* Line 4 display Community */
|
|
|
|
if (attr->community) {
|
|
|
|
if (json_paths) {
|
|
|
|
if (!attr->community->json)
|
|
|
|
community_str(attr->community, true);
|
|
|
|
json_object_lock(attr->community->json);
|
|
|
|
json_object_object_add(json_path, "community",
|
|
|
|
attr->community->json);
|
|
|
|
} else {
|
|
|
|
vty_out(vty, " Community: %s\n",
|
|
|
|
attr->community->str);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* Line 5 display Extended-community */
|
|
|
|
if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_EXT_COMMUNITIES)) {
|
|
|
|
if (json_paths) {
|
|
|
|
json_ext_community = json_object_new_object();
|
|
|
|
json_object_string_add(json_ext_community, "string",
|
|
|
|
attr->ecommunity->str);
|
|
|
|
json_object_object_add(json_path, "extendedCommunity",
|
|
|
|
json_ext_community);
|
2017-07-17 14:03:14 +02:00
|
|
|
} else {
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, " Extended Community: %s\n",
|
|
|
|
attr->ecommunity->str);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* Line 6 display Large community */
|
|
|
|
if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_LARGE_COMMUNITIES)) {
|
|
|
|
if (json_paths) {
|
|
|
|
if (!attr->lcommunity->json)
|
|
|
|
lcommunity_str(attr->lcommunity, true);
|
|
|
|
json_object_lock(attr->lcommunity->json);
|
|
|
|
json_object_object_add(json_path, "largeCommunity",
|
|
|
|
attr->lcommunity->json);
|
|
|
|
} else {
|
|
|
|
vty_out(vty, " Large Community: %s\n",
|
|
|
|
attr->lcommunity->str);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
/* Line 7 display Originator, Cluster-id */
|
|
|
|
if ((attr->flag & ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID))
|
|
|
|
|| (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_CLUSTER_LIST))) {
|
|
|
|
if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_ORIGINATOR_ID)) {
|
2017-07-17 14:03:14 +02:00
|
|
|
if (json_paths)
|
2019-10-16 16:25:19 +02:00
|
|
|
json_object_string_add(
|
|
|
|
json_path, "originatorId",
|
|
|
|
inet_ntoa(attr->originator_id));
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2019-10-16 16:25:19 +02:00
|
|
|
vty_out(vty, " Originator: %s",
|
|
|
|
inet_ntoa(attr->originator_id));
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2015-08-12 15:59:18 +02:00
|
|
|
|
		if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_CLUSTER_LIST)) {
			int i;

			if (json_paths) {
				json_cluster_list = json_object_new_object();
				json_cluster_list_list =
					json_object_new_array();

				for (i = 0; i < attr->cluster->length / 4;
				     i++) {
					json_string = json_object_new_string(
						inet_ntoa(attr->cluster
								  ->list[i]));
					json_object_array_add(
						json_cluster_list_list,
						json_string);
				}

				/*
				 * struct cluster_list does not have
				 * "str" variable like aspath and community
				 * do.  Add this someday if someone asks
				 * for it.
				 * json_object_string_add(json_cluster_list,
				 * "string", attr->cluster->str);
				 */
				json_object_object_add(json_cluster_list,
						       "list",
						       json_cluster_list_list);
				json_object_object_add(json_path,
						       "clusterList",
						       json_cluster_list);
			} else {
				vty_out(vty, ", Cluster list: ");

				for (i = 0; i < attr->cluster->length / 4;
				     i++) {
					vty_out(vty, "%s ",
						inet_ntoa(attr->cluster
								  ->list[i]));
				}
			}
		}

		if (!json_paths)
			vty_out(vty, "\n");
	}

	if (path->extra && path->extra->damp_info)
		bgp_damp_info_vty(vty, path, afi, safi, json_path);
	/* Remote Label */
	if (path->extra && bgp_is_valid_label(&path->extra->label[0])
	    && (safi != SAFI_EVPN && !is_route_parent_evpn(path))) {
		mpls_label_t label = label_pton(&path->extra->label[0]);

		if (json_paths)
			json_object_int_add(json_path, "remoteLabel", label);
		else
			vty_out(vty, "      Remote label: %d\n", label);
	}

	/* Label Index */
	if (attr->label_index != BGP_INVALID_LABEL_INDEX) {
		if (json_paths)
			json_object_int_add(json_path, "labelIndex",
					    attr->label_index);
		else
			vty_out(vty, "      Label Index: %d\n",
				attr->label_index);
	}

	/* Line 8 display Addpath IDs */
	if (path->addpath_rx_id
	    || bgp_addpath_info_has_ids(&path->tx_addpath)) {
		if (json_paths) {
			json_object_int_add(json_path, "addpathRxId",
					    path->addpath_rx_id);

			/* Keep backwards compatibility with the old API
			 * by putting TX All's ID in the old field
			 */
			json_object_int_add(
				json_path, "addpathTxId",
				path->tx_addpath
					.addpath_tx_id[BGP_ADDPATH_ALL]);

			/* ... but create a specific field for each
			 * strategy
			 */
			for (i = 0; i < BGP_ADDPATH_MAX; i++) {
				json_object_int_add(
					json_path,
					bgp_addpath_names(i)->id_json_name,
					path->tx_addpath.addpath_tx_id[i]);
			}
		} else {
			vty_out(vty, "      AddPath ID: RX %u, ",
				path->addpath_rx_id);

			route_vty_out_tx_ids(vty, &path->tx_addpath);
		}
	}

	/* If we used addpath to TX a non-bestpath we need to display
	 * "Advertised to" on a path-by-path basis
	 */
	if (bgp_addpath_is_addpath_used(&bgp->tx_addpath, afi, safi)) {
		first = 1;
		for (ALL_LIST_ELEMENTS(bgp->peer, node, nnode, peer)) {
			addpath_capable =
				bgp_addpath_encode_tx(peer, afi, safi);
			has_adj = bgp_adj_out_lookup(
				peer, path->net,
				bgp_addpath_id_for_peer(peer, afi, safi,
							&path->tx_addpath));

			if ((addpath_capable && has_adj)
			    || (!addpath_capable && has_adj
				&& CHECK_FLAG(path->flags,
					      BGP_PATH_SELECTED))) {
				if (json_path && !json_adv_to)
					json_adv_to = json_object_new_object();

				route_vty_out_advertised_to(
					vty, peer, &first,
					"  Advertised to:", json_adv_to);
			}
		}

		if (json_path) {
			if (json_adv_to) {
				json_object_object_add(
					json_path, "advertisedTo",
					json_adv_to);
			}
		} else {
			if (!first) {
				vty_out(vty, "\n");
			}
		}
	}

	/* Line 9 display Uptime */
	tbuf = time(NULL) - (bgp_clock() - path->uptime);
	if (json_paths) {
		json_last_update = json_object_new_object();
		json_object_int_add(json_last_update, "epoch", tbuf);
		json_object_string_add(json_last_update, "string",
				       ctime(&tbuf));
		json_object_object_add(json_path, "lastUpdate",
				       json_last_update);
	} else
		vty_out(vty, "      Last update: %s", ctime(&tbuf));

	/* Line 10 display PMSI tunnel attribute, if present */
	if (attr->flag & ATTR_FLAG_BIT(BGP_ATTR_PMSI_TUNNEL)) {
		const char *str =
			lookup_msg(bgp_pmsi_tnltype_str, attr->pmsi_tnl_type,
				   PMSI_TNLTYPE_STR_DEFAULT);

		if (json_paths) {
			json_pmsi = json_object_new_object();
			json_object_string_add(json_pmsi, "tunnelType", str);
			json_object_int_add(json_pmsi, "label",
					    label2vni(&attr->label));
			json_object_object_add(json_path, "pmsi", json_pmsi);
		} else
			vty_out(vty, "      PMSI Tunnel Type: %s, label: %d\n",
				str, label2vni(&attr->label));
	}
	/* We've constructed the json object for this path, add it to the json
	 * array of paths
	 */
	if (json_paths) {
		if (json_nexthop_global || json_nexthop_ll) {
			json_nexthops = json_object_new_array();
			if (json_nexthop_global)
				json_object_array_add(json_nexthops,
						      json_nexthop_global);
|
2015-06-12 16:59:11 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
if (json_nexthop_ll)
|
|
|
|
json_object_array_add(json_nexthops,
|
|
|
|
json_nexthop_ll);
|
2015-06-12 16:59:11 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
json_object_object_add(json_path, "nexthops",
|
|
|
|
json_nexthops);
|
|
|
|
}
|
|
|
|
|
|
|
|
json_object_object_add(json_path, "peer", json_peer);
|
|
|
|
json_object_array_add(json_paths, json_path);
|
2019-10-16 16:25:19 +02:00
|
|
|
}
|
bgpd: display multipath status in "show ip bgp"
The output of "show ip bgp" does not show whether, or which, routes are
installed as multipath routes alongside the best route:
BGP table version is 0, local router ID is 10.10.100.209
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*>i1.0.0.0/24 10.10.100.1 1 111 0 15169 i
* i 10.10.100.2 1 111 0 15169 i
* i 10.10.100.3 1 111 0 65100 15169 i
This patch adds a new status code that is showing exactly which routes
are used as multipath:
BGP table version is 0, local router ID is 10.10.100.209
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*>i1.0.0.0/24 10.10.100.1 1 111 0 15169 i
*=i 10.10.100.2 1 111 0 15169 i
* i 10.10.100.3 1 111 0 65100 15169 i
The inconsistency in the status code legend ("i - internal" vs. "i internal")
inherited from old IOS was also fixed; that line had to be touched anyway.
Signed-off-by: Boian Bonev <bbonev at ipacct.com>
[DL: rewrap long line, clean whitespace in same chunk]
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
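The column logic the patch describes can be sketched as follows. This is an illustrative stand-in, not the patch's actual code: it picks the selection character printed after '*' (valid), following the new legend "* valid, > best, = multipath".

```c
/* Choose the path-selection character for the second status column:
 * '>' for the best path, '=' for a multipath route installed along
 * the best path, ' ' otherwise.  Hypothetical helper for illustration. */
static char bgp_path_selection_char(int best, int multipath)
{
	if (best)
		return '>';
	if (multipath)
		return '=';
	return ' ';
}
```

With this, the three sample rows above would render as "*>", "*=", and "* " respectively.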
2013-09-09 18:41:35 +02:00
|
|
|
}
|
|
|
|
|
2017-06-21 05:10:57 +02:00
|
|
|
#define BGP_SHOW_HEADER_CSV "Flags, Network, Next Hop, Metric, LocPrf, Weight, Path"
|
2017-07-13 18:50:29 +02:00
|
|
|
#define BGP_SHOW_DAMP_HEADER " Network From Reuse Path\n"
|
|
|
|
#define BGP_SHOW_FLAP_HEADER " Network From Flaps Duration Reuse Path\n"
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static int bgp_show_prefix_list(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *prefix_list_str, afi_t afi,
|
|
|
|
safi_t safi, enum bgp_show_type type);
|
|
|
|
static int bgp_show_filter_list(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *filter, afi_t afi, safi_t safi,
|
|
|
|
enum bgp_show_type type);
|
|
|
|
static int bgp_show_route_map(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *rmap_str, afi_t afi, safi_t safi,
|
|
|
|
enum bgp_show_type type);
|
|
|
|
static int bgp_show_community_list(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *com, int exact, afi_t afi,
|
|
|
|
safi_t safi);
|
|
|
|
static int bgp_show_prefix_longer(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *prefix, afi_t afi, safi_t safi,
|
|
|
|
enum bgp_show_type type);
|
2018-02-09 19:22:50 +01:00
|
|
|
static int bgp_show_regexp(struct vty *vty, struct bgp *bgp, const char *regstr,
|
2019-12-17 10:42:02 +01:00
|
|
|
afi_t afi, safi_t safi, enum bgp_show_type type,
|
|
|
|
bool use_json);
|
2017-08-25 20:27:49 +02:00
|
|
|
static int bgp_show_community(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *comstr, int exact, afi_t afi,
|
2018-08-29 14:19:54 +02:00
|
|
|
safi_t safi, bool use_json);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2017-09-29 00:51:31 +02:00
|
|
|
|
|
|
|
static int bgp_show_table(struct vty *vty, struct bgp *bgp, safi_t safi,
|
2017-07-17 14:03:14 +02:00
|
|
|
struct bgp_table *table, enum bgp_show_type type,
|
2018-08-29 14:19:54 +02:00
|
|
|
void *output_arg, bool use_json, char *rd,
|
2018-02-09 19:22:50 +01:00
|
|
|
int is_last, unsigned long *output_cum,
|
|
|
|
unsigned long *total_cum,
|
2018-02-09 18:29:39 +01:00
|
|
|
unsigned long *json_header_depth)
|
2017-07-17 14:03:14 +02:00
|
|
|
{
|
2018-10-03 02:43:07 +02:00
|
|
|
struct bgp_path_info *pi;
|
2017-07-17 14:03:14 +02:00
|
|
|
struct bgp_node *rn;
|
|
|
|
int header = 1;
|
|
|
|
int display;
|
2017-09-29 00:51:31 +02:00
|
|
|
unsigned long output_count = 0;
|
|
|
|
unsigned long total_count = 0;
|
2017-07-17 14:03:14 +02:00
|
|
|
struct prefix *p;
|
|
|
|
char buf2[BUFSIZ];
|
|
|
|
json_object *json_paths = NULL;
|
|
|
|
int first = 1;
|
|
|
|
|
2017-09-29 00:51:31 +02:00
|
|
|
if (output_cum && *output_cum != 0)
|
|
|
|
header = 0;
|
|
|
|
|
2018-02-09 18:29:39 +01:00
|
|
|
if (use_json && !*json_header_depth) {
|
2017-07-17 14:03:14 +02:00
|
|
|
vty_out(vty,
|
2017-10-27 17:32:17 +02:00
|
|
|
"{\n \"vrfId\": %d,\n \"vrfName\": \"%s\",\n \"tableVersion\": %" PRId64
|
2018-11-02 22:40:44 +01:00
|
|
|
",\n \"routerId\": \"%s\",\n \"defaultLocPrf\": %u,\n"
|
|
|
|
" \"localAS\": %u,\n \"routes\": { ",
|
2017-12-18 12:33:29 +01:00
|
|
|
bgp->vrf_id == VRF_UNKNOWN ? -1 : (int)bgp->vrf_id,
|
2018-07-20 17:02:15 +02:00
|
|
|
bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT
|
|
|
|
? VRF_DEFAULT_NAME
|
|
|
|
: bgp->name,
|
2018-11-02 22:40:44 +01:00
|
|
|
table->version, inet_ntoa(bgp->router_id),
|
|
|
|
bgp->default_local_pref, bgp->as);
|
2018-02-09 18:29:39 +01:00
|
|
|
*json_header_depth = 2;
|
|
|
|
if (rd) {
|
2017-10-05 16:11:36 +02:00
|
|
|
vty_out(vty, " \"routeDistinguishers\" : {");
|
2018-02-09 18:29:39 +01:00
|
|
|
++*json_header_depth;
|
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-10-05 16:11:36 +02:00
|
|
|
if (use_json && rd) {
|
|
|
|
vty_out(vty, " \"%s\" : { ", rd);
|
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* Start processing of routes. */
|
2017-10-04 16:26:43 +02:00
|
|
|
for (rn = bgp_table_top(table); rn; rn = bgp_route_next(rn)) {
|
2018-07-30 17:40:02 +02:00
|
|
|
pi = bgp_node_get_bgp_path_info(rn);
|
|
|
|
if (pi == NULL)
|
2017-10-04 16:26:43 +02:00
|
|
|
continue;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2017-10-04 16:26:43 +02:00
|
|
|
display = 0;
|
|
|
|
if (use_json)
|
|
|
|
json_paths = json_object_new_array();
|
|
|
|
else
|
|
|
|
json_paths = NULL;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-07-30 17:40:02 +02:00
|
|
|
for (; pi; pi = pi->next) {
|
2017-10-04 16:26:43 +02:00
|
|
|
total_count++;
|
|
|
|
if (type == bgp_show_type_flap_statistics
|
|
|
|
|| type == bgp_show_type_flap_neighbor
|
|
|
|
|| type == bgp_show_type_dampend_paths
|
|
|
|
|| type == bgp_show_type_damp_neighbor) {
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!(pi->extra && pi->extra->damp_info))
|
2017-10-04 16:26:43 +02:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (type == bgp_show_type_regexp) {
|
|
|
|
regex_t *regex = output_arg;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (bgp_regexec(regex, pi->attr->aspath)
|
2017-10-04 16:26:43 +02:00
|
|
|
== REG_NOMATCH)
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (type == bgp_show_type_prefix_list) {
|
|
|
|
struct prefix_list *plist = output_arg;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2017-10-04 16:26:43 +02:00
|
|
|
if (prefix_list_apply(plist, &rn->p)
|
|
|
|
!= PREFIX_PERMIT)
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (type == bgp_show_type_filter_list) {
|
|
|
|
struct as_list *as_list = output_arg;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (as_list_apply(as_list, pi->attr->aspath)
|
2017-10-04 16:26:43 +02:00
|
|
|
!= AS_FILTER_PERMIT)
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (type == bgp_show_type_route_map) {
|
|
|
|
struct route_map *rmap = output_arg;
|
2018-10-03 00:34:03 +02:00
|
|
|
struct bgp_path_info path;
|
2017-10-04 16:26:43 +02:00
|
|
|
struct attr dummy_attr;
|
lib: Introducing a 3rd state for route-map match cmd: RMAP_NOOP
Introducing a 3rd state for route_map_apply library function: RMAP_NOOP
Traditionally, route-map MATCH rule APIs were designed to return
a binary response, consisting of either RMAP_MATCH or RMAP_NOMATCH.
(Route-map SET rule APIs return RMAP_OKAY or RMAP_ERROR.)
Depending on this response, the following state machine decided the
course of action:
State1:
If match cmd returns RMAP_MATCH, keep the existing behaviour:
if the route-map type is PERMIT, execute the set cmds or call cmds if
applicable, otherwise PERMIT;
else, if the route-map type is DENY, we DENYMATCH right away.
State2:
If match cmd returns RMAP_NOMATCH, continue on to next route-map. If there
are no other rules or if all the rules return RMAP_NOMATCH, return DENYMATCH
We require a 3rd state because of the following situation:
The issue: what if a rule API needs to abort or ignore a rule?
"match evpn vni xx" route-map filter can be applied to incoming routes
regardless of whether the tunnel type is vxlan or mpls.
This rule should be N/A for an mpls-based evpn route, but applicable
only to a vxlan-based evpn route.
Also, this rule should apply only to routes with a VNI label, and
not to routes without labels. For example, type 3 and type 4 EVPN routes
do not carry labels, so this match cmd should let them through.
Today, the filter produces either a match or nomatch response regardless of
whether the route is mpls- or vxlan-based, resulting in either permitting or
denying the route, so an mpls evpn route may get filtered out incorrectly.
Eg: "route-map RM1 permit 10 ; match evpn vni 20" or
"route-map RM2 deny 20 ; match vni 20"
With the introduction of the 3rd state, we can abort this rule check safely.
How? The rule APIs can now return RMAP_NOOP to indicate that they
encountered a check that does not apply, aborting just that rule while
continuing with the other rules.
As a result we have a 3rd state:
State3:
If match cmd returns RMAP_NOOP, proceed to the remaining rules; if there
are no more rules, or if all the rules return RMAP_NOOP, return
RMAP_PERMITMATCH.
Signed-off-by: Lakshman Krishnamoorthy <lkrishnamoor@vmware.com>
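The three-state folding described above can be sketched as follows. The enum values and fold_rules() are hypothetical stand-ins for the library's route_map_result_t handling, under the assumption that match rules within one permit clause are ANDed: any NOMATCH fails the clause, while NOOP rules are simply skipped, and an all-NOOP clause still permits.

```c
/* Stand-in values mirroring the commit's three per-rule outcomes. */
enum rmap_rule_result { RULE_NOMATCH, RULE_MATCH, RULE_NOOP };
enum rmap_verdict { VERDICT_DENYMATCH, VERDICT_PERMITMATCH };

/* Fold the per-rule results of one permit clause into a verdict:
 * any NOMATCH denies; otherwise (all MATCH/NOOP, including all-NOOP)
 * the clause permit-matches. */
static enum rmap_verdict fold_rules(const enum rmap_rule_result *r, int n)
{
	for (int i = 0; i < n; i++)
		if (r[i] == RULE_NOMATCH)
			return VERDICT_DENYMATCH;
	return VERDICT_PERMITMATCH;
}
```

This captures the "match evpn vni" case: for an mpls route the rule returns NOOP, so the clause's verdict is decided by the remaining rules instead of incorrectly denying the route.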
2019-06-19 23:04:36 +02:00
|
|
|
route_map_result_t ret;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-12-03 22:01:19 +01:00
|
|
|
dummy_attr = *pi->attr;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
path.peer = pi->peer;
|
2018-10-03 00:34:03 +02:00
|
|
|
path.attr = &dummy_attr;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-02-09 19:22:50 +01:00
|
|
|
ret = route_map_apply(rmap, &rn->p, RMAP_BGP,
|
2018-10-03 00:34:03 +02:00
|
|
|
&path);
|
2017-10-04 16:26:43 +02:00
|
|
|
if (ret == RMAP_DENYMATCH)
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (type == bgp_show_type_neighbor
|
|
|
|
|| type == bgp_show_type_flap_neighbor
|
|
|
|
|| type == bgp_show_type_damp_neighbor) {
|
|
|
|
union sockunion *su = output_arg;
|
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (pi->peer == NULL
|
|
|
|
|| pi->peer->su_remote == NULL
|
|
|
|
|| !sockunion_same(pi->peer->su_remote, su))
|
2017-10-04 16:26:43 +02:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (type == bgp_show_type_cidr_only) {
|
2018-03-27 21:13:34 +02:00
|
|
|
uint32_t destination;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2017-10-04 16:26:43 +02:00
|
|
|
destination = ntohl(rn->p.u.prefix4.s_addr);
|
|
|
|
if (IN_CLASSC(destination)
|
|
|
|
&& rn->p.prefixlen == 24)
|
|
|
|
continue;
|
|
|
|
if (IN_CLASSB(destination)
|
|
|
|
&& rn->p.prefixlen == 16)
|
|
|
|
continue;
|
|
|
|
if (IN_CLASSA(destination)
|
|
|
|
&& rn->p.prefixlen == 8)
|
|
|
|
continue;
|
|
|
|
}
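The cidr-only test above skips any route sitting at its classful default length (/8 for class A, /16 for class B, /24 for class C), leaving only true CIDR routes in the output. A self-contained sketch of that predicate, using the standard IN_CLASSA/B/C macros on a host-byte-order address (as after the ntohl() in the loop); is_classful_default() is an illustrative helper, not FRR code:

```c
#include <stdint.h>
#include <netinet/in.h>	/* IN_CLASSA / IN_CLASSB / IN_CLASSC */

/* Return 1 when the prefix length equals the address's classful
 * default, i.e. the route would be suppressed by "cidr-only". */
static int is_classful_default(uint32_t destination, int prefixlen)
{
	if (IN_CLASSC(destination) && prefixlen == 24)
		return 1;
	if (IN_CLASSB(destination) && prefixlen == 16)
		return 1;
	if (IN_CLASSA(destination) && prefixlen == 8)
		return 1;
	return 0;
}
```

For example, 192.168.1.0/24 and 10.0.0.0/8 are classful defaults and would be skipped, while 10.0.0.0/24 is a CIDR route and would be shown.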
|
|
|
|
if (type == bgp_show_type_prefix_longer) {
|
2018-09-12 12:18:44 +02:00
|
|
|
p = output_arg;
|
2017-10-04 16:26:43 +02:00
|
|
|
if (!prefix_match(p, &rn->p))
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (type == bgp_show_type_community_all) {
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!pi->attr->community)
|
2017-10-04 16:26:43 +02:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (type == bgp_show_type_community) {
|
|
|
|
struct community *com = output_arg;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!pi->attr->community
|
|
|
|
|| !community_match(pi->attr->community,
|
2017-10-04 16:26:43 +02:00
|
|
|
com))
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (type == bgp_show_type_community_exact) {
|
|
|
|
struct community *com = output_arg;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!pi->attr->community
|
|
|
|
|| !community_cmp(pi->attr->community, com))
|
2017-10-04 16:26:43 +02:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (type == bgp_show_type_community_list) {
|
|
|
|
struct community_list *list = output_arg;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2018-10-03 02:43:07 +02:00
|
|
|
if (!community_list_match(pi->attr->community,
|
2018-02-09 19:22:50 +01:00
|
|
|
list))
|
2017-10-04 16:26:43 +02:00
|
|
|
continue;
|
|
|
|
}
|
2018-02-09 19:22:50 +01:00
|
|
|
if (type == bgp_show_type_community_list_exact) {
|
2017-10-04 16:26:43 +02:00
|
|
|
struct community_list *list = output_arg;
|
				if (!community_list_exact_match(
					    pi->attr->community, list))
					continue;
			}
			if (type == bgp_show_type_lcommunity) {
				struct lcommunity *lcom = output_arg;

				if (!pi->attr->lcommunity
				    || !lcommunity_match(pi->attr->lcommunity,
							 lcom))
					continue;
			}
			if (type == bgp_show_type_lcommunity_exact) {
				struct lcommunity *lcom = output_arg;

				if (!pi->attr->lcommunity
				    || !lcommunity_cmp(pi->attr->lcommunity,
						       lcom))
					continue;
			}
			if (type == bgp_show_type_lcommunity_list) {
				struct community_list *list = output_arg;

				if (!lcommunity_list_match(pi->attr->lcommunity,
							   list))
					continue;
			}
			if (type
			    == bgp_show_type_lcommunity_list_exact) {
				struct community_list *list = output_arg;

				if (!lcommunity_list_exact_match(
					    pi->attr->lcommunity, list))
					continue;
			}
			if (type == bgp_show_type_lcommunity_all) {
				if (!pi->attr->lcommunity)
					continue;
			}
			if (type == bgp_show_type_dampend_paths
			    || type == bgp_show_type_damp_neighbor) {
				if (!CHECK_FLAG(pi->flags, BGP_PATH_DAMPED)
				    || CHECK_FLAG(pi->flags, BGP_PATH_HISTORY))
					continue;
			}

			if (!use_json && header) {
				vty_out(vty,
					"BGP table version is %" PRIu64
					", local router ID is %s, vrf id ",
					table->version,
					inet_ntoa(bgp->router_id));
				if (bgp->vrf_id == VRF_UNKNOWN)
					vty_out(vty, "%s", VRFID_NONE_STR);
				else
					vty_out(vty, "%u", bgp->vrf_id);
				vty_out(vty, "\n");
				vty_out(vty, "Default local pref %u, ",
					bgp->default_local_pref);
				vty_out(vty, "local AS %u\n", bgp->as);
				vty_out(vty, BGP_SHOW_SCODE_HEADER);
				vty_out(vty, BGP_SHOW_NCODE_HEADER);
				vty_out(vty, BGP_SHOW_OCODE_HEADER);
				if (type == bgp_show_type_dampend_paths
				    || type == bgp_show_type_damp_neighbor)
					vty_out(vty, BGP_SHOW_DAMP_HEADER);
				else if (type == bgp_show_type_flap_statistics
					 || type == bgp_show_type_flap_neighbor)
					vty_out(vty, BGP_SHOW_FLAP_HEADER);
				else
					vty_out(vty, BGP_SHOW_HEADER);
				header = 0;
			}
			if (rd != NULL && !display && !output_count) {
				if (!use_json)
					vty_out(vty,
						"Route Distinguisher: %s\n",
						rd);
			}
			if (type == bgp_show_type_dampend_paths
			    || type == bgp_show_type_damp_neighbor)
				damp_route_vty_out(vty, &rn->p, pi, display,
						   AFI_IP, safi, use_json,
						   json_paths);
			else if (type == bgp_show_type_flap_statistics
				 || type == bgp_show_type_flap_neighbor)
				flap_route_vty_out(vty, &rn->p, pi, display,
						   AFI_IP, safi, use_json,
						   json_paths);
			else
				route_vty_out(vty, &rn->p, pi, display, safi,
					      json_paths);
			display++;
		}

		if (display) {
			output_count++;
			if (!use_json)
				continue;

			p = &rn->p;
			/* encode prefix */
			if (p->family == AF_FLOWSPEC) {
				char retstr[BGP_FLOWSPEC_STRING_DISPLAY_MAX];

				bgp_fs_nlri_get_string(
					(unsigned char *)
						p->u.prefix_flowspec.ptr,
					p->u.prefix_flowspec.prefixlen, retstr,
					NLRI_STRING_FORMAT_MIN, NULL);
				if (first)
					vty_out(vty, "\"%s/%d\": ", retstr,
						p->u.prefix_flowspec.prefixlen);
				else
					vty_out(vty, ",\"%s/%d\": ", retstr,
						p->u.prefix_flowspec.prefixlen);
			} else {
				prefix2str(p, buf2, sizeof(buf2));
				if (first)
					vty_out(vty, "\"%s\": ", buf2);
				else
					vty_out(vty, ",\"%s\": ", buf2);
			}
			vty_out(vty, "%s",
				json_object_to_json_string_ext(
					json_paths, JSON_C_TO_STRING_PRETTY));
			json_object_free(json_paths);
			json_paths = NULL;
			first = 0;
		} else
			json_object_free(json_paths);
	}

	if (output_cum) {
		output_count += *output_cum;
		*output_cum = output_count;
	}
	if (total_cum) {
		total_count += *total_cum;
		*total_cum = total_count;
	}
	if (use_json) {
		if (rd) {
			vty_out(vty, " }%s ", (is_last ? "" : ","));
		}
		if (is_last) {
			unsigned long i;

			for (i = 0; i < *json_header_depth; ++i)
				vty_out(vty, " } ");
			vty_out(vty, "\n");
		}
	} else {
		if (is_last) {
			/* No route is displayed */
			if (output_count == 0) {
				if (type == bgp_show_type_normal)
					vty_out(vty,
						"No BGP prefixes displayed, %ld exist\n",
						total_count);
			} else
				vty_out(vty,
					"\nDisplayed %ld routes and %ld total paths\n",
					output_count, total_count);
		}
	}

	return CMD_SUCCESS;
}

/* Walk a VPN table's per-route-distinguisher subtables and show each one,
 * accumulating route and path counts across subtables.
 */
int bgp_show_table_rd(struct vty *vty, struct bgp *bgp, safi_t safi,
		      struct bgp_table *table, struct prefix_rd *prd_match,
		      enum bgp_show_type type, void *output_arg, bool use_json)
{
	struct bgp_node *rn, *next;
	unsigned long output_cum = 0;
	unsigned long total_cum = 0;
	unsigned long json_header_depth = 0;
	struct bgp_table *itable;
	bool show_msg;

	show_msg = (!use_json && type == bgp_show_type_normal);

	for (rn = bgp_table_top(table); rn; rn = next) {
		next = bgp_route_next(rn);
		if (prd_match && memcmp(rn->p.u.val, prd_match->val, 8) != 0)
			continue;

		itable = bgp_node_get_bgp_table_info(rn);
		if (itable != NULL) {
			struct prefix_rd prd;
			char rd[RD_ADDRSTRLEN];

			memcpy(&prd, &(rn->p), sizeof(struct prefix_rd));
			prefix_rd2str(&prd, rd, sizeof(rd));
			bgp_show_table(vty, bgp, safi, itable, type, output_arg,
				       use_json, rd, next == NULL, &output_cum,
				       &total_cum, &json_header_depth);
			if (next == NULL)
				show_msg = false;
		}
	}
	if (show_msg) {
		if (output_cum == 0)
			vty_out(vty, "No BGP prefixes displayed, %ld exist\n",
				total_cum);
		else
			vty_out(vty,
				"\nDisplayed %ld routes and %ld total paths\n",
				output_cum, total_cum);
	}
	return CMD_SUCCESS;
}

/* Resolve the BGP instance and RIB for the given afi/safi, then dispatch to
 * the appropriate table-show routine.
 */
static int bgp_show(struct vty *vty, struct bgp *bgp, afi_t afi, safi_t safi,
		    enum bgp_show_type type, void *output_arg, bool use_json)
{
	struct bgp_table *table;
	unsigned long json_header_depth = 0;

	if (bgp == NULL) {
		bgp = bgp_get_default();
	}

	if (bgp == NULL) {
		if (!use_json)
			vty_out(vty, "No BGP process is configured\n");
		else
			vty_out(vty, "{}\n");
		return CMD_WARNING;
	}

	table = bgp->rib[afi][safi];
	/* use MPLS and ENCAP specific shows until they are merged */
	if (safi == SAFI_MPLS_VPN) {
		return bgp_show_table_rd(vty, bgp, safi, table, NULL, type,
					 output_arg, use_json);
	}

	if (safi == SAFI_FLOWSPEC && type == bgp_show_type_detail) {
		return bgp_show_table_flowspec(vty, bgp, afi, table, type,
					       output_arg, use_json, 1, NULL,
					       NULL);
	}
	/* labeled-unicast routes live in the unicast table */
	else if (safi == SAFI_LABELED_UNICAST)
		safi = SAFI_UNICAST;

	return bgp_show_table(vty, bgp, safi, table, type, output_arg, use_json,
			      NULL, 1, NULL, NULL, &json_header_depth);
}

/* Show routes for every configured BGP instance. */
static void bgp_show_all_instances_routes_vty(struct vty *vty, afi_t afi,
					      safi_t safi, bool use_json)
{
	struct listnode *node, *nnode;
	struct bgp *bgp;
	int is_first = 1;
	bool route_output = false;

	if (use_json)
		vty_out(vty, "{\n");

	for (ALL_LIST_ELEMENTS(bm->bgp, node, nnode, bgp)) {
		route_output = true;
		if (use_json) {
			if (!is_first)
				vty_out(vty, ",\n");
			else
				is_first = 0;

			vty_out(vty, "\"%s\":",
				(bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)
					? VRF_DEFAULT_NAME
					: bgp->name);
		} else {
			vty_out(vty, "\nInstance %s:\n",
				(bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)
					? VRF_DEFAULT_NAME
					: bgp->name);
		}
		bgp_show(vty, bgp, afi, safi, bgp_show_type_normal, NULL,
			 use_json);
	}

	if (use_json)
		vty_out(vty, "}\n");
	else if (!route_output)
		vty_out(vty, "%% BGP instance not found\n");
}
|
|
|
|
|
2002-12-13 21:15:29 +01:00
|
|
|
/* Header of detailed BGP route information */
|
2017-07-17 14:03:14 +02:00
|
|
|
void route_vty_out_detail_header(struct vty *vty, struct bgp *bgp,
|
|
|
|
struct bgp_node *rn, struct prefix_rd *prd,
|
|
|
|
afi_t afi, safi_t safi, json_object *json)
|
|
|
|
{
|
2018-10-03 02:43:07 +02:00
|
|
|
struct bgp_path_info *pi;
|
2017-07-17 14:03:14 +02:00
|
|
|
struct prefix *p;
|
|
|
|
struct peer *peer;
|
|
|
|
struct listnode *node, *nnode;
|
2017-12-11 18:38:26 +01:00
|
|
|
char buf1[RD_ADDRSTRLEN];
|
2017-07-17 14:03:14 +02:00
|
|
|
char buf2[INET6_ADDRSTRLEN];
|
|
|
|
char buf3[EVPN_ROUTE_STRLEN];
|
2017-08-19 07:43:09 +02:00
|
|
|
char prefix_str[BUFSIZ];
|
2017-07-17 14:03:14 +02:00
|
|
|
int count = 0;
|
|
|
|
int best = 0;
|
|
|
|
int suppress = 0;
|
2018-08-24 23:57:42 +02:00
|
|
|
int accept_own = 0;
|
|
|
|
int route_filter_translated_v4 = 0;
|
|
|
|
int route_filter_v4 = 0;
|
|
|
|
int route_filter_translated_v6 = 0;
|
|
|
|
int route_filter_v6 = 0;
|
|
|
|
int llgr_stale = 0;
|
|
|
|
int no_llgr = 0;
|
|
|
|
int accept_own_nexthop = 0;
|
|
|
|
int blackhole = 0;
|
2017-07-17 14:03:14 +02:00
|
|
|
int no_export = 0;
|
|
|
|
int no_advertise = 0;
|
|
|
|
int local_as = 0;
|
2018-08-24 23:57:42 +02:00
|
|
|
int no_peer = 0;
|
2017-07-17 14:03:14 +02:00
|
|
|
int first = 1;
|
|
|
|
int has_valid_label = 0;
|
|
|
|
mpls_label_t label = 0;
|
|
|
|
json_object *json_adv_to = NULL;
|
2017-06-16 21:12:57 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
p = &rn->p;
|
|
|
|
has_valid_label = bgp_is_valid_label(&rn->local_label);
|
|
|
|
|
|
|
|
if (has_valid_label)
|
|
|
|
label = label_pton(&rn->local_label);
|
|
|
|
|
2019-09-11 09:01:39 +02:00
|
|
|
if (safi == SAFI_EVPN) {
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-09-11 09:01:39 +02:00
|
|
|
if (!json) {
|
2017-07-17 14:03:14 +02:00
|
|
|
vty_out(vty, "BGP routing table entry for %s%s%s\n",
|
2017-12-11 18:38:26 +01:00
|
|
|
prd ? prefix_rd2str(prd, buf1, sizeof(buf1))
|
2019-09-11 09:01:39 +02:00
|
|
|
: "", prd ? ":" : "",
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_evpn_route2str((struct prefix_evpn *)p,
|
2019-09-11 09:01:39 +02:00
|
|
|
buf3, sizeof(buf3)));
|
|
|
|
} else {
|
|
|
|
json_object_string_add(json, "rd",
|
|
|
|
prd ? prefix_rd2str(prd, buf1, sizeof(buf1)) :
|
|
|
|
"");
|
|
|
|
bgp_evpn_route2json((struct prefix_evpn *)p, json);
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
if (!json) {
|
2017-07-17 14:03:14 +02:00
|
|
|
vty_out(vty, "BGP routing table entry for %s%s%s/%d\n",
|
|
|
|
((safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP)
|
2019-09-11 09:01:39 +02:00
|
|
|
? prefix_rd2str(prd, buf1,
|
|
|
|
sizeof(buf1))
|
|
|
|
: ""),
|
2017-07-17 14:03:14 +02:00
|
|
|
safi == SAFI_MPLS_VPN ? ":" : "",
|
|
|
|
inet_ntop(p->family, &p->u.prefix, buf2,
|
2019-09-11 09:01:39 +02:00
|
|
|
INET6_ADDRSTRLEN),
|
2017-07-17 14:03:14 +02:00
|
|
|
p->prefixlen);
|
2017-03-09 15:54:20 +01:00
|
|
|
|
2019-09-11 09:01:39 +02:00
|
|
|
} else
|
|
|
|
json_object_string_add(json, "prefix",
|
|
|
|
prefix2str(p, prefix_str, sizeof(prefix_str)));
|
|
|
|
}
|
|
|
|
|
|
|
|
if (has_valid_label) {
|
|
|
|
if (json)
|
|
|
|
json_object_int_add(json, "localLabel", label);
|
|
|
|
else
|
2017-07-17 14:03:14 +02:00
|
|
|
vty_out(vty, "Local label: %d\n", label);
|
2019-09-11 09:01:39 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
if (!json)
|
2017-07-17 14:03:14 +02:00
|
|
|
if (bgp_labeled_safi(safi) && safi != SAFI_EVPN)
|
|
|
|
vty_out(vty, "not allocated\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2018-07-30 17:40:02 +02:00
|
|
|
	for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next) {
		count++;
		if (CHECK_FLAG(pi->flags, BGP_PATH_SELECTED)) {
			best = count;
			if (pi->extra && pi->extra->suppress)
				suppress = 1;

			if (pi->attr->community == NULL)
				continue;

			no_advertise += community_include(
				pi->attr->community, COMMUNITY_NO_ADVERTISE);
			no_export += community_include(pi->attr->community,
						       COMMUNITY_NO_EXPORT);
			local_as += community_include(pi->attr->community,
						      COMMUNITY_LOCAL_AS);
			accept_own += community_include(pi->attr->community,
							COMMUNITY_ACCEPT_OWN);
			route_filter_translated_v4 += community_include(
				pi->attr->community,
				COMMUNITY_ROUTE_FILTER_TRANSLATED_v4);
			route_filter_translated_v6 += community_include(
				pi->attr->community,
				COMMUNITY_ROUTE_FILTER_TRANSLATED_v6);
			route_filter_v4 += community_include(
				pi->attr->community, COMMUNITY_ROUTE_FILTER_v4);
			route_filter_v6 += community_include(
				pi->attr->community, COMMUNITY_ROUTE_FILTER_v6);
			llgr_stale += community_include(pi->attr->community,
							COMMUNITY_LLGR_STALE);
			no_llgr += community_include(pi->attr->community,
						     COMMUNITY_NO_LLGR);
			accept_own_nexthop +=
				community_include(pi->attr->community,
						  COMMUNITY_ACCEPT_OWN_NEXTHOP);
			blackhole += community_include(pi->attr->community,
						       COMMUNITY_BLACKHOLE);
			no_peer += community_include(pi->attr->community,
						     COMMUNITY_NO_PEER);
		}
	}
	if (!json) {
		vty_out(vty, "Paths: (%d available", count);
		if (best) {
			vty_out(vty, ", best #%d", best);
			if (safi == SAFI_UNICAST) {
				if (bgp->inst_type == BGP_INSTANCE_TYPE_DEFAULT)
					vty_out(vty, ", table %s",
						VRF_DEFAULT_NAME);
				else
					vty_out(vty, ", vrf %s",
						bgp->name);
			}
		} else
			vty_out(vty, ", no best path");

		if (accept_own)
			vty_out(vty,
				", accept own local route exported and imported in different VRF");
		else if (route_filter_translated_v4)
			vty_out(vty,
				", mark translated RTs for VPNv4 route filtering");
		else if (route_filter_v4)
			vty_out(vty,
				", attach RT as-is for VPNv4 route filtering");
		else if (route_filter_translated_v6)
			vty_out(vty,
				", mark translated RTs for VPNv6 route filtering");
		else if (route_filter_v6)
			vty_out(vty,
				", attach RT as-is for VPNv6 route filtering");
		else if (llgr_stale)
			vty_out(vty,
				", mark routes to be retained for a longer time. Requires support for Long-lived BGP Graceful Restart");
		else if (no_llgr)
			vty_out(vty,
				", mark routes to not be treated according to Long-lived BGP Graceful Restart operations");
		else if (accept_own_nexthop)
			vty_out(vty, ", accept local nexthop");
		else if (blackhole)
			vty_out(vty, ", inform peer to blackhole prefix");
		else if (no_export)
			vty_out(vty, ", not advertised to EBGP peer");
		else if (no_advertise)
			vty_out(vty, ", not advertised to any peer");
		else if (local_as)
			vty_out(vty, ", not advertised outside local AS");
		else if (no_peer)
			vty_out(vty,
				", inform EBGP peer not to advertise to their EBGP peers");

		if (suppress)
			vty_out(vty,
				", Advertisements suppressed by an aggregate.");
		vty_out(vty, ")\n");
	}
	/* If we are not using addpath then we can display Advertised to and
	 * that will show what peers we advertised the bestpath to.  If we
	 * are using addpath though then we must display Advertised to on a
	 * path-by-path basis. */
	if (!bgp_addpath_is_addpath_used(&bgp->tx_addpath, afi, safi)) {
		for (ALL_LIST_ELEMENTS(bgp->peer, node, nnode, peer)) {
			if (bgp_adj_out_lookup(peer, rn, 0)) {
				if (json && !json_adv_to)
					json_adv_to = json_object_new_object();

				route_vty_out_advertised_to(
					vty, peer, &first,
					" Advertised to non peer-group peers:\n ",
					json_adv_to);
			}
		}

		if (json) {
			if (json_adv_to) {
				json_object_object_add(json, "advertisedTo",
						       json_adv_to);
			}
		} else {
			if (first)
				vty_out(vty, " Not advertised to any peer");
			vty_out(vty, "\n");
		}
	}
}
static void bgp_show_path_info(struct prefix_rd *pfx_rd,
			       struct bgp_node *bgp_node, struct vty *vty,
			       struct bgp *bgp, afi_t afi, safi_t safi,
			       json_object *json,
			       enum bgp_path_type pathtype, int *display)
{
	struct bgp_path_info *pi;
	int header = 1;
	char rdbuf[RD_ADDRSTRLEN];
	json_object *json_header = NULL;
	json_object *json_paths = NULL;

	for (pi = bgp_node_get_bgp_path_info(bgp_node); pi;
	     pi = pi->next) {

		if (json && !json_paths) {
			/* Instantiate json_paths only if path is valid */
			json_paths = json_object_new_array();
			if (pfx_rd) {
				prefix_rd2str(pfx_rd, rdbuf, sizeof(rdbuf));
				json_header = json_object_new_object();
			} else
				json_header = json;
		}

		if (header) {
			route_vty_out_detail_header(
				vty, bgp, bgp_node, pfx_rd,
				AFI_IP, safi, json_header);
			header = 0;
		}
		(*display)++;

		if (pathtype == BGP_PATH_SHOW_ALL
		    || (pathtype == BGP_PATH_SHOW_BESTPATH
			&& CHECK_FLAG(pi->flags, BGP_PATH_SELECTED))
		    || (pathtype == BGP_PATH_SHOW_MULTIPATH
			&& (CHECK_FLAG(pi->flags, BGP_PATH_MULTIPATH)
			    || CHECK_FLAG(pi->flags, BGP_PATH_SELECTED))))
			route_vty_out_detail(vty, bgp, bgp_node,
					     pi, AFI_IP, safi,
					     json_paths);
	}

	if (json && json_paths) {
		json_object_object_add(json_header, "paths", json_paths);

		if (pfx_rd)
			json_object_object_add(json, rdbuf, json_header);
	}
}
/* Display specified route of BGP table. */
static int bgp_show_route_in_table(struct vty *vty, struct bgp *bgp,
				   struct bgp_table *rib, const char *ip_str,
				   afi_t afi, safi_t safi,
				   struct prefix_rd *prd, int prefix_check,
				   enum bgp_path_type pathtype, bool use_json)
{
	int ret;
	int display = 0;
	struct prefix match;
	struct bgp_node *rn;
	struct bgp_node *rm;
	struct bgp_table *table;
	json_object *json = NULL;
	json_object *json_paths = NULL;

	/* Check IP address argument. */
	ret = str2prefix(ip_str, &match);
	if (!ret) {
		vty_out(vty, "address is malformed\n");
		return CMD_WARNING;
	}

	match.family = afi2family(afi);

	if (use_json)
		json = json_object_new_object();
	if (safi == SAFI_MPLS_VPN || safi == SAFI_ENCAP) {
		for (rn = bgp_table_top(rib); rn; rn = bgp_route_next(rn)) {
			if (prd && memcmp(rn->p.u.val, prd->val, 8) != 0)
				continue;
			table = bgp_node_get_bgp_table_info(rn);
			if (!table)
				continue;

			if ((rm = bgp_node_match(table, &match)) == NULL)
				continue;

			if (prefix_check
			    && rm->p.prefixlen != match.prefixlen) {
				bgp_unlock_node(rm);
				continue;
			}

			bgp_show_path_info((struct prefix_rd *)&rn->p, rm,
					   vty, bgp, afi, safi, json,
					   pathtype, &display);

			bgp_unlock_node(rm);
		}
	} else if (safi == SAFI_EVPN) {
		struct bgp_node *longest_pfx;
		bool is_exact_pfxlen_match = FALSE;

		for (rn = bgp_table_top(rib); rn; rn = bgp_route_next(rn)) {
			if (prd && memcmp(rn->p.u.val, prd->val, 8) != 0)
				continue;
			table = bgp_node_get_bgp_table_info(rn);
			if (!table)
				continue;

			longest_pfx = NULL;
			is_exact_pfxlen_match = FALSE;
			/*
			 * Search through all the prefixes for a match. The
			 * pfx's are enumerated in ascending order of pfxlens.
			 * So, the last pfx match is the longest match. Set
			 * is_exact_pfxlen_match when we get exact pfxlen match
			 */
			for (rm = bgp_table_top(table); rm;
			     rm = bgp_route_next(rm)) {
				/*
				 * Get prefixlen of the ip-prefix within type5
				 * evpn route
				 */
				if (evpn_type5_prefix_match(&rm->p,
							    &match) && rm->info) {
					longest_pfx = rm;
					int type5_pfxlen =
						bgp_evpn_get_type5_prefixlen(&rm->p);
					if (type5_pfxlen == match.prefixlen) {
						is_exact_pfxlen_match = TRUE;
						bgp_unlock_node(rm);
						break;
					}
				}
			}

			if (!longest_pfx)
				continue;

			if (prefix_check && !is_exact_pfxlen_match)
				continue;

			rm = longest_pfx;
			bgp_lock_node(rm);

			bgp_show_path_info((struct prefix_rd *)&rn->p, rm,
					   vty, bgp, afi, safi, json,
					   pathtype, &display);

			bgp_unlock_node(rm);
		}
	} else if (safi == SAFI_FLOWSPEC) {
		if (use_json)
			json_paths = json_object_new_array();

		display = bgp_flowspec_display_match_per_ip(afi, rib,
							    &match, prefix_check,
							    vty,
							    use_json,
							    json_paths);
		if (use_json && display)
			json_object_object_add(json, "paths", json_paths);
	} else {
		if ((rn = bgp_node_match(rib, &match)) != NULL) {
			if (!prefix_check
			    || rn->p.prefixlen == match.prefixlen) {
				bgp_show_path_info(NULL, rn, vty, bgp, afi,
						   safi, json,
						   pathtype, &display);
			}

			bgp_unlock_node(rn);
		}
	}
	if (use_json) {
		vty_out(vty, "%s\n", json_object_to_json_string_ext(
					     json, JSON_C_TO_STRING_PRETTY |
						   JSON_C_TO_STRING_NOSLASHESCAPE));
		json_object_free(json);
	} else {
		if (!display) {
			vty_out(vty, "%% Network not in table\n");
			return CMD_WARNING;
		}
	}

	return CMD_SUCCESS;
}
/* Display specified route of Main RIB */
static int bgp_show_route(struct vty *vty, struct bgp *bgp, const char *ip_str,
			  afi_t afi, safi_t safi, struct prefix_rd *prd,
			  int prefix_check, enum bgp_path_type pathtype,
			  bool use_json)
{
	if (!bgp) {
		bgp = bgp_get_default();
		if (!bgp) {
			if (!use_json)
				vty_out(vty, "No BGP process is configured\n");
			else
				vty_out(vty, "{}\n");
			return CMD_WARNING;
		}
	}

	/* labeled-unicast routes live in the unicast table */
	if (safi == SAFI_LABELED_UNICAST)
		safi = SAFI_UNICAST;

	return bgp_show_route_in_table(vty, bgp, bgp->rib[afi][safi], ip_str,
				       afi, safi, prd, prefix_check, pathtype,
				       use_json);
}
static int bgp_show_lcommunity(struct vty *vty, struct bgp *bgp, int argc,
			       struct cmd_token **argv, bool exact, afi_t afi,
			       safi_t safi, bool uj)
{
	struct lcommunity *lcom;
	struct buffer *b;
	int i;
	char *str;
	int first = 0;

	b = buffer_new(1024);
	for (i = 0; i < argc; i++) {
		if (first)
			buffer_putc(b, ' ');
		else {
			if (strmatch(argv[i]->text, "AA:BB:CC")) {
				first = 1;
				buffer_putstr(b, argv[i]->arg);
			}
		}
	}
	buffer_putc(b, '\0');

	str = buffer_getstr(b);
	buffer_free(b);

	lcom = lcommunity_str2com(str);
	XFREE(MTYPE_TMP, str);
	if (!lcom) {
		vty_out(vty, "%% Large-community malformed\n");
		return CMD_WARNING;
	}

	return bgp_show(vty, bgp, afi, safi,
			(exact ? bgp_show_type_lcommunity_exact
			       : bgp_show_type_lcommunity),
			lcom, uj);
}
static int bgp_show_lcommunity_list(struct vty *vty, struct bgp *bgp,
				    const char *lcom, bool exact, afi_t afi,
				    safi_t safi, bool uj)
{
	struct community_list *list;

	list = community_list_lookup(bgp_clist, lcom, 0,
				     LARGE_COMMUNITY_LIST_MASTER);
	if (list == NULL) {
		vty_out(vty, "%% %s is not a valid large-community-list name\n",
			lcom);
		return CMD_WARNING;
	}

	return bgp_show(vty, bgp, afi, safi,
			(exact ? bgp_show_type_lcommunity_list_exact
			       : bgp_show_type_lcommunity_list),
			list, uj);
}
DEFUN (show_ip_bgp_large_community_list,
       show_ip_bgp_large_community_list_cmd,
       "show [ip] bgp [<view|vrf> VIEWVRFNAME] ["BGP_AFI_CMD_STR" ["BGP_SAFI_WITH_LABEL_CMD_STR"]] large-community-list <(1-500)|WORD> [exact-match] [json]",
       SHOW_STR
       IP_STR
       BGP_STR
       BGP_INSTANCE_HELP_STR
       BGP_AFI_HELP_STR
       BGP_SAFI_WITH_LABEL_HELP_STR
       "Display routes matching the large-community-list\n"
       "large-community-list number\n"
       "large-community-list name\n"
       "Exact match of the large-communities\n"
       JSON_STR)
{
	char *vrf = NULL;
	afi_t afi = AFI_IP6;
	safi_t safi = SAFI_UNICAST;
	int idx = 0;
	bool exact_match = false;

	if (argv_find(argv, argc, "ip", &idx))
		afi = AFI_IP;
	if (argv_find(argv, argc, "view", &idx)
	    || argv_find(argv, argc, "vrf", &idx))
		vrf = argv[++idx]->arg;
	if (argv_find(argv, argc, "ipv4", &idx)
	    || argv_find(argv, argc, "ipv6", &idx)) {
		afi = strmatch(argv[idx]->text, "ipv6") ? AFI_IP6 : AFI_IP;
		if (argv_find(argv, argc, "unicast", &idx)
		    || argv_find(argv, argc, "multicast", &idx))
			safi = bgp_vty_safi_from_str(argv[idx]->text);
	}

	bool uj = use_json(argc, argv);

	struct bgp *bgp = bgp_lookup_by_name(vrf);
	if (bgp == NULL) {
		vty_out(vty, "Can't find BGP instance %s\n", vrf);
		return CMD_WARNING;
	}

	argv_find(argv, argc, "large-community-list", &idx);

	const char *clist_number_or_name = argv[++idx]->arg;

	if (++idx < argc && strmatch(argv[idx]->text, "exact-match"))
		exact_match = true;

	return bgp_show_lcommunity_list(vty, bgp, clist_number_or_name,
					exact_match, afi, safi, uj);
}
DEFUN (show_ip_bgp_large_community,
       show_ip_bgp_large_community_cmd,
       "show [ip] bgp [<view|vrf> VIEWVRFNAME] ["BGP_AFI_CMD_STR" ["BGP_SAFI_WITH_LABEL_CMD_STR"]] large-community [<AA:BB:CC> [exact-match]] [json]",
       SHOW_STR
       IP_STR
       BGP_STR
       BGP_INSTANCE_HELP_STR
       BGP_AFI_HELP_STR
       BGP_SAFI_WITH_LABEL_HELP_STR
       "Display routes matching the large-communities\n"
       "List of large-community numbers\n"
       "Exact match of the large-communities\n"
       JSON_STR)
{
	char *vrf = NULL;
	afi_t afi = AFI_IP6;
	safi_t safi = SAFI_UNICAST;
	int idx = 0;
	bool exact_match = false;

	if (argv_find(argv, argc, "ip", &idx))
		afi = AFI_IP;
	if (argv_find(argv, argc, "view", &idx)
	    || argv_find(argv, argc, "vrf", &idx))
		vrf = argv[++idx]->arg;
	if (argv_find(argv, argc, "ipv4", &idx)
	    || argv_find(argv, argc, "ipv6", &idx)) {
		afi = strmatch(argv[idx]->text, "ipv6") ? AFI_IP6 : AFI_IP;
		if (argv_find(argv, argc, "unicast", &idx)
		    || argv_find(argv, argc, "multicast", &idx))
			safi = bgp_vty_safi_from_str(argv[idx]->text);
	}

	bool uj = use_json(argc, argv);

	struct bgp *bgp = bgp_lookup_by_name(vrf);
	if (bgp == NULL) {
		vty_out(vty, "Can't find BGP instance %s\n", vrf);
		return CMD_WARNING;
	}

	if (argv_find(argv, argc, "AA:BB:CC", &idx)) {
		if (argv_find(argv, argc, "exact-match", &idx))
			exact_match = true;
		return bgp_show_lcommunity(vty, bgp, argc, argv,
					   exact_match, afi, safi, uj);
	} else
		return bgp_show(vty, bgp, afi, safi,
				bgp_show_type_lcommunity_all, NULL, uj);
}
static int bgp_table_stats(struct vty *vty, struct bgp *bgp, afi_t afi,
			   safi_t safi);

/* BGP route print out function without JSON */
DEFUN (show_ip_bgp,
       show_ip_bgp_cmd,
       "show [ip] bgp [<view|vrf> VIEWVRFNAME] ["BGP_AFI_CMD_STR" ["BGP_SAFI_WITH_LABEL_CMD_STR"]]\
          <dampening <parameters>\
           |route-map WORD\
           |prefix-list WORD\
           |filter-list WORD\
           |statistics\
           |community-list <(1-500)|WORD> [exact-match]\
           |A.B.C.D/M longer-prefixes\
           |X:X::X:X/M longer-prefixes\
          >",
       SHOW_STR
       IP_STR
       BGP_STR
       BGP_INSTANCE_HELP_STR
       BGP_AFI_HELP_STR
       BGP_SAFI_WITH_LABEL_HELP_STR
       "Display detailed information about dampening\n"
       "Display detail of configured dampening parameters\n"
       "Display routes matching the route-map\n"
       "A route-map to match on\n"
       "Display routes conforming to the prefix-list\n"
       "Prefix-list name\n"
       "Display routes conforming to the filter-list\n"
       "Regular expression access list name\n"
       "BGP RIB advertisement statistics\n"
       "Display routes matching the community-list\n"
       "community-list number\n"
       "community-list name\n"
       "Exact match of the communities\n"
       "IPv4 prefix\n"
       "Display route and more specific routes\n"
       "IPv6 prefix\n"
       "Display route and more specific routes\n")
{
	afi_t afi = AFI_IP6;
	safi_t safi = SAFI_UNICAST;
	int exact_match = 0;
	struct bgp *bgp = NULL;
	int idx = 0;

	bgp_vty_find_and_parse_afi_safi_bgp(vty, argv, argc, &idx, &afi, &safi,
					    &bgp, false);
	if (!idx)
		return CMD_WARNING;

	if (argv_find(argv, argc, "dampening", &idx)) {
		if (argv_find(argv, argc, "parameters", &idx))
			return bgp_show_dampening_parameters(vty, afi, safi);
	}

	if (argv_find(argv, argc, "prefix-list", &idx))
		return bgp_show_prefix_list(vty, bgp, argv[idx + 1]->arg, afi,
					    safi, bgp_show_type_prefix_list);

	if (argv_find(argv, argc, "filter-list", &idx))
		return bgp_show_filter_list(vty, bgp, argv[idx + 1]->arg, afi,
					    safi, bgp_show_type_filter_list);

	if (argv_find(argv, argc, "statistics", &idx))
		return bgp_table_stats(vty, bgp, afi, safi);

	if (argv_find(argv, argc, "route-map", &idx))
		return bgp_show_route_map(vty, bgp, argv[idx + 1]->arg, afi,
					  safi, bgp_show_type_route_map);

	if (argv_find(argv, argc, "community-list", &idx)) {
		const char *clist_number_or_name = argv[++idx]->arg;
		if (++idx < argc && strmatch(argv[idx]->text, "exact-match"))
			exact_match = 1;
		return bgp_show_community_list(vty, bgp, clist_number_or_name,
					       exact_match, afi, safi);
	}
	/* prefix-longer */
	if (argv_find(argv, argc, "A.B.C.D/M", &idx)
	    || argv_find(argv, argc, "X:X::X:X/M", &idx))
		return bgp_show_prefix_longer(vty, bgp, argv[idx]->arg, afi,
					      safi,
					      bgp_show_type_prefix_longer);

	return CMD_WARNING;
}
/* BGP route print out function with JSON */
|
|
|
|
DEFUN (show_ip_bgp_json,
|
|
|
|
show_ip_bgp_json_cmd,
|
|
|
|
"show [ip] bgp [<view|vrf> VIEWVRFNAME] ["BGP_AFI_CMD_STR" ["BGP_SAFI_WITH_LABEL_CMD_STR"]]\
|
2018-10-04 19:46:52 +02:00
|
|
|
[cidr-only\
|
|
|
|
|dampening <flap-statistics|dampened-paths>\
|
|
|
|
|community [AA:NN|local-AS|no-advertise|no-export\
|
|
|
|
|graceful-shutdown|no-peer|blackhole|llgr-stale|no-llgr\
|
|
|
|
|accept-own|accept-own-nexthop|route-filter-v6\
|
|
|
|
|route-filter-v4|route-filter-translated-v6\
|
|
|
|
|route-filter-translated-v4] [exact-match]\
|
|
|
|
] [json]",
|
2017-08-22 21:11:31 +02:00
|
|
|
SHOW_STR
|
|
|
|
IP_STR
|
|
|
|
BGP_STR
|
|
|
|
BGP_INSTANCE_HELP_STR
|
|
|
|
BGP_AFI_HELP_STR
|
|
|
|
BGP_SAFI_WITH_LABEL_HELP_STR
|
|
|
|
"Display only routes with non-natural netmasks\n"
|
|
|
|
"Display detailed information about dampening\n"
|
|
|
|
"Display flap statistics of routes\n"
|
|
|
|
"Display paths suppressed due to dampening\n"
|
|
|
|
"Display routes matching the communities\n"
|
2018-07-19 22:46:46 +02:00
|
|
|
COMMUNITY_AANN_STR
|
|
|
|
"Do not send outside local AS (well-known community)\n"
|
|
|
|
"Do not advertise to any peer (well-known community)\n"
|
|
|
|
"Do not export to next AS (well-known community)\n"
|
|
|
|
"Graceful shutdown (well-known community)\n"
|
2018-10-04 19:46:52 +02:00
|
|
|
"Do not export to any peer (well-known community)\n"
|
|
|
|
"Inform EBGP peers to blackhole traffic to prefix (well-known community)\n"
|
|
|
|
"Staled Long-lived Graceful Restart VPN route (well-known community)\n"
|
|
|
|
"Removed because Long-lived Graceful Restart was not enabled for VPN route (well-known community)\n"
|
|
|
|
"Should accept local VPN route if exported and imported into different VRF (well-known community)\n"
|
|
|
|
"Should accept VPN route with local nexthop (well-known community)\n"
|
|
|
|
"RT VPNv6 route filtering (well-known community)\n"
|
|
|
|
"RT VPNv4 route filtering (well-known community)\n"
|
|
|
|
"RT translated VPNv6 route filtering (well-known community)\n"
|
|
|
|
"RT translated VPNv4 route filtering (well-known community)\n"
|
2018-07-19 22:46:46 +02:00
|
|
|
"Exact match of the communities\n"
|
2017-08-22 21:11:31 +02:00
|
|
|
JSON_STR)
|
|
|
|
{
|
|
|
|
afi_t afi = AFI_IP6;
|
|
|
|
safi_t safi = SAFI_UNICAST;
|
|
|
|
enum bgp_show_type sh_type = bgp_show_type_normal;
|
|
|
|
struct bgp *bgp = NULL;
|
|
|
|
int idx = 0;
|
2018-07-19 22:46:46 +02:00
|
|
|
int exact_match = 0;
|
2018-08-29 14:19:54 +02:00
|
|
|
bool uj = use_json(argc, argv);
|
|
|
|
|
|
|
|
if (uj)
|
|
|
|
argc--;
|
2017-08-22 21:11:31 +02:00
|
|
|
|
|
|
|
bgp_vty_find_and_parse_afi_safi_bgp(vty, argv, argc, &idx, &afi, &safi,
|
2018-08-29 14:19:54 +02:00
|
|
|
&bgp, uj);
|
2017-08-22 21:11:31 +02:00
|
|
|
if (!idx)
|
|
|
|
return CMD_WARNING;
|
|
|
|
|
|
|
|
if (argv_find(argv, argc, "cidr-only", &idx))
|
|
|
|
return bgp_show(vty, bgp, afi, safi, bgp_show_type_cidr_only,
|
|
|
|
NULL, uj);
|
|
|
|
|
|
|
|
if (argv_find(argv, argc, "dampening", &idx)) {
|
|
|
|
if (argv_find(argv, argc, "dampened-paths", &idx))
|
|
|
|
return bgp_show(vty, bgp, afi, safi,
|
|
|
|
bgp_show_type_dampend_paths, NULL, uj);
|
|
|
|
else if (argv_find(argv, argc, "flap-statistics", &idx))
|
|
|
|
return bgp_show(vty, bgp, afi, safi,
|
|
|
|
bgp_show_type_flap_statistics, NULL,
|
|
|
|
uj);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (argv_find(argv, argc, "community", &idx)) {
|
2019-01-02 22:25:02 +01:00
|
|
|
char *maybecomm = NULL;
|
2018-10-04 19:46:52 +02:00
|
|
|
char *community = NULL;
|
2018-07-19 22:46:46 +02:00
|
|
|
|
2019-01-02 22:25:02 +01:00
|
|
|
if (idx + 1 < argc) {
|
|
|
|
if (argv[idx + 1]->type == VARIABLE_TKN)
|
|
|
|
maybecomm = argv[idx + 1]->arg;
|
|
|
|
else
|
|
|
|
maybecomm = argv[idx + 1]->text;
|
|
|
|
}
|
|
|
|
|
2018-10-04 19:46:52 +02:00
|
|
|
if (maybecomm && !strmatch(maybecomm, "json")
|
|
|
|
&& !strmatch(maybecomm, "exact-match"))
|
|
|
|
community = maybecomm;
|
2018-07-19 22:46:46 +02:00
|
|
|
|
2018-10-04 19:46:52 +02:00
|
|
|
if (argv_find(argv, argc, "exact-match", &idx))
|
|
|
|
exact_match = 1;
|
2018-07-19 22:46:46 +02:00
|
|
|
|
2018-10-04 19:46:52 +02:00
|
|
|
if (community)
|
|
|
|
return bgp_show_community(vty, bgp, community,
|
|
|
|
exact_match, afi, safi, uj);
|
|
|
|
else
|
2018-07-19 22:46:46 +02:00
|
|
|
return (bgp_show(vty, bgp, afi, safi,
|
2018-10-04 19:46:52 +02:00
|
|
|
bgp_show_type_community_all, NULL,
|
|
|
|
uj));
|
2017-08-22 21:11:31 +02:00
|
|
|
}
|
2018-07-19 22:46:46 +02:00
|
|
|
|
2017-09-29 00:51:31 +02:00
|
|
|
return bgp_show(vty, bgp, afi, safi, sh_type, NULL, uj);
|
2016-09-26 20:08:45 +02:00
|
|
|
}
|
2015-05-20 02:40:34 +02:00
|
|
|
|
2002-12-13 21:15:29 +01:00
|
|
|
DEFUN (show_ip_bgp_route,
|
|
|
|
show_ip_bgp_route_cmd,
|
2017-07-11 20:59:03 +02:00
|
|
|
"show [ip] bgp [<view|vrf> VIEWVRFNAME] ["BGP_AFI_CMD_STR" ["BGP_SAFI_WITH_LABEL_CMD_STR"]]"
|
2016-10-20 22:31:24 +02:00
|
|
|
"<A.B.C.D|A.B.C.D/M|X:X::X:X|X:X::X:X/M> [<bestpath|multipath>] [json]",
|
2002-12-13 21:15:29 +01:00
|
|
|
SHOW_STR
|
|
|
|
IP_STR
|
|
|
|
BGP_STR
|
2016-09-26 20:08:45 +02:00
|
|
|
BGP_INSTANCE_HELP_STR
|
2017-01-24 20:52:06 +01:00
|
|
|
BGP_AFI_HELP_STR
|
2017-07-11 20:59:03 +02:00
|
|
|
BGP_SAFI_WITH_LABEL_HELP_STR
|
2015-05-20 03:03:48 +02:00
|
|
|
"Network in the BGP routing table to display\n"
|
2016-10-28 01:18:26 +02:00
|
|
|
"IPv4 prefix\n"
|
2016-10-25 00:24:40 +02:00
|
|
|
"Network in the BGP routing table to display\n"
|
2016-10-28 01:18:26 +02:00
|
|
|
"IPv6 prefix\n"
|
2015-05-20 02:58:11 +02:00
|
|
|
"Display only the bestpath\n"
|
2015-05-20 03:03:48 +02:00
|
|
|
"Display only multipaths\n"
|
2016-11-30 00:26:03 +01:00
|
|
|
JSON_STR)
|
2015-05-20 02:58:11 +02:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
int prefix_check = 0;
|
2016-10-20 22:31:24 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
afi_t afi = AFI_IP6;
|
|
|
|
safi_t safi = SAFI_UNICAST;
|
|
|
|
char *prefix = NULL;
|
|
|
|
struct bgp *bgp = NULL;
|
|
|
|
enum bgp_path_type path_type;
|
2018-08-29 14:19:54 +02:00
|
|
|
bool uj = use_json(argc, argv);
|
2015-05-20 03:03:48 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
int idx = 0;
|
2016-10-20 22:31:24 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_vty_find_and_parse_afi_safi_bgp(vty, argv, argc, &idx, &afi, &safi,
|
2018-08-29 14:19:54 +02:00
|
|
|
&bgp, uj);
|
2017-07-17 14:03:14 +02:00
|
|
|
if (!idx)
|
|
|
|
return CMD_WARNING;
|
2017-01-24 01:58:56 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
if (!bgp) {
|
|
|
|
vty_out(vty,
|
|
|
|
"Specified 'all' VRFs but this command currently only works per view/vrf\n");
|
|
|
|
return CMD_WARNING;
|
|
|
|
}
|
2016-09-26 20:08:45 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* <A.B.C.D|A.B.C.D/M|X:X::X:X|X:X::X:X/M> */
|
|
|
|
if (argv_find(argv, argc, "A.B.C.D", &idx)
|
|
|
|
|| argv_find(argv, argc, "X:X::X:X", &idx))
|
|
|
|
prefix_check = 0;
|
|
|
|
else if (argv_find(argv, argc, "A.B.C.D/M", &idx)
|
|
|
|
|| argv_find(argv, argc, "X:X::X:X/M", &idx))
|
|
|
|
prefix_check = 1;
|
|
|
|
|
|
|
|
if ((argv[idx]->type == IPV6_TKN || argv[idx]->type == IPV6_PREFIX_TKN)
|
|
|
|
&& afi != AFI_IP6) {
|
|
|
|
vty_out(vty,
|
|
|
|
"%% Cannot specify IPv6 address or prefix with IPv4 AFI\n");
|
|
|
|
return CMD_WARNING;
|
|
|
|
}
|
|
|
|
if ((argv[idx]->type == IPV4_TKN || argv[idx]->type == IPV4_PREFIX_TKN)
|
|
|
|
&& afi != AFI_IP) {
|
|
|
|
vty_out(vty,
|
|
|
|
"%% Cannot specify IPv4 address or prefix with IPv6 AFI\n");
|
|
|
|
return CMD_WARNING;
|
|
|
|
}
|
|
|
|
|
|
|
|
prefix = argv[idx]->arg;
|
|
|
|
|
|
|
|
/* [<bestpath|multipath>] */
|
|
|
|
if (argv_find(argv, argc, "bestpath", &idx))
|
2018-10-02 21:50:10 +02:00
|
|
|
path_type = BGP_PATH_SHOW_BESTPATH;
|
2017-07-17 14:03:14 +02:00
|
|
|
else if (argv_find(argv, argc, "multipath", &idx))
|
2018-10-02 21:50:10 +02:00
|
|
|
path_type = BGP_PATH_SHOW_MULTIPATH;
|
2017-07-17 14:03:14 +02:00
|
|
|
else
|
2018-10-02 21:50:10 +02:00
|
|
|
path_type = BGP_PATH_SHOW_ALL;
|
2016-09-26 20:08:45 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
return bgp_show_route(vty, bgp, prefix, afi, safi, NULL, prefix_check,
|
|
|
|
path_type, uj);
|
2015-05-20 02:58:11 +02:00
|
|
|
}
|
|
|
|
|
2016-10-25 00:24:40 +02:00
|
|
|
DEFUN (show_ip_bgp_regexp,
|
|
|
|
show_ip_bgp_regexp_cmd,
|
2019-12-17 10:42:02 +01:00
|
|
|
"show [ip] bgp [<view|vrf> VIEWVRFNAME] ["BGP_AFI_CMD_STR" ["BGP_SAFI_WITH_LABEL_CMD_STR"]] regexp REGEX [json]",
|
2016-10-25 00:24:40 +02:00
|
|
|
SHOW_STR
|
|
|
|
IP_STR
|
|
|
|
BGP_STR
|
2017-01-24 02:29:54 +01:00
|
|
|
BGP_INSTANCE_HELP_STR
|
2017-01-24 20:52:06 +01:00
|
|
|
BGP_AFI_HELP_STR
|
2017-07-11 20:59:03 +02:00
|
|
|
BGP_SAFI_WITH_LABEL_HELP_STR
|
2016-10-25 00:24:40 +02:00
|
|
|
"Display routes matching the AS path regular expression\n"
|
2019-12-17 10:42:02 +01:00
|
|
|
"A regular-expression (1234567890_^|[,{}() ]$*+.?-\\) to match the BGP AS paths\n"
|
|
|
|
JSON_STR)
|
2016-10-25 00:24:40 +02:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
afi_t afi = AFI_IP6;
|
|
|
|
safi_t safi = SAFI_UNICAST;
|
|
|
|
struct bgp *bgp = NULL;
|
2019-12-17 10:42:02 +01:00
|
|
|
bool uj = use_json(argc, argv);
|
|
|
|
char *regstr = NULL;
|
2016-10-25 00:24:40 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
int idx = 0;
|
|
|
|
bgp_vty_find_and_parse_afi_safi_bgp(vty, argv, argc, &idx, &afi, &safi,
|
2018-08-29 14:19:54 +02:00
|
|
|
&bgp, false);
|
2017-07-17 14:03:14 +02:00
|
|
|
if (!idx)
|
|
|
|
return CMD_WARNING;
|
2016-10-25 00:24:40 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* get index of regex */
|
2019-12-17 10:42:02 +01:00
|
|
|
if (argv_find(argv, argc, "REGEX", &idx))
|
|
|
|
regstr = argv[idx]->arg;
|
2016-10-25 00:24:40 +02:00
|
|
|
|
2019-12-17 10:42:02 +01:00
|
|
|
return bgp_show_regexp(vty, bgp, (const char *)regstr, afi, safi,
|
|
|
|
bgp_show_type_regexp, uj);
|
2016-10-25 00:24:40 +02:00
|
|
|
}
|
|
|
|
|
2016-09-26 20:08:45 +02:00
|
|
|
DEFUN (show_ip_bgp_instance_all,
|
|
|
|
show_ip_bgp_instance_all_cmd,
|
2017-07-11 20:59:03 +02:00
|
|
|
"show [ip] bgp <view|vrf> all ["BGP_AFI_CMD_STR" ["BGP_SAFI_WITH_LABEL_CMD_STR"]] [json]",
|
2015-05-20 02:58:11 +02:00
|
|
|
SHOW_STR
|
2016-09-26 20:08:45 +02:00
|
|
|
IP_STR
|
2015-05-20 02:58:11 +02:00
|
|
|
BGP_STR
|
2016-09-26 20:08:45 +02:00
|
|
|
BGP_INSTANCE_ALL_HELP_STR
|
2017-01-24 20:52:06 +01:00
|
|
|
BGP_AFI_HELP_STR
|
2017-07-11 20:59:03 +02:00
|
|
|
BGP_SAFI_WITH_LABEL_HELP_STR
|
2016-11-30 00:26:03 +01:00
|
|
|
JSON_STR)
|
2015-05-20 02:58:11 +02:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
afi_t afi = AFI_IP;
|
|
|
|
safi_t safi = SAFI_UNICAST;
|
|
|
|
struct bgp *bgp = NULL;
|
|
|
|
int idx = 0;
|
2018-08-29 14:19:54 +02:00
|
|
|
bool uj = use_json(argc, argv);
|
2016-10-20 22:31:24 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
if (uj)
|
|
|
|
argc--;
|
2016-09-21 15:51:30 +02:00
|
|
|
|
2018-08-29 14:19:54 +02:00
|
|
|
bgp_vty_find_and_parse_afi_safi_bgp(vty, argv, argc, &idx, &afi, &safi,
|
|
|
|
&bgp, uj);
|
|
|
|
if (!idx)
|
|
|
|
return CMD_WARNING;
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_show_all_instances_routes_vty(vty, afi, safi, uj);
|
|
|
|
return CMD_SUCCESS;
|
2016-09-21 15:51:30 +02:00
|
|
|
}
|
|
|
|
|
2018-02-09 19:22:50 +01:00
|
|
|
static int bgp_show_regexp(struct vty *vty, struct bgp *bgp, const char *regstr,
|
2019-12-17 10:42:02 +01:00
|
|
|
afi_t afi, safi_t safi, enum bgp_show_type type,
|
|
|
|
bool use_json)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
regex_t *regex;
|
|
|
|
int rc;
|
2016-09-21 15:51:30 +02:00
|
|
|
|
2019-04-18 09:17:57 +02:00
|
|
|
if (!config_bgp_aspath_validate(regstr)) {
|
2019-12-17 11:39:40 +01:00
|
|
|
vty_out(vty, "Invalid character in REGEX %s\n",
|
2019-04-18 09:17:57 +02:00
|
|
|
regstr);
|
|
|
|
return CMD_WARNING_CONFIG_FAILED;
|
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
regex = bgp_regcomp(regstr);
|
|
|
|
if (!regex) {
|
|
|
|
vty_out(vty, "Can't compile regexp %s\n", regstr);
|
|
|
|
return CMD_WARNING;
|
|
|
|
}
|
2016-09-26 20:08:45 +02:00
|
|
|
|
2019-12-17 10:42:02 +01:00
|
|
|
rc = bgp_show(vty, bgp, afi, safi, type, regex, use_json);
|
2017-07-17 14:03:14 +02:00
|
|
|
bgp_regex_free(regex);
|
|
|
|
return rc;
|
2016-09-21 15:51:30 +02:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static int bgp_show_prefix_list(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *prefix_list_str, afi_t afi,
|
|
|
|
safi_t safi, enum bgp_show_type type)
|
2016-09-21 15:51:30 +02:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
struct prefix_list *plist;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
plist = prefix_list_lookup(afi, prefix_list_str);
|
|
|
|
if (plist == NULL) {
|
|
|
|
vty_out(vty, "%% %s is not a valid prefix-list name\n",
|
|
|
|
prefix_list_str);
|
|
|
|
return CMD_WARNING;
|
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
return bgp_show(vty, bgp, afi, safi, type, plist, 0);
|
2015-05-20 02:58:11 +02:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static int bgp_show_filter_list(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *filter, afi_t afi, safi_t safi,
|
|
|
|
enum bgp_show_type type)
|
2015-05-20 02:58:11 +02:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
struct as_list *as_list;
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
as_list = as_list_lookup(filter);
|
|
|
|
if (as_list == NULL) {
|
|
|
|
vty_out(vty, "%% %s is not a valid AS-path access-list name\n",
|
|
|
|
filter);
|
|
|
|
return CMD_WARNING;
|
|
|
|
}
|
2016-09-26 20:08:45 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
return bgp_show(vty, bgp, afi, safi, type, as_list, 0);
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static int bgp_show_route_map(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *rmap_str, afi_t afi, safi_t safi,
|
|
|
|
enum bgp_show_type type)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
struct route_map *rmap;
|
2003-10-24 21:02:03 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
rmap = route_map_lookup_by_name(rmap_str);
|
|
|
|
if (!rmap) {
|
|
|
|
vty_out(vty, "%% %s is not a valid route-map name\n", rmap_str);
|
|
|
|
return CMD_WARNING;
|
|
|
|
}
|
|
|
|
|
|
|
|
return bgp_show(vty, bgp, afi, safi, type, rmap, 0);
|
|
|
|
}
|
|
|
|
|
2017-08-25 20:27:49 +02:00
|
|
|
static int bgp_show_community(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *comstr, int exact, afi_t afi,
|
2018-08-29 14:19:54 +02:00
|
|
|
safi_t safi, bool use_json)
|
2017-07-17 14:03:14 +02:00
|
|
|
{
|
|
|
|
struct community *com;
|
|
|
|
int ret = 0;
|
|
|
|
|
2017-08-25 20:27:49 +02:00
|
|
|
com = community_str2com(comstr);
|
2017-07-17 14:03:14 +02:00
|
|
|
if (!com) {
|
2017-08-25 20:27:49 +02:00
|
|
|
vty_out(vty, "%% Community malformed: %s\n", comstr);
|
2017-07-17 14:03:14 +02:00
|
|
|
return CMD_WARNING;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = bgp_show(vty, bgp, afi, safi,
|
|
|
|
(exact ? bgp_show_type_community_exact
|
|
|
|
: bgp_show_type_community),
|
2018-07-19 22:46:46 +02:00
|
|
|
com, use_json);
|
2018-10-22 21:58:39 +02:00
|
|
|
community_free(&com);
|
2017-05-15 13:33:48 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
return ret;
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static int bgp_show_community_list(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *com, int exact, afi_t afi,
|
|
|
|
safi_t safi)
|
2016-03-07 01:08:49 +01:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
struct community_list *list;
|
2016-03-07 01:08:49 +01:00
|
|
|
|
2019-01-09 02:23:11 +01:00
|
|
|
list = community_list_lookup(bgp_clist, com, 0, COMMUNITY_LIST_MASTER);
|
2017-07-17 14:03:14 +02:00
|
|
|
if (list == NULL) {
|
|
|
|
vty_out(vty, "%% %s is not a valid community-list name\n", com);
|
|
|
|
return CMD_WARNING;
|
|
|
|
}
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
return bgp_show(vty, bgp, afi, safi,
|
|
|
|
(exact ? bgp_show_type_community_list_exact
|
|
|
|
: bgp_show_type_community_list),
|
|
|
|
list, 0);
|
2016-03-07 01:08:49 +01:00
|
|
|
}
|
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static int bgp_show_prefix_longer(struct vty *vty, struct bgp *bgp,
|
|
|
|
const char *prefix, afi_t afi, safi_t safi,
|
|
|
|
enum bgp_show_type type)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
int ret;
|
|
|
|
struct prefix *p;
|
2015-05-20 02:40:34 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
p = prefix_new();
|
2010-07-23 20:43:04 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
ret = str2prefix(prefix, p);
|
|
|
|
if (!ret) {
|
|
|
|
vty_out(vty, "%% Malformed Prefix\n");
|
|
|
|
prefix_free(&p);
return CMD_WARNING;
|
|
|
|
}
|
2015-11-13 04:14:10 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
ret = bgp_show(vty, bgp, afi, safi, type, p, 0);
|
2019-10-30 01:05:27 +01:00
|
|
|
prefix_free(&p);
|
2017-07-17 14:03:14 +02:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
enum bgp_stats {
|
|
|
|
BGP_STATS_MAXBITLEN = 0,
|
|
|
|
BGP_STATS_RIB,
|
|
|
|
BGP_STATS_PREFIXES,
|
|
|
|
BGP_STATS_TOTPLEN,
|
|
|
|
BGP_STATS_UNAGGREGATEABLE,
|
|
|
|
BGP_STATS_MAX_AGGREGATEABLE,
|
|
|
|
BGP_STATS_AGGREGATES,
|
|
|
|
BGP_STATS_SPACE,
|
|
|
|
BGP_STATS_ASPATH_COUNT,
|
|
|
|
BGP_STATS_ASPATH_MAXHOPS,
|
|
|
|
BGP_STATS_ASPATH_TOTHOPS,
|
|
|
|
BGP_STATS_ASPATH_MAXSIZE,
|
|
|
|
BGP_STATS_ASPATH_TOTSIZE,
|
|
|
|
BGP_STATS_ASN_HIGHEST,
|
|
|
|
BGP_STATS_MAX,
|
2016-09-26 20:08:45 +02:00
|
|
|
};
|
2006-09-14 04:56:07 +02:00
|
|
|
|
2019-11-20 17:26:59 +01:00
|
|
|
static const char *const table_stats_strs[] = {
|
2017-07-22 14:52:33 +02:00
|
|
|
[BGP_STATS_PREFIXES] = "Total Prefixes",
|
|
|
|
[BGP_STATS_TOTPLEN] = "Average prefix length",
|
|
|
|
[BGP_STATS_RIB] = "Total Advertisements",
|
|
|
|
[BGP_STATS_UNAGGREGATEABLE] = "Unaggregateable prefixes",
|
|
|
|
[BGP_STATS_MAX_AGGREGATEABLE] =
|
|
|
|
"Maximum aggregateable prefixes",
|
|
|
|
[BGP_STATS_AGGREGATES] = "BGP Aggregate advertisements",
|
|
|
|
[BGP_STATS_SPACE] = "Address space advertised",
|
|
|
|
[BGP_STATS_ASPATH_COUNT] = "Advertisements with paths",
|
|
|
|
[BGP_STATS_ASPATH_MAXHOPS] = "Longest AS-Path (hops)",
|
|
|
|
[BGP_STATS_ASPATH_MAXSIZE] = "Largest AS-Path (bytes)",
|
|
|
|
[BGP_STATS_ASPATH_TOTHOPS] = "Average AS-Path length (hops)",
|
|
|
|
[BGP_STATS_ASPATH_TOTSIZE] = "Average AS-Path size (bytes)",
|
|
|
|
[BGP_STATS_ASN_HIGHEST] = "Highest public ASN",
|
|
|
|
[BGP_STATS_MAX] = NULL,
|
2016-09-26 20:08:45 +02:00
|
|
|
};
|
2006-09-14 04:56:07 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
struct bgp_table_stats {
|
|
|
|
struct bgp_table *table;
|
|
|
|
unsigned long long counts[BGP_STATS_MAX];
|
2017-08-23 18:21:30 +02:00
|
|
|
double total_space;
|
2006-09-04 03:10:36 +02:00
|
|
|
};
|
|
|
|
|
2016-09-26 20:08:45 +02:00
|
|
|
#if 0
|
|
|
|
#define TALLY_SIGFIG 100000
|
|
|
|
static unsigned long
|
|
|
|
ravg_tally (unsigned long count, unsigned long oldavg, unsigned long newval)
|
2006-09-04 03:10:36 +02:00
|
|
|
{
|
2016-09-26 20:08:45 +02:00
|
|
|
unsigned long newtot = (count-1) * oldavg + (newval * TALLY_SIGFIG);
|
|
|
|
unsigned long res = (newtot * TALLY_SIGFIG) / count;
|
|
|
|
unsigned long ret = newtot / count;
|
2018-08-07 14:37:57 +02:00
|
|
|
|
2016-09-26 20:08:45 +02:00
|
|
|
if ((res % TALLY_SIGFIG) > (TALLY_SIGFIG/2))
|
|
|
|
return ret + 1;
|
|
|
|
else
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
#endif
|
2006-09-04 03:10:36 +02:00
|
|
|
|
2019-03-30 04:53:16 +01:00
|
|
|
static void bgp_table_stats_rn(struct bgp_node *rn, struct bgp_node *top,
|
|
|
|
struct bgp_table_stats *ts, unsigned int space)
|
2006-09-14 04:56:07 +02:00
|
|
|
{
|
2019-03-30 04:53:16 +01:00
|
|
|
struct bgp_node *prn = bgp_node_parent_nolock(rn);
|
|
|
|
struct bgp_path_info *pi;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-03-30 04:53:16 +01:00
|
|
|
if (rn == top)
|
|
|
|
return;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-03-30 04:53:16 +01:00
|
|
|
if (!bgp_node_has_bgp_path_info_data(rn))
|
|
|
|
return;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-03-30 04:53:16 +01:00
|
|
|
ts->counts[BGP_STATS_PREFIXES]++;
|
|
|
|
ts->counts[BGP_STATS_TOTPLEN] += rn->p.prefixlen;
|
2006-09-04 03:10:36 +02:00
|
|
|
|
2016-09-26 20:08:45 +02:00
|
|
|
#if 0
|
|
|
|
ts->counts[BGP_STATS_AVGPLEN]
|
|
|
|
= ravg_tally (ts->counts[BGP_STATS_PREFIXES],
|
|
|
|
ts->counts[BGP_STATS_AVGPLEN],
|
|
|
|
rn->p.prefixlen);
|
|
|
|
#endif
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-03-30 04:53:16 +01:00
|
|
|
/* check if the prefix is included by any other announcements */
|
|
|
|
while (prn && !bgp_node_has_bgp_path_info_data(prn))
|
|
|
|
prn = bgp_node_parent_nolock(prn);
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-03-30 04:53:16 +01:00
|
|
|
if (prn == NULL || prn == top) {
|
|
|
|
ts->counts[BGP_STATS_UNAGGREGATEABLE]++;
|
|
|
|
/* announced address space */
|
|
|
|
if (space)
|
|
|
|
ts->total_space += pow(2.0, space - rn->p.prefixlen);
|
|
|
|
} else if (bgp_node_has_bgp_path_info_data(prn))
|
|
|
|
ts->counts[BGP_STATS_MAX_AGGREGATEABLE]++;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-03-30 04:53:16 +01:00
|
|
|
|
|
|
|
for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next) {
|
|
|
|
ts->counts[BGP_STATS_RIB]++;
|
|
|
|
|
2019-10-16 16:25:19 +02:00
|
|
|
if (CHECK_FLAG(pi->attr->flag,
|
|
|
|
ATTR_FLAG_BIT(BGP_ATTR_ATOMIC_AGGREGATE)))
|
2019-03-30 04:53:16 +01:00
|
|
|
ts->counts[BGP_STATS_AGGREGATES]++;
|
|
|
|
|
|
|
|
/* as-path stats */
|
2019-10-16 16:25:19 +02:00
|
|
|
if (pi->attr->aspath) {
|
2019-03-30 04:53:16 +01:00
|
|
|
unsigned int hops = aspath_count_hops(pi->attr->aspath);
|
|
|
|
unsigned int size = aspath_size(pi->attr->aspath);
|
|
|
|
as_t highest = aspath_highest(pi->attr->aspath);
|
|
|
|
|
|
|
|
ts->counts[BGP_STATS_ASPATH_COUNT]++;
|
|
|
|
|
|
|
|
if (hops > ts->counts[BGP_STATS_ASPATH_MAXHOPS])
|
|
|
|
ts->counts[BGP_STATS_ASPATH_MAXHOPS] = hops;
|
|
|
|
|
|
|
|
if (size > ts->counts[BGP_STATS_ASPATH_MAXSIZE])
|
|
|
|
ts->counts[BGP_STATS_ASPATH_MAXSIZE] = size;
|
|
|
|
|
|
|
|
ts->counts[BGP_STATS_ASPATH_TOTHOPS] += hops;
|
|
|
|
ts->counts[BGP_STATS_ASPATH_TOTSIZE] += size;
|
2016-09-26 20:08:45 +02:00
|
|
|
#if 0
|
2018-08-07 14:37:57 +02:00
|
|
|
ts->counts[BGP_STATS_ASPATH_AVGHOPS]
|
2016-09-26 20:08:45 +02:00
|
|
|
= ravg_tally (ts->counts[BGP_STATS_ASPATH_COUNT],
|
|
|
|
ts->counts[BGP_STATS_ASPATH_AVGHOPS],
|
|
|
|
hops);
|
|
|
|
ts->counts[BGP_STATS_ASPATH_AVGSIZE]
|
|
|
|
= ravg_tally (ts->counts[BGP_STATS_ASPATH_COUNT],
|
|
|
|
ts->counts[BGP_STATS_ASPATH_AVGSIZE],
|
|
|
|
size);
|
|
|
|
#endif
|
2019-03-30 04:53:16 +01:00
|
|
|
if (highest > ts->counts[BGP_STATS_ASN_HIGHEST])
|
|
|
|
ts->counts[BGP_STATS_ASN_HIGHEST] = highest;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static int bgp_table_stats_walker(struct thread *t)
|
|
|
|
{
|
|
|
|
struct bgp_node *rn, *nrn;
|
|
|
|
struct bgp_node *top;
|
|
|
|
struct bgp_table_stats *ts = THREAD_ARG(t);
|
|
|
|
unsigned int space = 0;
|
|
|
|
|
|
|
|
if (!(top = bgp_table_top(ts->table)))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
switch (ts->table->afi) {
|
|
|
|
case AFI_IP:
|
|
|
|
space = IPV4_MAX_BITLEN;
|
|
|
|
break;
|
|
|
|
case AFI_IP6:
|
|
|
|
space = IPV6_MAX_BITLEN;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
ts->counts[BGP_STATS_MAXBITLEN] = space;
|
|
|
|
|
|
|
|
for (rn = top; rn; rn = bgp_route_next(rn)) {
|
|
|
|
if (ts->table->safi == SAFI_MPLS_VPN) {
|
|
|
|
struct bgp_table *table;
|
|
|
|
|
|
|
|
table = bgp_node_get_bgp_table_info(rn);
|
|
|
|
if (!table)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
top = bgp_table_top(table);
|
|
|
|
for (nrn = bgp_table_top(table); nrn;
|
|
|
|
nrn = bgp_route_next(nrn))
|
|
|
|
bgp_table_stats_rn(nrn, top, ts, space);
|
|
|
|
} else {
|
|
|
|
bgp_table_stats_rn(rn, top, ts, space);
|
2017-07-17 14:03:14 +02:00
|
|
|
}
|
|
|
|
}
|
2019-03-30 04:53:16 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
return 0;
|
2006-09-14 04:56:07 +02:00
|
|
|
}
|
2006-09-04 03:10:36 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
static int bgp_table_stats(struct vty *vty, struct bgp *bgp, afi_t afi,
|
|
|
|
safi_t safi)
|
2006-09-14 04:56:07 +02:00
|
|
|
{
|
2017-07-17 14:03:14 +02:00
|
|
|
struct bgp_table_stats ts;
|
|
|
|
unsigned int i;
|
2017-07-13 17:17:15 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
if (!bgp->rib[afi][safi]) {
|
|
|
|
vty_out(vty, "%% No RIB exists for the AFI(%d)/SAFI(%d)\n",
|
|
|
|
afi, safi);
|
|
|
|
return CMD_WARNING;
|
|
|
|
}
|
2017-07-13 17:17:15 +02:00
|
|
|
|
2019-08-27 03:48:53 +02:00
|
|
|
vty_out(vty, "BGP %s RIB statistics\n", get_afi_safi_str(afi, safi, false));
|
2017-07-13 17:17:15 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
/* labeled-unicast routes live in the unicast table */
|
|
|
|
if (safi == SAFI_LABELED_UNICAST)
|
|
|
|
safi = SAFI_UNICAST;
|
2017-07-13 17:17:15 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
memset(&ts, 0, sizeof(ts));
|
|
|
|
ts.table = bgp->rib[afi][safi];
|
|
|
|
thread_execute(bm->master, bgp_table_stats_walker, &ts, 0);
|
2006-09-04 03:10:36 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
for (i = 0; i < BGP_STATS_MAX; i++) {
|
|
|
|
if (!table_stats_strs[i])
|
|
|
|
continue;
|
|
|
|
|
|
|
|
switch (i) {
|
2016-09-26 20:08:45 +02:00
|
|
|
#if 0
|
|
|
|
case BGP_STATS_ASPATH_AVGHOPS:
|
|
|
|
case BGP_STATS_ASPATH_AVGSIZE:
|
|
|
|
case BGP_STATS_AVGPLEN:
|
|
|
|
vty_out (vty, "%-30s: ", table_stats_strs[i]);
|
|
|
|
vty_out (vty, "%12.2f",
|
|
|
|
(float)ts.counts[i] / (float)TALLY_SIGFIG);
|
|
|
|
break;
|
|
|
|
#endif
|
2017-07-17 14:03:14 +02:00
|
|
|
case BGP_STATS_ASPATH_TOTHOPS:
|
|
|
|
case BGP_STATS_ASPATH_TOTSIZE:
|
|
|
|
vty_out(vty, "%-30s: ", table_stats_strs[i]);
|
|
|
|
vty_out(vty, "%12.2f",
|
|
|
|
ts.counts[i]
|
|
|
|
? (float)ts.counts[i]
|
|
|
|
/ (float)ts.counts
|
|
|
|
[BGP_STATS_ASPATH_COUNT]
|
|
|
|
: 0);
|
|
|
|
break;
|
|
|
|
case BGP_STATS_TOTPLEN:
|
|
|
|
vty_out(vty, "%-30s: ", table_stats_strs[i]);
|
|
|
|
vty_out(vty, "%12.2f",
|
|
|
|
ts.counts[i]
|
|
|
|
? (float)ts.counts[i]
|
|
|
|
/ (float)ts.counts
|
|
|
|
[BGP_STATS_PREFIXES]
|
|
|
|
: 0);
|
|
|
|
break;
|
|
|
|
case BGP_STATS_SPACE:
|
|
|
|
vty_out(vty, "%-30s: ", table_stats_strs[i]);
|
2017-08-23 18:21:30 +02:00
|
|
|
vty_out(vty, "%12g\n", ts.total_space);
|
|
|
|
|
|
|
|
if (afi == AFI_IP6) {
|
|
|
|
vty_out(vty, "%30s: ", "/32 equivalent ");
|
|
|
|
vty_out(vty, "%12g\n",
|
2018-02-09 19:22:50 +01:00
|
|
|
ts.total_space * pow(2.0, -128 + 32));
|
2017-08-23 18:21:30 +02:00
|
|
|
vty_out(vty, "%30s: ", "/48 equivalent ");
|
|
|
|
vty_out(vty, "%12g\n",
|
2018-02-09 19:22:50 +01:00
|
|
|
ts.total_space * pow(2.0, -128 + 48));
|
2017-08-23 18:21:30 +02:00
|
|
|
} else {
|
|
|
|
vty_out(vty, "%30s: ", "% announced ");
|
|
|
|
vty_out(vty, "%12.2f\n",
|
|
|
|
ts.total_space * 100. * pow(2.0, -32));
|
|
|
|
vty_out(vty, "%30s: ", "/8 equivalent ");
|
|
|
|
vty_out(vty, "%12.2f\n",
|
2018-02-09 19:22:50 +01:00
|
|
|
ts.total_space * pow(2.0, -32 + 8));
|
2017-08-23 18:21:30 +02:00
|
|
|
vty_out(vty, "%30s: ", "/24 equivalent ");
|
|
|
|
vty_out(vty, "%12.2f\n",
|
2018-02-09 19:22:50 +01:00
|
|
|
ts.total_space * pow(2.0, -32 + 24));
|
2017-08-23 18:21:30 +02:00
|
|
|
}
|
2017-07-17 14:03:14 +02:00
|
|
|
break;
|
|
|
|
default:
|
|
|
|
vty_out(vty, "%-30s: ", table_stats_strs[i]);
|
|
|
|
vty_out(vty, "%12llu", ts.counts[i]);
|
|
|
|
}
|
2006-09-04 03:10:36 +02:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
vty_out(vty, "\n");
|
|
|
|
}
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
enum bgp_pcounts {
|
|
|
|
PCOUNT_ADJ_IN = 0,
|
|
|
|
PCOUNT_DAMPED,
|
|
|
|
PCOUNT_REMOVED,
|
|
|
|
PCOUNT_HISTORY,
|
|
|
|
PCOUNT_STALE,
|
|
|
|
PCOUNT_VALID,
|
|
|
|
PCOUNT_ALL,
|
|
|
|
PCOUNT_COUNTED,
|
|
|
|
PCOUNT_PFCNT, /* the figure we display to users */
|
|
|
|
PCOUNT_MAX,
|
2016-09-26 20:08:45 +02:00
|
|
|
};
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2019-11-20 17:26:59 +01:00
|
|
|
static const char *const pcount_strs[] = {
|
2017-07-22 14:52:33 +02:00
|
|
|
[PCOUNT_ADJ_IN] = "Adj-in",
|
|
|
|
[PCOUNT_DAMPED] = "Damped",
|
|
|
|
[PCOUNT_REMOVED] = "Removed",
|
|
|
|
[PCOUNT_HISTORY] = "History",
|
|
|
|
[PCOUNT_STALE] = "Stale",
|
|
|
|
[PCOUNT_VALID] = "Valid",
|
|
|
|
[PCOUNT_ALL] = "All RIB",
|
|
|
|
[PCOUNT_COUNTED] = "PfxCt counted",
|
|
|
|
[PCOUNT_PFCNT] = "Useable",
|
|
|
|
[PCOUNT_MAX] = NULL,
|
2016-09-26 20:08:45 +02:00
|
|
|
};
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-17 14:03:14 +02:00
|
|
|
struct peer_pcounts {
|
|
|
|
unsigned int count[PCOUNT_MAX];
|
|
|
|
const struct peer *peer;
|
|
|
|
const struct bgp_table *table;
|
2019-11-19 17:41:04 +01:00
|
|
|
safi_t safi;
|
2016-09-26 20:08:45 +02:00
|
|
|
};
|
2015-05-20 02:40:34 +02:00
|
|
|
|
2019-11-19 17:41:04 +01:00
|
|
|
static void bgp_peer_count_proc(struct bgp_node *rn,
|
|
|
|
struct peer_pcounts *pc)
|
2017-07-17 14:03:14 +02:00
|
|
|
{
|
2019-11-19 17:41:04 +01:00
|
|
|
const struct bgp_adj_in *ain;
|
|
|
|
const struct bgp_path_info *pi;
|
2017-07-17 14:03:14 +02:00
|
|
|
const struct peer *peer = pc->peer;
|
|
|
|
|
2019-11-19 17:41:04 +01:00
|
|
|
for (ain = rn->adj_in; ain; ain = ain->next)
|
|
|
|
if (ain->peer == peer)
|
|
|
|
pc->count[PCOUNT_ADJ_IN]++;
|
2017-07-17 14:03:14 +02:00
|
|
|
|
2019-11-19 17:41:04 +01:00
|
|
|
	for (pi = bgp_node_get_bgp_path_info(rn); pi; pi = pi->next) {
		if (pi->peer != peer)
			continue;

		pc->count[PCOUNT_ALL]++;

		if (CHECK_FLAG(pi->flags, BGP_PATH_DAMPED))
			pc->count[PCOUNT_DAMPED]++;
		if (CHECK_FLAG(pi->flags, BGP_PATH_HISTORY))
			pc->count[PCOUNT_HISTORY]++;
		if (CHECK_FLAG(pi->flags, BGP_PATH_REMOVED))
			pc->count[PCOUNT_REMOVED]++;
		if (CHECK_FLAG(pi->flags, BGP_PATH_STALE))
			pc->count[PCOUNT_STALE]++;
		if (CHECK_FLAG(pi->flags, BGP_PATH_VALID))
			pc->count[PCOUNT_VALID]++;
		if (!CHECK_FLAG(pi->flags, BGP_PATH_UNUSEABLE))
			pc->count[PCOUNT_PFCNT]++;

		if (CHECK_FLAG(pi->flags, BGP_PATH_COUNTED)) {
			pc->count[PCOUNT_COUNTED]++;
			if (CHECK_FLAG(pi->flags, BGP_PATH_UNUSEABLE))
				flog_err(
					EC_LIB_DEVELOPMENT,
					"Attempting to count but flags say it is unusable");
		} else {
			if (!CHECK_FLAG(pi->flags, BGP_PATH_UNUSEABLE))
				flog_err(
					EC_LIB_DEVELOPMENT,
					"Not counted but flags say we should");
		}
	}
}

static int bgp_peer_count_walker(struct thread *t)
{
	struct bgp_node *rn, *rm;
	const struct bgp_table *table;
	struct peer_pcounts *pc = THREAD_ARG(t);

	if (pc->safi == SAFI_MPLS_VPN || pc->safi == SAFI_ENCAP
	    || pc->safi == SAFI_EVPN) {
		/* Special handling for 2-level routing tables. */
		for (rn = bgp_table_top(pc->table); rn;
		     rn = bgp_route_next(rn)) {
			table = bgp_node_get_bgp_table_info(rn);
			if (table != NULL)
				for (rm = bgp_table_top(table); rm;
				     rm = bgp_route_next(rm))
					bgp_peer_count_proc(rm, pc);
		}
	} else
		for (rn = bgp_table_top(pc->table); rn; rn = bgp_route_next(rn))
			bgp_peer_count_proc(rn, pc);

	return 0;
}

static int bgp_peer_counts(struct vty *vty, struct peer *peer, afi_t afi,
			   safi_t safi, bool use_json)
{
	struct peer_pcounts pcounts = {.peer = peer};
	unsigned int i;
	json_object *json = NULL;
	json_object *json_loop = NULL;

	if (use_json) {
		json = json_object_new_object();
		json_loop = json_object_new_object();
	}

	if (!peer || !peer->bgp || !peer->afc[afi][safi]
	    || !peer->bgp->rib[afi][safi]) {
		if (use_json) {
			json_object_string_add(
				json, "warning",
				"No such neighbor or address family");
			vty_out(vty, "%s\n", json_object_to_json_string(json));
			json_object_free(json);
		} else
			vty_out(vty, "%% No such neighbor or address family\n");

		return CMD_WARNING;
	}

	memset(&pcounts, 0, sizeof(pcounts));
	pcounts.peer = peer;
	pcounts.table = peer->bgp->rib[afi][safi];
	pcounts.safi = safi;

	/* in-place call via thread subsystem so as to record execution time
	 * stats for the thread-walk (i.e. ensure this can't be blamed on
	 * just vty_read()).
	 */
	thread_execute(bm->master, bgp_peer_count_walker, &pcounts, 0);

	if (use_json) {
		json_object_string_add(json, "prefixCountsFor", peer->host);
		json_object_string_add(json, "multiProtocol",
				       get_afi_safi_str(afi, safi, true));
		json_object_int_add(json, "pfxCounter",
				    peer->pcount[afi][safi]);

		for (i = 0; i < PCOUNT_MAX; i++)
			json_object_int_add(json_loop, pcount_strs[i],
					    pcounts.count[i]);

		json_object_object_add(json, "ribTableWalkCounters", json_loop);

		if (pcounts.count[PCOUNT_PFCNT] != peer->pcount[afi][safi]) {
			json_object_string_add(json, "pfxctDriftFor",
					       peer->host);
			json_object_string_add(
				json, "recommended",
				"Please report this bug, with the above command output");
		}

		vty_out(vty, "%s\n", json_object_to_json_string_ext(
					     json, JSON_C_TO_STRING_PRETTY));
		json_object_free(json);
	} else {

		if (peer->hostname
		    && bgp_flag_check(peer->bgp, BGP_FLAG_SHOW_HOSTNAME)) {
			vty_out(vty, "Prefix counts for %s/%s, %s\n",
				peer->hostname, peer->host,
				get_afi_safi_str(afi, safi, false));
		} else {
			vty_out(vty, "Prefix counts for %s, %s\n", peer->host,
				get_afi_safi_str(afi, safi, false));
		}

		vty_out(vty, "PfxCt: %" PRIu32 "\n", peer->pcount[afi][safi]);
		vty_out(vty, "\nCounts from RIB table walk:\n\n");

		for (i = 0; i < PCOUNT_MAX; i++)
			vty_out(vty, "%20s: %-10d\n", pcount_strs[i],
				pcounts.count[i]);

		if (pcounts.count[PCOUNT_PFCNT] != peer->pcount[afi][safi]) {
			vty_out(vty, "%s [pcount] PfxCt drift!\n", peer->host);
			vty_out(vty,
				"Please report this bug, with the above command output\n");
		}
	}

	return CMD_SUCCESS;
}

DEFUN (show_ip_bgp_instance_neighbor_prefix_counts,
       show_ip_bgp_instance_neighbor_prefix_counts_cmd,
       "show [ip] bgp [<view|vrf> VIEWVRFNAME] ["BGP_AFI_CMD_STR" ["BGP_SAFI_CMD_STR"]] "
       "neighbors <A.B.C.D|X:X::X:X|WORD> prefix-counts [json]",
       SHOW_STR
       IP_STR
       BGP_STR
       BGP_INSTANCE_HELP_STR
       BGP_AFI_HELP_STR
       BGP_SAFI_HELP_STR
       "Detailed information on TCP and BGP neighbor connections\n"
       "Neighbor to display information about\n"
       "Neighbor to display information about\n"
       "Neighbor on BGP configured interface\n"
       "Display detailed prefix count information\n"
       JSON_STR)
{
	afi_t afi = AFI_IP6;
	safi_t safi = SAFI_UNICAST;
	struct peer *peer;
	int idx = 0;
	struct bgp *bgp = NULL;
	bool uj = use_json(argc, argv);

	if (uj)
		argc--;

	bgp_vty_find_and_parse_afi_safi_bgp(vty, argv, argc, &idx, &afi, &safi,
					    &bgp, uj);
	if (!idx)
		return CMD_WARNING;

	argv_find(argv, argc, "neighbors", &idx);
	peer = peer_lookup_in_view(vty, bgp, argv[idx + 1]->arg, uj);
	if (!peer)
		return CMD_WARNING;

	return bgp_peer_counts(vty, peer, afi, safi, uj);
}

#ifdef KEEP_OLD_VPN_COMMANDS
DEFUN (show_ip_bgp_vpn_neighbor_prefix_counts,
       show_ip_bgp_vpn_neighbor_prefix_counts_cmd,
       "show [ip] bgp <vpnv4|vpnv6> all neighbors <A.B.C.D|X:X::X:X|WORD> prefix-counts [json]",
       SHOW_STR
       IP_STR
       BGP_STR
       BGP_VPNVX_HELP_STR
       "Display information about all VPNv4 NLRIs\n"
       "Detailed information on TCP and BGP neighbor connections\n"
       "Neighbor to display information about\n"
       "Neighbor to display information about\n"
       "Neighbor on BGP configured interface\n"
       "Display detailed prefix count information\n"
       JSON_STR)
{
	int idx_peer = 6;
	struct peer *peer;
	bool uj = use_json(argc, argv);

	peer = peer_lookup_in_view(vty, NULL, argv[idx_peer]->arg, uj);
	if (!peer)
		return CMD_WARNING;

	return bgp_peer_counts(vty, peer, AFI_IP, SAFI_MPLS_VPN, uj);
}

DEFUN (show_ip_bgp_vpn_all_route_prefix,
       show_ip_bgp_vpn_all_route_prefix_cmd,
       "show [ip] bgp <vpnv4|vpnv6> all <A.B.C.D|A.B.C.D/M> [json]",
       SHOW_STR
       IP_STR
       BGP_STR
       BGP_VPNVX_HELP_STR
       "Display information about all VPNv4 NLRIs\n"
       "Network in the BGP routing table to display\n"
       "Network in the BGP routing table to display\n"
       JSON_STR)
{
	int idx = 0;
	char *network = NULL;
	struct bgp *bgp = bgp_get_default();
	if (!bgp) {
		vty_out(vty, "Can't find default instance\n");
		return CMD_WARNING;
	}

	if (argv_find(argv, argc, "A.B.C.D", &idx))
		network = argv[idx]->arg;
	else if (argv_find(argv, argc, "A.B.C.D/M", &idx))
		network = argv[idx]->arg;
	else {
		vty_out(vty, "Unable to figure out Network\n");
		return CMD_WARNING;
	}

	return bgp_show_route(vty, bgp, network, AFI_IP, SAFI_MPLS_VPN, NULL, 0,
			      BGP_PATH_SHOW_ALL, use_json(argc, argv));
}
#endif /* KEEP_OLD_VPN_COMMANDS */

DEFUN (show_bgp_l2vpn_evpn_route_prefix,
       show_bgp_l2vpn_evpn_route_prefix_cmd,
       "show bgp l2vpn evpn <A.B.C.D|A.B.C.D/M|X:X::X:X|X:X::X:X/M> [json]",
       SHOW_STR
       BGP_STR
       L2VPN_HELP_STR
       EVPN_HELP_STR
       "Network in the BGP routing table to display\n"
       "Network in the BGP routing table to display\n"
       "Network in the BGP routing table to display\n"
       "Network in the BGP routing table to display\n"
       JSON_STR)
{
	int idx = 0;
	char *network = NULL;
	int prefix_check = 0;

	if (argv_find(argv, argc, "A.B.C.D", &idx) ||
	    argv_find(argv, argc, "X:X::X:X", &idx))
		network = argv[idx]->arg;
	else if (argv_find(argv, argc, "A.B.C.D/M", &idx) ||
		 argv_find(argv, argc, "X:X::X:X/M", &idx)) {
		network = argv[idx]->arg;
		prefix_check = 1;
	} else {
		vty_out(vty, "Unable to figure out Network\n");
		return CMD_WARNING;
	}
	return bgp_show_route(vty, NULL, network, AFI_L2VPN, SAFI_EVPN, NULL,
			      prefix_check, BGP_PATH_SHOW_ALL,
			      use_json(argc, argv));
}

static void show_adj_route(struct vty *vty, struct peer *peer, afi_t afi,
			   safi_t safi, enum bgp_show_adj_route_type type,
			   const char *rmap_name, bool use_json,
			   json_object *json)
{
	struct bgp_table *table;
	struct bgp_adj_in *ain;
	struct bgp_adj_out *adj;
	unsigned long output_count;
	unsigned long filtered_count;
	struct bgp_node *rn;
	int header1 = 1;
	struct bgp *bgp;
	int header2 = 1;
	struct attr attr;
	int ret;
	struct update_subgroup *subgrp;
	json_object *json_scode = NULL;
	json_object *json_ocode = NULL;
	json_object *json_ar = NULL;
	struct peer_af *paf;
	bool route_filtered;

	if (use_json) {
		json_scode = json_object_new_object();
		json_ocode = json_object_new_object();
		json_ar = json_object_new_object();

		json_object_string_add(json_scode, "suppressed", "s");
		json_object_string_add(json_scode, "damped", "d");
		json_object_string_add(json_scode, "history", "h");
		json_object_string_add(json_scode, "valid", "*");
		json_object_string_add(json_scode, "best", ">");
		json_object_string_add(json_scode, "multipath", "=");
		json_object_string_add(json_scode, "internal", "i");
		json_object_string_add(json_scode, "ribFailure", "r");
		json_object_string_add(json_scode, "stale", "S");
		json_object_string_add(json_scode, "removed", "R");

		json_object_string_add(json_ocode, "igp", "i");
		json_object_string_add(json_ocode, "egp", "e");
		json_object_string_add(json_ocode, "incomplete", "?");
	}

	bgp = peer->bgp;

	if (!bgp) {
		if (use_json) {
			json_object_string_add(json, "alert", "no BGP");
			vty_out(vty, "%s\n", json_object_to_json_string(json));
			json_object_free(json);
		} else
			vty_out(vty, "%% No bgp\n");
		return;
	}

	/* labeled-unicast routes live in the unicast table */
	if (safi == SAFI_LABELED_UNICAST)
		table = bgp->rib[afi][SAFI_UNICAST];
	else
		table = bgp->rib[afi][safi];

	output_count = filtered_count = 0;
	subgrp = peer_subgroup(peer, afi, safi);

	if (type == bgp_show_adj_route_advertised && subgrp
	    && CHECK_FLAG(subgrp->sflags, SUBGRP_STATUS_DEFAULT_ORIGINATE)) {
		if (use_json) {
			json_object_int_add(json, "bgpTableVersion",
					    table->version);
			json_object_string_add(json, "bgpLocalRouterId",
					       inet_ntoa(bgp->router_id));
			json_object_int_add(json, "defaultLocPrf",
					    bgp->default_local_pref);
			json_object_int_add(json, "localAS", bgp->as);
			json_object_object_add(json, "bgpStatusCodes",
					       json_scode);
			json_object_object_add(json, "bgpOriginCodes",
					       json_ocode);
			json_object_string_add(
				json, "bgpOriginatingDefaultNetwork",
				(afi == AFI_IP) ? "0.0.0.0/0" : "::/0");
		} else {
			vty_out(vty, "BGP table version is %" PRIu64
				     ", local router ID is %s, vrf id ",
				table->version, inet_ntoa(bgp->router_id));
			if (bgp->vrf_id == VRF_UNKNOWN)
				vty_out(vty, "%s", VRFID_NONE_STR);
			else
				vty_out(vty, "%u", bgp->vrf_id);
			vty_out(vty, "\n");
			vty_out(vty, "Default local pref %u, ",
				bgp->default_local_pref);
			vty_out(vty, "local AS %u\n", bgp->as);
			vty_out(vty, BGP_SHOW_SCODE_HEADER);
			vty_out(vty, BGP_SHOW_NCODE_HEADER);
			vty_out(vty, BGP_SHOW_OCODE_HEADER);

			vty_out(vty, "Originating default network %s\n\n",
				(afi == AFI_IP) ? "0.0.0.0/0" : "::/0");
		}
		header1 = 0;
	}

	for (rn = bgp_table_top(table); rn; rn = bgp_route_next(rn)) {
		if (type == bgp_show_adj_route_received
		    || type == bgp_show_adj_route_filtered) {
			for (ain = rn->adj_in; ain; ain = ain->next) {
				if (ain->peer != peer)
					continue;

				if (header1) {
					if (use_json) {
						json_object_int_add(
							json, "bgpTableVersion",
							0);
						json_object_string_add(
							json,
							"bgpLocalRouterId",
							inet_ntoa(
								bgp->router_id));
						json_object_int_add(json,
							"defaultLocPrf",
							bgp->default_local_pref);
						json_object_int_add(json,
							"localAS", bgp->as);
						json_object_object_add(
							json, "bgpStatusCodes",
							json_scode);
						json_object_object_add(
							json, "bgpOriginCodes",
							json_ocode);
					} else {
						vty_out(vty,
							"BGP table version is 0, local router ID is %s, vrf id ",
							inet_ntoa(
								bgp->router_id));
						if (bgp->vrf_id == VRF_UNKNOWN)
							vty_out(vty, "%s",
								VRFID_NONE_STR);
						else
							vty_out(vty, "%u",
								bgp->vrf_id);
						vty_out(vty, "\n");
						vty_out(vty,
							"Default local pref %u, ",
							bgp->default_local_pref);
						vty_out(vty, "local AS %u\n",
							bgp->as);
						vty_out(vty,
							BGP_SHOW_SCODE_HEADER);
						vty_out(vty,
							BGP_SHOW_NCODE_HEADER);
						vty_out(vty,
							BGP_SHOW_OCODE_HEADER);
					}
					header1 = 0;
				}
				if (header2) {
					if (!use_json)
						vty_out(vty, BGP_SHOW_HEADER);
					header2 = 0;
				}

				attr = *ain->attr;
				route_filtered = false;

				/* Filter prefix using distribute list,
				 * filter list or prefix list
				 */
				if ((bgp_input_filter(peer, &rn->p, &attr, afi,
						      safi)) == FILTER_DENY)
					route_filtered = true;

				/* Filter prefix using route-map */
				ret = bgp_input_modifier(peer, &rn->p, &attr,
							 afi, safi, rmap_name,
							 NULL, 0, NULL);

				if (type == bgp_show_adj_route_filtered &&
				    !route_filtered && ret != RMAP_DENY) {
					bgp_attr_undup(&attr, ain->attr);
					continue;
				}

				if (type == bgp_show_adj_route_received &&
				    (route_filtered || ret == RMAP_DENY))
					filtered_count++;

				route_vty_out_tmp(vty, &rn->p, &attr, safi,
						  use_json, json_ar);
				bgp_attr_undup(&attr, ain->attr);
				output_count++;
			}
		} else if (type == bgp_show_adj_route_advertised) {
			RB_FOREACH (adj, bgp_adj_out_rb, &rn->adj_out)
				SUBGRP_FOREACH_PEER (adj->subgroup, paf) {
					if (paf->peer != peer || !adj->attr)
						continue;

					if (header1) {
						if (use_json) {
							json_object_int_add(
								json,
								"bgpTableVersion",
								table->version);
							json_object_string_add(
								json,
								"bgpLocalRouterId",
								inet_ntoa(
									bgp->router_id));
							json_object_int_add(
								json, "defaultLocPrf",
								bgp->default_local_pref);
							json_object_int_add(
								json, "localAS",
								bgp->as);
							json_object_object_add(
								json,
								"bgpStatusCodes",
								json_scode);
							json_object_object_add(
								json,
								"bgpOriginCodes",
								json_ocode);
						} else {
							vty_out(vty,
								"BGP table version is %" PRIu64
								", local router ID is %s, vrf id ",
								table->version,
								inet_ntoa(
									bgp->router_id));
							if (bgp->vrf_id ==
							    VRF_UNKNOWN)
								vty_out(vty,
									"%s",
									VRFID_NONE_STR);
							else
								vty_out(vty,
									"%u",
									bgp->vrf_id);
							vty_out(vty, "\n");
							vty_out(vty,
								"Default local pref %u, ",
								bgp->default_local_pref);
							vty_out(vty,
								"local AS %u\n",
								bgp->as);
							vty_out(vty,
								BGP_SHOW_SCODE_HEADER);
							vty_out(vty,
								BGP_SHOW_NCODE_HEADER);
							vty_out(vty,
								BGP_SHOW_OCODE_HEADER);
						}
						header1 = 0;
					}
					if (header2) {
						if (!use_json)
							vty_out(vty,
								BGP_SHOW_HEADER);
						header2 = 0;
					}

					attr = *adj->attr;
					ret = bgp_output_modifier(
						peer, &rn->p, &attr, afi, safi,
						rmap_name);

					if (ret != RMAP_DENY) {
						route_vty_out_tmp(vty, &rn->p,
								  &attr, safi,
								  use_json,
								  json_ar);
						output_count++;
					} else {
						filtered_count++;
					}

					bgp_attr_undup(&attr, adj->attr);
				}
		}
	}

	if (use_json) {
		json_object_object_add(json, "advertisedRoutes", json_ar);
		json_object_int_add(json, "totalPrefixCounter", output_count);
		json_object_int_add(json, "filteredPrefixCounter",
				    filtered_count);

		vty_out(vty, "%s\n", json_object_to_json_string_ext(
					     json, JSON_C_TO_STRING_PRETTY));
		json_object_free(json);
	} else if (output_count > 0) {
		if (filtered_count > 0)
			vty_out(vty,
				"\nTotal number of prefixes %ld (%ld filtered)\n",
				output_count, filtered_count);
		else
			vty_out(vty, "\nTotal number of prefixes %ld\n",
				output_count);
	}
}

static int peer_adj_routes(struct vty *vty, struct peer *peer, afi_t afi,
			   safi_t safi, enum bgp_show_adj_route_type type,
			   const char *rmap_name, bool use_json)
{
	json_object *json = NULL;

	if (use_json)
		json = json_object_new_object();

	if (!peer || !peer->afc[afi][safi]) {
		if (use_json) {
			json_object_string_add(
				json, "warning",
				"No such neighbor or address family");
			vty_out(vty, "%s\n", json_object_to_json_string(json));
			json_object_free(json);
		} else
			vty_out(vty, "%% No such neighbor or address family\n");

		return CMD_WARNING;
	}

	if ((type == bgp_show_adj_route_received
	     || type == bgp_show_adj_route_filtered)
	    && !CHECK_FLAG(peer->af_flags[afi][safi],
			   PEER_FLAG_SOFT_RECONFIG)) {
		if (use_json) {
			json_object_string_add(
				json, "warning",
				"Inbound soft reconfiguration not enabled");
			vty_out(vty, "%s\n", json_object_to_json_string(json));
			json_object_free(json);
		} else
			vty_out(vty,
				"%% Inbound soft reconfiguration not enabled\n");

		return CMD_WARNING;
	}

	show_adj_route(vty, peer, afi, safi, type, rmap_name, use_json, json);

	return CMD_SUCCESS;
}

DEFUN (show_ip_bgp_instance_neighbor_advertised_route,
       show_ip_bgp_instance_neighbor_advertised_route_cmd,
       "show [ip] bgp [<view|vrf> VIEWVRFNAME] ["BGP_AFI_CMD_STR" ["BGP_SAFI_WITH_LABEL_CMD_STR"]] "
       "neighbors <A.B.C.D|X:X::X:X|WORD> <advertised-routes|received-routes|filtered-routes> [route-map WORD] [json]",
       SHOW_STR
       IP_STR
       BGP_STR
       BGP_INSTANCE_HELP_STR
       BGP_AFI_HELP_STR
       BGP_SAFI_WITH_LABEL_HELP_STR
       "Detailed information on TCP and BGP neighbor connections\n"
       "Neighbor to display information about\n"
       "Neighbor to display information about\n"
       "Neighbor on BGP configured interface\n"
       "Display the routes advertised to a BGP neighbor\n"
       "Display the received routes from neighbor\n"
       "Display the filtered routes received from neighbor\n"
       "Route-map to modify the attributes\n"
       "Name of the route map\n"
       JSON_STR)
{
	afi_t afi = AFI_IP6;
	safi_t safi = SAFI_UNICAST;
	char *rmap_name = NULL;
	char *peerstr = NULL;
	struct bgp *bgp = NULL;
	struct peer *peer;
	enum bgp_show_adj_route_type type = bgp_show_adj_route_advertised;
	int idx = 0;
	bool uj = use_json(argc, argv);

	if (uj)
		argc--;

	bgp_vty_find_and_parse_afi_safi_bgp(vty, argv, argc, &idx, &afi, &safi,
					    &bgp, uj);
	if (!idx)
		return CMD_WARNING;

	/* neighbors <A.B.C.D|X:X::X:X|WORD> */
	argv_find(argv, argc, "neighbors", &idx);
	peerstr = argv[++idx]->arg;

	peer = peer_lookup_in_view(vty, bgp, peerstr, uj);
	if (!peer)
		return CMD_WARNING;

	if (argv_find(argv, argc, "advertised-routes", &idx))
		type = bgp_show_adj_route_advertised;
	else if (argv_find(argv, argc, "received-routes", &idx))
		type = bgp_show_adj_route_received;
	else if (argv_find(argv, argc, "filtered-routes", &idx))
		type = bgp_show_adj_route_filtered;

	if (argv_find(argv, argc, "route-map", &idx))
		rmap_name = argv[++idx]->arg;

	return peer_adj_routes(vty, peer, afi, safi, type, rmap_name, uj);
}

DEFUN (show_ip_bgp_neighbor_received_prefix_filter,
       show_ip_bgp_neighbor_received_prefix_filter_cmd,
       "show [ip] bgp [<ipv4|ipv6> [unicast]] neighbors <A.B.C.D|X:X::X:X|WORD> received prefix-filter [json]",
       SHOW_STR
       IP_STR
       BGP_STR
       "Address Family\n"
       "Address Family\n"
       "Address Family modifier\n"
       "Detailed information on TCP and BGP neighbor connections\n"
       "Neighbor to display information about\n"
       "Neighbor to display information about\n"
       "Neighbor on BGP configured interface\n"
       "Display information received from a BGP neighbor\n"
       "Display the prefixlist filter\n"
       JSON_STR)
{
	afi_t afi = AFI_IP6;
	safi_t safi = SAFI_UNICAST;
	char *peerstr = NULL;

	char name[BUFSIZ];
	union sockunion su;
	struct peer *peer;
	int count, ret;

	int idx = 0;

	/* show [ip] bgp */
	if (argv_find(argv, argc, "ip", &idx))
		afi = AFI_IP;
	/* [<ipv4|ipv6> [unicast]] */
	if (argv_find(argv, argc, "ipv4", &idx))
		afi = AFI_IP;
	if (argv_find(argv, argc, "ipv6", &idx))
		afi = AFI_IP6;
	/* neighbors <A.B.C.D|X:X::X:X|WORD> */
	argv_find(argv, argc, "neighbors", &idx);
	peerstr = argv[++idx]->arg;

	bool uj = use_json(argc, argv);

	ret = str2sockunion(peerstr, &su);
	if (ret < 0) {
		peer = peer_lookup_by_conf_if(NULL, peerstr);
		if (!peer) {
			if (uj)
				vty_out(vty, "{}\n");
			else
				vty_out(vty,
					"%% Malformed address or name: %s\n",
					peerstr);
			return CMD_WARNING;
		}
	} else {
		peer = peer_lookup(NULL, &su);
		if (!peer) {
			if (uj)
				vty_out(vty, "{}\n");
			else
				vty_out(vty, "No peer\n");
			return CMD_WARNING;
		}
	}

	sprintf(name, "%s.%d.%d", peer->host, afi, safi);
	count = prefix_bgp_show_prefix_list(NULL, afi, name, uj);
	if (count) {
		if (!uj)
			vty_out(vty, "Address Family: %s\n",
				get_afi_safi_str(afi, safi, false));
		prefix_bgp_show_prefix_list(vty, afi, name, uj);
	} else {
		if (uj)
			vty_out(vty, "{}\n");
		else
			vty_out(vty, "No functional output\n");
	}

	return CMD_SUCCESS;
}

static int bgp_show_neighbor_route(struct vty *vty, struct peer *peer,
				   afi_t afi, safi_t safi,
				   enum bgp_show_type type, bool use_json)
{
	/* labeled-unicast routes live in the unicast table */
	if (safi == SAFI_LABELED_UNICAST)
		safi = SAFI_UNICAST;

	if (!peer || !peer->afc[afi][safi]) {
		if (use_json) {
			json_object *json_no = NULL;
			json_no = json_object_new_object();
			json_object_string_add(
				json_no, "warning",
				"No such neighbor or address family");
			vty_out(vty, "%s\n",
				json_object_to_json_string(json_no));
			json_object_free(json_no);
		} else
			vty_out(vty, "%% No such neighbor or address family\n");
		return CMD_WARNING;
	}

	return bgp_show(vty, peer->bgp, afi, safi, type, &peer->su, use_json);
}

DEFUN (show_ip_bgp_flowspec_routes_detailed,
       show_ip_bgp_flowspec_routes_detailed_cmd,
       "show [ip] bgp [<view|vrf> VIEWVRFNAME] ["BGP_AFI_CMD_STR" flowspec] detail [json]",
       SHOW_STR
       IP_STR
       BGP_STR
       BGP_INSTANCE_HELP_STR
       BGP_AFI_HELP_STR
       "SAFI Flowspec\n"
       "Detailed information on flowspec entries\n"
       JSON_STR)
{
	afi_t afi = AFI_IP;
	safi_t safi = SAFI_UNICAST;
	struct bgp *bgp = NULL;
	int idx = 0;
	bool uj = use_json(argc, argv);

	if (uj)
		argc--;

	bgp_vty_find_and_parse_afi_safi_bgp(vty, argv, argc, &idx, &afi, &safi,
					    &bgp, uj);
	if (!idx)
		return CMD_WARNING;

	return bgp_show(vty, bgp, afi, safi, bgp_show_type_detail, NULL, uj);
}

DEFUN (show_ip_bgp_neighbor_routes,
       show_ip_bgp_neighbor_routes_cmd,
       "show [ip] bgp [<view|vrf> VIEWVRFNAME] ["BGP_AFI_CMD_STR" ["BGP_SAFI_WITH_LABEL_CMD_STR"]] "
       "neighbors <A.B.C.D|X:X::X:X|WORD> <flap-statistics|dampened-routes|routes> [json]",
       SHOW_STR
       IP_STR
       BGP_STR
       BGP_INSTANCE_HELP_STR
       BGP_AFI_HELP_STR
       BGP_SAFI_WITH_LABEL_HELP_STR
       "Detailed information on TCP and BGP neighbor connections\n"
       "Neighbor to display information about\n"
       "Neighbor to display information about\n"
       "Neighbor on BGP configured interface\n"
       "Display flap statistics of the routes learned from neighbor\n"
       "Display the dampened routes received from neighbor\n"
       "Display routes learned from neighbor\n"
       JSON_STR)
{
	char *peerstr = NULL;
	struct bgp *bgp = NULL;
	afi_t afi = AFI_IP6;
	safi_t safi = SAFI_UNICAST;
	struct peer *peer;
	enum bgp_show_type sh_type = bgp_show_type_neighbor;
	int idx = 0;
	bool uj = use_json(argc, argv);

	if (uj)
		argc--;

	bgp_vty_find_and_parse_afi_safi_bgp(vty, argv, argc, &idx, &afi, &safi,
					    &bgp, uj);
	if (!idx)
		return CMD_WARNING;

	/* neighbors <A.B.C.D|X:X::X:X|WORD> */
	argv_find(argv, argc, "neighbors", &idx);
	peerstr = argv[++idx]->arg;

	peer = peer_lookup_in_view(vty, bgp, peerstr, uj);
	if (!peer)
		return CMD_WARNING;

	if (argv_find(argv, argc, "flap-statistics", &idx))
		sh_type = bgp_show_type_flap_neighbor;
	else if (argv_find(argv, argc, "dampened-routes", &idx))
		sh_type = bgp_show_type_damp_neighbor;
	else if (argv_find(argv, argc, "routes", &idx))
		sh_type = bgp_show_type_neighbor;

	return bgp_show_neighbor_route(vty, peer, afi, safi, sh_type, uj);
}

struct bgp_table *bgp_distance_table[AFI_MAX][SAFI_MAX];

struct bgp_distance {
	/* Distance value for the IP source prefix. */
	uint8_t distance;

	/* Name of the access-list to be matched. */
	char *access_list;
};

DEFUN (show_bgp_afi_vpn_rd_route,
       show_bgp_afi_vpn_rd_route_cmd,
       "show bgp "BGP_AFI_CMD_STR" vpn rd ASN:NN_OR_IP-ADDRESS:NN <A.B.C.D/M|X:X::X:X/M> [json]",
       SHOW_STR
       BGP_STR
       BGP_AFI_HELP_STR
       "Address Family modifier\n"
       "Display information for a route distinguisher\n"
       "Route Distinguisher\n"
       "Network in the BGP routing table to display\n"
       "Network in the BGP routing table to display\n"
       JSON_STR)
{
	int ret;
	struct prefix_rd prd;
	afi_t afi = AFI_MAX;
	int idx = 0;

	if (!argv_find_and_parse_afi(argv, argc, &idx, &afi)) {
		vty_out(vty, "%% Malformed Address Family\n");
		return CMD_WARNING;
	}

	ret = str2prefix_rd(argv[5]->arg, &prd);
	if (!ret) {
		vty_out(vty, "%% Malformed Route Distinguisher\n");
		return CMD_WARNING;
	}

	return bgp_show_route(vty, NULL, argv[6]->arg, afi, SAFI_MPLS_VPN, &prd,
			      0, BGP_PATH_SHOW_ALL, use_json(argc, argv));
}

static struct bgp_distance *bgp_distance_new(void)
{
	return XCALLOC(MTYPE_BGP_DISTANCE, sizeof(struct bgp_distance));
}

static void bgp_distance_free(struct bgp_distance *bdistance)
{
	XFREE(MTYPE_BGP_DISTANCE, bdistance);
}

static int bgp_distance_set(struct vty *vty, const char *distance_str,
			    const char *ip_str, const char *access_list_str)
{
	int ret;
	afi_t afi;
	safi_t safi;
	struct prefix p;
	uint8_t distance;
	struct bgp_node *rn;
	struct bgp_distance *bdistance;

	afi = bgp_node_afi(vty);
	safi = bgp_node_safi(vty);

	ret = str2prefix(ip_str, &p);
	if (ret == 0) {
		vty_out(vty, "Malformed prefix\n");
		return CMD_WARNING_CONFIG_FAILED;
	}

	distance = atoi(distance_str);

	/* Get BGP distance node. */
	rn = bgp_node_get(bgp_distance_table[afi][safi], (struct prefix *)&p);
	bdistance = bgp_node_get_bgp_distance_info(rn);
	if (bdistance)
		bgp_unlock_node(rn);
	else {
		bdistance = bgp_distance_new();
		bgp_node_set_bgp_distance_info(rn, bdistance);
	}

	/* Set distance value. */
	bdistance->distance = distance;

	/* Reset access-list configuration. */
	if (bdistance->access_list) {
		XFREE(MTYPE_AS_LIST, bdistance->access_list);
		bdistance->access_list = NULL;
	}
	if (access_list_str)
		bdistance->access_list =
			XSTRDUP(MTYPE_AS_LIST, access_list_str);

	return CMD_SUCCESS;
}

static int bgp_distance_unset(struct vty *vty, const char *distance_str,
			      const char *ip_str, const char *access_list_str)
{
	int ret;
	afi_t afi;
	safi_t safi;
	struct prefix p;
	int distance;
	struct bgp_node *rn;
	struct bgp_distance *bdistance;

	afi = bgp_node_afi(vty);
	safi = bgp_node_safi(vty);

	ret = str2prefix(ip_str, &p);
	if (ret == 0) {
		vty_out(vty, "Malformed prefix\n");
		return CMD_WARNING_CONFIG_FAILED;
	}

	rn = bgp_node_lookup(bgp_distance_table[afi][safi],
			     (struct prefix *)&p);
	if (!rn) {
		vty_out(vty, "Can't find specified prefix\n");
		return CMD_WARNING_CONFIG_FAILED;
	}

	bdistance = bgp_node_get_bgp_distance_info(rn);
	distance = atoi(distance_str);

	if (bdistance->distance != distance) {
		vty_out(vty, "Distance does not match configured\n");
		/* Drop the lock taken by bgp_node_lookup() on this path too. */
		bgp_unlock_node(rn);
		return CMD_WARNING_CONFIG_FAILED;
	}

	XFREE(MTYPE_AS_LIST, bdistance->access_list);
	bgp_distance_free(bdistance);

	bgp_node_set_bgp_path_info(rn, NULL);
	bgp_unlock_node(rn);
	bgp_unlock_node(rn);

	return CMD_SUCCESS;
}

/* Apply BGP information to distance method. */
uint8_t bgp_distance_apply(struct prefix *p, struct bgp_path_info *pinfo,
			   afi_t afi, safi_t safi, struct bgp *bgp)
{
	struct bgp_node *rn;
	struct prefix q;
	struct peer *peer;
	struct bgp_distance *bdistance;
	struct access_list *alist;
	struct bgp_static *bgp_static;

	if (!bgp)
		return 0;

	peer = pinfo->peer;

	if (pinfo->attr->distance)
		return pinfo->attr->distance;

	/* Check source address. */
	sockunion2hostprefix(&peer->su, &q);
	rn = bgp_node_match(bgp_distance_table[afi][safi], &q);
	if (rn) {
		bdistance = bgp_node_get_bgp_distance_info(rn);
		bgp_unlock_node(rn);

		if (bdistance->access_list) {
			alist = access_list_lookup(afi, bdistance->access_list);
			if (alist
			    && access_list_apply(alist, p) == FILTER_PERMIT)
				return bdistance->distance;
		} else
			return bdistance->distance;
	}

	/* Backdoor check. */
	rn = bgp_node_lookup(bgp->route[afi][safi], p);
	if (rn) {
		bgp_static = bgp_node_get_bgp_static_info(rn);
		bgp_unlock_node(rn);

		if (bgp_static->backdoor) {
			if (bgp->distance_local[afi][safi])
				return bgp->distance_local[afi][safi];
			else
				return ZEBRA_IBGP_DISTANCE_DEFAULT;
		}
	}

	if (peer->sort == BGP_PEER_EBGP) {
		if (bgp->distance_ebgp[afi][safi])
			return bgp->distance_ebgp[afi][safi];
		return ZEBRA_EBGP_DISTANCE_DEFAULT;
	} else {
		if (bgp->distance_ibgp[afi][safi])
			return bgp->distance_ibgp[afi][safi];
		return ZEBRA_IBGP_DISTANCE_DEFAULT;
	}
}

/* If we enter `distance bgp (1-255) (1-255) (1-255)`,
 * we should tell ZEBRA to update the routes for a specific
 * AFI/SAFI so the change is reflected in the RIB.
 */
static void bgp_announce_routes_distance_update(struct bgp *bgp,
						afi_t update_afi,
						safi_t update_safi)
{
	afi_t afi;
	safi_t safi;

	FOREACH_AFI_SAFI (afi, safi) {
		if (!bgp_fibupd_safi(safi))
			continue;

		if (afi != update_afi && safi != update_safi)
			continue;

		if (BGP_DEBUG(zebra, ZEBRA))
			zlog_debug(
				"%s: Announcing routes due to distance change afi/safi (%d/%d)",
				__func__, afi, safi);
		bgp_zebra_announce_table(bgp, afi, safi);
	}
}

DEFUN (bgp_distance,
       bgp_distance_cmd,
       "distance bgp (1-255) (1-255) (1-255)",
       "Define an administrative distance\n"
       "BGP distance\n"
       "Distance for routes external to the AS\n"
       "Distance for routes internal to the AS\n"
       "Distance for local routes\n")
{
	VTY_DECLVAR_CONTEXT(bgp, bgp);
	int idx_number = 2;
	int idx_number_2 = 3;
	int idx_number_3 = 4;
	int distance_ebgp = atoi(argv[idx_number]->arg);
	int distance_ibgp = atoi(argv[idx_number_2]->arg);
	int distance_local = atoi(argv[idx_number_3]->arg);
	afi_t afi;
	safi_t safi;

	afi = bgp_node_afi(vty);
	safi = bgp_node_safi(vty);

	if (bgp->distance_ebgp[afi][safi] != distance_ebgp
	    || bgp->distance_ibgp[afi][safi] != distance_ibgp
	    || bgp->distance_local[afi][safi] != distance_local) {
		bgp->distance_ebgp[afi][safi] = distance_ebgp;
		bgp->distance_ibgp[afi][safi] = distance_ibgp;
		bgp->distance_local[afi][safi] = distance_local;
		bgp_announce_routes_distance_update(bgp, afi, safi);
	}
	return CMD_SUCCESS;
}

DEFUN (no_bgp_distance,
       no_bgp_distance_cmd,
       "no distance bgp [(1-255) (1-255) (1-255)]",
       NO_STR
       "Define an administrative distance\n"
       "BGP distance\n"
       "Distance for routes external to the AS\n"
       "Distance for routes internal to the AS\n"
       "Distance for local routes\n")
{
	VTY_DECLVAR_CONTEXT(bgp, bgp);
	afi_t afi;
	safi_t safi;

	afi = bgp_node_afi(vty);
	safi = bgp_node_safi(vty);

	if (bgp->distance_ebgp[afi][safi] != 0
	    || bgp->distance_ibgp[afi][safi] != 0
	    || bgp->distance_local[afi][safi] != 0) {
		bgp->distance_ebgp[afi][safi] = 0;
		bgp->distance_ibgp[afi][safi] = 0;
		bgp->distance_local[afi][safi] = 0;
		bgp_announce_routes_distance_update(bgp, afi, safi);
	}
	return CMD_SUCCESS;
}


DEFUN (bgp_distance_source,
       bgp_distance_source_cmd,
       "distance (1-255) A.B.C.D/M",
       "Define an administrative distance\n"
       "Administrative distance\n"
       "IP source prefix\n")
{
	int idx_number = 1;
	int idx_ipv4_prefixlen = 2;
	bgp_distance_set(vty, argv[idx_number]->arg,
			 argv[idx_ipv4_prefixlen]->arg, NULL);
	return CMD_SUCCESS;
}

DEFUN (no_bgp_distance_source,
       no_bgp_distance_source_cmd,
       "no distance (1-255) A.B.C.D/M",
       NO_STR
       "Define an administrative distance\n"
       "Administrative distance\n"
       "IP source prefix\n")
{
	int idx_number = 2;
	int idx_ipv4_prefixlen = 3;
	bgp_distance_unset(vty, argv[idx_number]->arg,
			   argv[idx_ipv4_prefixlen]->arg, NULL);
	return CMD_SUCCESS;
}

DEFUN (bgp_distance_source_access_list,
       bgp_distance_source_access_list_cmd,
       "distance (1-255) A.B.C.D/M WORD",
       "Define an administrative distance\n"
       "Administrative distance\n"
       "IP source prefix\n"
       "Access list name\n")
{
	int idx_number = 1;
	int idx_ipv4_prefixlen = 2;
	int idx_word = 3;
	bgp_distance_set(vty, argv[idx_number]->arg,
			 argv[idx_ipv4_prefixlen]->arg, argv[idx_word]->arg);
	return CMD_SUCCESS;
}

DEFUN (no_bgp_distance_source_access_list,
       no_bgp_distance_source_access_list_cmd,
       "no distance (1-255) A.B.C.D/M WORD",
       NO_STR
       "Define an administrative distance\n"
       "Administrative distance\n"
       "IP source prefix\n"
       "Access list name\n")
{
	int idx_number = 2;
	int idx_ipv4_prefixlen = 3;
	int idx_word = 4;
	bgp_distance_unset(vty, argv[idx_number]->arg,
			   argv[idx_ipv4_prefixlen]->arg, argv[idx_word]->arg);
	return CMD_SUCCESS;
}

DEFUN (ipv6_bgp_distance_source,
       ipv6_bgp_distance_source_cmd,
       "distance (1-255) X:X::X:X/M",
       "Define an administrative distance\n"
       "Administrative distance\n"
       "IP source prefix\n")
{
	bgp_distance_set(vty, argv[1]->arg, argv[2]->arg, NULL);
	return CMD_SUCCESS;
}

DEFUN (no_ipv6_bgp_distance_source,
       no_ipv6_bgp_distance_source_cmd,
       "no distance (1-255) X:X::X:X/M",
       NO_STR
       "Define an administrative distance\n"
       "Administrative distance\n"
       "IP source prefix\n")
{
	bgp_distance_unset(vty, argv[2]->arg, argv[3]->arg, NULL);
	return CMD_SUCCESS;
}

DEFUN (ipv6_bgp_distance_source_access_list,
       ipv6_bgp_distance_source_access_list_cmd,
       "distance (1-255) X:X::X:X/M WORD",
       "Define an administrative distance\n"
       "Administrative distance\n"
       "IP source prefix\n"
       "Access list name\n")
{
	bgp_distance_set(vty, argv[1]->arg, argv[2]->arg, argv[3]->arg);
	return CMD_SUCCESS;
}

DEFUN (no_ipv6_bgp_distance_source_access_list,
       no_ipv6_bgp_distance_source_access_list_cmd,
       "no distance (1-255) X:X::X:X/M WORD",
       NO_STR
       "Define an administrative distance\n"
       "Administrative distance\n"
       "IP source prefix\n"
       "Access list name\n")
{
	bgp_distance_unset(vty, argv[2]->arg, argv[3]->arg, argv[4]->arg);
	return CMD_SUCCESS;
}

DEFUN (bgp_damp_set,
       bgp_damp_set_cmd,
       "bgp dampening [(1-45) [(1-20000) (1-20000) (1-255)]]",
       "BGP Specific commands\n"
       "Enable route-flap dampening\n"
       "Half-life time for the penalty\n"
       "Value to start reusing a route\n"
       "Value to start suppressing a route\n"
       "Maximum duration to suppress a stable route\n")
{
	VTY_DECLVAR_CONTEXT(bgp, bgp);
	int idx_half_life = 2;
	int idx_reuse = 3;
	int idx_suppress = 4;
	int idx_max_suppress = 5;
	int half = DEFAULT_HALF_LIFE * 60;
	int reuse = DEFAULT_REUSE;
	int suppress = DEFAULT_SUPPRESS;
	int max = 4 * half;

	if (argc == 6) {
		half = atoi(argv[idx_half_life]->arg) * 60;
		reuse = atoi(argv[idx_reuse]->arg);
		suppress = atoi(argv[idx_suppress]->arg);
		max = atoi(argv[idx_max_suppress]->arg) * 60;
	} else if (argc == 3) {
		half = atoi(argv[idx_half_life]->arg) * 60;
		max = 4 * half;
	}

	/*
	 * These can't be 0 but our SA doesn't understand the
	 * way our cli is constructed
	 */
	assert(reuse);
	assert(half);
	if (suppress < reuse) {
		vty_out(vty,
			"Suppress value cannot be less than reuse value\n");
		return 0;
	}

	return bgp_damp_enable(bgp, bgp_node_afi(vty), bgp_node_safi(vty), half,
			       reuse, suppress, max);
}
|
|
|
|
|
|
|
|
DEFUN (bgp_damp_unset,
       bgp_damp_unset_cmd,
       "no bgp dampening [(1-45) [(1-20000) (1-20000) (1-255)]]",
       NO_STR
       "BGP Specific commands\n"
       "Enable route-flap dampening\n"
       "Half-life time for the penalty\n"
       "Value to start reusing a route\n"
       "Value to start suppressing a route\n"
       "Maximum duration to suppress a stable route\n")
{
	VTY_DECLVAR_CONTEXT(bgp, bgp);
	return bgp_damp_disable(bgp, bgp_node_afi(vty), bgp_node_safi(vty));
}

/* Display specified route of BGP table. */
static int bgp_clear_damp_route(struct vty *vty, const char *view_name,
				const char *ip_str, afi_t afi, safi_t safi,
				struct prefix_rd *prd, int prefix_check)
{
	int ret;
	struct prefix match;
	struct bgp_node *rn;
	struct bgp_node *rm;
	struct bgp_path_info *pi;
	struct bgp_path_info *pi_temp;
	struct bgp *bgp;
	struct bgp_table *table;

	/* BGP structure lookup. */
	if (view_name) {
		bgp = bgp_lookup_by_name(view_name);
		if (bgp == NULL) {
			vty_out(vty, "%% Can't find BGP instance %s\n",
				view_name);
			return CMD_WARNING;
		}
	} else {
		bgp = bgp_get_default();
		if (bgp == NULL) {
			vty_out(vty, "%% No BGP process is configured\n");
			return CMD_WARNING;
		}
	}

	/* Check IP address argument. */
	ret = str2prefix(ip_str, &match);
	if (!ret) {
		vty_out(vty, "%% address is malformed\n");
		return CMD_WARNING;
	}

	match.family = afi2family(afi);

	if ((safi == SAFI_MPLS_VPN) || (safi == SAFI_ENCAP)
	    || (safi == SAFI_EVPN)) {
		for (rn = bgp_table_top(bgp->rib[AFI_IP][safi]); rn;
		     rn = bgp_route_next(rn)) {
			if (prd && memcmp(rn->p.u.val, prd->val, 8) != 0)
				continue;
			table = bgp_node_get_bgp_table_info(rn);
			if (!table)
				continue;
			if ((rm = bgp_node_match(table, &match)) == NULL)
				continue;

			if (!prefix_check
			    || rm->p.prefixlen == match.prefixlen) {
				pi = bgp_node_get_bgp_path_info(rm);
				while (pi) {
					if (pi->extra && pi->extra->damp_info) {
						pi_temp = pi->next;
						bgp_damp_info_free(
							pi->extra->damp_info,
							1, afi, safi);
						pi = pi_temp;
					} else
						pi = pi->next;
				}
			}

			bgp_unlock_node(rm);
		}
	} else {
		if ((rn = bgp_node_match(bgp->rib[afi][safi], &match))
		    != NULL) {
			if (!prefix_check
			    || rn->p.prefixlen == match.prefixlen) {
				pi = bgp_node_get_bgp_path_info(rn);
				while (pi) {
					if (pi->extra && pi->extra->damp_info) {
						pi_temp = pi->next;
						bgp_damp_info_free(
							pi->extra->damp_info,
							1, afi, safi);
						pi = pi_temp;
					} else
						pi = pi->next;
				}
			}

			bgp_unlock_node(rn);
		}
	}

	return CMD_SUCCESS;
}

DEFUN (clear_ip_bgp_dampening,
       clear_ip_bgp_dampening_cmd,
       "clear ip bgp dampening",
       CLEAR_STR
       IP_STR
       BGP_STR
       "Clear route flap dampening information\n")
{
	bgp_damp_info_clean(AFI_IP, SAFI_UNICAST);
	return CMD_SUCCESS;
}

DEFUN (clear_ip_bgp_dampening_prefix,
       clear_ip_bgp_dampening_prefix_cmd,
       "clear ip bgp dampening A.B.C.D/M",
       CLEAR_STR
       IP_STR
       BGP_STR
       "Clear route flap dampening information\n"
       "IPv4 prefix\n")
{
	int idx_ipv4_prefixlen = 4;
	return bgp_clear_damp_route(vty, NULL, argv[idx_ipv4_prefixlen]->arg,
				    AFI_IP, SAFI_UNICAST, NULL, 1);
}

DEFUN (clear_ip_bgp_dampening_address,
       clear_ip_bgp_dampening_address_cmd,
       "clear ip bgp dampening A.B.C.D",
       CLEAR_STR
       IP_STR
       BGP_STR
       "Clear route flap dampening information\n"
       "Network to clear damping information\n")
{
	int idx_ipv4 = 4;
	return bgp_clear_damp_route(vty, NULL, argv[idx_ipv4]->arg, AFI_IP,
				    SAFI_UNICAST, NULL, 0);
}

DEFUN (clear_ip_bgp_dampening_address_mask,
       clear_ip_bgp_dampening_address_mask_cmd,
       "clear ip bgp dampening A.B.C.D A.B.C.D",
       CLEAR_STR
       IP_STR
       BGP_STR
       "Clear route flap dampening information\n"
       "Network to clear damping information\n"
       "Network mask\n")
{
	int idx_ipv4 = 4;
	int idx_ipv4_2 = 5;
	int ret;
	char prefix_str[BUFSIZ];

	ret = netmask_str2prefix_str(argv[idx_ipv4]->arg, argv[idx_ipv4_2]->arg,
				     prefix_str);
	if (!ret) {
		vty_out(vty, "%% Inconsistent address and mask\n");
		return CMD_WARNING;
	}

	return bgp_clear_damp_route(vty, NULL, prefix_str, AFI_IP, SAFI_UNICAST,
				    NULL, 0);
}

static void show_bgp_peerhash_entry(struct hash_bucket *bucket, void *arg)
{
	struct vty *vty = arg;
	struct peer *peer = bucket->data;
	char buf[SU_ADDRSTRLEN];

	vty_out(vty, "\tPeer: %s %s\n", peer->host,
		sockunion2str(&peer->su, buf, sizeof(buf)));
}

DEFUN (show_bgp_peerhash,
       show_bgp_peerhash_cmd,
       "show bgp peerhash",
       SHOW_STR
       BGP_STR
       "Display information about the BGP peerhash\n")
{
	struct list *instances = bm->bgp;
	struct listnode *node;
	struct bgp *bgp;

	for (ALL_LIST_ELEMENTS_RO(instances, node, bgp)) {
		vty_out(vty, "BGP: %s\n", bgp->name);
		hash_iterate(bgp->peerhash, show_bgp_peerhash_entry,
			     vty);
	}

	return CMD_SUCCESS;
}

/* also used for encap safi */
static void bgp_config_write_network_vpn(struct vty *vty, struct bgp *bgp,
					 afi_t afi, safi_t safi)
{
	struct bgp_node *prn;
	struct bgp_node *rn;
	struct bgp_table *table;
	struct prefix *p;
	struct prefix_rd *prd;
	struct bgp_static *bgp_static;
	mpls_label_t label;
	char buf[SU_ADDRSTRLEN];
	char rdbuf[RD_ADDRSTRLEN];

	/* Network configuration. */
	for (prn = bgp_table_top(bgp->route[afi][safi]); prn;
	     prn = bgp_route_next(prn)) {
		table = bgp_node_get_bgp_table_info(prn);
		if (!table)
			continue;

		for (rn = bgp_table_top(table); rn; rn = bgp_route_next(rn)) {
			bgp_static = bgp_node_get_bgp_static_info(rn);
			if (bgp_static == NULL)
				continue;

			p = &rn->p;
			prd = (struct prefix_rd *)&prn->p;

			/* "network" configuration display. */
			prefix_rd2str(prd, rdbuf, sizeof(rdbuf));
			label = decode_label(&bgp_static->label);

			vty_out(vty, " network %s/%d rd %s",
				inet_ntop(p->family, &p->u.prefix, buf,
					  SU_ADDRSTRLEN),
				p->prefixlen, rdbuf);
			if (safi == SAFI_MPLS_VPN)
				vty_out(vty, " label %u", label);

			if (bgp_static->rmap.name)
				vty_out(vty, " route-map %s",
					bgp_static->rmap.name);

			if (bgp_static->backdoor)
				vty_out(vty, " backdoor");

			vty_out(vty, "\n");
		}
	}
}

static void bgp_config_write_network_evpn(struct vty *vty, struct bgp *bgp,
					  afi_t afi, safi_t safi)
{
	struct bgp_node *prn;
	struct bgp_node *rn;
	struct bgp_table *table;
	struct prefix *p;
	struct prefix_rd *prd;
	struct bgp_static *bgp_static;
	char buf[PREFIX_STRLEN * 2];
	char buf2[SU_ADDRSTRLEN];
	char rdbuf[RD_ADDRSTRLEN];

	/* Network configuration. */
	for (prn = bgp_table_top(bgp->route[afi][safi]); prn;
	     prn = bgp_route_next(prn)) {
		table = bgp_node_get_bgp_table_info(prn);
		if (!table)
			continue;

		for (rn = bgp_table_top(table); rn; rn = bgp_route_next(rn)) {
			bgp_static = bgp_node_get_bgp_static_info(rn);
			if (bgp_static == NULL)
				continue;

			char *macrouter = NULL;
			char *esi = NULL;

			if (bgp_static->router_mac)
				macrouter = prefix_mac2str(
					bgp_static->router_mac, NULL, 0);
			if (bgp_static->eth_s_id)
				esi = esi2str(bgp_static->eth_s_id);
			p = &rn->p;
			prd = (struct prefix_rd *)&prn->p;

			/* "network" configuration display. */
			prefix_rd2str(prd, rdbuf, sizeof(rdbuf));
			if (p->u.prefix_evpn.route_type == 5) {
				char local_buf[PREFIX_STRLEN];
				uint8_t family = is_evpn_prefix_ipaddr_v4((
							 struct prefix_evpn *)p)
							 ? AF_INET
							 : AF_INET6;
				inet_ntop(family,
					  &p->u.prefix_evpn.prefix_addr.ip.ip.addr,
					  local_buf, PREFIX_STRLEN);
				sprintf(buf, "%s/%u", local_buf,
					p->u.prefix_evpn.prefix_addr.ip_prefix_length);
			} else {
				prefix2str(p, buf, sizeof(buf));
			}

			if (bgp_static->gatewayIp.family == AF_INET
			    || bgp_static->gatewayIp.family == AF_INET6)
				inet_ntop(bgp_static->gatewayIp.family,
					  &bgp_static->gatewayIp.u.prefix, buf2,
					  sizeof(buf2));
			vty_out(vty,
				" network %s rd %s ethtag %u label %u esi %s gwip %s routermac %s\n",
				buf, rdbuf,
				p->u.prefix_evpn.prefix_addr.eth_tag,
				decode_label(&bgp_static->label), esi, buf2,
				macrouter);

			XFREE(MTYPE_TMP, macrouter);
			XFREE(MTYPE_TMP, esi);
		}
	}
}

/* Configuration of static route announcement and aggregate
   information. */
void bgp_config_write_network(struct vty *vty, struct bgp *bgp, afi_t afi,
			      safi_t safi)
{
	struct bgp_node *rn;
	struct prefix *p;
	struct bgp_static *bgp_static;
	struct bgp_aggregate *bgp_aggregate;
	char buf[SU_ADDRSTRLEN];

	if ((safi == SAFI_MPLS_VPN) || (safi == SAFI_ENCAP)) {
		bgp_config_write_network_vpn(vty, bgp, afi, safi);
		return;
	}

	if (afi == AFI_L2VPN && safi == SAFI_EVPN) {
		bgp_config_write_network_evpn(vty, bgp, afi, safi);
		return;
	}

	/* Network configuration. */
	for (rn = bgp_table_top(bgp->route[afi][safi]); rn;
	     rn = bgp_route_next(rn)) {
		bgp_static = bgp_node_get_bgp_static_info(rn);
		if (bgp_static == NULL)
			continue;

		p = &rn->p;

		vty_out(vty, " network %s/%d",
			inet_ntop(p->family, &p->u.prefix, buf, SU_ADDRSTRLEN),
			p->prefixlen);

		if (bgp_static->label_index != BGP_INVALID_LABEL_INDEX)
			vty_out(vty, " label-index %u",
				bgp_static->label_index);

		if (bgp_static->rmap.name)
			vty_out(vty, " route-map %s", bgp_static->rmap.name);

		if (bgp_static->backdoor)
			vty_out(vty, " backdoor");

		vty_out(vty, "\n");
	}

	/* Aggregate-address configuration. */
	for (rn = bgp_table_top(bgp->aggregate[afi][safi]); rn;
	     rn = bgp_route_next(rn)) {
		bgp_aggregate = bgp_node_get_bgp_aggregate_info(rn);
		if (bgp_aggregate == NULL)
			continue;

		p = &rn->p;

		vty_out(vty, " aggregate-address %s/%d",
			inet_ntop(p->family, &p->u.prefix, buf, SU_ADDRSTRLEN),
			p->prefixlen);

		if (bgp_aggregate->as_set)
			vty_out(vty, " as-set");

		if (bgp_aggregate->summary_only)
			vty_out(vty, " summary-only");

		if (bgp_aggregate->rmap.name)
			vty_out(vty, " route-map %s", bgp_aggregate->rmap.name);

		vty_out(vty, "\n");
	}
}

void bgp_config_write_distance(struct vty *vty, struct bgp *bgp, afi_t afi,
			       safi_t safi)
{
	struct bgp_node *rn;
	struct bgp_distance *bdistance;

	/* Distance configuration. */
	if (bgp->distance_ebgp[afi][safi] && bgp->distance_ibgp[afi][safi]
	    && bgp->distance_local[afi][safi]
	    && (bgp->distance_ebgp[afi][safi] != ZEBRA_EBGP_DISTANCE_DEFAULT
		|| bgp->distance_ibgp[afi][safi] != ZEBRA_IBGP_DISTANCE_DEFAULT
		|| bgp->distance_local[afi][safi]
			   != ZEBRA_IBGP_DISTANCE_DEFAULT)) {
		vty_out(vty, " distance bgp %d %d %d\n",
			bgp->distance_ebgp[afi][safi],
			bgp->distance_ibgp[afi][safi],
			bgp->distance_local[afi][safi]);
	}

	for (rn = bgp_table_top(bgp_distance_table[afi][safi]); rn;
	     rn = bgp_route_next(rn)) {
		bdistance = bgp_node_get_bgp_distance_info(rn);
		if (bdistance != NULL) {
			char buf[PREFIX_STRLEN];

			vty_out(vty, " distance %d %s %s\n",
				bdistance->distance,
				prefix2str(&rn->p, buf, sizeof(buf)),
				bdistance->access_list ? bdistance->access_list
						       : "");
		}
	}
}

/* Allocate routing table structure and install commands. */
void bgp_route_init(void)
{
	afi_t afi;
	safi_t safi;

	/* Init BGP distance table. */
	FOREACH_AFI_SAFI (afi, safi)
		bgp_distance_table[afi][safi] = bgp_table_init(NULL, afi, safi);

	/* IPv4 BGP commands. */
	install_element(BGP_NODE, &bgp_table_map_cmd);
	install_element(BGP_NODE, &bgp_network_cmd);
	install_element(BGP_NODE, &no_bgp_table_map_cmd);

	install_element(BGP_NODE, &aggregate_address_cmd);
	install_element(BGP_NODE, &aggregate_address_mask_cmd);
	install_element(BGP_NODE, &no_aggregate_address_cmd);
	install_element(BGP_NODE, &no_aggregate_address_mask_cmd);

	/* IPv4 unicast configuration. */
	install_element(BGP_IPV4_NODE, &bgp_table_map_cmd);
	install_element(BGP_IPV4_NODE, &bgp_network_cmd);
	install_element(BGP_IPV4_NODE, &no_bgp_table_map_cmd);

	install_element(BGP_IPV4_NODE, &aggregate_address_cmd);
	install_element(BGP_IPV4_NODE, &aggregate_address_mask_cmd);
	install_element(BGP_IPV4_NODE, &no_aggregate_address_cmd);
	install_element(BGP_IPV4_NODE, &no_aggregate_address_mask_cmd);

	/* IPv4 multicast configuration. */
	install_element(BGP_IPV4M_NODE, &bgp_table_map_cmd);
	install_element(BGP_IPV4M_NODE, &bgp_network_cmd);
	install_element(BGP_IPV4M_NODE, &no_bgp_table_map_cmd);
	install_element(BGP_IPV4M_NODE, &aggregate_address_cmd);
	install_element(BGP_IPV4M_NODE, &aggregate_address_mask_cmd);
	install_element(BGP_IPV4M_NODE, &no_aggregate_address_cmd);
	install_element(BGP_IPV4M_NODE, &no_aggregate_address_mask_cmd);

	/* IPv4 labeled-unicast configuration. */
	install_element(VIEW_NODE, &show_ip_bgp_instance_all_cmd);
	install_element(VIEW_NODE, &show_ip_bgp_cmd);
	install_element(VIEW_NODE, &show_ip_bgp_json_cmd);
	install_element(VIEW_NODE, &show_ip_bgp_route_cmd);
	install_element(VIEW_NODE, &show_ip_bgp_regexp_cmd);

	install_element(VIEW_NODE,
			&show_ip_bgp_instance_neighbor_advertised_route_cmd);
	install_element(VIEW_NODE, &show_ip_bgp_neighbor_routes_cmd);
	install_element(VIEW_NODE,
			&show_ip_bgp_neighbor_received_prefix_filter_cmd);
#ifdef KEEP_OLD_VPN_COMMANDS
	install_element(VIEW_NODE, &show_ip_bgp_vpn_all_route_prefix_cmd);
#endif /* KEEP_OLD_VPN_COMMANDS */
	install_element(VIEW_NODE, &show_bgp_afi_vpn_rd_route_cmd);
	install_element(VIEW_NODE,
			&show_bgp_l2vpn_evpn_route_prefix_cmd);

	/* BGP dampening clear commands */
	install_element(ENABLE_NODE, &clear_ip_bgp_dampening_cmd);
	install_element(ENABLE_NODE, &clear_ip_bgp_dampening_prefix_cmd);

	install_element(ENABLE_NODE, &clear_ip_bgp_dampening_address_cmd);
	install_element(ENABLE_NODE, &clear_ip_bgp_dampening_address_mask_cmd);

	/* prefix count */
	install_element(ENABLE_NODE,
			&show_ip_bgp_instance_neighbor_prefix_counts_cmd);
#ifdef KEEP_OLD_VPN_COMMANDS
	install_element(ENABLE_NODE,
			&show_ip_bgp_vpn_neighbor_prefix_counts_cmd);
#endif /* KEEP_OLD_VPN_COMMANDS */

	/* New config IPv6 BGP commands. */
	install_element(BGP_IPV6_NODE, &bgp_table_map_cmd);
	install_element(BGP_IPV6_NODE, &ipv6_bgp_network_cmd);
	install_element(BGP_IPV6_NODE, &no_bgp_table_map_cmd);

	install_element(BGP_IPV6_NODE, &ipv6_aggregate_address_cmd);
	install_element(BGP_IPV6_NODE, &no_ipv6_aggregate_address_cmd);

	install_element(BGP_IPV6M_NODE, &ipv6_bgp_network_cmd);

	install_element(BGP_NODE, &bgp_distance_cmd);
	install_element(BGP_NODE, &no_bgp_distance_cmd);
	install_element(BGP_NODE, &bgp_distance_source_cmd);
	install_element(BGP_NODE, &no_bgp_distance_source_cmd);
	install_element(BGP_NODE, &bgp_distance_source_access_list_cmd);
	install_element(BGP_NODE, &no_bgp_distance_source_access_list_cmd);
	install_element(BGP_IPV4_NODE, &bgp_distance_cmd);
	install_element(BGP_IPV4_NODE, &no_bgp_distance_cmd);
	install_element(BGP_IPV4_NODE, &bgp_distance_source_cmd);
	install_element(BGP_IPV4_NODE, &no_bgp_distance_source_cmd);
	install_element(BGP_IPV4_NODE, &bgp_distance_source_access_list_cmd);
	install_element(BGP_IPV4_NODE, &no_bgp_distance_source_access_list_cmd);
	install_element(BGP_IPV4M_NODE, &bgp_distance_cmd);
	install_element(BGP_IPV4M_NODE, &no_bgp_distance_cmd);
	install_element(BGP_IPV4M_NODE, &bgp_distance_source_cmd);
	install_element(BGP_IPV4M_NODE, &no_bgp_distance_source_cmd);
	install_element(BGP_IPV4M_NODE, &bgp_distance_source_access_list_cmd);
	install_element(BGP_IPV4M_NODE,
			&no_bgp_distance_source_access_list_cmd);
	install_element(BGP_IPV6_NODE, &bgp_distance_cmd);
	install_element(BGP_IPV6_NODE, &no_bgp_distance_cmd);
	install_element(BGP_IPV6_NODE, &ipv6_bgp_distance_source_cmd);
	install_element(BGP_IPV6_NODE, &no_ipv6_bgp_distance_source_cmd);
	install_element(BGP_IPV6_NODE,
			&ipv6_bgp_distance_source_access_list_cmd);
	install_element(BGP_IPV6_NODE,
			&no_ipv6_bgp_distance_source_access_list_cmd);
	install_element(BGP_IPV6M_NODE, &bgp_distance_cmd);
	install_element(BGP_IPV6M_NODE, &no_bgp_distance_cmd);
	install_element(BGP_IPV6M_NODE, &ipv6_bgp_distance_source_cmd);
	install_element(BGP_IPV6M_NODE, &no_ipv6_bgp_distance_source_cmd);
	install_element(BGP_IPV6M_NODE,
			&ipv6_bgp_distance_source_access_list_cmd);
	install_element(BGP_IPV6M_NODE,
			&no_ipv6_bgp_distance_source_access_list_cmd);

	install_element(BGP_NODE, &bgp_damp_set_cmd);
	install_element(BGP_NODE, &bgp_damp_unset_cmd);
	install_element(BGP_IPV4_NODE, &bgp_damp_set_cmd);
	install_element(BGP_IPV4_NODE, &bgp_damp_unset_cmd);

	/* IPv4 Multicast Mode */
	install_element(BGP_IPV4M_NODE, &bgp_damp_set_cmd);
	install_element(BGP_IPV4M_NODE, &bgp_damp_unset_cmd);

	/* Large Communities */
	install_element(VIEW_NODE, &show_ip_bgp_large_community_list_cmd);
	install_element(VIEW_NODE, &show_ip_bgp_large_community_cmd);

	/* show bgp ipv4 flowspec detailed */
	install_element(VIEW_NODE, &show_ip_bgp_flowspec_routes_detailed_cmd);

	install_element(VIEW_NODE, &show_bgp_peerhash_cmd);
}

void bgp_route_finish(void)
{
	afi_t afi;
	safi_t safi;

	FOREACH_AFI_SAFI (afi, safi) {
		bgp_table_unlock(bgp_distance_table[afi][safi]);
		bgp_distance_table[afi][safi] = NULL;
	}
}