After calling `rib_unlink` the variable `re` will point to `free()`d
memory, so don't attempt to use it after this point.
Found by Coverity Scan (Coverity ID 1519784)
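A minimal sketch of the pattern being fixed (variable names as in the surrounding zebra code):
```
	rib_unlink(rn, re);
	/* re now points at freed memory; NULL it so any later
	 * dereference fails loudly instead of silently reading
	 * freed data */
	re = NULL;
```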
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
When FRR receives a notification from the kernel about a route
offload success/failure, the metric being reported is not
going to be correct since we may not know it appropriately
at this point in time. Set the metric to something
appropriate if we can.
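A rough sketch of the intent, assuming the dplane context exposes the kernel-reported value via an accessor along the lines of dplane_ctx_get_metric():
```
	/* sketch: take the metric from the kernel notification rather
	 * than a locally computed value we may not know yet */
	re->metric = dplane_ctx_get_metric(ctx);
```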
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When we are notified by the kernel about a route being offloaded
or not, correctly set the distance.
Ticket: CM-33097
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The early route queue has a series of `struct zebra_early_route *`
entries. Zebra is treating this memory as just a `struct route_entry`.
This is wrong. Correct this to free the memory correctly.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The wq->spec.errorfunc is never used in the code.
It's been in the code base since 2005 and I also
do not remember ever seeing it called. No
workqueue process function ever returns an error.
Since it's not used let's just remove it from the
code base.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Read from the fpm dplane a route update that will
include status about whether or not the asic was
successful in offloading the route.
Have this data passed up to zebra for processing and disseminate
this data as appropriate.
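A rough sketch of the flag mapping (ZEBRA_FLAG_OFFLOADED and ZEBRA_FLAG_OFFLOAD_FAILED exist in lib/zclient.h; the decode path shown is illustrative):
```
	/* sketch: translate the kernel's offload result onto the route */
	if (rtm->rtm_flags & RTM_F_OFFLOAD)
		SET_FLAG(re->flags, ZEBRA_FLAG_OFFLOADED);
	else
		SET_FLAG(re->flags, ZEBRA_FLAG_OFFLOAD_FAILED);
```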
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Volta submitted notification changes for the dplane that had a
special use case for their system. Volta is no more, the code
is not being actively developed and from talking with ex-Volta
employees there are no current plans to even maintain this code.
Wrap the special handling of nexthops that their asic-dataplane
did in a bit of code to isolate it and allow for future removal,
as I do not actually believe anyone else is using this code.
Add a CPP_NOTICE several years into the future that will tell us
to remove the code. If someone starts using it then they will
have to notice this variable to set it and hopefully they will
see my CPP_NOTICE and come talk to us. If this is being used then
we can just remove this wrapper.
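The reminder follows FRR's usual CONFDATE/CPP_NOTICE idiom; the date and wording below are illustrative:
```
#if CONFDATE > 20251231
CPP_NOTICE("Remove the isolated Volta-specific notification handling if still unused")
#endif
```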
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This allows Zebra to manage QDISC, TCLASS and TFILTER in the kernel
and do cleanup jobs when it starts up.
Signed-off-by: Siger Yang <siger.yang@outlook.com>
When zebra receives routes from upper level protocols it decodes the
zapi message and places the routes on the metaQ for processing. Suppose
we have a route A that is already installed by some routing protocol.
And there is a route B that has a nexthop that will be recursively
resolved through A. Imagine if a route replace operation for A is
going to happen from an upper level protocol at about the same time
the route B is going to be installed into zebra. If these routes
are received, and decoded, at about the same time there exists a
chance that the metaQ will contain both of them at the same time.
If the order of installation is [ B, A ]. B will be resolved
correctly through A and installed, A will be processed and
re-installed into the FIB. If the nexthops have changed for
A then the owner of B should be notified about the change (and B
can do the correct action here and decide to withdraw or re-install).
Now imagine if the order of routes received for processing on the
metaQ is [ A, B ]. A will be received, processed and sent to the
dataplane for reinstall. B will then be pulled off the metaQ and
fail the install since A is in a `not Installed` state.
Let's loosen the restriction in nexthop resolution for B such
that if the route we are dependent on is a route replace operation
we allow the resolution to succeed. This requires zebra to track a new
route state (ROUTE_ENTRY_ROUTE_REPLACING) that can be looked at
during nexthop resolution. I believe this is ok because A is
a route replace operation, which could result in this:
-route install failed, in which case B should be nht'ing and
will receive the nht failure and the upper level protocol should
remove B.
-route install succeeded, no nexthop changes. In this case
allowing the resolution for B is ok, NHT will not notify the upper
level protocol so no action is needed.
-route install succeeded, nexthops changed. In this case
allowing the resolution for B is ok, NHT will notify the upper
level protocol and it can decide to reinstall B or not based
upon its own algorithm.
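In sketch form, the loosened check during nexthop resolution might look like this (surrounding logic simplified; `match` is the route being resolved through):
```
	/* sketch: allow resolving through a route that is either
	 * installed or in the middle of a route replace */
	if (!CHECK_FLAG(match->status, ROUTE_ENTRY_INSTALLED) &&
	    !CHECK_FLAG(match->status, ROUTE_ENTRY_ROUTE_REPLACING))
		return 0;
```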
This set of events was found by the bgp_distance_change topotest(s).
Effectively the tests were looking for the bug (A, B order in the metaQ)
as the `correct` state. When under very heavy load, the A, B ordering
caused A to just be installed and fully resolved in the dataplane before
B was gotten to (which is entirely possible).
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Fix issue #11996.
When removing a VRF (all routes of this VRF), zebra mistakenly forgot to check
whether its routes were in the update queue of FPM. So the FPM module would
crash while dealing with these routes, which were already freed.
Add a new HOOK `rib_shutdown()`; `zebra_rtable_node_cleanup()` will use it
to remove these routes from the update queue of the FPM module before freeing them.
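A sketch of the hook wiring, using FRR's DEFINE_HOOK/hook_call pattern (the cleanup body here is simplified):
```
DEFINE_HOOK(rib_shutdown, (struct route_node *rn), (rn));

static void zebra_rtable_node_cleanup(struct route_table *table,
				      struct route_node *node)
{
	struct route_entry *re, *next;

	RNODE_FOREACH_RE_SAFE (node, re, next) {
		/* let the FPM module drop any queued reference
		 * before the route entry is freed */
		hook_call(rib_shutdown, node);
		rib_unlink(node, re);
	}
}
```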
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Currently if an operator does this operation:
sharpd@eva ~/frr8> sudo ip nexthop add id 5000 via 192.168.119.44 dev enp39s0 ; sudo ip route add 10.0.0.1 nhid 5000
2022/06/30 08:52:40 ZEBRA: [ZHQK5-J9M1R] proto2zebra: Please add this protocol(0) to proper rt_netlink.c handling
2022/06/30 08:52:40 ZEBRA: [PS16P-365FK][EC 4043309076] Zebra failed to find the nexthop hash entry for id=5000 in a route entry
sharpd@eva ~/frr8> vtysh -c "show ip route 10.0.0.1"
Routing entry for 0.0.0.0/0
Known via "kernel", distance 0, metric 100, best
Last update 00:01:58 ago
* 192.168.119.1, via enp39s0
The route is dropped by zebra with no warnings. This is not good,
but unlikely to happen at this point in time. In order to fix
this issue route processing from inputs needs to happen after nexthop
group processing from inputs. This was not possible because
nexthop groups are placed on the metaQ. As such the above
nexthop group creation is placed on the metaQ for processing
in META_QUEUE_NHG, while the route is read in and processed
immediately. The nexthop group is not found (not processed yet!)
and the route is dropped in zebra.
Modify the code to do early route validity processing on the
MetaQ. This preserves the order of operations.
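A sketch of the resulting sub-queue ordering (names and positions illustrative, not the full list):
```
enum meta_queue_indexes {
	META_QUEUE_NHG,		/* nexthop groups are processed first */
	META_QUEUE_EARLY_ROUTE,	/* zapi routes validated here, after NHGs */
	META_QUEUE_CONNECTED,
	META_QUEUE_KERNEL,
	META_QUEUE_OTHER,
};
```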
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Convert label processing that comes from zapi messages
into being handled by the meta-Q. This is because early
route processing is going to be moved to the meta-Q as
well and we will have a chicken and egg problem without
moving this code to be processed by the meta-Q.
Ordering of messages from ospf as an example:
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:48] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:48] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:48] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:48] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:62] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:43] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:61] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_ROUTE_ADD:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_MPLS_LABELS_REPLACE:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_MPLS_LABELS_REPLACE:0:66] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_MPLS_LABELS_REPLACE:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_MPLS_LABELS_REPLACE:0:47] comes from socket [36]
2022/08/09 08:55:52.740 ZEBRA: [YXG8K-BCYMV] zebra message[ZEBRA_MPLS_LABELS_REPLACE:0:47] comes from socket [36]
The ZEBRA_MPLS_LABELS_REPLACE messages immediately turn around and attempt to replace nexthop labels on routes that
were just added. If the route add is placed on the metaQ, the route will not exist yet and as such the label replace
will fail.
Modify the zebra code to take the label operations and place them on the metaQ as well.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This commit implements necessary netlink encoders for traffic control
including QDISC, TCLASS and TFILTER, and adds basic dplane operations.
Co-authored-by: Stephen Worley <sworley@nvidia.com>
Signed-off-by: Siger Yang <siger.yang@outlook.com>
For whatever reason, ZEBRA_ROUTE_SYSTEM routes were being processed
last. Since a system route is just another kernel route type, let's
just switch it to be processed at the same time as kernel routes.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There were more than a few places where the NHG meta
queue was not being explicitly called out. Let's
be consistent and use the same nomenclature as much
as possible when talking about metaQ's.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
convert:
frr_with_mutex(..)
to:
frr_with_mutex (..)
To make all our code agree with what clang-format is going to produce
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
New output example:
2022-07-03 09:40:29.310 [DEBG] zebra: [JF0K0-DVHWH] rib_meta_queue_add: (0:254):4.5.6.8/32: queued rn 0x55937f586ee0 into sub-queue Kernel Routes
2022-07-03 09:40:29.321 [DEBG] zebra: [HH6N2-PDCJS] default(0:254):4.5.6.8/32 rn 0x55937f586ee0 dequeued from sub-queue Kernel Routes
Let's make it a bit more human readable.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Instead of having a global allow_delete, move it to
where it belongs in the zrouter data structure.
Additionally, show this data in `show zebra`.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The rib_process_dplane_results function was having each
sub-function handler process the results and then
free the ctx. That's lots of functionality that needs to remember
to free the context. Let's just free it in the main loop.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Since the hook for the old fpm is already called inside
`rib_uninstall_kernel()`, this call outside it is redundant. Just remove it.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Multipath route may have mixed nexthops of EVPN and IP unicast. Move
EVPN flag to nexthop to support such cases.
Signed-off-by: Xiao Liang <shaw.leon@gmail.com>
Add support for setting the protodown reason code.
829eb208e8
These patches handle all our netlink code for setting the reason.
For protodown reason we only set `frr` as the reason externally
but internally we have more descriptive reasoning available via
`show interface IFNAME`. The kernel only provides a bitwidth of 32
that all userspace programs have to share so this makes the most sense.
Since this is new functionality, it needs to be added to the dplane
pthread instead. So these patches also move the protodown setting we
were doing before into the dplane pthread. For this, we abstract it a
bit more to make it a general interface LINK update dplane API. This
API can be expanded to support general link creation/updating when/if
someone ever adds that code.
We also move to a more common entrypoint for evpn-mh and zapi clients
like vrrpd. They both call common code now to set our internal flags
for protodown and protodown reason.
Also add debugging code for dumping netlink packets with
protodown/protodown_reason.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
FRR will crash when the re->type is ZEBRA_ROUTE_ALL and it
is inserted into the meta-queue. Let's just put some basic
code in place to prevent a crash from happening. No routing
protocol should be using ZEBRA_ROUTE_ALL as a value but
bugs do happen. Let's just accept the weird route type
gracefully and move on.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use the dataplane to query and read interface NETCONF data;
add netconf-oriented data to the dplane context object, and
add accessors for it. Add handler for incoming update
processing.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
When using wait for install there exist situations where
zebra will issue several route change operations to the kernel
but end up in a state where we shouldn't be at the end
due to extra data being received. Example:
a) zebra receives a route change from bgp, installs it and sends
the route to the kernel.
b) zebra receives a route deletion from bgp, removes the
struct route_entry and then sends a deletion to the kernel.
c) zebra receives an asynchronous notification that (a) succeeded
but we treat this as a new route.
This is the ships-in-the-night problem. In this case if we receive a
notification from the kernel about a route that we know nothing
about, and we are not in startup, and we are doing asic offload,
then we can ignore this update.
Ticket: #2563300
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Current code treats all metaqueues as lists of route_node structures.
However, some queues contain other structures that need to be cleaned up
differently. Casting the elements of those queues to struct route_node
and dereferencing them leads to a crash. The crash may be seen when
executing bgp_multi_vrf_topo2.
Fix the code by using the proper list element types.
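A sketch of the fix's shape, with hypothetical helper names (each sub-queue is freed according to the element type it actually stores):
```
	switch (qindex) {
	case META_QUEUE_NHG:	/* holds wrapped NHG contexts */
		nhg_meta_queue_free(subq);	/* hypothetical helper */
		break;
	case META_QUEUE_EVPN:	/* holds EVPN update structures */
		evpn_meta_queue_free(subq);	/* hypothetical helper */
		break;
	default:		/* everything else holds route_nodes */
		rib_meta_queue_free(subq);	/* hypothetical helper */
		break;
	}
```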
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
The name 'opaque' is a little general - call the route_entry
struct 're_opaque' to make it more specific.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
Pass in the route_node that is under consideration
to route_notify_internal to allow calling functions
to reduce stack size as well as data lookups.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The dest_pfx was pretty much only ever used for
debug output and FRR already knows the rn, so
use that instead.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The dest_p and src_p values were only ever used for
debugs and %pFX, when we already have the rn.
There is no need to do this lookup.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
FRR is passing around a bunch of data that is encapsulated
within the route node. Let's just pass that around instead.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
FRR is passing around a bunch of data that is encapsulated
within the route node. Let's just pass that around instead.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Topology:
IXIA-----(ens192)FRR(ens224)------iXIA
Configuration:
1. Create 8 sub-interfaces on ens192 under Default VRF and configure 8
EBGP session between FRR and IXIA.
2. Create 1000 sub-interfaces on ens224 under Default VRF and configure
1000 EBGP session between FRR and IXIA.
3. 2M prefixes distributed from the left side Ixia, each with 8 ECMP paths.
4. So in total, there are 2M prefixes * 8 ECMP = 16M prefix entries
in RIB and FIB.
Issue:
Shut ens192 and ens224; this takes 1hr 15 mins to clean up the routes.
Root Cause:
In the case of route deletion, if the particular route node has an
nht count of 0, we still go to the parent and do an nht evaluation,
which is not needed.
Fix:
If the deleted route node has an nht count > 0, then do an nht
evaluation on the parent node.
With the fix, shutting ens192 and ens224 takes 1 min to clean up
the routes.
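A rough sketch of the check (field and function names illustrative; `dest` is the rib data of the deleted node):
```
	/* sketch: only evaluate nexthops on the parent when the deleted
	 * node actually had tracked nexthops hanging off of it */
	if (rnh_list_count(&dest->nht) > 0)
		zebra_rib_evaluate_rn_nexthops(rn, seq);
```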
Signed-off-by: Sarita Patra <saritap@vmware.com>
In some cases, zebra may install a nexthop-group id that is
different from the id of the nhe struct attached to a
route-entry. This happens for a singleton recursive nexthop,
for example, where a route is installed with the resolving
nexthop's id.
The installed value is the most useful value - that corresponds
to information in the kernel on linux/netlink platforms that
support nhgs. Display both values if they differ in ascii
output, and include both values in the json form.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>
We should always treat the VRF interface as a loopback. Currently, this
is not the case, because in some old pre-VRF code we use if_is_loopback
instead of if_is_loopback_or_vrf. To avoid any future problems, the
proposal is to rename if_is_loopback_or_vrf to if_is_loopback and use it
everywhere. if_is_loopback is renamed to if_is_loopback_exact in case
it's ever needed, but currently it's not used anywhere.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
There is a bit of an impedance mismatch in the sequence of events here.
Depending on the dplane behavior, the `ROUTE_ENTRY_SELECTED` bit will be
inconsistent for rib_process_result().
With an asynchronous dataplane:
0. rib_process() is called
1. rib_install_kernel() is called, dplane action is queued
2. rib_install_kernel() returns
3. rib_process() sets the SELECTED bit appropriately, returns
4. dplane is done, triggers rib_process_result()
5. SELECTED bit is seen in "after" state
(5a. NHT code looks at the SELECTED bit, works correctly.)
With a synchronous dataplane:
0. rib_process() is called
1. rib_install_kernel() is called, dplane action is executed
2. dplane (should) trigger rib_process_result()
3. SELECTED bit is seen in "before" state
(3a. NHT code looks at the SELECTED bit, fails.)
4. rib_install_kernel() returns
5. rib_process() sets the SELECTED bit appropriately, too late.
Essentially, poking the dataplane is a sequencing point where control is
handed over to the dplane. Control may or may not return immediately.
Doing /anything/ after triggering the dataplane is a recipe for odd race
conditions.
(FWIW, I'm not sure rib_process_result() is called correctly in the
synchronous case, but that's a separate problem.)
Unfortunately, this change might have some unforeseen side effects. I
haven't dug through the code to see if anything breaks. There
/shouldn't/ be anything looking at the SELECTED bit here, but who knows.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
rib_update() was mallocing memory then attempting to schedule,
and if the schedule failed (it was already going to be run)
FRR would then free the memory. Fix this memory usage pattern.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
PIM is going to need to be able to send down the address it is
trying to resolve in the multicast rib. We need a way to signal
this to the end developer. Start the conversion by adding the
ability to have a safi, but only allow SAFI_UNICAST at the moment.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Read incoming interface address change notifications in the
dplane pthread; enqueue the events to the main pthread
for processing. This is netlink-only for now - the bsd
kernel socket path remains unchanged.
Signed-off-by: Mark Stapp <mjs.ietf@gmail.com>
When calling rib_add_multipath_nhe ensure that we have
well-aligned return codes that mean something so that
interested parties can properly handle the situation.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There were a bunch of places in zebra_rib.c where we converted the
route node to a prefix string via srcdest_rnode2str when
we should have been using %pRN. Just
convert over the ones that should use it.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When we are calling rib_process and the route_node
in question has no dest, there is no work to do here
at all. As such we should just return before
attempting to do any other work. This is just a tiny bit
of simplification being done.
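In sketch form (rib_dest_from_rnode() is the existing accessor):
```
	dest = rib_dest_from_rnode(rn);
	if (!dest)
		/* no rib data hanging off this node; nothing to do */
		return;
```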
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Move remote VTEP updates from immediate, inline processing
in their ZAPI message handlers to the main workqueue.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Enqueue incoming vxlan remote macip updates on the main
workqueue, instead of performing the updates immediately,
in-line.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Add workqueue subqueue for EVPN/VxLAN updates; migrate the
evpn route and remote ES processing from their ZAPI handlers
to the workqueue.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Since _rnode_zlog was wrapping zlog(), these messages weren't getting a
unique ID assigned through the xref mechanism. Replace the macro with a
small extension that prints (almost) the same thing.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
This action is initiated by nhrp and has been stubbed since
moving to zebra. Now, a netlink request is forged to set
the link interface of a gre interface if that gre interface
does not already have a link interface.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Use the main zebra workqueue for daemon-owned NHGs, in addition
to processing kernel-owned NHGs. The zapi message processing
creates a temporary object that's enqueued to the workqueue,
then processed/installed as part of the workqueue processing.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Do not add a new route type, and consider 0 as a value meaning
that zebra should be the owner.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Instead of directly configuring the neighbor table after reading from
the zapi interface, a zebra dplane context is prepared to host the
interface and the family where the neighbor table is updated. Also,
some other fields are hosted: app_probes, ucast_probes, and mcast_probes.
More information on those fields can be found in the ip-ntable configuration.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
EVPN neighbor operations were already done in the zebra dataplane
framework. Now that NHRP is able to use zebra to perform neighbor IP
operations (by programming link IP operations), handle this operation
under dataplane framework:
- assign two new operations NEIGH_IP_INSTALL and NEIGH_IP_DELETE; this
is reserved for GRE like interfaces:
example: ip neigh add A.B.C.D lladdr E.F.G.H
- use 'struct ipaddr' to store and encode the link ip address
- reuse dplane_neigh_info, and create an union with mac address
- reuse the protocol type and use it for neighbor operations; this
permits storing the daemon originating this neighbor operation.
A new route type is created: ZEBRA_ROUTE_NEIGH.
- the netlink level functions will handle a pointer, and a type; the
type indicates the family of the pointer: AF_INET or AF_INET6 if the
link type is an ip address, mac address otherwise.
- to keep backward compatibility with old queries, as no extension was
done, an option NEIGH_NO_EXTENSION has been put in place
- also, 2 new state flags are used: NUD_PERMANENT and NUD_FAILED.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
This one also needed a bit of shuffling around, but MTYPE_RE is the only
one left used across file boundaries now.
Signed-off-by: David Lamparter <equinox@diac24.net>
Like it has been done for iptable contexts, a zebra dplane context is
created for each ipset/ipset entry event. The zebra_dplane_ctx job is
then enqueued and processed by a separate thread. Like it has been done
for zebra_pbr_iptable context, the ipset and ipset entry contexts are
encapsulated into an union of structures in zebra_dplane_ctx.
There is a specificity in that when storing the ipset_entry structure,
there was a back-pointer to the ipset structure that is necessary
to get some complementary information before calling the hook. The
proposal is to use an ipset_entry_info structure next to the ipset_entry,
in the zebra_dplane context. That information is used for ipset_entry
processing. The ipset name and the ipset type are the only fields
necessary.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The iptable processing was not handled in the remote dataplane, and was
directly processed by the thread in charge of zapi calls. Now that call
can be handled in the separate zebra_dplane thread. Once a
zebra_dplane_ctx is allocated for iptable handling, the hook call is
performed later. Subsequently, a return code may be triggered to the
zclient interface if any problem occurs when calling the hook.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Neither tabs nor newlines are acceptable in syslog messages. They also
break line-based parsing of file logs.
Signed-off-by: David Lamparter <equinox@diac24.net>
When we need to cause a reprocessing of data the code currently
marks all routes as needing to be looked at. Modify the
rib_update_table code to allow us to specify a specific route
type we only want to reprocess. At this point none
of the code behaves differently; this is just setup
for a future code change.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The re->flags and re->status in debugs were being dumped as hex values.
I can never quickly decode this. Here is an idea. Let's let FRR do
it for me.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In the case where a route's nexthops cannot be resolved as part
of route processing, immediately notify the upper level protocol
that their routes failed to install if they are interested in
being informed about this issue.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In rib_handle_nhg_replace, do not use `new` as a parameter name, to
allow compilation of c++ code including zebra headers.
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
Just gather the opaque data into the route entry. Later
commits will display this data for end users as well as
send it down.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Zebra accumulates route-entry objects and then processes them
as a group. If that rib processing is delayed, because the
dataplane/fib programming has built up a queue e.g., zebra can
hold multiple deleted route objects in memory. At scale, this can
be a problem. Delete unneeded route entries promptly, if they
can't contribute to rib processing.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Add a command that allows FRR to know it's being used with
an underlying asic offload, from the linux kernel perspective.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Issue:
The bgp routes learnt from peers which are not installed in the kernel are
advertised to peers. This can cause routers to send traffic to these
destinations only to get dropped. The fix is to provide a configurable
option "bgp suppress-fib-pending". When the option is enabled, bgp will
advertise routes only if they are successfully installed in the kernel.
Fix (Part 1):
* Added message ZEBRA_ROUTE_NOTIFY_REQUEST used by client to request
FIB install status for routes
* Added AFI/SAFI to ZAPI messages
* Modified the functions zapi_route_notify_decode(), zsend_route_notify_owner()
and route_notify_internal() to include AFI, SAFI as parameters
Signed-off-by: kssoman <somanks@gmail.com>
When we get a route for installation via any method we should
consolidate on 32 bits as the flag size, since we actually have
more than 8 bits of data to pass around.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This includes -
1. non-DF block filter
2. List of es-peers that need to be blocked per-access port (for
split horizon filtering)
3. Backup nexthop group to failover local-es via the VxLAN overlay
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
We are loading a buffer with the prefix2str results and then
using it in the debugs throughout functions. Replace
with just using %pFX and remove the buffer.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Create appropriate accessor functions for the rn->lock
data. We should be accessing this data through accessor
functions since it is private data to the data structure.
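A minimal sketch of such an accessor (the function name is illustrative):
```
/* sketch: callers read the lock count through this rather than
 * poking rn->lock directly */
static inline unsigned int route_node_get_lock_count(const struct route_node *rn)
{
	return rn->lock;
}
```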
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When zebra is running with debugs turned on there
is a use after free reported by the address sanitizer:
2020/10/16 12:58:02 ZEBRA: rib_delnode: (0:254):4.5.6.16/32: rn 0x60b000026f20, re 0x6080000131a0, removing
2020/10/16 12:58:02 ZEBRA: rib_meta_queue_add: (0:254):4.5.6.16/32: queued rn 0x60b000026f20 into sub-queue 3
=================================================================
==3101430==ERROR: AddressSanitizer: heap-use-after-free on address 0x608000011d28 at pc 0x555555705ab6 bp 0x7fffffffdab0 sp 0x7fffffffdaa8
READ of size 8 at 0x608000011d28 thread T0
#0 0x555555705ab5 in re_list_const_first zebra/rib.h:222
#1 0x555555705b54 in re_list_first zebra/rib.h:222
#2 0x555555711a4f in process_subq_route zebra/zebra_rib.c:2248
#3 0x555555711d2e in process_subq zebra/zebra_rib.c:2286
#4 0x555555711ec7 in meta_queue_process zebra/zebra_rib.c:2320
#5 0x7ffff74701f7 in work_queue_run lib/workqueue.c:291
#6 0x7ffff7450e9c in thread_call lib/thread.c:1581
#7 0x7ffff738eaf7 in frr_run lib/libfrr.c:1099
#8 0x55555561a578 in main zebra/main.c:455
#9 0x7ffff7079cc9 in __libc_start_main ../csu/libc-start.c:308
#10 0x5555555e3429 in _start (/usr/lib/frr/zebra+0x8f429)
0x608000011d28 is located 8 bytes inside of 88-byte region [0x608000011d20,0x608000011d78)
freed by thread T0 here:
#0 0x7ffff768bb6f in __interceptor_free (/lib/x86_64-linux-gnu/libasan.so.6+0xa9b6f)
#1 0x7ffff739ccad in qfree lib/memory.c:129
#2 0x555555709ee4 in rib_gc_dest zebra/zebra_rib.c:746
#3 0x55555570ca76 in rib_process zebra/zebra_rib.c:1240
#4 0x555555711a05 in process_subq_route zebra/zebra_rib.c:2245
#5 0x555555711d2e in process_subq zebra/zebra_rib.c:2286
#6 0x555555711ec7 in meta_queue_process zebra/zebra_rib.c:2320
#7 0x7ffff74701f7 in work_queue_run lib/workqueue.c:291
#8 0x7ffff7450e9c in thread_call lib/thread.c:1581
#9 0x7ffff738eaf7 in frr_run lib/libfrr.c:1099
#10 0x55555561a578 in main zebra/main.c:455
#11 0x7ffff7079cc9 in __libc_start_main ../csu/libc-start.c:308
previously allocated by thread T0 here:
#0 0x7ffff768c037 in calloc (/lib/x86_64-linux-gnu/libasan.so.6+0xaa037)
#1 0x7ffff739cb98 in qcalloc lib/memory.c:110
#2 0x555555712ace in zebra_rib_create_dest zebra/zebra_rib.c:2515
#3 0x555555712c6c in rib_link zebra/zebra_rib.c:2576
#4 0x555555712faa in rib_addnode zebra/zebra_rib.c:2607
#5 0x555555715bf0 in rib_add_multipath_nhe zebra/zebra_rib.c:3012
#6 0x555555715f56 in rib_add_multipath zebra/zebra_rib.c:3049
#7 0x55555571788b in rib_add zebra/zebra_rib.c:3327
#8 0x5555555e584a in connected_up zebra/connected.c:254
#9 0x5555555e42ff in connected_announce zebra/connected.c:94
#10 0x5555555e4fd3 in connected_update zebra/connected.c:195
#11 0x5555555e61ad in connected_add_ipv4 zebra/connected.c:340
#12 0x5555555f26f5 in netlink_interface_addr zebra/if_netlink.c:1213
#13 0x55555560f756 in netlink_information_fetch zebra/kernel_netlink.c:350
#14 0x555555612e49 in netlink_parse_info zebra/kernel_netlink.c:941
#15 0x55555560f9f1 in kernel_read zebra/kernel_netlink.c:402
#16 0x7ffff7450e9c in thread_call lib/thread.c:1581
#17 0x7ffff738eaf7 in frr_run lib/libfrr.c:1099
#18 0x55555561a578 in main zebra/main.c:455
#19 0x7ffff7079cc9 in __libc_start_main ../csu/libc-start.c:308
SUMMARY: AddressSanitizer: heap-use-after-free zebra/rib.h:222 in re_list_const_first
This is happening because we are using the dest pointer after a call into
rib_gc_dest. In process_subq_route, we call rib_process() and if the
dest is deleted the dest pointer is now garbage. We must reload the
dest pointer in this case.
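In sketch form, the fix in process_subq_route:
```
	rib_process(rnode);
	/* rib_process() may call rib_gc_dest(), freeing the old dest;
	 * reload the pointer before using it again */
	dest = rib_dest_from_rnode(rnode);
```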
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
We support configuration of multiple addresses in the same
subnet on a single interface: make sure that zebra supports
multiple instances of the corresponding connected route.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
During quick ifdown/ifup events from the linux kernel there
exists a situation where a prefix that has both a kernel route
and a static route can be queued up on the meta-q. Suppose the static
route happens to point at a connected route for nexthop resolution
and we receive a series of quick up/down events *after* the
static route and kernel route are queued up for rib reprocessing.
Since the static route and kernel route are queued on meta-q 1
and the connected route is also on meta-q 1, there exists a situation
where the connected route will be resolved after the static route
fails to resolve, leaving the static route in an unresolved state.
Add a new queue level and put connected routes on their own level,
since they are the fundamental building blocks of pretty much
all the other routes.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When zebra is processing routes to determine what to send
to the rib, suppose we have two routes: (a) a route processed
earlier, none of whose nexthops were active, and (b)
a route that has good nexthops but a worse admin distance.
rib_process would not relook at (a)'s nexthops because
the ROUTE_ENTRY_CHANGED flag was not true and it would
win when compared to (b) because its admin distance
was better, leaving us with a state where we would
attempt and fail to install route (a) because it
was not valid.
Modify the code to consider the number of nexthops
we have as a determiner of whether we can use the route.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In rib_process_update_fib, the function is sent two route entries:
the old (previously installed) and the new (the one to install).
When the function detects that the new one is unusable because
the number of nexthops that are usable for that route is 0,
then we uninstall the old route. The problem here is that
we should not attempt to uninstall any route that is
not owned by FRR. Modify the code to not attempt
this behavior.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Fix some reference counting issues seen when replacing
a NHG and deleting one.
For replacement, we should end with the same refcnt on the new
one.
For delete, it's the caller's job to decrement its ref after
it's done with it.
Further, update routes in the rib with the new pointer after replace.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
Add code to properly handle routes sent with NHG ID rather
than a nexthop_group.
For now, we separate this from backup nexthop handling since that
should probably be added to the nhg_proto_add calls.
Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
When we get a rib deletion event and we already have
that particular route node in the queue to be reprocessed,
just note that someone from kernel land has done us dirty
and allow it to be cleaned up by normal processing
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Imagine a situation where an interface is bouncing up/down.
The interface comes up and daemons like pbr will get a nht
tracking callback for a connected interface up and will install
the routes down to zebra. At this same time the interface can
go down. But since zebra is busy handling route changes (from pbr)
it has not read the netlink message and can get into a situation
where the route resolves properly and then we attempt to install
it into the kernel (which is rejected). If the interface
bounces back up fast at this point, the down then up netlink
message will be read and create two route entries off the connected
route node. Zebra will then enqueue both route entries for future processing.
After this processing happens the down/up is collapsed into an up
and nexthop tracking sees no changes and does not inform any upper
level protocol (in this case pbr) that nexthop tracking has changed.
So pbr still believes the nexthops are good but the routes are not
installed since pbr has taken no action.
Fix this by immediately running rnh when we signal a connected
route entry is scheduled for removal. This should cause
upper level protocols to get a rnh notification for the small
amount of time that the connected route was bouncing around like
a madman.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
During testing it was noticed that routes were considered
installed by zebra, but the kernel did not have the route.
Upon close debugging of the rib it was noticed that FRR
was turning a dplane_ctx_route_init into a success and
FRR was now in a bad state.
2020/08/26 17:55:53.897436 PBR: route_notify_owner: [0.0.0.0/0] Route Removed succeeded for table: 10012
2020/08/26 17:55:53.897572 ZEBRA: 0.0.0.0/0: uptime == 432033, type == 24, instance == 0, table == 10012
2020/08/26 17:55:53.897622 ZEBRA: rib_meta_queue_add: (0:10012):0.0.0.0/0: queued rn 0x5566b0ea7680 into sub-queue 5
2020/08/26 17:55:53.907637 ZEBRA: default(0:10012):0.0.0.0/0: Processing rn 0x5566b0ea7680
2020/08/26 17:55:53.907665 ZEBRA: default(0:10012):0.0.0.0/0: Examine re 0x5566b0d01200 (pbr) status 2 flags 1 dist 200 metric 0
2020/08/26 17:55:53.907702 ZEBRA: default(0:10012):0.0.0.0/0: After processing: old_selected 0x0 new_selected 0x5566b0d01200 old_fib 0x0 new_fib 0x5566b0d01200
2020/08/26 17:55:53.907713 ZEBRA: default(0:10012):0.0.0.0/0: Adding route rn 0x5566b0ea7680, re 0x5566b0d01200 (pbr)
2020/08/26 17:55:53.907879 ZEBRA: default(0:10012):0.0.0.0/0: rn 0x5566b0ea7680 dequeued from sub-queue 5
2020/08/26 17:55:53.907943 ZEBRA: netlink_route_multipath: RTM_NEWROUTE 0.0.0.0/0 vrf 0(10012)
2020/08/26 17:55:53.910756 ZEBRA: default(0:10012):0.0.0.0/0 Processing dplane result ctx 0x5566b0ea82f0, op ROUTE_INSTALL result SUCCESS
2020/08/26 17:55:53.910769 ZEBRA: update_from_ctx: default(0:10012):0.0.0.0/0: SELECTED, re 0x5566b0d01200
2020/08/26 17:55:53.910785 ZEBRA: default(0:10012):0.0.0.0/0 update_from_ctx(): no fib nhg
2020/08/26 17:55:53.910793 ZEBRA: default(0:10012):0.0.0.0/0 update_from_ctx(): rib nhg matched, changed 'true'
2020/08/26 17:55:53.910802 ZEBRA: (0:10012):0.0.0.0/0: Redist update re 0x5566b0d01200 (pbr), old 0x0 (None)
2020/08/26 17:55:53.910812 ZEBRA: Notifying Owner: 24 about prefix 0.0.0.0/0(10012) 2 vrf: 0
2020/08/26 17:55:53.910912 PBR: route_notify_owner: [0.0.0.0/0] Route installed succeeded for table: 10012
2020/08/26 17:55:55.400516 ZEBRA: RTM_DELROUTE 0.0.0.0/0 vrf default(0) table_id: 10012 metric: 20 Admin Distance: 0
2020/08/26 17:55:55.400527 ZEBRA: rib_delete: (0:10012):0.0.0.0/0: rn 0x5566b0ea7680, re 0x5566b0d01200 (pbr) was deleted from kernel, adding
We were receiving a notification from the kernel that the route was deleted and deciding
that we needed to reinstall it. At that point in time when it got into the dplane
handlers to convert it to the dplane pthread, the dplane decided to drop the request,
convert it to a success, and not do anything.
This code change removes the conversion from this failure to success and
notifies the upper level about it. After this change the default route
in table 10012 is now properly marked as rejected:
root@mlx-2700-07:mgmt:/var/log/frr# vtysh -c "show ip route table 10012"
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
F - PBR, f - OpenFabric,
> - selected route, * - FIB route, q - queued route, r - rejected route
VRF default table 10012:
F>r 0.0.0.0/0 [200/0] via 172.168.1.164, isp2-uplink (vrf PUBLIC), weight 1, 00:24:48
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
There are a bunch of places where the table id is not being output
in debug messages for routing changes. Add in the table id we
are operating on. This is especially useful for the case where
pbr is working.
Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
We can make the Linux kernel send an ARP/NDP request by adding
a neighbour with the 'NUD_INCOMPLETE' state and the 'NTF_USE' flag.
This commit adds a new dataplane operation as well as a new zapi message
to allow other daemons to send ARP/NDP requests.
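A sketch of the netlink neighbour header that triggers the probe (message assembly around it omitted; `ifindex` is assumed to be the target interface):
```
#include <linux/neighbour.h>

struct ndmsg ndm = {
	.ndm_family = AF_INET,
	.ndm_ifindex = ifindex,	     /* interface to probe on */
	.ndm_state = NUD_INCOMPLETE, /* entry starts unresolved */
	.ndm_flags = NTF_USE,	     /* ask the kernel to resolve it */
};
```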
Signed-off-by: Jakub Urbańczyk <xthaid@gmail.com>
For the sake of Segment Routing (SR) and Traffic Engineering (TE)
Policies there's a need for additional infrastructure within zebra.
The infrastructure in this PR is supposed to manage such policies
in terms of installing binding SIDs and LSPs. Also it is capable of
managing MPLS labels using the label manager, keeping track of
nexthops (for resolving labels) and notifying interested parties about
changes of a policy/LSP state. Further it enables a route map mechanism
for BGP and SR-TE colors such that learned BGP routes can be mapped
onto SR-TE Policies.
This PR does not introduce any usable features yet, it is just
infrastructure for other upcoming PRs which will introduce 'pathd',
a new SR-TE daemon.
Co-authored-by: Renato Westphal <renato@opensourcerouting.org>
Co-authored-by: GalaxyGorilla <sascha@netdef.org>
Signed-off-by: Sebastien Merle <sebastien@netdef.org>
Add a route_entry flag to indicate the presence of a fib
(installed) list of nexthops - more explicit and clearer.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Initial changes to support a nexthop with multiple backups. Lib
changes to hold a small array in each primary, zapi message
changes to support sending multiple backups, and daemon
changes to show commands to support multiple backups. The config
input for multiple backup indices is not present here.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Remove mid-string line breaks, cf. workflow doc:
.. [#tool_style_conflicts] For example, lines over 80 characters are allowed
for text strings to make it possible to search the code for them: please
see `Linux kernel style (breaking long lines and strings)
<https://www.kernel.org/doc/html/v4.10/process/coding-style.html#breaking-long-lines-and-strings>`_
and `Issue #1794 <https://github.com/FRRouting/frr/issues/1794>`_.
Scripted commit, idempotent to running:
```
python3 tools/stringmangle.py --unwrap `git ls-files | egrep '\.[ch]$'`
```
Signed-off-by: David Lamparter <equinox@diac24.net>
When handling a fib notification event that involves a route
with backup nexthops, be clearer about representing the
installed state of the backups: any installed backup will be
on a dedicated route_entry list.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Improve and centralize some logic used to a) compare two
route_entries, and b) to locate a route_entry that matches
a dplane context object that contains the results of a
fib update. We were not rigorous enough in checking routes'
properties, especially when examining connected routes where
we allow multiple route_entries.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
* Implement new dataplane operations
* Convert existing code to use dataplane context object
* Modify function preparing netlink message to use dataplane
context object
Signed-off-by: Jakub Urbańczyk <xthaid@gmail.com>
Provide a way for the data plane to indicate pseudowire
status (such as: not forwarding, AC failure).
On a data plane pseudowire install failure, data plane
sets the pseudowire status.
Zebra relays the pseudowire status to LDP.
LDP includes the pseudowire status in the LDP notification
to the LDP peer.
Signed-off-by: Karen Schoener <karen@voltanet.io>
To make sure that c++ code can include them, avoid using reserved
keywords like 'delete' or 'new'.
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
An async route notification can indicate that installation
has failed, but the handling code wasn't dealing with that
possibility correctly.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Replace sprintf with snprintf where straightforward to do so.
- sprintf's into local scope buffers of known size are replaced with the
equivalent snprintf call
- snprintf's into local scope buffers of known size that use the buffer
size expression now use sizeof(buffer)
- sprintf(buf + strlen(buf), ...) replaced with snprintf() into temp
buffer followed by strlcat
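For the last pattern, a small sketch of the replacement (buffer names illustrative):
```
	char tmp[64];

	/* was: sprintf(buf + strlen(buf), " metric %u", metric); */
	snprintf(tmp, sizeof(tmp), " metric %u", metric);
	strlcat(buf, tmp, sizeof(buf));
```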
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>