/*
 * OSPFd dump routine.
 * Copyright (C) 1999, 2000 Toshiaki Takada
 *
 * This file is part of GNU Zebra.
 *
 * GNU Zebra is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation; either version 2, or (at your option) any
 * later version.
 *
 * GNU Zebra is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; see the file COPYING; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
 */

#include <zebra.h>

#include "monotime.h"
#include "linklist.h"
#include "thread.h"
#include "prefix.h"
#include "command.h"
#include "stream.h"
#include "log.h"
#include "sockopt.h"

#include "ospfd/ospfd.h"
#include "ospfd/ospf_interface.h"
#include "ospfd/ospf_ism.h"
#include "ospfd/ospf_asbr.h"
#include "ospfd/ospf_lsa.h"
#include "ospfd/ospf_lsdb.h"
#include "ospfd/ospf_neighbor.h"
#include "ospfd/ospf_nsm.h"
#include "ospfd/ospf_dump.h"
#include "ospfd/ospf_packet.h"
#include "ospfd/ospf_network.h"

/* Configuration debug option variables. */
unsigned long conf_debug_ospf_packet[5] = {0, 0, 0, 0, 0};
unsigned long conf_debug_ospf_event = 0;
unsigned long conf_debug_ospf_ism = 0;
unsigned long conf_debug_ospf_nsm = 0;
unsigned long conf_debug_ospf_lsa = 0;
unsigned long conf_debug_ospf_zebra = 0;
unsigned long conf_debug_ospf_nssa = 0;
unsigned long conf_debug_ospf_te = 0;

/* Enable debug option variables -- valid only for the current session. */
unsigned long term_debug_ospf_packet[5] = {0, 0, 0, 0, 0};
unsigned long term_debug_ospf_event = 0;
unsigned long term_debug_ospf_ism = 0;
unsigned long term_debug_ospf_nsm = 0;
unsigned long term_debug_ospf_lsa = 0;
unsigned long term_debug_ospf_zebra = 0;
unsigned long term_debug_ospf_nssa = 0;
unsigned long term_debug_ospf_te = 0;

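/* Map a ZEBRA_ROUTE_* redistribution source to a printable name.  ospfd
 * uses ZEBRA_ROUTE_MAX as the pseudo-type for the default route, so that
 * value is reported as "Default"; everything else is resolved through
 * zebra_route_string(). */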
const char *
ospf_redist_string(u_int route_type)
{
  return (route_type == ZEBRA_ROUTE_MAX) ?
    "Default" : zebra_route_string(route_type);
}

#define OSPF_AREA_STRING_MAXLEN 16
const char *
ospf_area_name_string (struct ospf_area *area)
{
  static char buf[OSPF_AREA_STRING_MAXLEN] = "";
  u_int32_t area_id;

  if (!area)
    return "-";

  area_id = ntohl (area->area_id.s_addr);
  snprintf (buf, OSPF_AREA_STRING_MAXLEN, "%d.%d.%d.%d",
            (area_id >> 24) & 0xff, (area_id >> 16) & 0xff,
            (area_id >> 8) & 0xff, area_id & 0xff);
  return buf;
}

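/* As ospf_area_name_string(), but with the area type appended, for
 * example "0.0.0.1 [Stub]" or "0.0.0.2 [NSSA]" (examples are
 * illustrative); plain areas are returned without a suffix. */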
#define OSPF_AREA_DESC_STRING_MAXLEN 23
const char *
ospf_area_desc_string (struct ospf_area *area)
{
  static char buf[OSPF_AREA_DESC_STRING_MAXLEN] = "";
  u_char type;

  if (!area)
    return "(incomplete)";

  type = area->external_routing;
  switch (type)
    {
    case OSPF_AREA_NSSA:
      snprintf (buf, OSPF_AREA_DESC_STRING_MAXLEN, "%s [NSSA]",
                ospf_area_name_string (area));
      break;
    case OSPF_AREA_STUB:
      snprintf (buf, OSPF_AREA_DESC_STRING_MAXLEN, "%s [Stub]",
                ospf_area_name_string (area));
      break;
    default:
      return ospf_area_name_string (area);
    }

  return buf;
}

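/* Render an OSPF interface as "<ifname>:<local address>", for example
 * "eth0:10.0.0.1" (illustrative).  Virtual links are shown by interface
 * name only, and a missing interface or address yields "inactive". */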
#define OSPF_IF_STRING_MAXLEN 40
const char *
ospf_if_name_string (struct ospf_interface *oi)
{
  static char buf[OSPF_IF_STRING_MAXLEN] = "";
  u_int32_t ifaddr;

  if (!oi || !oi->address)
    return "inactive";

  if (oi->type == OSPF_IFTYPE_VIRTUALLINK)
    return oi->ifp->name;

  ifaddr = ntohl (oi->address->u.prefix4.s_addr);
  snprintf (buf, OSPF_IF_STRING_MAXLEN,
            "%s:%d.%d.%d.%d", oi->ifp->name,
            (ifaddr >> 24) & 0xff, (ifaddr >> 16) & 0xff,
            (ifaddr >> 8) & 0xff, ifaddr & 0xff);
  return buf;
}

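/* Write the neighbor's state as "<NSM state>/<ISM role>" into buf, where
 * the role is derived from the DR/BDR election on the attached interface,
 * e.g. "Full/DR" or "2-Way/DROther" (examples are illustrative). */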
void
ospf_nbr_state_message (struct ospf_neighbor *nbr, char *buf, size_t size)
{
  int state;
  struct ospf_interface *oi = nbr->oi;

  if (IPV4_ADDR_SAME (&DR (oi), &nbr->address.u.prefix4))
    state = ISM_DR;
  else if (IPV4_ADDR_SAME (&BDR (oi), &nbr->address.u.prefix4))
    state = ISM_Backup;
  else
    state = ISM_DROther;

  memset (buf, 0, size);

  snprintf (buf, size, "%s/%s",
            lookup_msg(ospf_nsm_state_msg, nbr->state, NULL),
            lookup_msg(ospf_ism_state_msg, state, NULL));
}

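/* Format a relative timeval with sliding precision: long intervals are
 * reported down to weeks/days/hours, short ones down to milliseconds or
 * microseconds.  Note that the timeval pointed to by t is modified in
 * place while it is decomposed.
 *
 * Illustrative example (not exhaustive), assuming a local buffer:
 *
 *   struct timeval tv = { .tv_sec = 93784, .tv_usec = 0 };
 *   char timebuf[16];
 *   ospf_timeval_dump (&tv, timebuf, sizeof (timebuf));
 *
 * yields "1d02h03m"; 90 seconds would give "1m30s" and 250000 usecs
 * "0.250s". */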
const char *
ospf_timeval_dump (struct timeval *t, char *buf, size_t size)
{
  /* Making formatted timer strings. */
#define MINUTE_IN_SECONDS 60
#define HOUR_IN_SECONDS (60*MINUTE_IN_SECONDS)
#define DAY_IN_SECONDS (24*HOUR_IN_SECONDS)
#define WEEK_IN_SECONDS (7*DAY_IN_SECONDS)
  unsigned long w, d, h, m, s, ms, us;

  if (!t)
    return "inactive";

  w = d = h = m = s = ms = us = 0;
  memset (buf, 0, size);

  us = t->tv_usec;
  if (us >= 1000)
    {
      ms = us / 1000;
      us %= 1000;
    }

  if (ms >= 1000)
    {
      t->tv_sec += ms / 1000;
      ms %= 1000;
    }

  if (t->tv_sec > WEEK_IN_SECONDS)
    {
      w = t->tv_sec / WEEK_IN_SECONDS;
      t->tv_sec -= w * WEEK_IN_SECONDS;
    }

  if (t->tv_sec > DAY_IN_SECONDS)
    {
      d = t->tv_sec / DAY_IN_SECONDS;
      t->tv_sec -= d * DAY_IN_SECONDS;
    }

  if (t->tv_sec >= HOUR_IN_SECONDS)
    {
      h = t->tv_sec / HOUR_IN_SECONDS;
      t->tv_sec -= h * HOUR_IN_SECONDS;
    }

  if (t->tv_sec >= MINUTE_IN_SECONDS)
    {
      m = t->tv_sec / MINUTE_IN_SECONDS;
      t->tv_sec -= m * MINUTE_IN_SECONDS;
    }

  if (w > 99)
    snprintf (buf, size, "%ldw%1ldd", w, d);
  else if (w)
    snprintf (buf, size, "%ldw%1ldd%02ldh", w, d, h);
  else if (d)
    snprintf (buf, size, "%1ldd%02ldh%02ldm", d, h, m);
  else if (h)
    snprintf (buf, size, "%ldh%02ldm%02lds", h, m, (long)t->tv_sec);
  else if (m)
    snprintf (buf, size, "%ldm%02lds", m, (long)t->tv_sec);
  else if (ms)
    snprintf (buf, size, "%ld.%03lds", (long)t->tv_sec, ms);
  else
    snprintf (buf, size, "%ld usecs", (long)t->tv_usec);

  return buf;
}

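/* Format the time remaining until a thread timer fires, using
 * ospf_timeval_dump() on the result of monotime_until().  A typical
 * (illustrative) call from debugging code might look like:
 *
 *   char timebuf[9];
 *   zlog_debug ("Hello timer due in %s",
 *               ospf_timer_dump (oi->t_hello, timebuf, sizeof (timebuf)));
 */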
const char *
ospf_timer_dump (struct thread *t, char *buf, size_t size)
{
  struct timeval result;

  if (!t)
    return "inactive";

  monotime_until (&t->u.sands, &result);

  return ospf_timeval_dump (&result, buf, size);
}
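
/* Dump an OSPF Hello packet: network mask, timers, options,
 * priority, DR/BDR and the list of neighbours seen. */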
static void
ospf_packet_hello_dump (struct stream *s, u_int16_t length)
{
  struct ospf_hello *hello;
  int i;

  hello = (struct ospf_hello *) STREAM_PNT (s);

  zlog_debug ("Hello");
  zlog_debug ("  NetworkMask %s", inet_ntoa (hello->network_mask));
  zlog_debug ("  HelloInterval %d", ntohs (hello->hello_interval));
  zlog_debug ("  Options %d (%s)", hello->options,
              ospf_options_dump (hello->options));
  zlog_debug ("  RtrPriority %d", hello->priority);
  zlog_debug ("  RtrDeadInterval %ld", (u_long)ntohl (hello->dead_interval));
  zlog_debug ("  DRouter %s", inet_ntoa (hello->d_router));
  zlog_debug ("  BDRouter %s", inet_ntoa (hello->bd_router));

  length -= OSPF_HEADER_SIZE + OSPF_HELLO_MIN_SIZE;
  zlog_debug ("  # Neighbors %d", length / 4);
  for (i = 0; length > 0; i++, length -= sizeof (struct in_addr))
    zlog_debug ("    Neighbor %s", inet_ntoa (hello->neighbors[i]));
}

static char *
ospf_dd_flags_dump (u_char flags, char *buf, size_t size)
{
  memset (buf, 0, size);

  snprintf (buf, size, "%s|%s|%s",
            (flags & OSPF_DD_FLAG_I) ? "I" : "-",
            (flags & OSPF_DD_FLAG_M) ? "M" : "-",
            (flags & OSPF_DD_FLAG_MS) ? "MS" : "-");

  return buf;
}

static char *
ospf_router_lsa_flags_dump (u_char flags, char *buf, size_t size)
{
  memset (buf, 0, size);

  snprintf (buf, size, "%s|%s|%s",
            (flags & ROUTER_LSA_VIRTUAL) ? "V" : "-",
            (flags & ROUTER_LSA_EXTERNAL) ? "E" : "-",
            (flags & ROUTER_LSA_BORDER) ? "B" : "-");

  return buf;
}
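
/* Dump a Router-LSA body: flags, link count and each link description. */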
static void
ospf_router_lsa_dump (struct stream *s, u_int16_t length)
{
  char buf[BUFSIZ];
  struct router_lsa *rl;
  int i, len;

  rl = (struct router_lsa *) STREAM_PNT (s);

  zlog_debug ("  Router-LSA");
  zlog_debug ("    flags %s",
              ospf_router_lsa_flags_dump (rl->flags, buf, BUFSIZ));
  zlog_debug ("    # links %d", ntohs (rl->links));

  len = ntohs (rl->header.length) - OSPF_LSA_HEADER_SIZE - 4;
  for (i = 0; len > 0; i++)
    {
      zlog_debug ("    Link ID %s", inet_ntoa (rl->link[i].link_id));
      zlog_debug ("    Link Data %s", inet_ntoa (rl->link[i].link_data));
      zlog_debug ("    Type %d", (u_char) rl->link[i].type);
      zlog_debug ("    TOS %d", (u_char) rl->link[i].tos);
      zlog_debug ("    metric %d", ntohs (rl->link[i].metric));

      len -= 12;
    }
}

static void
ospf_network_lsa_dump (struct stream *s, u_int16_t length)
{
  struct network_lsa *nl;
  int i, cnt;

  nl = (struct network_lsa *) STREAM_PNT (s);
  cnt = (ntohs (nl->header.length) - (OSPF_LSA_HEADER_SIZE + 4)) / 4;

  zlog_debug ("  Network-LSA");
  /*
  zlog_debug ("LSA total size %d", ntohs (nl->header.length));
  zlog_debug ("Network-LSA size %d",
              ntohs (nl->header.length) - OSPF_LSA_HEADER_SIZE);
  */
  zlog_debug ("    Network Mask %s", inet_ntoa (nl->mask));
  zlog_debug ("    # Attached Routers %d", cnt);
  for (i = 0; i < cnt; i++)
    zlog_debug ("      Attached Router %s", inet_ntoa (nl->routers[i]));
}

static void
ospf_summary_lsa_dump (struct stream *s, u_int16_t length)
{
  struct summary_lsa *sl;
  int size;
  int i;

  sl = (struct summary_lsa *) STREAM_PNT (s);

  zlog_debug ("  Summary-LSA");
  zlog_debug ("    Network Mask %s", inet_ntoa (sl->mask));

  size = ntohs (sl->header.length) - OSPF_LSA_HEADER_SIZE - 4;
  for (i = 0; size > 0; size -= 4, i++)
    zlog_debug ("    TOS=%d metric %d", sl->tos,
                GET_METRIC (sl->metric));
}
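
/* Dump an AS-External (also used for NSSA) LSA body: network mask plus
 * one metric/forwarding-address/route-tag tuple per TOS entry. */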
static void
ospf_as_external_lsa_dump (struct stream *s, u_int16_t length)
{
  struct as_external_lsa *al;
  int size;
  int i;

  al = (struct as_external_lsa *) STREAM_PNT (s);
  zlog_debug ("  %s", ospf_lsa_type_msg[al->header.type].str);
  zlog_debug ("    Network Mask %s", inet_ntoa (al->mask));

  size = ntohs (al->header.length) - OSPF_LSA_HEADER_SIZE - 4;
  for (i = 0; size > 0; size -= 12, i++)
    {
      zlog_debug ("    bit %s TOS=%d metric %d",
                  IS_EXTERNAL_METRIC (al->e[i].tos) ? "E" : "-",
                  al->e[i].tos & 0x7f, GET_METRIC (al->e[i].metric));
      zlog_debug ("    Forwarding address %s", inet_ntoa (al->e[i].fwd_addr));
      zlog_debug ("    External Route Tag %"ROUTE_TAG_PRI, al->e[i].route_tag);
    }
}
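
/* Dump a run of LSA headers; used by the DD and LS Ack dumpers below. */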
static void
ospf_lsa_header_list_dump (struct stream *s, u_int16_t length)
{
  struct lsa_header *lsa;

  zlog_debug ("  # LSA Headers %d", length / OSPF_LSA_HEADER_SIZE);

  /* LSA Headers. */
  while (length > 0)
    {
      lsa = (struct lsa_header *) STREAM_PNT (s);
      ospf_lsa_header_dump (lsa);

      stream_forward_getp (s, OSPF_LSA_HEADER_SIZE);
      length -= OSPF_LSA_HEADER_SIZE;
    }
}
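
/* Dump a Database Description packet followed by the LSA headers it
 * carries; the stream's get pointer is saved and restored around the walk. */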
static void
ospf_packet_db_desc_dump (struct stream *s, u_int16_t length)
{
  struct ospf_db_desc *dd;
  char dd_flags[8];

  u_int32_t gp;

  gp = stream_get_getp (s);
  dd = (struct ospf_db_desc *) STREAM_PNT (s);

  zlog_debug ("Database Description");
  zlog_debug ("  Interface MTU %d", ntohs (dd->mtu));
  zlog_debug ("  Options %d (%s)", dd->options,
              ospf_options_dump (dd->options));
  zlog_debug ("  Flags %d (%s)", dd->flags,
              ospf_dd_flags_dump (dd->flags, dd_flags, sizeof dd_flags));
  zlog_debug ("  Sequence Number 0x%08lx", (u_long)ntohl (dd->dd_seqnum));

  length -= OSPF_HEADER_SIZE + OSPF_DB_DESC_MIN_SIZE;

  stream_forward_getp (s, OSPF_DB_DESC_MIN_SIZE);

  ospf_lsa_header_list_dump (s, length);

  stream_set_getp (s, gp);
}

static void
ospf_packet_ls_req_dump (struct stream *s, u_int16_t length)
{
  u_int32_t sp;
  u_int32_t ls_type;
  struct in_addr ls_id;
  struct in_addr adv_router;

  sp = stream_get_getp (s);

  length -= OSPF_HEADER_SIZE;

  zlog_debug ("Link State Request");
  zlog_debug ("  # Requests %d", length / 12);

  for (; length > 0; length -= 12)
    {
      ls_type = stream_getl (s);
      ls_id.s_addr = stream_get_ipv4 (s);
      adv_router.s_addr = stream_get_ipv4 (s);

      zlog_debug ("  LS type %d", ls_type);
      zlog_debug ("  Link State ID %s", inet_ntoa (ls_id));
      zlog_debug ("  Advertising Router %s",
                  inet_ntoa (adv_router));
    }

  stream_set_getp (s, sp);
}
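
/* Dump a Link State Update: print each LSA header and its type-specific
 * body, bailing out early if the remaining length looks bogus. */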
static void
ospf_packet_ls_upd_dump (struct stream *s, u_int16_t length)
{
  u_int32_t sp;
  struct lsa_header *lsa;
  int lsa_len;
  u_int32_t count;

  length -= OSPF_HEADER_SIZE;

  sp = stream_get_getp (s);

  count = stream_getl (s);
  length -= 4;

  zlog_debug ("Link State Update");
  zlog_debug ("  # LSAs %d", count);

  while (length > 0 && count > 0)
    {
      if (length < OSPF_HEADER_SIZE || length % 4 != 0)
        {
          zlog_debug ("  Remaining %d bytes; Incorrect length.", length);
          break;
        }

      lsa = (struct lsa_header *) STREAM_PNT (s);
      lsa_len = ntohs (lsa->length);
      ospf_lsa_header_dump (lsa);

      switch (lsa->type)
        {
        case OSPF_ROUTER_LSA:
          ospf_router_lsa_dump (s, length);
          break;
        case OSPF_NETWORK_LSA:
          ospf_network_lsa_dump (s, length);
          break;
        case OSPF_SUMMARY_LSA:
        case OSPF_ASBR_SUMMARY_LSA:
          ospf_summary_lsa_dump (s, length);
          break;
        case OSPF_AS_EXTERNAL_LSA:
          ospf_as_external_lsa_dump (s, length);
          break;
        case OSPF_AS_NSSA_LSA:
          ospf_as_external_lsa_dump (s, length);
          break;
        case OSPF_OPAQUE_LINK_LSA:
        case OSPF_OPAQUE_AREA_LSA:
        case OSPF_OPAQUE_AS_LSA:
          ospf_opaque_lsa_dump (s, length);
          break;
        default:
          break;
        }

      stream_forward_getp (s, lsa_len);
      length -= lsa_len;
      count--;
    }

  stream_set_getp (s, sp);
}

static void
ospf_packet_ls_ack_dump (struct stream *s, u_int16_t length)
{
  u_int32_t sp;

  length -= OSPF_HEADER_SIZE;
  sp = stream_get_getp (s);

  zlog_debug ("Link State Acknowledgment");
  ospf_lsa_header_list_dump (s, length);

  stream_set_getp (s, sp);
}

/* Expects header to be in host order */
void
ospf_ip_header_dump (struct ip *iph)
{
  /* IP Header dump. */
  zlog_debug ("ip_v %d", iph->ip_v);
  zlog_debug ("ip_hl %d", iph->ip_hl);
  zlog_debug ("ip_tos %d", iph->ip_tos);
  zlog_debug ("ip_len %d", iph->ip_len);
  zlog_debug ("ip_id %u", (u_int32_t) iph->ip_id);
  zlog_debug ("ip_off %u", (u_int32_t) iph->ip_off);
  zlog_debug ("ip_ttl %d", iph->ip_ttl);
  zlog_debug ("ip_p %d", iph->ip_p);
  zlog_debug ("ip_sum 0x%x", (u_int32_t) iph->ip_sum);
  zlog_debug ("ip_src %s", inet_ntoa (iph->ip_src));
  zlog_debug ("ip_dst %s", inet_ntoa (iph->ip_dst));
}
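
/* Dump the common OSPF packet header, including the authentication
 * data appropriate to the packet's auth type. */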
static void
ospf_header_dump (struct ospf_header *ospfh)
{
  char buf[9];
  u_int16_t auth_type = ntohs (ospfh->auth_type);

  zlog_debug ("Header");
  zlog_debug ("  Version %d", ospfh->version);
  zlog_debug ("  Type %d (%s)", ospfh->type,
              lookup_msg(ospf_packet_type_str, ospfh->type, NULL));
  zlog_debug ("  Packet Len %d", ntohs (ospfh->length));
  zlog_debug ("  Router ID %s", inet_ntoa (ospfh->router_id));
  zlog_debug ("  Area ID %s", inet_ntoa (ospfh->area_id));
  zlog_debug ("  Checksum 0x%x", ntohs (ospfh->checksum));
  zlog_debug ("  AuType %s", lookup_msg(ospf_auth_type_str, auth_type, NULL));

  switch (auth_type)
    {
    case OSPF_AUTH_NULL:
      break;
    case OSPF_AUTH_SIMPLE:
      memset (buf, 0, 9);
      strncpy (buf, (char *) ospfh->u.auth_data, 8);
      zlog_debug ("  Simple Password %s", buf);
      break;
    case OSPF_AUTH_CRYPTOGRAPHIC:
      zlog_debug ("  Cryptographic Authentication");
      zlog_debug ("  Key ID %d", ospfh->u.crypt.key_id);
      zlog_debug ("  Auth Data Len %d", ospfh->u.crypt.auth_data_len);
      zlog_debug ("  Sequence number %ld",
                  (u_long)ntohl (ospfh->u.crypt.crypt_seqnum));
      break;
    default:
      zlog_debug ("* This is not a supported authentication type");
      break;
    }
}
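
/* Dump a full OSPF packet from 's' when detail debugging is enabled for
 * its type; the stream's get pointer is restored before returning. */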
void
ospf_packet_dump (struct stream *s)
{
  struct ospf_header *ospfh;
  unsigned long gp;

  /* Preserve pointer. */
  gp = stream_get_getp (s);

  /* OSPF Header dump. */
  ospfh = (struct ospf_header *) STREAM_PNT (s);

  /* Until detail flag is set, return. */
  if (!(term_debug_ospf_packet[ospfh->type - 1] & OSPF_DEBUG_DETAIL))
    return;

  /* Show OSPF header detail. */
  ospf_header_dump (ospfh);
  stream_forward_getp (s, OSPF_HEADER_SIZE);

  switch (ospfh->type)
    {
    case OSPF_MSG_HELLO:
      ospf_packet_hello_dump (s, ntohs (ospfh->length));
      break;
    case OSPF_MSG_DB_DESC:
      ospf_packet_db_desc_dump (s, ntohs (ospfh->length));
      break;
    case OSPF_MSG_LS_REQ:
      ospf_packet_ls_req_dump (s, ntohs (ospfh->length));
      break;
    case OSPF_MSG_LS_UPD:
      ospf_packet_ls_upd_dump (s, ntohs (ospfh->length));
      break;
    case OSPF_MSG_LS_ACK:
      ospf_packet_ls_ack_dump (s, ntohs (ospfh->length));
      break;
    default:
      break;
    }

  stream_set_getp (s, gp);
}
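
/* "debug ospf [(1-65535)] packet ...": map the packet keyword to a type bit,
 * derive send/recv/detail flags from the trailing arguments and switch the
 * matching per-type debug flags on (DEBUG_PACKET_ON from CONFIG_NODE,
 * TERM_DEBUG_PACKET_ON otherwise). */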
DEFUN (debug_ospf_packet,
       debug_ospf_packet_cmd,
       "debug ospf [(1-65535)] packet <hello|dd|ls-request|ls-update|ls-ack|all> [<send [detail]|recv [detail]|detail>]",
       DEBUG_STR
       OSPF_STR
       "Instance ID\n"
       "OSPF packets\n"
       "OSPF Hello\n"
       "OSPF Database Description\n"
       "OSPF Link State Request\n"
       "OSPF Link State Update\n"
       "OSPF Link State Acknowledgment\n"
       "OSPF all packets\n"
       "Packet sent\n"
       "Detail Information\n"
       "Packet received\n"
       "Detail Information\n"
       "Detail Information\n")
{
  int inst = (argv[2]->type == RANGE_TKN) ? 1 : 0;
  int detail = strmatch (argv[argc - 1]->text, "detail");
  int send = strmatch (argv[argc - (1+detail)]->text, "send");
  int recv = strmatch (argv[argc - (1+detail)]->text, "recv");
  char *packet = argv[3 + inst]->text;

  if (inst) // user passed instance ID
    {
      if (!ospf_lookup_instance (strtoul (argv[2]->arg, NULL, 10)))
        return CMD_SUCCESS;
    }

  int type = 0;
  int flag = 0;
  int i;

  /* Check packet type. */
  if (strmatch (packet, "hello"))
    type = OSPF_DEBUG_HELLO;
  else if (strmatch (packet, "dd"))
    type = OSPF_DEBUG_DB_DESC;
  else if (strmatch (packet, "ls-request"))
    type = OSPF_DEBUG_LS_REQ;
  else if (strmatch (packet, "ls-update"))
    type = OSPF_DEBUG_LS_UPD;
  else if (strmatch (packet, "ls-ack"))
    type = OSPF_DEBUG_LS_ACK;
  else if (strmatch (packet, "all"))
    type = OSPF_DEBUG_ALL;

  /* Cases:
   * (none)      = send + recv
   * detail      = send + recv + detail
   * recv        = recv
   * send        = send
   * recv detail = recv + detail
   * send detail = send + detail
   */
  if (!send && !recv)
    send = recv = 1;

  flag |= (send) ? OSPF_DEBUG_SEND : 0;
  flag |= (recv) ? OSPF_DEBUG_RECV : 0;
  flag |= (detail) ? OSPF_DEBUG_DETAIL : 0;

  for (i = 0; i < 5; i++)
    if (type & (0x01 << i))
      {
        if (vty->node == CONFIG_NODE)
          DEBUG_PACKET_ON (i, flag);
        else
          TERM_DEBUG_PACKET_ON (i, flag);
      }

  return CMD_SUCCESS;
}

DEFUN (no_debug_ospf_packet,
       no_debug_ospf_packet_cmd,
       "no debug ospf [(1-65535)] packet <hello|dd|ls-request|ls-update|ls-ack|all> [<send [detail]|recv [detail]|detail>]",
       NO_STR
       DEBUG_STR
       OSPF_STR
       "Instance ID\n"
       "OSPF packets\n"
       "OSPF Hello\n"
       "OSPF Database Description\n"
       "OSPF Link State Request\n"
       "OSPF Link State Update\n"
       "OSPF Link State Acknowledgment\n"
       "OSPF all packets\n"
       "Packet sent\n"
       "Detail Information\n"
       "Packet received\n"
       "Detail Information\n"
       "Detail Information\n")
{
  int inst = (argv[3]->type == RANGE_TKN) ? 1 : 0;
  int detail = strmatch (argv[argc - 1]->text, "detail");
  int send = strmatch (argv[argc - (1+detail)]->text, "send");
  int recv = strmatch (argv[argc - (1+detail)]->text, "recv");
  char *packet = argv[4 + inst]->text;

  if (inst) // user passed instance ID
    {
      if (!ospf_lookup_instance (strtoul (argv[3]->arg, NULL, 10)))
        return CMD_SUCCESS;
    }

  int type = 0;
  int flag = 0;
  int i;

  /* Check packet type. */
  if (strmatch (packet, "hello"))
    type = OSPF_DEBUG_HELLO;
  else if (strmatch (packet, "dd"))
    type = OSPF_DEBUG_DB_DESC;
  else if (strmatch (packet, "ls-request"))
    type = OSPF_DEBUG_LS_REQ;
  else if (strmatch (packet, "ls-update"))
    type = OSPF_DEBUG_LS_UPD;
  else if (strmatch (packet, "ls-ack"))
    type = OSPF_DEBUG_LS_ACK;
  else if (strmatch (packet, "all"))
    type = OSPF_DEBUG_ALL;

  /* Cases:
   * (none)      = send + recv
   * detail      = send + recv + detail
   * recv        = recv
   * send        = send
   * recv detail = recv + detail
   * send detail = send + detail
   */
  if (!send && !recv)
    send = recv = 1;

  flag |= (send) ? OSPF_DEBUG_SEND : 0;
  flag |= (recv) ? OSPF_DEBUG_RECV : 0;
  flag |= (detail) ? OSPF_DEBUG_DETAIL : 0;

  for (i = 0; i < 5; i++)
    if (type & (0x01 << i))
      {
        if (vty->node == CONFIG_NODE)
          DEBUG_PACKET_OFF (i, flag);
        else
          TERM_DEBUG_PACKET_OFF (i, flag);
      }

#ifdef DEBUG
  /*
  for (i = 0; i < 5; i++)
    zlog_debug ("flag[%d] = %d", i, ospf_debug_packet[i]);
  */
#endif /* DEBUG */

  return CMD_SUCCESS;
}
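
/* "debug ospf [(1-65535)] ism ...": enable ISM debugging, optionally limited
 * to status, event or timer information (DEBUG_ON from CONFIG_NODE,
 * TERM_DEBUG_ON otherwise). */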
DEFUN (debug_ospf_ism,
       debug_ospf_ism_cmd,
       "debug ospf [(1-65535)] ism [<status|events|timers>]",
       DEBUG_STR
       OSPF_STR
       "Instance ID\n"
       "OSPF Interface State Machine\n"
       "ISM Status Information\n"
       "ISM Event Information\n"
       "ISM Timer Information\n")
{
  int inst = (argv[2]->type == RANGE_TKN);
  char *dbgparam = (argc == 4 + inst) ? argv[argc - 1]->text : NULL;

  if (inst) // user passed instance ID
    {
      if (!ospf_lookup_instance (strtoul (argv[2]->arg, NULL, 10)))
        return CMD_SUCCESS;
    }

  if (vty->node == CONFIG_NODE)
    {
      if (!dbgparam)
        DEBUG_ON (ism, ISM);
      else
        {
          if (strmatch (dbgparam, "status"))
            DEBUG_ON (ism, ISM_STATUS);
          else if (strmatch (dbgparam, "events"))
            DEBUG_ON (ism, ISM_EVENTS);
          else if (strmatch (dbgparam, "timers"))
            DEBUG_ON (ism, ISM_TIMERS);
        }

      return CMD_SUCCESS;
    }

  /* ENABLE_NODE. */
  if (!dbgparam)
    TERM_DEBUG_ON (ism, ISM);
  else
    {
      if (strmatch (dbgparam, "status"))
        TERM_DEBUG_ON (ism, ISM_STATUS);
      else if (strmatch (dbgparam, "events"))
        TERM_DEBUG_ON (ism, ISM_EVENTS);
      else if (strmatch (dbgparam, "timers"))
        TERM_DEBUG_ON (ism, ISM_TIMERS);
    }

  return CMD_SUCCESS;
}

DEFUN (no_debug_ospf_ism,
       no_debug_ospf_ism_cmd,
       "no debug ospf [(1-65535)] ism [<status|events|timers>]",
       NO_STR
       DEBUG_STR
       OSPF_STR
       "Instance ID\n"
       "OSPF Interface State Machine\n"
       "ISM Status Information\n"
       "ISM Event Information\n"
       "ISM Timer Information\n")
{
  int inst = (argv[3]->type == RANGE_TKN);
  char *dbgparam = (argc == 5 + inst) ? argv[argc - 1]->text : NULL;

  if (inst) // user passed instance ID
    {
      if (!ospf_lookup_instance (strtoul (argv[3]->arg, NULL, 10)))
        return CMD_SUCCESS;
    }

  if (vty->node == CONFIG_NODE)
    {
      if (!dbgparam)
        DEBUG_OFF (ism, ISM);
      else
        {
          if (strmatch (dbgparam, "status"))
            DEBUG_OFF (ism, ISM_STATUS);
          else if (strmatch (dbgparam, "events"))
            DEBUG_OFF (ism, ISM_EVENTS);
          else if (strmatch (dbgparam, "timers"))
            DEBUG_OFF (ism, ISM_TIMERS);
        }

      return CMD_SUCCESS;
    }

  /* ENABLE_NODE. */
  if (!dbgparam)
    TERM_DEBUG_OFF (ism, ISM);
  else
    {
      if (strmatch (dbgparam, "status"))
        TERM_DEBUG_OFF (ism, ISM_STATUS);
      else if (strmatch (dbgparam, "events"))
        TERM_DEBUG_OFF (ism, ISM_EVENTS);
      else if (strmatch (dbgparam, "timers"))
        TERM_DEBUG_OFF (ism, ISM_TIMERS);
    }

  return CMD_SUCCESS;
}
|
|
|
|
|
2015-05-20 03:03:42 +02:00
|
|
|
static int
|
2016-09-22 21:15:24 +02:00
|
|
|
debug_ospf_nsm_common (struct vty *vty, int arg_base, int argc, struct cmd_token **argv)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
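  /* arg_base is the index of the optional <status|events|timers> token:
   * 3 when invoked for "debug ospf nsm ..." and 4 for
   * "debug ospf (1-65535) nsm ...", so one handler serves both forms. */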
|
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
{
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (nsm, NSM);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "status"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (nsm, NSM_STATUS);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "events"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (nsm, NSM_EVENTS);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "timers"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (nsm, NSM_TIMERS);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* ENABLE_NODE. */
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (nsm, NSM);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "status"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (nsm, NSM_STATUS);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "events"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (nsm, NSM_EVENTS);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "timers"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (nsm, NSM_TIMERS);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
2015-05-20 03:03:42 +02:00
|
|
|
DEFUN (debug_ospf_nsm,
|
|
|
|
debug_ospf_nsm_cmd,
|
2016-09-30 17:31:48 +02:00
|
|
|
"debug ospf nsm [<status|events|timers>]",
|
2015-05-20 03:03:42 +02:00
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
2016-09-30 03:27:05 +02:00
|
|
|
"OSPF Neighbor State Machine\n"
|
|
|
|
"NSM Status Information\n"
|
|
|
|
"NSM Event Information\n"
|
|
|
|
"NSM Timer Information\n")
|
2015-05-20 03:03:42 +02:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
return debug_ospf_nsm_common (vty, 3, argc, argv);
|
2015-05-20 03:03:42 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
DEFUN (debug_ospf_instance_nsm,
|
|
|
|
debug_ospf_instance_nsm_cmd,
|
2016-09-30 17:31:48 +02:00
|
|
|
"debug ospf (1-65535) nsm [<status|events|timers>]",
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
2015-05-20 03:03:42 +02:00
|
|
|
"Instance ID\n"
|
2016-09-30 03:27:05 +02:00
|
|
|
"OSPF Neighbor State Machine\n"
|
|
|
|
"NSM Status Information\n"
|
|
|
|
"NSM Event Information\n"
|
|
|
|
"NSM Timer Information\n")
|
2015-05-20 03:03:42 +02:00
|
|
|
{
|
2016-09-23 22:01:26 +02:00
|
|
|
int idx_number = 2;
|
2015-05-20 03:03:42 +02:00
|
|
|
u_short instance = 0;
|
|
|
|
|
*: remove VTY_GET_*
CLI validates input tokens, so there's no need to do it in handler
functions anymore.
spatch follows
----------------
@getull@
expression v;
expression str;
@@
<...
- VTY_GET_ULL(..., v, str)
+ v = strtoull (str, NULL, 10)
...>
@getul@
expression v;
expression str;
@@
<...
- VTY_GET_ULONG(..., v, str)
+ v = strtoul (str, NULL, 10)
...>
@getintrange@
expression name;
expression v;
expression str;
@@
<...
- VTY_GET_INTEGER_RANGE(name, v, str, ...)
+ v = strtoul (str, NULL, 10)
...>
@getint@
expression v;
expression str;
@@
<...
- VTY_GET_INTEGER(..., v, str)
+ v = strtoul (str, NULL, 10)
...>
@getv4@
expression v;
expression str;
@@
<...
- VTY_GET_IPV4_ADDRESS(..., v, str)
+ inet_aton (str, &v)
...>
@getv4pfx@
expression v;
expression str;
@@
<...
- VTY_GET_IPV4_PREFIX(..., v, str)
+ str2prefix_ipv4 (str, &v)
...>
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
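Applied to a handler like this one, the transformation is mechanical. A sketch
of the kind of before/after the semantic patch produces (the exact macro call
that used to be here may have differed):

/* before: the macro parsed and range-checked the token itself */
VTY_GET_INTEGER_RANGE ("Instance", instance, argv[idx_number]->arg, 1, 65535);

/* after: the CLI matcher has already validated the (1-65535) token,
 * so a plain conversion suffices */
instance = strtoul (argv[idx_number]->arg, NULL, 10);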
2017-06-27 20:47:03 +02:00
|
|
|
instance = strtoul(argv[idx_number]->arg, NULL, 10);
|
2015-05-20 03:03:42 +02:00
|
|
|
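/* Multi-instance: each ospfd services only the instance ID it was started
 * with (-n).  If this daemon does not own the requested instance, accept
 * the command as a harmless no-op. */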
if (!ospf_lookup_instance (instance))
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
|
2016-09-30 03:27:05 +02:00
|
|
|
return debug_ospf_nsm_common (vty, 4, argc, argv);
|
2015-05-20 03:03:42 +02:00
|
|
|
}
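Taken together, these two DEFUNs register the plain and instance-scoped forms
of the command; a representative vtysh session (the instance-scoped form only
takes effect in the ospfd process started with a matching -n):

    debug ospf nsm
    debug ospf nsm events
    debug ospf 1 nsm timers

The no_debug_ospf_nsm_common() handler that follows implements the matching
"no debug ..." direction in the same way.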
|
|
|
|
|
|
|
|
|
|
|
|
static int
|
2016-09-22 21:15:24 +02:00
|
|
|
no_debug_ospf_nsm_common (struct vty *vty, int arg_base, int argc, struct cmd_token **argv)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-11-05 00:03:03 +01:00
|
|
|
/* XXX qlyoung */
|
2002-12-13 21:15:29 +01:00
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
{
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (nsm, NSM);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "status"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (nsm, NSM_STATUS);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "events"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (nsm, NSM_EVENTS);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "timers"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (nsm, NSM_TIMERS);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* ENABLE_NODE. */
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (nsm, NSM);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "status"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (nsm, NSM_STATUS);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "events"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (nsm, NSM_EVENTS);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "timers"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (nsm, NSM_TIMERS);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
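The common helper above takes an arg_base offset so one body of code can serve
both the plain and the instance-qualified forms of the command: when argc equals
arg_base no optional selector token was given and the whole NSM debug group is
cleared, otherwise argv[arg_base]->text names the single sub-flag to clear. A
minimal stand-alone sketch of that dispatch pattern, using hypothetical flag
variables in place of the real TERM_DEBUG_OFF() bitmask macros:

#include <string.h>

/* Hypothetical stand-ins for the term_debug_ospf_nsm bitmask bits. */
static unsigned long nsm_all, nsm_status, nsm_events, nsm_timers;

/* argv[] holds the matched command tokens; arg_base is the index of the
 * first token after the fixed "no debug ospf [instance] nsm" part. */
static int no_debug_nsm_sketch(int arg_base, int argc, const char **argv)
{
	if (argc == arg_base)                           /* no selector given */
		nsm_all = nsm_status = nsm_events = nsm_timers = 0;
	else if (strcmp(argv[arg_base], "status") == 0)
		nsm_status = 0;
	else if (strcmp(argv[arg_base], "events") == 0)
		nsm_events = 0;
	else if (strcmp(argv[arg_base], "timers") == 0)
		nsm_timers = 0;
	return 0;                                       /* CMD_SUCCESS */
}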
2015-05-20 03:03:42 +02:00
|
|
|
DEFUN (no_debug_ospf_nsm,
|
|
|
|
no_debug_ospf_nsm_cmd,
|
2016-09-30 17:31:48 +02:00
|
|
|
"no debug ospf nsm [<status|events|timers>]",
|
2015-05-20 03:03:42 +02:00
|
|
|
NO_STR
|
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
2016-11-05 00:03:03 +01:00
|
|
|
"OSPF Neighbor State Machine\n"
|
2016-09-30 03:27:05 +02:00
|
|
|
"NSM Status Information\n"
|
|
|
|
"NSM Event Information\n"
|
|
|
|
"NSM Timer Information\n")
|
2015-05-20 03:03:42 +02:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
return no_debug_ospf_nsm_common(vty, 4, argc, argv);
|
2015-05-20 03:03:42 +02:00
|
|
|
}
|
|
|
|
|
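Note that the arg_base handed to no_debug_ospf_nsm_common() is simply the number
of fixed tokens in front of the optional selector. For the plain form above the
selector, when present, is token index 4; the instance form that follows carries
one extra (1-65535) token, so it passes 5 and reads the instance number from
index 3. Sketched as a comment over the two command strings (zero-based token
indices):

/*   "no debug ospf nsm [<status|events|timers>]"
 *     0   1    2    3            4
 *        -> no_debug_ospf_nsm_common(vty, 4, argc, argv);
 *
 *   "no debug ospf (1-65535) nsm [<status|events|timers>]"
 *     0   1    2      3       4            5
 *        -> idx_number = 3;
 *           no_debug_ospf_nsm_common(vty, 5, argc, argv);
 */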
2002-12-13 21:15:29 +01:00
|
|
|
|
2015-05-20 03:03:42 +02:00
|
|
|
DEFUN (no_debug_ospf_instance_nsm,
|
|
|
|
no_debug_ospf_instance_nsm_cmd,
|
2016-09-30 17:31:48 +02:00
|
|
|
"no debug ospf (1-65535) nsm [<status|events|timers>]",
|
2015-05-20 03:03:42 +02:00
|
|
|
NO_STR
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
2015-05-20 03:03:42 +02:00
|
|
|
"Instance ID\n"
|
2016-11-05 00:03:03 +01:00
|
|
|
"OSPF Neighbor State Machine\n"
|
2016-09-30 03:27:05 +02:00
|
|
|
"NSM Status Information\n"
|
|
|
|
"NSM Event Information\n"
|
|
|
|
"NSM Timer Information\n")
|
2015-05-20 03:03:42 +02:00
|
|
|
{
|
2016-09-23 22:01:26 +02:00
|
|
|
int idx_number = 3;
|
2015-05-20 03:03:42 +02:00
|
|
|
u_short instance = 0;
|
|
|
|
|
*: remove VTY_GET_*
CLI validates input tokens, so there's no need to do it in handler
functions anymore.
spatch follows
----------------
@getull@
expression v;
expression str;
@@
<...
- VTY_GET_ULL(..., v, str)
+ v = strtoull (str, NULL, 10)
...>
@getul@
expression v;
expression str;
@@
<...
- VTY_GET_ULONG(..., v, str)
+ v = strtoul (str, NULL, 10)
...>
@getintrange@
expression name;
expression v;
expression str;
@@
<...
- VTY_GET_INTEGER_RANGE(name, v, str, ...)
+ v = strtoul (str, NULL, 10)
...>
@getint@
expression v;
expression str;
@@
<...
- VTY_GET_INTEGER(..., v, str)
+ v = strtoul (str, NULL, 10)
...>
@getv4@
expression v;
expression str;
@@
<...
- VTY_GET_IPV4_ADDRESS(..., v, str)
+ inet_aton (str, &v)
...>
@getv4pfx@
expression v;
expression str;
@@
<...
- VTY_GET_IPV4_PREFIX(..., v, str)
+ str2prefix_ipv4 (str, &v)
...>
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2017-06-27 20:47:03 +02:00
|
|
|
instance = strtoul(argv[idx_number]->arg, NULL, 10);
|
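The spatch in the commit message above mechanically rewrote the old validating
VTY_GET_* macros into bare C library calls; the CLI matcher already guarantees
that the token it hands over matched the (1-65535) range declared in the command
string, so re-validating in the handler would be redundant. A self-contained
sketch of the converted pattern, with a hypothetical helper name:

#include <stdlib.h>

/* The caller guarantees "arg" matched a (1-65535) token, so a bare
 * conversion replaces the old VTY_GET_INTEGER_RANGE() macro call. */
static unsigned short parse_instance(const char *arg)
{
	return (unsigned short) strtoul(arg, NULL, 10);
}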
2015-05-20 03:03:42 +02:00
|
|
|
if (!ospf_lookup_instance (instance))
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
|
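The ospf_lookup_instance() check above is what lets the instance-qualified
command be issued safely to every running ospfd: each process serves the single
instance it was started with, so a daemon that does not own the named instance
simply returns CMD_SUCCESS and stays quiet instead of reporting an error. A
rough sketch of that guard, with hypothetical stand-ins for the lookup:

/* Each ospfd process remembers the one instance it was invoked with. */
static unsigned short my_instance = 1;             /* hypothetical */

static int owns_instance(unsigned short instance)  /* cf. ospf_lookup_instance() */
{
	return instance == my_instance;
}

static int no_debug_instance_nsm_sketch(unsigned short instance)
{
	if (!owns_instance(instance))
		return 0;   /* CMD_SUCCESS: another daemon owns this instance */
	/* ... otherwise fall through and clear the requested debug flags ... */
	return 0;
}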
2016-09-30 03:27:05 +02:00
|
|
|
return no_debug_ospf_nsm_common(vty, 5, argc, argv);
|
2015-05-20 03:03:42 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
static int
|
2016-09-22 21:15:24 +02:00
|
|
|
debug_ospf_lsa_common (struct vty *vty, int arg_base, int argc, struct cmd_token **argv)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
{
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (lsa, LSA);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "generate"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (lsa, LSA_GENERATE);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "flooding"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (lsa, LSA_FLOODING);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "install"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (lsa, LSA_INSTALL);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "refresh"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (lsa, LSA_REFRESH);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* ENABLE_NODE. */
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (lsa, LSA);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "generate"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (lsa, LSA_GENERATE);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "flooding"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (lsa, LSA_FLOODING);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "install"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (lsa, LSA_INSTALL);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "refresh"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (lsa, LSA_REFRESH);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
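debug_ospf_lsa_common() above maintains two parallel flag levels: in
configuration mode (vty->node == CONFIG_NODE) it calls DEBUG_ON(), which,
assuming the usual ospf_dump.h macro semantics, sets both the configured flag
(the conf_debug_ospf_lsa word that ends up in the saved configuration) and the
terminal flag, while in enable mode (the '/* ENABLE_NODE. */' branch) only
TERM_DEBUG_ON() is used, giving a runtime-only toggle. A simplified model of
that two-level scheme, with hypothetical names:

#define OSPF_DEBUG_LSA_SKETCH 0x01

static unsigned long conf_debug_lsa;  /* written out with the running config */
static unsigned long term_debug_lsa;  /* consulted by the logging code paths */

static void lsa_debug_on(int in_config_node)
{
	if (in_config_node)
		conf_debug_lsa |= OSPF_DEBUG_LSA_SKETCH;  /* persists in config */
	term_debug_lsa |= OSPF_DEBUG_LSA_SKETCH;          /* takes effect now */
}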
2015-05-20 03:03:42 +02:00
|
|
|
DEFUN (debug_ospf_lsa,
|
|
|
|
debug_ospf_lsa_cmd,
|
2016-09-30 17:31:48 +02:00
|
|
|
"debug ospf lsa [<generate|flooding|install|refresh>]",
|
2015-05-20 03:03:42 +02:00
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
2016-09-30 03:27:05 +02:00
|
|
|
"OSPF Link State Advertisement\n"
|
|
|
|
"LSA Generation\n"
|
|
|
|
"LSA Flooding\n"
|
|
|
|
"LSA Install/Delete\n"
|
|
|
|
"LSA Refresh\n")
|
2015-05-20 03:03:42 +02:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
return debug_ospf_lsa_common(vty, 3, argc, argv);
|
2015-05-20 03:03:42 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
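Because the blame hunks scatter the surrounding handler, the "debug ospf lsa" DEFUN is reassembled below for readability. This is only a sketch pieced together from the fragments annotated above; the command header line is assumed by analogy with the "no debug ospf lsa" form that appears later.
/* Sketch: "debug ospf lsa [...]" handler, reassembled from the hunks above.
 * The header string is an assumption; the help strings and the body are
 * exactly the lines annotated above. */
DEFUN (debug_ospf_lsa,
       debug_ospf_lsa_cmd,
       "debug ospf lsa [<generate|flooding|install|refresh>]",
       DEBUG_STR
       OSPF_STR
       "OSPF Link State Advertisement\n"
       "LSA Generation\n"
       "LSA Flooding\n"
       "LSA Install/Delete\n"
       "LSA Refresh\n")
{
  /* arg_base 3 skips the "debug ospf lsa" tokens; any sub-option starts
   * at argv[3]. */
  return debug_ospf_lsa_common(vty, 3, argc, argv);
}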
DEFUN (debug_ospf_instance_lsa,
|
|
|
|
debug_ospf_instance_lsa_cmd,
|
2016-09-30 17:31:48 +02:00
|
|
|
"debug ospf (1-65535) lsa [<generate|flooding|install|refresh>]",
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
2015-05-20 03:03:42 +02:00
|
|
|
"Instance ID\n"
|
2016-09-30 03:27:05 +02:00
|
|
|
"OSPF Link State Advertisement\n"
|
|
|
|
"LSA Generation\n"
|
|
|
|
"LSA Flooding\n"
|
|
|
|
"LSA Install/Delete\n"
|
|
|
|
"LSA Refresh\n")
|
2015-05-20 03:03:42 +02:00
|
|
|
{
|
2016-09-23 22:01:26 +02:00
|
|
|
int idx_number = 2;
|
2015-05-20 03:03:42 +02:00
|
|
|
u_short instance = 0;
|
|
|
|
|
*: remove VTY_GET_*
CLI validates input tokens, so there's no need to do it in handler
functions anymore.
spatch follows
----------------
@getull@
expression v;
expression str;
@@
<...
- VTY_GET_ULL(..., v, str)
+ v = strtoull (str, NULL, 10)
...>
@getul@
expression v;
expression str;
@@
<...
- VTY_GET_ULONG(..., v, str)
+ v = strtoul (str, NULL, 10)
...>
@getintrange@
expression name;
expression v;
expression str;
@@
<...
- VTY_GET_INTEGER_RANGE(name, v, str, ...)
+ v = strtoul (str, NULL, 10)
...>
@getint@
expression v;
expression str;
@@
<...
- VTY_GET_INTEGER(..., v, str)
+ v = strtoul (str, NULL, 10)
...>
@getv4@
expression v;
expression str;
@@
<...
- VTY_GET_IPV4_ADDRESS(..., v, str)
+ inet_aton (str, &v)
...>
@getv4pfx@
expression v;
expression str;
@@
<...
- VTY_GET_IPV4_PREFIX(..., v, str)
+ str2prefix_ipv4 (str, &v)
...>
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
2017-06-27 20:47:03 +02:00
|
|
|
instance = strtoul(argv[idx_number]->arg, NULL, 10);
|
2015-05-20 03:03:42 +02:00
|
|
|
if (!ospf_lookup_instance (instance))
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
|
2016-09-30 03:27:05 +02:00
|
|
|
return debug_ospf_lsa_common(vty, 4, argc, argv);
|
2015-05-20 03:03:42 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
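For the instance-specific variant, the fragments annotated above reassemble into the handler below. This is a consolidated sketch of what is already shown piecewise, not new code; the comment about silently accepting a foreign instance-id is an interpretation of the ospf_lookup_instance() check in the multi-instance design.
/* Sketch: "debug ospf (1-65535) lsa [...]" handler, reassembled from the
 * hunks above (help strings elided). */
DEFUN (debug_ospf_instance_lsa,
       debug_ospf_instance_lsa_cmd,
       "debug ospf (1-65535) lsa [<generate|flooding|install|refresh>]",
       /* ... DEBUG_STR, OSPF_STR, "Instance ID\n", LSA help strings ... */)
{
  int idx_number = 2;              /* the (1-65535) token is argv[2] */
  u_short instance = 0;

  instance = strtoul(argv[idx_number]->arg, NULL, 10);
  if (!ospf_lookup_instance (instance))
    return CMD_SUCCESS;            /* not this ospfd's instance: no-op */

  /* arg_base 4: skip "debug ospf <instance> lsa" before the sub-option. */
  return debug_ospf_lsa_common(vty, 4, argc, argv);
}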
static int
|
2016-09-22 21:15:24 +02:00
|
|
|
no_debug_ospf_lsa_common (struct vty *vty, int arg_base, int argc, struct cmd_token **argv)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
{
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (lsa, LSA);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "generate"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (lsa, LSA_GENERATE);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "flooding"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (lsa, LSA_FLOODING);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "install"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (lsa, LSA_INSTALL);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "refresh"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (lsa, LSA_REFRESH);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* ENABLE_NODE. */
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (lsa, LSA);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "generate"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (lsa, LSA_GENERATE);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "flooding"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (lsa, LSA_FLOODING);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "install"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (lsa, LSA_INSTALL);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "refresh"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (lsa, LSA_REFRESH);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
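Reassembled from the fragments annotated above, no_debug_ospf_lsa_common() takes one of two paths: at CONFIG_NODE it clears flags with DEBUG_OFF (the configured debug settings), otherwise it clears them with TERM_DEBUG_OFF (the runtime-only settings). The following is a consolidated sketch of those same hunks, with the repetitive ENABLE_NODE dispatch abbreviated.
/* Sketch: the two dispatch paths of no_debug_ospf_lsa_common(),
 * reassembled from the hunks above. */
static int
no_debug_ospf_lsa_common (struct vty *vty, int arg_base, int argc,
                          struct cmd_token **argv)
{
  if (vty->node == CONFIG_NODE)
    {
      if (argc == arg_base + 0)
        DEBUG_OFF (lsa, LSA);          /* no sub-option: all LSA debugging */
      else if (argc == arg_base + 1)
        {
          if (strmatch(argv[arg_base]->text, "generate"))
            DEBUG_OFF (lsa, LSA_GENERATE);
          else if (strmatch(argv[arg_base]->text, "flooding"))
            DEBUG_OFF (lsa, LSA_FLOODING);
          else if (strmatch(argv[arg_base]->text, "install"))
            DEBUG_OFF (lsa, LSA_INSTALL);
          else if (strmatch(argv[arg_base]->text, "refresh"))
            DEBUG_OFF (lsa, LSA_REFRESH);
        }
      return CMD_SUCCESS;
    }

  /* ENABLE_NODE: same dispatch, but using TERM_DEBUG_OFF so only the
   * runtime flags are cleared (generate/flooding/install/refresh handled
   * exactly as above). */
  if (argc == arg_base + 0)
    TERM_DEBUG_OFF (lsa, LSA);
  /* ... */
  return CMD_SUCCESS;
}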
2015-05-20 03:03:42 +02:00
|
|
|
DEFUN (no_debug_ospf_lsa,
|
|
|
|
no_debug_ospf_lsa_cmd,
|
2016-09-30 17:31:48 +02:00
|
|
|
"no debug ospf lsa [<generate|flooding|install|refresh>]",
|
2015-05-20 03:03:42 +02:00
|
|
|
NO_STR
|
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
2016-09-30 03:27:05 +02:00
|
|
|
"OSPF Link State Advertisement\n"
|
|
|
|
"LSA Generation\n"
|
|
|
|
"LSA Flooding\n"
|
|
|
|
"LSA Install/Delete\n"
|
|
|
|
"LSA Refres\n")
|
2015-05-20 03:03:42 +02:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
return no_debug_ospf_lsa_common (vty, 4, argc, argv);
|
2015-05-20 03:03:42 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
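The two "no debug" wrappers differ only in the arg_base they pass to no_debug_ospf_lsa_common(): the plain form skips the four tokens "no debug ospf lsa", while the instance form that follows also skips the instance number and therefore passes 5. A minimal sketch of the plain wrapper, reassembled from the fragments above:
/* Sketch: "no debug ospf lsa [...]" wrapper, reassembled from the hunks
 * above (help strings elided). */
DEFUN (no_debug_ospf_lsa,
       no_debug_ospf_lsa_cmd,
       "no debug ospf lsa [<generate|flooding|install|refresh>]",
       /* ... NO_STR, DEBUG_STR, OSPF_STR, LSA help strings ... */)
{
  /* "no debug ospf lsa" occupies argv[0..3]; any sub-option starts at 4. */
  return no_debug_ospf_lsa_common (vty, 4, argc, argv);
}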
DEFUN (no_debug_ospf_instance_lsa,
|
|
|
|
no_debug_ospf_instance_lsa_cmd,
|
2016-09-30 17:31:48 +02:00
|
|
|
"no debug ospf (1-65535) lsa [<generate|flooding|install|refresh>]",
|
2015-05-20 03:03:42 +02:00
|
|
|
NO_STR
|
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
|
|
|
"Instance ID\n"
|
2016-09-30 03:27:05 +02:00
|
|
|
"OSPF Link State Advertisement\n"
|
|
|
|
"LSA Generation\n"
|
|
|
|
"LSA Flooding\n"
|
|
|
|
"LSA Install/Delete\n"
|
|
|
|
"LSA Refres\n")
|
2015-05-20 03:03:42 +02:00
|
|
|
{
|
2016-09-23 22:01:26 +02:00
|
|
|
int idx_number = 3;
|
2015-05-20 03:03:42 +02:00
|
|
|
u_short instance = 0;
|
|
|
|
|
2017-06-27 20:47:03 +02:00
|
|
|
instance = strtoul(argv[idx_number]->arg, NULL, 10);
|
2015-05-20 03:03:42 +02:00
|
|
|
if (!ospf_lookup_instance (instance))
|
|
|
|
return CMD_SUCCESS;
|
2014-06-04 06:53:35 +02:00
|
|
|
|
2016-09-30 03:27:05 +02:00
|
|
|
return no_debug_ospf_lsa_common (vty, 5, argc, argv);
|
2015-05-20 03:03:42 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
static int
|
2016-09-22 21:15:24 +02:00
|
|
|
debug_ospf_zebra_common (struct vty *vty, int arg_base, int argc, struct cmd_token **argv)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
{
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (zebra, ZEBRA);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "interface"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (zebra, ZEBRA_INTERFACE);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "redistribute"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_ON (zebra, ZEBRA_REDISTRIBUTE);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* ENABLE_NODE. */
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (zebra, ZEBRA);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "interface"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (zebra, ZEBRA_INTERFACE);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "redistribute"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_ON (zebra, ZEBRA_REDISTRIBUTE);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
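Pieced together, the annotated fragments above form the shared helper behind
the "debug ospf zebra" commands. A consolidated reading follows (this is just
the fragments shown here joined back up, not a new implementation;
DEBUG_ON/TERM_DEBUG_ON, strmatch() and CMD_SUCCESS are the existing
ospfd/library macros and helpers):

    static int
    debug_ospf_zebra_common (struct vty *vty, int arg_base, int argc,
                             struct cmd_token **argv)
    {
      if (vty->node == CONFIG_NODE)
        {
          /* Persistent (configuration-node) debug flags. */
          if (argc == arg_base + 0)
            DEBUG_ON (zebra, ZEBRA);
          else if (argc == arg_base + 1)
            {
              if (strmatch(argv[arg_base]->text, "interface"))
                DEBUG_ON (zebra, ZEBRA_INTERFACE);
              else if (strmatch(argv[arg_base]->text, "redistribute"))
                DEBUG_ON (zebra, ZEBRA_REDISTRIBUTE);
            }
          return CMD_SUCCESS;
        }

      /* ENABLE_NODE: terminal-only debug flags. */
      if (argc == arg_base + 0)
        TERM_DEBUG_ON (zebra, ZEBRA);
      else if (argc == arg_base + 1)
        {
          if (strmatch(argv[arg_base]->text, "interface"))
            TERM_DEBUG_ON (zebra, ZEBRA_INTERFACE);
          else if (strmatch(argv[arg_base]->text, "redistribute"))
            TERM_DEBUG_ON (zebra, ZEBRA_REDISTRIBUTE);
        }

      return CMD_SUCCESS;
    }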
|
|
|
|
|
2015-05-20 03:03:42 +02:00
|
|
|
DEFUN (debug_ospf_zebra,
|
|
|
|
debug_ospf_zebra_cmd,
|
2016-09-30 17:31:48 +02:00
|
|
|
"debug ospf zebra [<interface|redistribute>]",
|
2015-05-20 03:03:42 +02:00
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
2016-09-30 03:27:05 +02:00
|
|
|
"OSPF Zebra information\n"
|
|
|
|
"Zebra interface\n"
|
|
|
|
"Zebra redistribute\n")
|
2015-05-20 03:03:42 +02:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
return debug_ospf_zebra_common(vty, 3, argc, argv);
|
2015-05-20 03:03:42 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
DEFUN (debug_ospf_instance_zebra,
|
|
|
|
debug_ospf_instance_zebra_cmd,
|
2016-09-30 17:31:48 +02:00
|
|
|
"debug ospf (1-65535) zebra [<interface|redistribute>]",
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
2015-05-20 03:03:42 +02:00
|
|
|
"Instance ID\n"
|
2016-09-30 03:27:05 +02:00
|
|
|
"OSPF Zebra information\n"
|
|
|
|
"Zebra interface\n"
|
|
|
|
"Zebra redistribute\n")
|
2015-05-20 03:03:42 +02:00
|
|
|
{
|
2016-09-23 22:01:26 +02:00
|
|
|
int idx_number = 2;
|
2015-05-20 03:03:42 +02:00
|
|
|
u_short instance = 0;
|
|
|
|
|
2017-06-27 20:47:03 +02:00
|
|
|
instance = strtoul(argv[idx_number]->arg, NULL, 10);
|
2015-05-20 03:03:42 +02:00
|
|
|
if (!ospf_lookup_instance (instance))
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
|
2016-09-30 03:27:05 +02:00
|
|
|
return debug_ospf_zebra_common(vty, 4, argc, argv);
|
2015-05-20 03:03:42 +02:00
|
|
|
}
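The per-instance variant, again read straight off the fragments above, only
adds the instance gate in front of the shared helper: it converts the
(1-65535) token, returns silently if this ospfd process does not own that
instance, and passes arg_base 4 rather than 3, since the extra instance token
shifts the optional keyword one position to the right. Joined back up:

    DEFUN (debug_ospf_instance_zebra,
           debug_ospf_instance_zebra_cmd,
           "debug ospf (1-65535) zebra [<interface|redistribute>]",
           DEBUG_STR
           OSPF_STR
           "Instance ID\n"
           "OSPF Zebra information\n"
           "Zebra interface\n"
           "Zebra redistribute\n")
    {
      int idx_number = 2;
      u_short instance = 0;

      instance = strtoul(argv[idx_number]->arg, NULL, 10);
      if (!ospf_lookup_instance (instance))
        return CMD_SUCCESS;

      return debug_ospf_zebra_common(vty, 4, argc, argv);
    }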
|
|
|
|
|
|
|
|
|
|
|
|
static int
|
|
|
|
no_debug_ospf_zebra_common(struct vty *vty, int arg_base, int argc,
|
2016-09-22 21:15:24 +02:00
|
|
|
struct cmd_token **argv)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
{
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (zebra, ZEBRA);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "interface"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (zebra, ZEBRA_INTERFACE);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "redistribute"))
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_OFF (zebra, ZEBRA_REDISTRIBUTE);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* ENABLE_NODE. */
|
2015-05-20 03:03:42 +02:00
|
|
|
if (argc == arg_base + 0)
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (zebra, ZEBRA);
|
2015-05-20 03:03:42 +02:00
|
|
|
else if (argc == arg_base + 1)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
2016-09-30 03:27:05 +02:00
|
|
|
if (strmatch(argv[arg_base]->text, "interface"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (zebra, ZEBRA_INTERFACE);
|
2016-09-30 03:27:05 +02:00
|
|
|
else if (strmatch(argv[arg_base]->text, "redistribute"))
|
2002-12-13 21:15:29 +01:00
|
|
|
TERM_DEBUG_OFF (zebra, ZEBRA_REDISTRIBUTE);
|
|
|
|
}
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
DEFUN (no_debug_ospf_zebra,
       no_debug_ospf_zebra_cmd,
       "no debug ospf zebra [<interface|redistribute>]",
       NO_STR
       DEBUG_STR
       OSPF_STR
       "OSPF Zebra information\n"
       "Zebra interface\n"
       "Zebra redistribute\n")
{
  return no_debug_ospf_zebra_common(vty, 4, argc, argv);
}
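
Both the plain and the instance-qualified forms of "no debug ospf zebra" funnel into no_debug_ospf_zebra_common(), differing only in the arg_base they pass (4 vs. 5), because the instance form carries one extra token before "zebra". The stand-alone sketch below only illustrates that offset idea; the token arrays, flag variables and helper name are invented for the example and are not part of ospfd.

/* Illustrative sketch only -- not ospfd code.  It shows why the shared
 * handler can find the optional keyword at argv[arg_base] for both
 * command forms once the caller supplies the right offset. */
#include <stdio.h>
#include <string.h>

static unsigned long debug_zebra_interface = 1;
static unsigned long debug_zebra_redistribute = 1;

static void no_debug_zebra_sketch(int arg_base, int argc, const char *argv[])
{
  if (argc == arg_base + 1)
    {
      if (strcmp(argv[arg_base], "interface") == 0)
        debug_zebra_interface = 0;      /* stands in for DEBUG_OFF (zebra, ZEBRA_INTERFACE) */
      else if (strcmp(argv[arg_base], "redistribute") == 0)
        debug_zebra_redistribute = 0;   /* stands in for DEBUG_OFF (zebra, ZEBRA_REDISTRIBUTE) */
    }
}

int main(void)
{
  /* "no debug ospf zebra interface": the keyword is token 4. */
  const char *plain[] = { "no", "debug", "ospf", "zebra", "interface" };
  no_debug_zebra_sketch(4, 5, plain);

  /* "no debug ospf 1 zebra redistribute": the instance token shifts it to 5. */
  const char *inst[] = { "no", "debug", "ospf", "1", "zebra", "redistribute" };
  no_debug_zebra_sketch(5, 6, inst);

  printf("interface=%lu redistribute=%lu\n",
         debug_zebra_interface, debug_zebra_redistribute);
  return 0;
}
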
DEFUN (no_debug_ospf_instance_zebra,
       no_debug_ospf_instance_zebra_cmd,
       "no debug ospf (1-65535) zebra [<interface|redistribute>]",
       NO_STR
       DEBUG_STR
       OSPF_STR
       "Instance ID\n"
       "OSPF Zebra information\n"
       "Zebra interface\n"
       "Zebra redistribute\n")
{
  int idx_number = 3;
  u_short instance = 0;

  instance = strtoul(argv[idx_number]->arg, NULL, 10);
  if (!ospf_lookup_instance (instance))
    return CMD_SUCCESS;

  return no_debug_ospf_zebra_common(vty, 5, argc, argv);
}
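
The instance-qualified handler first converts the (1-65535) token with strtoul() and quietly returns CMD_SUCCESS when ospf_lookup_instance() reports that no such instance is running, so a command delivered to several ospfd processes only takes effect in the daemon whose instance ID matches. Below is a minimal, stand-alone sketch of that parse-and-check step; the helper name and the explicit range check are assumptions made for the example (the CLI grammar already restricts the token to 1-65535), not ospfd's actual validation.

/* Minimal sketch, not ospfd code: parse an instance token the way the
 * handler above does (strtoul, base 10) and reject anything outside the
 * 1-65535 range advertised in the command string. */
#include <stdio.h>
#include <stdlib.h>

static int parse_instance_id(const char *arg, unsigned short *out)
{
  char *end;
  unsigned long val = strtoul(arg, &end, 10);

  if (arg[0] == '\0' || *end != '\0' || val < 1 || val > 65535)
    return -1;                  /* not a valid <1-65535> token */

  *out = (unsigned short) val;
  return 0;
}

int main(void)
{
  unsigned short instance = 0;

  if (parse_instance_id("3", &instance) == 0)
    printf("would look up ospfd instance %u\n", instance);
  return 0;
}
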
DEFUN (debug_ospf_event,
       debug_ospf_event_cmd,
       "debug ospf event",
       DEBUG_STR
       OSPF_STR
       "OSPF event information\n")
{
  if (vty->node == CONFIG_NODE)
    CONF_DEBUG_ON (event, EVENT);
  TERM_DEBUG_ON (event, EVENT);
  return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_event,
       no_debug_ospf_event_cmd,
       "no debug ospf event",
       NO_STR
       DEBUG_STR
       OSPF_STR
       "OSPF event information\n")
{
  if (vty->node == CONFIG_NODE)
    CONF_DEBUG_OFF (event, EVENT);
  TERM_DEBUG_OFF (event, EVENT);
  return CMD_SUCCESS;
}
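
The event handlers follow the same two-level pattern as the zebra and nssa commands: when issued from CONFIG_NODE they update the configured flags (CONF_DEBUG_ON/OFF) and then, in every case, the terminal flags (TERM_DEBUG_ON/OFF); when issued from enable mode only the terminal flags change. The self-contained sketch below models that idea with two plain bitmasks; the variable names and the macro-free form are invented for illustration and are not the definitions from ospf_dump.h.

/* Sketch of the conf/term split, not the real macros from ospf_dump.h. */
#include <stdio.h>

#define SKETCH_DEBUG_EVENT 0x01

static unsigned long conf_flags; /* what the saved configuration would reflect */
static unsigned long term_flags; /* what the running daemon actually honours   */

static void debug_event_set(int from_config_node, int on)
{
  if (from_config_node)
    {
      if (on)
        conf_flags |= SKETCH_DEBUG_EVENT;   /* CONF_DEBUG_ON (event, EVENT)  */
      else
        conf_flags &= ~SKETCH_DEBUG_EVENT;  /* CONF_DEBUG_OFF (event, EVENT) */
    }
  if (on)
    term_flags |= SKETCH_DEBUG_EVENT;       /* TERM_DEBUG_ON (event, EVENT)  */
  else
    term_flags &= ~SKETCH_DEBUG_EVENT;      /* TERM_DEBUG_OFF (event, EVENT) */
}

int main(void)
{
  debug_event_set(0, 1);  /* enable mode: runtime flag only */
  printf("conf=%#lx term=%#lx\n", conf_flags, term_flags);

  debug_event_set(1, 0);  /* config mode: clears both       */
  printf("conf=%#lx term=%#lx\n", conf_flags, term_flags);
  return 0;
}
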
Multi-Instance OSPF Summary
---------------------------
- etc/init.d/quagga is modified to support creating a separate ospf daemon
  process for each instance. Each individual instance is monitored by
  watchquagga just like any other protocol daemon (requires initd-mi.patch).
- Vtysh is modified to be able to connect to multiple daemons of the same
  protocol (supported for OSPF only for now).
- ospfd is modified to remember the instance ID it is invoked with. For the
  entire life of the process it serves only command requests that match that
  instance ID (unless the command is not instance specific).
  Routes/messages to zebra are tagged with the instance ID.
- zebra route/redistribute mechanisms are modified to work with
  [protocol type + instance-id].
- bgpd now has the ability to have multiple instance-specific redistributions
  for a protocol (OSPF only supported/tested for now).
- zlog can display the instance ID besides the protocol/daemon name.
- Changes in other daemons are due to the needed integration with some of the
  modified APIs/routines. (We preferred not to replicate too many separate
  instance-specific APIs.)
- config/show/debug commands are modified to take an instance-id argument
  as appropriate.

Guidelines to start using multi-instance ospf
---------------------------------------------
The patch is backward compatible, i.e. any previous way of running a single
ospf daemon (router ospf <cr>) will continue to work as is, including all the
show commands etc.

To enable multiple instances, do the following:
1. service quagga stop
2. Modify /etc/quagga/daemons to add the instance IDs of each desired
   instance in the following format:
   ospfd="yes"
   ospfd_instances="1,2,3"
   assuming you want to enable 3 instances with those instance IDs.
3. Create corresponding ospfd config files as ospfd-1.conf, ospfd-2.conf
   and ospfd-3.conf.
4. service quagga start/restart
5. Verify that the daemons are started as expected. You should see
   ospfd started with the -n <instance-id> option:
   ps -ef | grep quagga
   With that, /var/run/quagga/ should have ospfd-<instance-id>.pid and
   ospfd-<instance-id>/vty for each instance.
6. Use vtysh to work with instances as you would with any other daemon.
7. Overall, most quagga semantics are the same when working with an instance
   daemon as they are for any other daemon.

NOTE:
To safeguard against errors leading to too many processes getting invoked,
a hard limit on the number of instance IDs is in place; currently it is 5.
The allowed instance-id range is <1-65535>.
Once the daemons are up, "show running" from vtysh should show the instance ID
of each daemon as 'router ospf <instance-id>' (without needing explicit
configuration).
The instance ID cannot be changed via vtysh; other router ospf configuration
is allowed as before.
Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
DEFUN (debug_ospf_instance_event,
       debug_ospf_instance_event_cmd,
       "debug ospf (1-65535) event",
       DEBUG_STR
       OSPF_STR
       "Instance ID\n"
       "OSPF event information\n")
{
  int idx_number = 2;
  u_short instance = 0;

  instance = strtoul(argv[idx_number]->arg, NULL, 10);
  if (!ospf_lookup_instance (instance))
    return CMD_SUCCESS;

  if (vty->node == CONFIG_NODE)
    CONF_DEBUG_ON (event, EVENT);
  TERM_DEBUG_ON (event, EVENT);
  return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_instance_event,
       no_debug_ospf_instance_event_cmd,
       "no debug ospf (1-65535) event",
       NO_STR
       DEBUG_STR
       OSPF_STR
       "Instance ID\n"
       "OSPF event information\n")
{
  int idx_number = 3;
  u_short instance = 0;

  instance = strtoul(argv[idx_number]->arg, NULL, 10);
  if (!ospf_lookup_instance (instance))
    return CMD_SUCCESS;

  if (vty->node == CONFIG_NODE)
    CONF_DEBUG_OFF (event, EVENT);
  TERM_DEBUG_OFF (event, EVENT);
  return CMD_SUCCESS;
}
DEFUN (debug_ospf_nssa,
       debug_ospf_nssa_cmd,
       "debug ospf nssa",
       DEBUG_STR
       OSPF_STR
       "OSPF nssa information\n")
{
  if (vty->node == CONFIG_NODE)
    CONF_DEBUG_ON (nssa, NSSA);
  TERM_DEBUG_ON (nssa, NSSA);
  return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_nssa,
       no_debug_ospf_nssa_cmd,
       "no debug ospf nssa",
       NO_STR
       DEBUG_STR
       OSPF_STR
       "OSPF nssa information\n")
{
  if (vty->node == CONFIG_NODE)
    CONF_DEBUG_OFF (nssa, NSSA);
  TERM_DEBUG_OFF (nssa, NSSA);
  return CMD_SUCCESS;
}
DEFUN (debug_ospf_instance_nssa,
|
|
|
|
debug_ospf_instance_nssa_cmd,
|
2016-09-23 15:47:20 +02:00
|
|
|
"debug ospf (1-65535) nssa",
|
2015-05-20 03:03:42 +02:00
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
|
|
|
"Instance ID\n"
|
|
|
|
"OSPF nssa information\n")
|
|
|
|
{
|
2016-09-23 22:01:26 +02:00
|
|
|
int idx_number = 2;
|
2015-05-20 03:03:42 +02:00
|
|
|
u_short instance = 0;
|
|
|
|
|
*: remove VTY_GET_*
CLI validates input tokens, so there's no need to do it in handler
functions anymore.
spatch follows
----------------
@getull@
expression v;
expression str;
@@
<...
- VTY_GET_ULL(..., v, str)
+ v = strtoull (str, NULL, 10)
...>
@getul@
expression v;
expression str;
@@
<...
- VTY_GET_ULONG(..., v, str)
+ v = strtoul (str, NULL, 10)
...>
@getintrange@
expression name;
expression v;
expression str;
@@
<...
- VTY_GET_INTEGER_RANGE(name, v, str, ...)
+ v = strtoul (str, NULL, 10)
...>
@getint@
expression v;
expression str;
@@
<...
- VTY_GET_INTEGER(..., v, str)
+ v = strtoul (str, NULL, 10)
...>
@getv4@
expression v;
expression str;
@@
<...
- VTY_GET_IPV4_ADDRESS(..., v, str)
+ inet_aton (str, &v)
...>
@getv4pfx@
expression v;
expression str;
@@
<...
- VTY_GET_IPV4_PREFIX(..., v, str)
+ str2prefix_ipv4 (str, &v)
...>
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
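Applied to the instance-id handling in this file, the change looks like this (a minimal
sketch; the old macro's trailing range arguments are recalled from memory and shown for
illustration only):

  u_short instance = 0;

  /* Before the cleanup the handler re-validated the token itself:
   *   VTY_GET_INTEGER_RANGE ("Instance", instance, argv[idx_number]->arg, 1, 65535);
   * Afterwards the CLI grammar "(1-65535)" guarantees a valid number, so a
   * plain conversion is enough: */
  instance = strtoul(argv[idx_number]->arg, NULL, 10);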
2017-06-27 20:47:03 +02:00
|
|
|
instance = strtoul(argv[idx_number]->arg, NULL, 10);
|
2015-05-20 03:03:42 +02:00
|
|
|
if (!ospf_lookup_instance (instance))
|
|
|
|
return CMD_SUCCESS;
|
2014-06-04 06:53:35 +02:00
|
|
|
|
2015-05-20 03:03:42 +02:00
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
CONF_DEBUG_ON (nssa, NSSA);
|
|
|
|
TERM_DEBUG_ON (nssa, NSSA);
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
DEFUN (no_debug_ospf_instance_nssa,
|
|
|
|
no_debug_ospf_instance_nssa_cmd,
|
2016-09-23 15:47:20 +02:00
|
|
|
"no debug ospf (1-65535) nssa",
|
2015-05-20 03:03:42 +02:00
|
|
|
NO_STR
|
2002-12-13 21:15:29 +01:00
|
|
|
DEBUG_STR
|
2015-05-20 03:03:42 +02:00
|
|
|
OSPF_STR
|
|
|
|
"Instance ID\n"
|
|
|
|
"OSPF nssa information\n")
|
|
|
|
{
|
2016-09-23 22:01:26 +02:00
|
|
|
int idx_number = 3;
|
2015-05-20 03:03:42 +02:00
|
|
|
u_short instance = 0;
|
|
|
|
|
2017-06-27 20:47:03 +02:00
|
|
|
instance = strtoul(argv[idx_number]->arg, NULL, 10);
|
2015-05-20 03:03:42 +02:00
|
|
|
if (!ospf_lookup_instance (instance))
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
|
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
CONF_DEBUG_OFF (nssa, NSSA);
|
|
|
|
TERM_DEBUG_OFF (nssa, NSSA);
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
Update Traffic Engineering Support for OSPFD
NOTE: I am squashing several commits together because they
do not independently compile and we need this ability to
do any type of sane testing on the patches. Since this
series builds together I am doing this. -DBS
This new structure is the basis to get new link parameters for
Traffic Engineering from Zebra/interface layer to OSPFD and ISISD
for the support of Traffic Engineering
* lib/if.[c,h]: link parameters structure and get/set functions
* lib/command.[c,h]: creation of a new link-node
* lib/zclient.[c,h]: modification to the ZBUS message to convey the
link parameters structure
* lib/zebra.h: New ZBUS message
Signed-off-by: Olivier Dugeon <olivier.dugeon@orange.com>
Add support for IEEE 754 format
* lib/stream.[c,h]: Add stream_get{f,d} and stream_put{f,d} demuxers and muxers to
safely convert between big-endian IEEE-754 single and double binary
format, as used in IETF RFCs, and C99. Implementation depends on host
using __STDC_IEC_559__, which should be everything we care about. Should
correctly error out otherwise.
* lib/network.[c,h]: Add ntohf and htonf converter
* lib/memtypes.c: Add new memory type for Traffic Engineering support
Signed-off-by: Olivier Dugeon <olivier.dugeon@orange.com>
Add link parameters support to Zebra
* zebra/interface.c:
- Add new link-params CLI commands
- Add new functions to set/get link parameters for interface
* zebra/redistribute.[c,h]: Add new function to propagate link parameters
to routing daemon (essentially OSPFD and ISISD) for Traffic Engineering.
* zebra/redistribute_null.c: Add new function
zebra_interface_parameters_update()
* zebra/zserv.[c,h]: Add new functions to send link parameters
Signed-off-by: Olivier Dugeon <olivier.dugeon@orange.com>
Add support of new link-params CLI to vtysh
In vtysh_config.c/vtysh_config_parse_line(), it is not possible to continue
to use the ordered version for adding lines, i.e. config_add_line_uniq(), to print
Interface CLI commands, as it completely breaks the new LINK_PARAMS_NODE.
Signed-off-by: Olivier Dugeon <olivier.dugeon@orange.com>
Update Traffic Engineering support for OSPFD
These patches update original code to RFC3630 (OSPF-TE) and add support of
RFC5392 (Inter-AS v2) & RFC7471 (TE metric extensions) and partial support
of RFC6827 (ASON - GMPLS).
* ospfd/ospf_dump.[c,h]: Add new dump functions for Traffic Engineering
* ospfd/ospf_opaque.[c,h]: Add new TLV code points for RFC5392
* ospfd/ospf_packet.c: Update checking of OSPF_OPTION
* ospfd/ospf_vty.[c,h]: Update ospf_str2area_id
* ospfd/ospf_zebra.c: Add new function ospf_interface_link_params() to get
Link Parameters information from the interface to populate Traffic Engineering
metrics
* ospfd/ospfd.[c,h]: Update OSPF_OPTION flags (T -> MT and new DN)
* ospfd/ospf_te.[c,h]: Major modifications to update the code to new
link parameters structure and new RFCs
Signed-off-by: Olivier Dugeon <olivier.dugeon@orange.com>
tmp
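The lib/network.c converters themselves are not part of this file; as a rough
illustration of what such a host/network float converter can look like on a host with
IEEE-754 single-precision floats, one possible shape is shown below (the _sketch suffix
marks these as illustrative, not the real lib/network.h API):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Reinterpret the float's 32-bit pattern as an integer and byte-swap it so
   the bits travel in network (big-endian) order.  memcpy avoids
   strict-aliasing problems. */
static float
htonf_sketch (float host)
{
  uint32_t bits;
  memcpy (&bits, &host, sizeof (bits));
  bits = htonl (bits);
  memcpy (&host, &bits, sizeof (host));
  return host;
}

/* The reverse direction is the same operation with ntohl(). */
static float
ntohf_sketch (float net)
{
  uint32_t bits;
  memcpy (&bits, &net, sizeof (bits));
  bits = ntohl (bits);
  memcpy (&net, &bits, sizeof (net));
  return net;
}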
2016-04-19 16:21:46 +02:00
|
|
|
DEFUN (debug_ospf_te,
|
|
|
|
debug_ospf_te_cmd,
|
|
|
|
"debug ospf te",
|
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
|
|
|
"OSPF-TE information\n")
|
|
|
|
{
|
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
CONF_DEBUG_ON (te, TE);
|
|
|
|
TERM_DEBUG_ON (te, TE);
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
DEFUN (no_debug_ospf_te,
|
|
|
|
no_debug_ospf_te_cmd,
|
|
|
|
"no debug ospf te",
|
|
|
|
NO_STR
|
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
|
|
|
"OSPF-TE information\n")
|
|
|
|
{
|
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
CONF_DEBUG_OFF (te, TE);
|
|
|
|
TERM_DEBUG_OFF (te, TE);
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
2015-11-03 19:37:25 +01:00
|
|
|
DEFUN (no_debug_ospf,
|
|
|
|
no_debug_ospf_cmd,
|
|
|
|
"no debug ospf",
|
|
|
|
NO_STR
|
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR)
|
|
|
|
{
|
|
|
|
int flag = OSPF_DEBUG_SEND | OSPF_DEBUG_RECV | OSPF_DEBUG_DETAIL;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (vty->node == CONFIG_NODE)
|
|
|
|
{
|
|
|
|
CONF_DEBUG_OFF (event, EVENT);
|
|
|
|
CONF_DEBUG_OFF (nssa, NSSA);
|
|
|
|
DEBUG_OFF (ism, ISM_EVENTS);
|
|
|
|
DEBUG_OFF (ism, ISM_STATUS);
|
|
|
|
DEBUG_OFF (ism, ISM_TIMERS);
|
|
|
|
DEBUG_OFF (lsa, LSA);
|
|
|
|
DEBUG_OFF (lsa, LSA_FLOODING);
|
|
|
|
DEBUG_OFF (lsa, LSA_GENERATE);
|
|
|
|
DEBUG_OFF (lsa, LSA_INSTALL);
|
|
|
|
DEBUG_OFF (lsa, LSA_REFRESH);
|
|
|
|
DEBUG_OFF (nsm, NSM);
|
|
|
|
DEBUG_OFF (nsm, NSM_EVENTS);
|
|
|
|
DEBUG_OFF (nsm, NSM_STATUS);
|
|
|
|
DEBUG_OFF (nsm, NSM_TIMERS);
|
|
|
|
DEBUG_OFF (zebra, ZEBRA);
|
|
|
|
DEBUG_OFF (zebra, ZEBRA_INTERFACE);
|
|
|
|
DEBUG_OFF (zebra, ZEBRA_REDISTRIBUTE);
|
|
|
|
|
|
|
|
for (i = 0; i < 5; i++)
|
|
|
|
DEBUG_PACKET_OFF (i, flag);
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 0; i < 5; i++)
|
|
|
|
TERM_DEBUG_PACKET_OFF (i, flag);
|
|
|
|
|
|
|
|
TERM_DEBUG_OFF (event, EVENT);
|
|
|
|
TERM_DEBUG_OFF (ism, ISM);
|
|
|
|
TERM_DEBUG_OFF (ism, ISM_EVENTS);
|
|
|
|
TERM_DEBUG_OFF (ism, ISM_STATUS);
|
|
|
|
TERM_DEBUG_OFF (ism, ISM_TIMERS);
|
|
|
|
TERM_DEBUG_OFF (lsa, LSA);
|
|
|
|
TERM_DEBUG_OFF (lsa, LSA_FLOODING);
|
|
|
|
TERM_DEBUG_OFF (lsa, LSA_GENERATE);
|
|
|
|
TERM_DEBUG_OFF (lsa, LSA_INSTALL);
|
|
|
|
TERM_DEBUG_OFF (lsa, LSA_REFRESH);
|
|
|
|
TERM_DEBUG_OFF (nsm, NSM);
|
|
|
|
TERM_DEBUG_OFF (nsm, NSM_EVENTS);
|
|
|
|
TERM_DEBUG_OFF (nsm, NSM_STATUS);
|
|
|
|
TERM_DEBUG_OFF (nsm, NSM_TIMERS);
|
|
|
|
TERM_DEBUG_OFF (nssa, NSSA);
|
|
|
|
TERM_DEBUG_OFF (zebra, ZEBRA);
|
|
|
|
TERM_DEBUG_OFF (zebra, ZEBRA_INTERFACE);
|
|
|
|
TERM_DEBUG_OFF (zebra, ZEBRA_REDISTRIBUTE);
|
|
|
|
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
2015-05-20 03:03:42 +02:00
|
|
|
|
|
|
|
static int
|
|
|
|
show_debugging_ospf_common (struct vty *vty, struct ospf *ospf)
|
2002-12-13 21:15:29 +01:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
2015-05-20 03:03:42 +02:00
|
|
|
if (ospf->instance)
|
2017-07-13 19:42:42 +02:00
|
|
|
vty_out (vty, "\nOSPF Instance: %d\n\n", ospf->instance);
|
2015-05-20 03:03:42 +02:00
|
|
|
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, "OSPF debugging status:\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2005-03-31 17:18:21 +02:00
|
|
|
/* Show debug status for events. */
|
|
|
|
if (IS_DEBUG_OSPF(event,EVENT))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF event debugging is on\n");
|
2005-03-31 17:18:21 +02:00
|
|
|
|
2002-12-13 21:15:29 +01:00
|
|
|
/* Show debug status for ISM. */
|
|
|
|
if (IS_DEBUG_OSPF (ism, ISM) == OSPF_DEBUG_ISM)
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF ISM debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
else
|
|
|
|
{
|
|
|
|
if (IS_DEBUG_OSPF (ism, ISM_STATUS))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF ISM status debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
if (IS_DEBUG_OSPF (ism, ISM_EVENTS))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF ISM event debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
if (IS_DEBUG_OSPF (ism, ISM_TIMERS))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF ISM timer debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Show debug status for NSM. */
|
|
|
|
if (IS_DEBUG_OSPF (nsm, NSM) == OSPF_DEBUG_NSM)
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF NSM debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
else
|
|
|
|
{
|
|
|
|
if (IS_DEBUG_OSPF (nsm, NSM_STATUS))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF NSM status debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
if (IS_DEBUG_OSPF (nsm, NSM_EVENTS))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF NSM event debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
if (IS_DEBUG_OSPF (nsm, NSM_TIMERS))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF NSM timer debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Show debug status for OSPF Packets. */
|
|
|
|
for (i = 0; i < 5; i++)
|
|
|
|
if (IS_DEBUG_OSPF_PACKET (i, SEND) && IS_DEBUG_OSPF_PACKET (i, RECV))
|
|
|
|
{
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF packet %s%s debugging is on\n",
|
2017-06-21 05:10:57 +02:00
|
|
|
lookup_msg(ospf_packet_type_str, i + 1, NULL),
|
|
|
|
IS_DEBUG_OSPF_PACKET (i, DETAIL) ? " detail" : "");
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
if (IS_DEBUG_OSPF_PACKET (i, SEND))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF packet %s send%s debugging is on\n",
|
2017-06-21 05:10:57 +02:00
|
|
|
lookup_msg(ospf_packet_type_str, i + 1, NULL),
|
|
|
|
IS_DEBUG_OSPF_PACKET (i, DETAIL) ? " detail" : "");
|
2002-12-13 21:15:29 +01:00
|
|
|
if (IS_DEBUG_OSPF_PACKET (i, RECV))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF packet %s receive%s debugging is on\n",
|
2017-06-21 05:10:57 +02:00
|
|
|
lookup_msg(ospf_packet_type_str, i + 1, NULL),
|
|
|
|
IS_DEBUG_OSPF_PACKET (i, DETAIL) ? " detail" : "");
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Show debug status for OSPF LSAs. */
|
|
|
|
if (IS_DEBUG_OSPF (lsa, LSA) == OSPF_DEBUG_LSA)
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF LSA debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
else
|
|
|
|
{
|
|
|
|
if (IS_DEBUG_OSPF (lsa, LSA_GENERATE))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF LSA generation debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
if (IS_DEBUG_OSPF (lsa, LSA_FLOODING))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF LSA flooding debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
if (IS_DEBUG_OSPF (lsa, LSA_INSTALL))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF LSA install debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
if (IS_DEBUG_OSPF (lsa, LSA_REFRESH))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF LSA refresh debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Show debug status for Zebra. */
|
|
|
|
if (IS_DEBUG_OSPF (zebra, ZEBRA) == OSPF_DEBUG_ZEBRA)
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF Zebra debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
else
|
|
|
|
{
|
|
|
|
if (IS_DEBUG_OSPF (zebra, ZEBRA_INTERFACE))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF Zebra interface debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
if (IS_DEBUG_OSPF (zebra, ZEBRA_REDISTRIBUTE))
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF Zebra redistribute debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
}
|
2005-03-31 17:18:21 +02:00
|
|
|
|
|
|
|
/* Show debug status for NSSA. */
|
2003-04-07 19:12:12 +02:00
|
|
|
if (IS_DEBUG_OSPF (nssa, NSSA) == OSPF_DEBUG_NSSA)
|
2017-07-13 17:49:13 +02:00
|
|
|
vty_out (vty, " OSPF NSSA debugging is on\n");
|
2002-12-13 21:15:29 +01:00
|
|
|
|
2017-07-13 19:04:25 +02:00
|
|
|
vty_out (vty, "\n");
|
2015-05-20 03:03:42 +02:00
|
|
|
|
2002-12-13 21:15:29 +01:00
|
|
|
return CMD_SUCCESS;
|
|
|
|
}
|
|
|
|
|
2015-05-20 03:03:42 +02:00
|
|
|
DEFUN (show_debugging_ospf,
|
|
|
|
show_debugging_ospf_cmd,
|
|
|
|
"show debugging ospf",
|
|
|
|
SHOW_STR
|
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR)
|
|
|
|
{
|
|
|
|
struct ospf *ospf;
|
|
|
|
|
|
|
|
if ((ospf = ospf_lookup()) == NULL)
|
|
|
|
return CMD_SUCCESS;
|
|
|
|
|
|
|
|
return show_debugging_ospf_common(vty, ospf);
|
|
|
|
}
|
|
|
|
|
|
|
|
DEFUN (show_debugging_ospf_instance,
|
|
|
|
show_debugging_ospf_instance_cmd,
|
2016-09-23 15:47:20 +02:00
|
|
|
"show debugging ospf (1-65535)",
|
2015-05-20 03:03:42 +02:00
|
|
|
SHOW_STR
|
|
|
|
DEBUG_STR
|
|
|
|
OSPF_STR
|
|
|
|
"Instance ID\n")
|
|
|
|
{
|
2016-09-23 22:01:26 +02:00
|
|
|
int idx_number = 3;
|
Multi-Instance OSPF Summary
——————————————-------------
- etc/init.d/quagga is modified to support creating separate ospf daemon
process for each instance. Each individual instance is monitored by
watchquagga just like any protocol daemons.(requires initd-mi.patch).
- Vtysh is modified to able to connect to multiple daemons of the same
protocol (supported for OSPF only for now).
- ospfd is modified to remember the Instance-ID that its invoked with. For
the entire life of the process it caters to any command request that
matches that instance-ID (unless its a non instance specific command).
Routes/messages to zebra are tagged with instance-ID.
- zebra route/redistribute mechanisms are modified to work with
[protocol type + instance-id]
- bgpd now has ability to have multiple instance specific redistribution
for a protocol (OSPF only supported/tested for now).
- zlog ability to display instance-id besides the protocol/daemon name.
- Changes in other daemons are to because of the needed integration with
some of the modified APIs/routines. (Didn’t prefer replicating too many
separate instance specific APIs.)
- config/show/debug commands are modified to take instance-id argument
as appropriate.
Guidelines to start using multi-instance ospf
---------------------------------------------
The patch is backward compatible, i.e for any previous way of single ospf
deamon(router ospf <cr>) will continue to work as is, including all the
show commands etc.
To enable multiple instances, do the following:
1. service quagga stop
2. Modify /etc/quagga/daemons to add instance-ids of each desired
instance in the following format:
ospfd=“yes"
ospfd_instances="1,2,3"
assuming you want to enable 3 instances with those instance ids.
3. Create corresponding ospfd config files as ospfd-1.conf, ospfd-2.conf
and ospfd-3.conf.
4. service quagga start/restart
5. Verify that the deamons are started as expected. You should see
ospfd started with -n <instance-id> option.
ps –ef | grep quagga
With that /var/run/quagga/ should have ospfd-<instance-id>.pid and
ospfd-<instance-id>/vty to each instance.
6. vtysh to work with instances as you would with any other deamons.
7. Overall most quagga semantics are the same working with the instance
deamon, like it is for any other daemon.
NOTE:
To safeguard against errors leading to too many processes getting invoked,
a hard limit on number of instance-ids is in place, currently its 5.
Allowed instance-id range is <1-65535>
Once daemons are up, show running from vtysh should show the instance-id
of each daemon as 'router ospf <instance-id>’ (without needing explicit
configuration)
Instance-id can not be changed via vtysh, other router ospf configuration
is allowed as before.
Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com>
Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com>
Reviewed-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
2015-05-20 03:03:42 +02:00
|
|
|
struct ospf *ospf;
|
|
|
|
u_short instance = 0;
  instance = strtoul(argv[idx_number]->arg, NULL, 10);
  if ((ospf = ospf_lookup_instance (instance)) == NULL)
    return CMD_SUCCESS;

  return show_debugging_ospf_common(vty, ospf);
}
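From the vtysh shell these two handlers correspond to commands of the form
(the instance number below is just an example):

  show debugging ospf
  show debugging ospf 2

where the second form queries the ospfd process that was started with the
matching -n <instance-id> option.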
/* Debug node. */
static struct cmd_node debug_node =
{
  DEBUG_NODE,
  "",
  1 /* VTYSH */
};

static int
config_write_debug (struct vty *vty)
{
  int write = 0;
  int i, r;

  const char *type_str[] = {"hello", "dd", "ls-request", "ls-update", "ls-ack"};
  const char *detail_str[] = {"", " send", " recv", "", " detail",
                              " send detail", " recv detail", " detail"};
  struct ospf *ospf;
  char str[16];
  memset (str, 0, 16);
  if ((ospf = ospf_lookup()) == NULL)
    return CMD_SUCCESS;

  if (ospf->instance)
    sprintf(str, " %d", ospf->instance);

  /* debug ospf ism (status|events|timers). */
  if (IS_CONF_DEBUG_OSPF (ism, ISM) == OSPF_DEBUG_ISM)
    vty_out (vty, "debug ospf%s ism\n", str);
  else
    {
      if (IS_CONF_DEBUG_OSPF (ism, ISM_STATUS))
        vty_out (vty, "debug ospf%s ism status\n", str);
      if (IS_CONF_DEBUG_OSPF (ism, ISM_EVENTS))
        vty_out (vty, "debug ospf%s ism event\n", str);
      if (IS_CONF_DEBUG_OSPF (ism, ISM_TIMERS))
        vty_out (vty, "debug ospf%s ism timer\n", str);
    }

  /* debug ospf nsm (status|events|timers). */
  if (IS_CONF_DEBUG_OSPF (nsm, NSM) == OSPF_DEBUG_NSM)
    vty_out (vty, "debug ospf%s nsm\n", str);
  else
    {
      if (IS_CONF_DEBUG_OSPF (nsm, NSM_STATUS))
        vty_out (vty, "debug ospf%s nsm status\n", str);
      if (IS_CONF_DEBUG_OSPF (nsm, NSM_EVENTS))
        vty_out (vty, "debug ospf%s nsm event\n", str);
      if (IS_CONF_DEBUG_OSPF (nsm, NSM_TIMERS))
        vty_out (vty, "debug ospf%s nsm timer\n", str);
    }

  /* debug ospf lsa (generate|flooding|install|refresh). */
  if (IS_CONF_DEBUG_OSPF (lsa, LSA) == OSPF_DEBUG_LSA)
    vty_out (vty, "debug ospf%s lsa\n", str);
  else
    {
      if (IS_CONF_DEBUG_OSPF (lsa, LSA_GENERATE))
        vty_out (vty, "debug ospf%s lsa generate\n", str);
      if (IS_CONF_DEBUG_OSPF (lsa, LSA_FLOODING))
        vty_out (vty, "debug ospf%s lsa flooding\n", str);
      if (IS_CONF_DEBUG_OSPF (lsa, LSA_INSTALL))
        vty_out (vty, "debug ospf%s lsa install\n", str);
      if (IS_CONF_DEBUG_OSPF (lsa, LSA_REFRESH))
        vty_out (vty, "debug ospf%s lsa refresh\n", str);

      write = 1;
    }

  /* debug ospf zebra (interface|redistribute). */
  if (IS_CONF_DEBUG_OSPF (zebra, ZEBRA) == OSPF_DEBUG_ZEBRA)
    vty_out (vty, "debug ospf%s zebra\n", str);
  else
    {
      if (IS_CONF_DEBUG_OSPF (zebra, ZEBRA_INTERFACE))
        vty_out (vty, "debug ospf%s zebra interface\n", str);
      if (IS_CONF_DEBUG_OSPF (zebra, ZEBRA_REDISTRIBUTE))
        vty_out (vty, "debug ospf%s zebra redistribute\n", str);

      write = 1;
    }

  /* debug ospf event. */
  if (IS_CONF_DEBUG_OSPF (event, EVENT) == OSPF_DEBUG_EVENT)
    {
      vty_out (vty, "debug ospf%s event\n", str);
      write = 1;
    }

  /* debug ospf nssa. */
  if (IS_CONF_DEBUG_OSPF (nssa, NSSA) == OSPF_DEBUG_NSSA)
    {
      vty_out (vty, "debug ospf%s nssa\n", str);
      write = 1;
    }

  /* debug ospf packet all detail. */
  r = OSPF_DEBUG_SEND_RECV|OSPF_DEBUG_DETAIL;
  for (i = 0; i < 5; i++)
    r &= conf_debug_ospf_packet[i] & (OSPF_DEBUG_SEND_RECV|OSPF_DEBUG_DETAIL);
  if (r == (OSPF_DEBUG_SEND_RECV|OSPF_DEBUG_DETAIL))
    {
      vty_out (vty, "debug ospf%s packet all detail\n", str);
      return 1;
    }

  /* debug ospf packet all. */
  r = OSPF_DEBUG_SEND_RECV;
  for (i = 0; i < 5; i++)
    r &= conf_debug_ospf_packet[i] & OSPF_DEBUG_SEND_RECV;
  if (r == OSPF_DEBUG_SEND_RECV)
    {
      vty_out (vty, "debug ospf%s packet all\n", str);
      for (i = 0; i < 5; i++)
        if (conf_debug_ospf_packet[i] & OSPF_DEBUG_DETAIL)
          vty_out (vty, "debug ospf%s packet %s detail\n", str,
                   type_str[i]);
      return 1;
    }

  /* debug ospf packet (hello|dd|ls-request|ls-update|ls-ack)
     (send|recv) (detail). */
  for (i = 0; i < 5; i++)
    {
      if (conf_debug_ospf_packet[i] == 0)
        continue;

      vty_out (vty, "debug ospf%s packet %s%s\n", str,
               type_str[i], detail_str[conf_debug_ospf_packet[i]]);
      write = 1;
    }

  return write;
}
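For illustration, an ospfd running as instance 1 with a typical mix of flags
set would have config_write_debug() emit lines such as the following (derived
from the format strings above; the exact set depends on which flags are
enabled):

  debug ospf 1 ism status
  debug ospf 1 nsm
  debug ospf 1 lsa flooding
  debug ospf 1 packet hello send detail

For a daemon without an instance number, str stays empty and the same lines
appear as plain "debug ospf ..." commands.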
/* Initialize debug commands. */
void
debug_init ()
{
  install_node (&debug_node, config_write_debug);

  install_element (ENABLE_NODE, &show_debugging_ospf_cmd);
  install_element (ENABLE_NODE, &debug_ospf_ism_cmd);
  install_element (ENABLE_NODE, &debug_ospf_nsm_cmd);
  install_element (ENABLE_NODE, &debug_ospf_lsa_cmd);
  install_element (ENABLE_NODE, &debug_ospf_zebra_cmd);
  install_element (ENABLE_NODE, &debug_ospf_event_cmd);
  install_element (ENABLE_NODE, &debug_ospf_nssa_cmd);
  install_element (ENABLE_NODE, &debug_ospf_te_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_ism_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_nsm_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_lsa_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_zebra_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_event_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_nssa_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_te_cmd);
  install_element (ENABLE_NODE, &show_debugging_ospf_instance_cmd);
  install_element (ENABLE_NODE, &debug_ospf_packet_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_packet_cmd);
  install_element (ENABLE_NODE, &debug_ospf_instance_nsm_cmd);
  install_element (ENABLE_NODE, &debug_ospf_instance_lsa_cmd);
  install_element (ENABLE_NODE, &debug_ospf_instance_zebra_cmd);
  install_element (ENABLE_NODE, &debug_ospf_instance_event_cmd);
  install_element (ENABLE_NODE, &debug_ospf_instance_nssa_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_instance_nsm_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_instance_lsa_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_instance_zebra_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_instance_event_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_instance_nssa_cmd);
  install_element (ENABLE_NODE, &no_debug_ospf_cmd);

  install_element (CONFIG_NODE, &debug_ospf_packet_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_packet_cmd);
  install_element (CONFIG_NODE, &debug_ospf_ism_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_ism_cmd);
  install_element (CONFIG_NODE, &debug_ospf_nsm_cmd);
  install_element (CONFIG_NODE, &debug_ospf_lsa_cmd);
  install_element (CONFIG_NODE, &debug_ospf_zebra_cmd);
  install_element (CONFIG_NODE, &debug_ospf_event_cmd);
  install_element (CONFIG_NODE, &debug_ospf_nssa_cmd);
  install_element (CONFIG_NODE, &debug_ospf_te_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_nsm_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_lsa_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_zebra_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_event_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_nssa_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_te_cmd);
  install_element (CONFIG_NODE, &debug_ospf_instance_nsm_cmd);
  install_element (CONFIG_NODE, &debug_ospf_instance_lsa_cmd);
  install_element (CONFIG_NODE, &debug_ospf_instance_zebra_cmd);
  install_element (CONFIG_NODE, &debug_ospf_instance_event_cmd);
  install_element (CONFIG_NODE, &debug_ospf_instance_nssa_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_instance_nsm_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_instance_lsa_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_instance_zebra_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_instance_event_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_instance_nssa_cmd);
  install_element (CONFIG_NODE, &no_debug_ospf_cmd);
}
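A note on the registrations above: the debug and no-debug command structs are
installed in both ENABLE_NODE and CONFIG_NODE (the show commands only in
ENABLE_NODE), so the same flags can be toggled interactively or persisted in
the configuration. A hypothetical vtysh session, using only command forms that
also appear in config_write_debug() above:

  debug ospf ism status
  debug ospf 2 lsa generate
  no debug ospf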