frr/ospfd/ospf_dump.c

/*
* OSPFd dump routine.
* Copyright (C) 1999, 2000 Toshiaki Takada
*
* This file is part of GNU Zebra.
*
* GNU Zebra is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2, or (at your option) any
* later version.
*
* GNU Zebra is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; see the file COPYING; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <zebra.h>
#include "monotime.h"
#include "linklist.h"
#include "thread.h"
#include "prefix.h"
#include "command.h"
#include "stream.h"
#include "log.h"
#include "sockopt.h"
#include "ospfd/ospfd.h"
#include "ospfd/ospf_interface.h"
#include "ospfd/ospf_ism.h"
#include "ospfd/ospf_asbr.h"
#include "ospfd/ospf_lsa.h"
#include "ospfd/ospf_lsdb.h"
#include "ospfd/ospf_neighbor.h"
#include "ospfd/ospf_nsm.h"
#include "ospfd/ospf_dump.h"
#include "ospfd/ospf_packet.h"
#include "ospfd/ospf_network.h"
/* Configuration debug option variables. */
unsigned long conf_debug_ospf_packet[5] = {0, 0, 0, 0, 0};
unsigned long conf_debug_ospf_event = 0;
unsigned long conf_debug_ospf_ism = 0;
unsigned long conf_debug_ospf_nsm = 0;
unsigned long conf_debug_ospf_lsa = 0;
unsigned long conf_debug_ospf_zebra = 0;
unsigned long conf_debug_ospf_nssa = 0;
unsigned long conf_debug_ospf_te = 0;
unsigned long conf_debug_ospf_ext = 0;
unsigned long conf_debug_ospf_sr = 0;
unsigned long conf_debug_ospf_defaultinfo = 0;
unsigned long conf_debug_ospf_ldp_sync = 0;
unsigned long conf_debug_ospf_gr = 0;

/* Enabled debug option variables -- valid only for the current session. */
unsigned long term_debug_ospf_packet[5] = {0, 0, 0, 0, 0};
unsigned long term_debug_ospf_event;
unsigned long term_debug_ospf_ism = 0;
unsigned long term_debug_ospf_nsm = 0;
unsigned long term_debug_ospf_lsa = 0;
unsigned long term_debug_ospf_zebra = 0;
unsigned long term_debug_ospf_nssa = 0;
unsigned long term_debug_ospf_te = 0;
unsigned long term_debug_ospf_ext = 0;
unsigned long term_debug_ospf_sr = 0;
unsigned long term_debug_ospf_defaultinfo;
unsigned long term_debug_ospf_ldp_sync;
unsigned long term_debug_ospf_gr = 0;
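
/*
 * Return a printable name for a redistributed route type.  ZEBRA_ROUTE_MAX
 * is treated as the default route ("Default"); everything else is mapped
 * through zebra_route_string().
 */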
const char *ospf_redist_string(unsigned int route_type)
{
return (route_type == ZEBRA_ROUTE_MAX) ? "Default"
: zebra_route_string(route_type);
}
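
/*
 * Format an OSPF area ID in dotted-decimal form (e.g. "0.0.0.1").  The
 * result points to a static buffer, so it must be consumed before the next
 * call; the function is not reentrant.
 */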
#define OSPF_AREA_STRING_MAXLEN 16
const char *ospf_area_name_string(struct ospf_area *area)
{
static char buf[OSPF_AREA_STRING_MAXLEN] = "";
uint32_t area_id;
if (!area)
return "-";
area_id = ntohl(area->area_id.s_addr);
snprintf(buf, sizeof(buf), "%d.%d.%d.%d", (area_id >> 24) & 0xff,
(area_id >> 16) & 0xff, (area_id >> 8) & 0xff, area_id & 0xff);
return buf;
}
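
/*
 * Like ospf_area_name_string(), but appends "[NSSA]" or "[Stub]" according
 * to the area's external routing type; regular areas get the plain area
 * name.  Also uses a static buffer.
 */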
#define OSPF_AREA_DESC_STRING_MAXLEN 23
const char *ospf_area_desc_string(struct ospf_area *area)
{
static char buf[OSPF_AREA_DESC_STRING_MAXLEN] = "";
uint8_t type;
if (!area)
return "(incomplete)";
type = area->external_routing;
switch (type) {
case OSPF_AREA_NSSA:
snprintf(buf, sizeof(buf), "%s [NSSA]",
ospf_area_name_string(area));
break;
case OSPF_AREA_STUB:
snprintf(buf, sizeof(buf), "%s [Stub]",
ospf_area_name_string(area));
break;
default:
return ospf_area_name_string(area);
}
return buf;
}
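
/*
 * Build "ifname:a.b.c.d" for an OSPF interface, or just the interface name
 * for virtual links.  Returns "inactive" when the interface or its address
 * is not set.  The result lives in a static buffer.
 */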
#define OSPF_IF_STRING_MAXLEN 40
const char *ospf_if_name_string(struct ospf_interface *oi)
{
static char buf[OSPF_IF_STRING_MAXLEN] = "";
uint32_t ifaddr;
if (!oi || !oi->address)
return "inactive";
if (oi->type == OSPF_IFTYPE_VIRTUALLINK)
return oi->ifp->name;
ifaddr = ntohl(oi->address->u.prefix4.s_addr);
snprintf(buf, sizeof(buf), "%s:%d.%d.%d.%d", oi->ifp->name,
(ifaddr >> 24) & 0xff, (ifaddr >> 16) & 0xff,
(ifaddr >> 8) & 0xff, ifaddr & 0xff);
return buf;
}
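
/*
 * Write "<NSM state>/<ISM role>" (e.g. "Full/DR") for a neighbor into buf.
 * The role is derived by comparing the neighbor's address against the
 * interface's DR and BDR.
 */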
void ospf_nbr_state_message(struct ospf_neighbor *nbr, char *buf, size_t size)
{
int state;
struct ospf_interface *oi = nbr->oi;
if (IPV4_ADDR_SAME(&DR(oi), &nbr->address.u.prefix4))
state = ISM_DR;
else if (IPV4_ADDR_SAME(&BDR(oi), &nbr->address.u.prefix4))
state = ISM_Backup;
else
state = ISM_DROther;
snprintf(buf, size, "%s/%s",
lookup_msg(ospf_nsm_state_msg, nbr->state, NULL),
lookup_msg(ospf_ism_state_msg, state, NULL));
}
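
/*
 * Render a relative timeval into buf, sliding the precision with the
 * magnitude of the interval: large values show weeks/days/hours, smaller
 * ones minutes/seconds, and the smallest milliseconds or raw microseconds.
 * Note that *t is modified in the process, so callers should pass a
 * scratch copy.
 *
 * A minimal usage sketch (the variable names are illustrative only):
 *
 *	char timebuf[16];
 *	struct timeval remain;   // scratch copy of the interval to print
 *
 *	vty_out(vty, "Timer due in %s", ospf_timeval_dump(&remain, timebuf,
 *							  sizeof(timebuf)));
 */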
const char *ospf_timeval_dump(struct timeval *t, char *buf, size_t size)
2002-12-13 21:15:29 +01:00
{
/* Making formatted timer strings. */
#define MINUTE_IN_SECONDS 60
#define HOUR_IN_SECONDS (60*MINUTE_IN_SECONDS)
#define DAY_IN_SECONDS (24*HOUR_IN_SECONDS)
#define WEEK_IN_SECONDS (7*DAY_IN_SECONDS)
unsigned long w, d, h, m, ms, us;
if (!t)
return "inactive";
w = d = h = m = ms = 0;
memset(buf, 0, size);
us = t->tv_usec;
if (us >= 1000) {
ms = us / 1000;
us %= 1000;
(void)us; /* unused */
}
if (ms >= 1000) {
t->tv_sec += ms / 1000;
ms %= 1000;
}
if (t->tv_sec > WEEK_IN_SECONDS) {
w = t->tv_sec / WEEK_IN_SECONDS;
t->tv_sec -= w * WEEK_IN_SECONDS;
}
if (t->tv_sec > DAY_IN_SECONDS) {
d = t->tv_sec / DAY_IN_SECONDS;
t->tv_sec -= d * DAY_IN_SECONDS;
}
if (t->tv_sec >= HOUR_IN_SECONDS) {
h = t->tv_sec / HOUR_IN_SECONDS;
t->tv_sec -= h * HOUR_IN_SECONDS;
}
if (t->tv_sec >= MINUTE_IN_SECONDS) {
m = t->tv_sec / MINUTE_IN_SECONDS;
t->tv_sec -= m * MINUTE_IN_SECONDS;
}
if (w > 99)
snprintf(buf, size, "%luw%1lud", w, d);
else if (w)
snprintf(buf, size, "%luw%1lud%02luh", w, d, h);
else if (d)
snprintf(buf, size, "%1lud%02luh%02lum", d, h, m);
else if (h)
snprintf(buf, size, "%luh%02lum%02lds", h, m, (long)t->tv_sec);
else if (m)
snprintf(buf, size, "%lum%02lds", m, (long)t->tv_sec);
else if (ms)
snprintf(buf, size, "%ld.%03lus", (long)t->tv_sec, ms);
else
snprintf(buf, size, "%ld usecs", (long)t->tv_usec);
return buf;
}
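/* Dump the time remaining until a thread timer fires, using the
 * sliding-precision format above (e.g. "1d02h03m", "5m42s" or "0.050s").
 * Returns "inactive" when no timer is pending.
 */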
const char *ospf_timer_dump(struct thread *t, char *buf, size_t size)
{
struct timeval result;
if (!t)
return "inactive";
monotime_until(&t->u.sands, &result);
return ospf_timeval_dump(&result, buf, size);
}
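/* Dump the body of an OSPF Hello packet: network mask, timers, options,
 * router priority, DR/BDR and the list of neighbors seen.
 */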
static void ospf_packet_hello_dump(struct stream *s, uint16_t length)
{
struct ospf_hello *hello;
int i;
hello = (struct ospf_hello *)stream_pnt(s);
zlog_debug("Hello");
zlog_debug(" NetworkMask %s", inet_ntoa(hello->network_mask));
zlog_debug(" HelloInterval %d", ntohs(hello->hello_interval));
zlog_debug(" Options %d (%s)", hello->options,
ospf_options_dump(hello->options));
zlog_debug(" RtrPriority %d", hello->priority);
zlog_debug(" RtrDeadInterval %ld",
(unsigned long)ntohl(hello->dead_interval));
zlog_debug(" DRouter %s", inet_ntoa(hello->d_router));
zlog_debug(" BDRouter %s", inet_ntoa(hello->bd_router));
length -= OSPF_HEADER_SIZE + OSPF_HELLO_MIN_SIZE;
zlog_debug(" # Neighbors %d", length / 4);
for (i = 0; length > 0; i++, length -= sizeof(struct in_addr))
zlog_debug(" Neighbor %s", inet_ntoa(hello->neighbors[i]));
}
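/* Render the Database Description I/M/MS flag bits as an "I|M|MS" string. */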
static char *ospf_dd_flags_dump(uint8_t flags, char *buf, size_t size)
{
snprintf(buf, size, "%s|%s|%s", (flags & OSPF_DD_FLAG_I) ? "I" : "-",
(flags & OSPF_DD_FLAG_M) ? "M" : "-",
(flags & OSPF_DD_FLAG_MS) ? "MS" : "-");
return buf;
}
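/* Render the Router-LSA V/E/B bits as a "V|E|B" string. */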
static char *ospf_router_lsa_flags_dump(uint8_t flags, char *buf, size_t size)
{
snprintf(buf, size, "%s|%s|%s",
(flags & ROUTER_LSA_VIRTUAL) ? "V" : "-",
(flags & ROUTER_LSA_EXTERNAL) ? "E" : "-",
(flags & ROUTER_LSA_BORDER) ? "B" : "-");
return buf;
}
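/* Dump a Router-LSA body: flags, link count and each link's ID, data,
 * type, TOS and metric.
 */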
static void ospf_router_lsa_dump(struct stream *s, uint16_t length)
{
char buf[BUFSIZ];
struct router_lsa *rl;
int i, len;
rl = (struct router_lsa *)stream_pnt(s);
zlog_debug(" Router-LSA");
zlog_debug(" flags %s",
ospf_router_lsa_flags_dump(rl->flags, buf, BUFSIZ));
zlog_debug(" # links %d", ntohs(rl->links));
len = ntohs(rl->header.length) - OSPF_LSA_HEADER_SIZE - 4;
for (i = 0; len > 0; i++) {
zlog_debug(" Link ID %s", inet_ntoa(rl->link[i].link_id));
zlog_debug(" Link Data %s",
inet_ntoa(rl->link[i].link_data));
zlog_debug(" Type %d", (uint8_t)rl->link[i].type);
zlog_debug(" TOS %d", (uint8_t)rl->link[i].tos);
zlog_debug(" metric %d", ntohs(rl->link[i].metric));
len -= 12;
}
}
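/* Dump a Network-LSA body: the network mask and the attached routers. */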
static void ospf_network_lsa_dump(struct stream *s, uint16_t length)
{
struct network_lsa *nl;
int i, cnt;
nl = (struct network_lsa *)stream_pnt(s);
cnt = (ntohs(nl->header.length) - (OSPF_LSA_HEADER_SIZE + 4)) / 4;
zlog_debug(" Network-LSA");
/*
zlog_debug ("LSA total size %d", ntohs (nl->header.length));
zlog_debug ("Network-LSA size %d",
ntohs (nl->header.length) - OSPF_LSA_HEADER_SIZE);
*/
zlog_debug(" Network Mask %s", inet_ntoa(nl->mask));
zlog_debug(" # Attached Routers %d", cnt);
for (i = 0; i < cnt; i++)
zlog_debug(" Attached Router %s",
inet_ntoa(nl->routers[i]));
}
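/* Dump a Summary-LSA body: the network mask and the per-TOS metrics. */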
static void ospf_summary_lsa_dump(struct stream *s, uint16_t length)
{
struct summary_lsa *sl;
int size;
int i;
sl = (struct summary_lsa *)stream_pnt(s);
zlog_debug(" Summary-LSA");
zlog_debug(" Network Mask %s", inet_ntoa(sl->mask));
size = ntohs(sl->header.length) - OSPF_LSA_HEADER_SIZE - 4;
for (i = 0; size > 0; size -= 4, i++)
zlog_debug(" TOS=%d metric %d", sl->tos,
GET_METRIC(sl->metric));
}
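/* Dump an AS-External (or NSSA) LSA body: mask, metric type bit, per-TOS
 * metric, forwarding address and external route tag.
 */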
static void ospf_as_external_lsa_dump(struct stream *s, uint16_t length)
{
struct as_external_lsa *al;
int size;
int i;
al = (struct as_external_lsa *)stream_pnt(s);
zlog_debug(" %s", ospf_lsa_type_msg[al->header.type].str);
zlog_debug(" Network Mask %s", inet_ntoa(al->mask));
size = ntohs(al->header.length) - OSPF_LSA_HEADER_SIZE - 4;
for (i = 0; size > 0; size -= 12, i++) {
zlog_debug(" bit %s TOS=%d metric %d",
IS_EXTERNAL_METRIC(al->e[i].tos) ? "E" : "-",
al->e[i].tos & 0x7f, GET_METRIC(al->e[i].metric));
zlog_debug(" Forwarding address %s",
inet_ntoa(al->e[i].fwd_addr));
zlog_debug(" External Route Tag %" ROUTE_TAG_PRI,
al->e[i].route_tag);
}
}
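/* Dump a run of LSA headers from the stream, advancing the get pointer
 * over each header as it is printed.
 */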
static void ospf_lsa_header_list_dump(struct stream *s, uint16_t length)
{
struct lsa_header *lsa;
zlog_debug(" # LSA Headers %d", length / OSPF_LSA_HEADER_SIZE);
/* LSA Headers. */
while (length > 0) {
lsa = (struct lsa_header *)stream_pnt(s);
ospf_lsa_header_dump(lsa);
stream_forward_getp(s, OSPF_LSA_HEADER_SIZE);
length -= OSPF_LSA_HEADER_SIZE;
}
}
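/* Dump a Database Description packet: MTU, options, DD flags, sequence
 * number and the LSA headers it carries.  The stream get pointer is
 * saved and restored so the caller is unaffected.
 */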
static void ospf_packet_db_desc_dump(struct stream *s, uint16_t length)
{
struct ospf_db_desc *dd;
char dd_flags[8];
uint32_t gp;
gp = stream_get_getp(s);
dd = (struct ospf_db_desc *)stream_pnt(s);
zlog_debug("Database Description");
zlog_debug(" Interface MTU %d", ntohs(dd->mtu));
zlog_debug(" Options %d (%s)", dd->options,
ospf_options_dump(dd->options));
zlog_debug(" Flags %d (%s)", dd->flags,
ospf_dd_flags_dump(dd->flags, dd_flags, sizeof(dd_flags)));
zlog_debug(" Sequence Number 0x%08lx",
(unsigned long)ntohl(dd->dd_seqnum));
length -= OSPF_HEADER_SIZE + OSPF_DB_DESC_MIN_SIZE;
stream_forward_getp(s, OSPF_DB_DESC_MIN_SIZE);
ospf_lsa_header_list_dump(s, length);
stream_set_getp(s, gp);
}
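/* Dump a Link State Request packet: one (LS type, Link State ID,
 * Advertising Router) triple per requested LSA.
 */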
static void ospf_packet_ls_req_dump(struct stream *s, uint16_t length)
{
uint32_t sp;
uint32_t ls_type;
struct in_addr ls_id;
struct in_addr adv_router;
sp = stream_get_getp(s);
length -= OSPF_HEADER_SIZE;
zlog_debug("Link State Request");
zlog_debug(" # Requests %d", length / 12);
for (; length > 0; length -= 12) {
ls_type = stream_getl(s);
ls_id.s_addr = stream_get_ipv4(s);
adv_router.s_addr = stream_get_ipv4(s);
zlog_debug(" LS type %d", ls_type);
zlog_debug(" Link State ID %s", inet_ntoa(ls_id));
zlog_debug(" Advertising Router %s", inet_ntoa(adv_router));
}
stream_set_getp(s, sp);
}
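/* Dump a Link State Update packet: the LSA count, then each LSA header
 * and its type-specific body.  Bails out early if the remaining length
 * looks inconsistent.
 */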
static void ospf_packet_ls_upd_dump(struct stream *s, uint16_t length)
{
uint32_t sp;
struct lsa_header *lsa;
int lsa_len;
uint32_t count;
length -= OSPF_HEADER_SIZE;
sp = stream_get_getp(s);
count = stream_getl(s);
length -= 4;
zlog_debug("Link State Update");
zlog_debug(" # LSAs %d", count);
while (length > 0 && count > 0) {
if (length < OSPF_HEADER_SIZE || length % 4 != 0) {
zlog_debug(" Remaining %d bytes; Incorrect length.",
length);
break;
}
lsa = (struct lsa_header *)stream_pnt(s);
lsa_len = ntohs(lsa->length);
ospf_lsa_header_dump(lsa);
switch (lsa->type) {
case OSPF_ROUTER_LSA:
ospf_router_lsa_dump(s, length);
break;
case OSPF_NETWORK_LSA:
ospf_network_lsa_dump(s, length);
break;
case OSPF_SUMMARY_LSA:
case OSPF_ASBR_SUMMARY_LSA:
ospf_summary_lsa_dump(s, length);
break;
case OSPF_AS_EXTERNAL_LSA:
ospf_as_external_lsa_dump(s, length);
break;
case OSPF_AS_NSSA_LSA:
ospf_as_external_lsa_dump(s, length);
break;
case OSPF_OPAQUE_LINK_LSA:
case OSPF_OPAQUE_AREA_LSA:
case OSPF_OPAQUE_AS_LSA:
ospf_opaque_lsa_dump(s, length);
break;
default:
break;
}
stream_forward_getp(s, lsa_len);
length -= lsa_len;
count--;
}
stream_set_getp(s, sp);
}
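/* Dump a Link State Acknowledgment packet: the acknowledged LSA headers. */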
static void ospf_packet_ls_ack_dump(struct stream *s, uint16_t length)
{
uint32_t sp;
length -= OSPF_HEADER_SIZE;
sp = stream_get_getp(s);
zlog_debug("Link State Acknowledgment");
ospf_lsa_header_list_dump(s, length);
stream_set_getp(s, sp);
}
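/* Dump the common OSPF packet header, including the authentication
 * fields appropriate to the packet's auth type.
 */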
static void ospf_header_dump(struct ospf_header *ospfh)
{
char buf[9];
uint16_t auth_type = ntohs(ospfh->auth_type);
zlog_debug("Header");
zlog_debug(" Version %d", ospfh->version);
zlog_debug(" Type %d (%s)", ospfh->type,
lookup_msg(ospf_packet_type_str, ospfh->type, NULL));
zlog_debug(" Packet Len %d", ntohs(ospfh->length));
zlog_debug(" Router ID %s", inet_ntoa(ospfh->router_id));
zlog_debug(" Area ID %s", inet_ntoa(ospfh->area_id));
zlog_debug(" Checksum 0x%x", ntohs(ospfh->checksum));
zlog_debug(" AuType %s",
lookup_msg(ospf_auth_type_str, auth_type, NULL));
switch (auth_type) {
case OSPF_AUTH_NULL:
break;
case OSPF_AUTH_SIMPLE:
strlcpy(buf, (char *)ospfh->u.auth_data, sizeof(buf));
zlog_debug(" Simple Password %s", buf);
break;
case OSPF_AUTH_CRYPTOGRAPHIC:
zlog_debug(" Cryptographic Authentication");
zlog_debug(" Key ID %d", ospfh->u.crypt.key_id);
zlog_debug(" Auth Data Len %d", ospfh->u.crypt.auth_data_len);
zlog_debug(" Sequence number %ld",
(unsigned long)ntohl(ospfh->u.crypt.crypt_seqnum));
break;
default:
zlog_debug("* This is not supported authentication type");
break;
}
}
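/* Dump a complete OSPF packet from the stream, but only when detail
 * debugging is enabled for this packet type; dispatches to the per-type
 * dump routine after the common header.
 */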
void ospf_packet_dump(struct stream *s)
{
struct ospf_header *ospfh;
unsigned long gp;
/* Preserve pointer. */
gp = stream_get_getp(s);
/* OSPF Header dump. */
ospfh = (struct ospf_header *)stream_pnt(s);
/* Until detail flag is set, return. */
if (!(term_debug_ospf_packet[ospfh->type - 1] & OSPF_DEBUG_DETAIL))
return;
/* Show OSPF header detail. */
ospf_header_dump(ospfh);
stream_forward_getp(s, OSPF_HEADER_SIZE);
switch (ospfh->type) {
case OSPF_MSG_HELLO:
ospf_packet_hello_dump(s, ntohs(ospfh->length));
break;
case OSPF_MSG_DB_DESC:
ospf_packet_db_desc_dump(s, ntohs(ospfh->length));
break;
case OSPF_MSG_LS_REQ:
ospf_packet_ls_req_dump(s, ntohs(ospfh->length));
break;
case OSPF_MSG_LS_UPD:
ospf_packet_ls_upd_dump(s, ntohs(ospfh->length));
break;
case OSPF_MSG_LS_ACK:
ospf_packet_ls_ack_dump(s, ntohs(ospfh->length));
break;
default:
break;
}
stream_set_getp(s, gp);
}
DEFUN (debug_ospf_packet,
debug_ospf_packet_cmd,
"debug ospf [(1-65535)] packet <hello|dd|ls-request|ls-update|ls-ack|all> [<send [detail]|recv [detail]|detail>]",
DEBUG_STR
OSPF_STR
"Instance ID\n"
"OSPF packets\n"
"OSPF Hello\n"
"OSPF Database Description\n"
"OSPF Link State Request\n"
"OSPF Link State Update\n"
"OSPF Link State Acknowledgment\n"
"OSPF all packets\n"
"Packet sent\n"
"Detail Information\n"
"Packet received\n"
"Detail Information\n"
"Detail Information\n")
{
int inst = (argv[2]->type == RANGE_TKN) ? 1 : 0;
int detail = strmatch(argv[argc - 1]->text, "detail");
int send = strmatch(argv[argc - (1 + detail)]->text, "send");
int recv = strmatch(argv[argc - (1 + detail)]->text, "recv");
char *packet = argv[3 + inst]->text;
if (inst) // user passed instance ID
{
if (!ospf_lookup_instance(strtoul(argv[2]->arg, NULL, 10)))
return CMD_NOT_MY_INSTANCE;
}
int type = 0;
int flag = 0;
int i;
/* Check packet type. */
if (strmatch(packet, "hello"))
type = OSPF_DEBUG_HELLO;
else if (strmatch(packet, "dd"))
type = OSPF_DEBUG_DB_DESC;
else if (strmatch(packet, "ls-request"))
type = OSPF_DEBUG_LS_REQ;
else if (strmatch(packet, "ls-update"))
type = OSPF_DEBUG_LS_UPD;
else if (strmatch(packet, "ls-ack"))
type = OSPF_DEBUG_LS_ACK;
else if (strmatch(packet, "all"))
type = OSPF_DEBUG_ALL;
/* Cases:
* (none) = send + recv
* detail = send + recv + detail
* recv = recv
* send = send
* recv detail = recv + detail
* send detail = send + detail
*/
if (!send && !recv)
send = recv = 1;
flag |= (send) ? OSPF_DEBUG_SEND : 0;
flag |= (recv) ? OSPF_DEBUG_RECV : 0;
flag |= (detail) ? OSPF_DEBUG_DETAIL : 0;
for (i = 0; i < 5; i++)
if (type & (0x01 << i)) {
if (vty->node == CONFIG_NODE)
DEBUG_PACKET_ON(i, flag);
else
TERM_DEBUG_PACKET_ON(i, flag);
}
return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_packet,
no_debug_ospf_packet_cmd,
"no debug ospf [(1-65535)] packet <hello|dd|ls-request|ls-update|ls-ack|all> [<send [detail]|recv [detail]|detail>]",
NO_STR
DEBUG_STR
OSPF_STR
"Instance ID\n"
"OSPF packets\n"
"OSPF Hello\n"
"OSPF Database Description\n"
"OSPF Link State Request\n"
"OSPF Link State Update\n"
"OSPF Link State Acknowledgment\n"
"OSPF all packets\n"
"Packet sent\n"
"Detail Information\n"
"Packet received\n"
"Detail Information\n"
"Detail Information\n")
{
int inst = (argv[3]->type == RANGE_TKN) ? 1 : 0;
int detail = strmatch(argv[argc - 1]->text, "detail");
int send = strmatch(argv[argc - (1 + detail)]->text, "send");
int recv = strmatch(argv[argc - (1 + detail)]->text, "recv");
char *packet = argv[4 + inst]->text;
if (inst) // user passed instance ID
{
if (!ospf_lookup_instance(strtoul(argv[3]->arg, NULL, 10)))
return CMD_NOT_MY_INSTANCE;
}
int type = 0;
int flag = 0;
int i;
/* Check packet type. */
if (strmatch(packet, "hello"))
type = OSPF_DEBUG_HELLO;
else if (strmatch(packet, "dd"))
type = OSPF_DEBUG_DB_DESC;
else if (strmatch(packet, "ls-request"))
type = OSPF_DEBUG_LS_REQ;
else if (strmatch(packet, "ls-update"))
type = OSPF_DEBUG_LS_UPD;
else if (strmatch(packet, "ls-ack"))
type = OSPF_DEBUG_LS_ACK;
else if (strmatch(packet, "all"))
type = OSPF_DEBUG_ALL;
/* Cases:
* (none) = send + recv
* detail = send + recv + detail
* recv = recv
* send = send
* recv detail = recv + detail
* send detail = send + detail
*/
if (!send && !recv)
send = recv = 1;
flag |= (send) ? OSPF_DEBUG_SEND : 0;
flag |= (recv) ? OSPF_DEBUG_RECV : 0;
flag |= (detail) ? OSPF_DEBUG_DETAIL : 0;
for (i = 0; i < 5; i++)
if (type & (0x01 << i)) {
if (vty->node == CONFIG_NODE)
DEBUG_PACKET_OFF(i, flag);
else
TERM_DEBUG_PACKET_OFF(i, flag);
}
#ifdef DEBUG
/*
for (i = 0; i < 5; i++)
zlog_debug ("flag[%d] = %d", i, ospf_debug_packet[i]);
*/
#endif /* DEBUG */
return CMD_SUCCESS;
}
DEFUN (debug_ospf_ism,
debug_ospf_ism_cmd,
"debug ospf [(1-65535)] ism [<status|events|timers>]",
DEBUG_STR
OSPF_STR
"Instance ID\n"
"OSPF Interface State Machine\n"
"ISM Status Information\n"
"ISM Event Information\n"
"ISM TImer Information\n")
{
int inst = (argv[2]->type == RANGE_TKN);
char *dbgparam = (argc == 4 + inst) ? argv[argc - 1]->text : NULL;
if (inst) // user passed instance ID
{
if (!ospf_lookup_instance(strtoul(argv[2]->arg, NULL, 10)))
return CMD_NOT_MY_INSTANCE;
}
if (vty->node == CONFIG_NODE) {
if (!dbgparam)
DEBUG_ON(ism, ISM);
else {
if (strmatch(dbgparam, "status"))
DEBUG_ON(ism, ISM_STATUS);
else if (strmatch(dbgparam, "events"))
DEBUG_ON(ism, ISM_EVENTS);
else if (strmatch(dbgparam, "timers"))
DEBUG_ON(ism, ISM_TIMERS);
}
return CMD_SUCCESS;
}
/* ENABLE_NODE. */
if (!dbgparam)
TERM_DEBUG_ON(ism, ISM);
else {
if (strmatch(dbgparam, "status"))
TERM_DEBUG_ON(ism, ISM_STATUS);
else if (strmatch(dbgparam, "events"))
TERM_DEBUG_ON(ism, ISM_EVENTS);
else if (strmatch(dbgparam, "timers"))
TERM_DEBUG_ON(ism, ISM_TIMERS);
}
return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_ism,
no_debug_ospf_ism_cmd,
"no debug ospf [(1-65535)] ism [<status|events|timers>]",
NO_STR
DEBUG_STR
OSPF_STR
"Instance ID\n"
"OSPF Interface State Machine\n"
"ISM Status Information\n"
"ISM Event Information\n"
"ISM TImer Information\n")
{
int inst = (argv[3]->type == RANGE_TKN);
char *dbgparam = (argc == 5 + inst) ? argv[argc - 1]->text : NULL;
if (inst) // user passed instance ID
{
if (!ospf_lookup_instance(strtoul(argv[3]->arg, NULL, 10)))
return CMD_NOT_MY_INSTANCE;
}
if (vty->node == CONFIG_NODE) {
if (!dbgparam)
DEBUG_OFF(ism, ISM);
else {
if (strmatch(dbgparam, "status"))
DEBUG_OFF(ism, ISM_STATUS);
else if (strmatch(dbgparam, "events"))
DEBUG_OFF(ism, ISM_EVENTS);
else if (strmatch(dbgparam, "timers"))
DEBUG_OFF(ism, ISM_TIMERS);
}
return CMD_SUCCESS;
}
/* ENABLE_NODE. */
if (!dbgparam)
TERM_DEBUG_OFF(ism, ISM);
else {
if (strmatch(dbgparam, "status"))
TERM_DEBUG_OFF(ism, ISM_STATUS);
else if (strmatch(dbgparam, "events"))
TERM_DEBUG_OFF(ism, ISM_EVENTS);
else if (strmatch(dbgparam, "timers"))
TERM_DEBUG_OFF(ism, ISM_TIMERS);
}
return CMD_SUCCESS;
}
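/* Common worker for the "debug ospf nsm ..." commands; arg_base indexes
 * the optional <status|events|timers> token in argv.
 */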
static int debug_ospf_nsm_common(struct vty *vty, int arg_base, int argc,
struct cmd_token **argv)
{
if (vty->node == CONFIG_NODE) {
if (argc == arg_base + 0)
DEBUG_ON(nsm, NSM);
else if (argc == arg_base + 1) {
if (strmatch(argv[arg_base]->text, "status"))
DEBUG_ON(nsm, NSM_STATUS);
else if (strmatch(argv[arg_base]->text, "events"))
DEBUG_ON(nsm, NSM_EVENTS);
else if (strmatch(argv[arg_base]->text, "timers"))
DEBUG_ON(nsm, NSM_TIMERS);
}
return CMD_SUCCESS;
}
/* ENABLE_NODE. */
if (argc == arg_base + 0)
TERM_DEBUG_ON(nsm, NSM);
else if (argc == arg_base + 1) {
if (strmatch(argv[arg_base]->text, "status"))
TERM_DEBUG_ON(nsm, NSM_STATUS);
else if (strmatch(argv[arg_base]->text, "events"))
TERM_DEBUG_ON(nsm, NSM_EVENTS);
else if (strmatch(argv[arg_base]->text, "timers"))
TERM_DEBUG_ON(nsm, NSM_TIMERS);
}
return CMD_SUCCESS;
}
DEFUN (debug_ospf_nsm,
debug_ospf_nsm_cmd,
"debug ospf nsm [<status|events|timers>]",
DEBUG_STR
OSPF_STR
"OSPF Neighbor State Machine\n"
"NSM Status Information\n"
"NSM Event Information\n"
"NSM Timer Information\n")
Multi-Instance OSPF Summary ——————————————------------- - etc/init.d/quagga is modified to support creating separate ospf daemon process for each instance. Each individual instance is monitored by watchquagga just like any protocol daemons.(requires initd-mi.patch). - Vtysh is modified to able to connect to multiple daemons of the same protocol (supported for OSPF only for now). - ospfd is modified to remember the Instance-ID that its invoked with. For the entire life of the process it caters to any command request that matches that instance-ID (unless its a non instance specific command). Routes/messages to zebra are tagged with instance-ID. - zebra route/redistribute mechanisms are modified to work with [protocol type + instance-id] - bgpd now has ability to have multiple instance specific redistribution for a protocol (OSPF only supported/tested for now). - zlog ability to display instance-id besides the protocol/daemon name. - Changes in other daemons are to because of the needed integration with some of the modified APIs/routines. (Didn’t prefer replicating too many separate instance specific APIs.) - config/show/debug commands are modified to take instance-id argument as appropriate. Guidelines to start using multi-instance ospf --------------------------------------------- The patch is backward compatible, i.e for any previous way of single ospf deamon(router ospf <cr>) will continue to work as is, including all the show commands etc. To enable multiple instances, do the following: 1. service quagga stop 2. Modify /etc/quagga/daemons to add instance-ids of each desired instance in the following format: ospfd=“yes" ospfd_instances="1,2,3" assuming you want to enable 3 instances with those instance ids. 3. Create corresponding ospfd config files as ospfd-1.conf, ospfd-2.conf and ospfd-3.conf. 4. service quagga start/restart 5. Verify that the deamons are started as expected. You should see ospfd started with -n <instance-id> option. ps –ef | grep quagga With that /var/run/quagga/ should have ospfd-<instance-id>.pid and ospfd-<instance-id>/vty to each instance. 6. vtysh to work with instances as you would with any other deamons. 7. Overall most quagga semantics are the same working with the instance deamon, like it is for any other daemon. NOTE: To safeguard against errors leading to too many processes getting invoked, a hard limit on number of instance-ids is in place, currently its 5. Allowed instance-id range is <1-65535> Once daemons are up, show running from vtysh should show the instance-id of each daemon as 'router ospf <instance-id>’ (without needing explicit configuration) Instance-id can not be changed via vtysh, other router ospf configuration is allowed as before. Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com> Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com> Reviewed-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
2015-05-20 03:03:42 +02:00
{
return debug_ospf_nsm_common(vty, 3, argc, argv);
Multi-Instance OSPF Summary ——————————————------------- - etc/init.d/quagga is modified to support creating separate ospf daemon process for each instance. Each individual instance is monitored by watchquagga just like any protocol daemons.(requires initd-mi.patch). - Vtysh is modified to able to connect to multiple daemons of the same protocol (supported for OSPF only for now). - ospfd is modified to remember the Instance-ID that its invoked with. For the entire life of the process it caters to any command request that matches that instance-ID (unless its a non instance specific command). Routes/messages to zebra are tagged with instance-ID. - zebra route/redistribute mechanisms are modified to work with [protocol type + instance-id] - bgpd now has ability to have multiple instance specific redistribution for a protocol (OSPF only supported/tested for now). - zlog ability to display instance-id besides the protocol/daemon name. - Changes in other daemons are to because of the needed integration with some of the modified APIs/routines. (Didn’t prefer replicating too many separate instance specific APIs.) - config/show/debug commands are modified to take instance-id argument as appropriate. Guidelines to start using multi-instance ospf --------------------------------------------- The patch is backward compatible, i.e for any previous way of single ospf deamon(router ospf <cr>) will continue to work as is, including all the show commands etc. To enable multiple instances, do the following: 1. service quagga stop 2. Modify /etc/quagga/daemons to add instance-ids of each desired instance in the following format: ospfd=“yes" ospfd_instances="1,2,3" assuming you want to enable 3 instances with those instance ids. 3. Create corresponding ospfd config files as ospfd-1.conf, ospfd-2.conf and ospfd-3.conf. 4. service quagga start/restart 5. Verify that the deamons are started as expected. You should see ospfd started with -n <instance-id> option. ps –ef | grep quagga With that /var/run/quagga/ should have ospfd-<instance-id>.pid and ospfd-<instance-id>/vty to each instance. 6. vtysh to work with instances as you would with any other deamons. 7. Overall most quagga semantics are the same working with the instance deamon, like it is for any other daemon. NOTE: To safeguard against errors leading to too many processes getting invoked, a hard limit on number of instance-ids is in place, currently its 5. Allowed instance-id range is <1-65535> Once daemons are up, show running from vtysh should show the instance-id of each daemon as 'router ospf <instance-id>’ (without needing explicit configuration) Instance-id can not be changed via vtysh, other router ospf configuration is allowed as before. Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com> Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com> Reviewed-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
2015-05-20 03:03:42 +02:00
}
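
/*
 * Per-instance variant of "debug ospf nsm".  If the instance ID given
 * on the command line is not the one this ospfd process serves
 * (ospf_lookup_instance() returns NULL), the command is accepted but
 * silently ignored.
 */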
DEFUN (debug_ospf_instance_nsm,
       debug_ospf_instance_nsm_cmd,
       "debug ospf (1-65535) nsm [<status|events|timers>]",
       DEBUG_STR
       OSPF_STR
       "Instance ID\n"
       "OSPF Neighbor State Machine\n"
       "NSM Status Information\n"
       "NSM Event Information\n"
       "NSM Timer Information\n")
{
	int idx_number = 2;
	unsigned short instance = 0;

	instance = strtoul(argv[idx_number]->arg, NULL, 10);

	if (!ospf_lookup_instance(instance))
		return CMD_SUCCESS;

	return debug_ospf_nsm_common(vty, 4, argc, argv);
}
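
/*
 * Common handler for the "no debug ospf ... nsm" commands.  arg_base is
 * the argv index of the optional <status|events|timers> keyword.  From
 * CONFIG_NODE the configured flags are cleared with DEBUG_OFF(); from
 * any other node (ENABLE_NODE) only the terminal debug flags are
 * cleared with TERM_DEBUG_OFF().
 */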
static int no_debug_ospf_nsm_common(struct vty *vty, int arg_base, int argc,
				    struct cmd_token **argv)
{
	/* XXX qlyoung */
	if (vty->node == CONFIG_NODE) {
		if (argc == arg_base + 0)
			DEBUG_OFF(nsm, NSM);
		else if (argc == arg_base + 1) {
			if (strmatch(argv[arg_base]->text, "status"))
				DEBUG_OFF(nsm, NSM_STATUS);
			else if (strmatch(argv[arg_base]->text, "events"))
				DEBUG_OFF(nsm, NSM_EVENTS);
			else if (strmatch(argv[arg_base]->text, "timers"))
				DEBUG_OFF(nsm, NSM_TIMERS);
		}

		return CMD_SUCCESS;
	}

	/* ENABLE_NODE. */
	if (argc == arg_base + 0)
		TERM_DEBUG_OFF(nsm, NSM);
	else if (argc == arg_base + 1) {
		if (strmatch(argv[arg_base]->text, "status"))
			TERM_DEBUG_OFF(nsm, NSM_STATUS);
		else if (strmatch(argv[arg_base]->text, "events"))
			TERM_DEBUG_OFF(nsm, NSM_EVENTS);
		else if (strmatch(argv[arg_base]->text, "timers"))
			TERM_DEBUG_OFF(nsm, NSM_TIMERS);
	}

	return CMD_SUCCESS;
}
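
/*
 * For orientation only: the DEBUG_OFF()/TERM_DEBUG_OFF() macros used
 * above clear bits in per-category debug flag words.  A rough sketch of
 * their shape (the authoritative definitions live in ospf_dump.h and
 * may differ in detail):
 *
 *   #define TERM_DEBUG_OFF(a, b)  term_debug_ospf_##a &= ~(OSPF_DEBUG_##b)
 *   #define CONF_DEBUG_OFF(a, b)  conf_debug_ospf_##a &= ~(OSPF_DEBUG_##b)
 *   #define DEBUG_OFF(a, b)       do { CONF_DEBUG_OFF(a, b);   \
 *                                      TERM_DEBUG_OFF(a, b); } while (0)
 */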
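
/*
 * "no debug ospf nsm [<status|events|timers>]": disable NSM debugging
 * entirely, or just the selected sub-category.
 */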
DEFUN (no_debug_ospf_nsm,
       no_debug_ospf_nsm_cmd,
       "no debug ospf nsm [<status|events|timers>]",
       NO_STR
       DEBUG_STR
       OSPF_STR
       "OSPF Neighbor State Machine\n"
       "NSM Status Information\n"
       "NSM Event Information\n"
       "NSM Timer Information\n")
{
	return no_debug_ospf_nsm_common(vty, 4, argc, argv);
}
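
/*
 * Per-instance "no debug" variant.  Unlike the enable command, a
 * non-matching instance ID is reported as CMD_NOT_MY_INSTANCE rather
 * than being silently ignored.
 */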
DEFUN (no_debug_ospf_instance_nsm,
       no_debug_ospf_instance_nsm_cmd,
       "no debug ospf (1-65535) nsm [<status|events|timers>]",
       NO_STR
       DEBUG_STR
       OSPF_STR
       "Instance ID\n"
       "OSPF Neighbor State Machine\n"
       "NSM Status Information\n"
       "NSM Event Information\n"
       "NSM Timer Information\n")
{
	int idx_number = 3;
	unsigned short instance = 0;

	instance = strtoul(argv[idx_number]->arg, NULL, 10);

	if (!ospf_lookup_instance(instance))
		return CMD_NOT_MY_INSTANCE;
Multi-Instance OSPF Summary ——————————————------------- - etc/init.d/quagga is modified to support creating separate ospf daemon process for each instance. Each individual instance is monitored by watchquagga just like any protocol daemons.(requires initd-mi.patch). - Vtysh is modified to able to connect to multiple daemons of the same protocol (supported for OSPF only for now). - ospfd is modified to remember the Instance-ID that its invoked with. For the entire life of the process it caters to any command request that matches that instance-ID (unless its a non instance specific command). Routes/messages to zebra are tagged with instance-ID. - zebra route/redistribute mechanisms are modified to work with [protocol type + instance-id] - bgpd now has ability to have multiple instance specific redistribution for a protocol (OSPF only supported/tested for now). - zlog ability to display instance-id besides the protocol/daemon name. - Changes in other daemons are to because of the needed integration with some of the modified APIs/routines. (Didn’t prefer replicating too many separate instance specific APIs.) - config/show/debug commands are modified to take instance-id argument as appropriate. Guidelines to start using multi-instance ospf --------------------------------------------- The patch is backward compatible, i.e for any previous way of single ospf deamon(router ospf <cr>) will continue to work as is, including all the show commands etc. To enable multiple instances, do the following: 1. service quagga stop 2. Modify /etc/quagga/daemons to add instance-ids of each desired instance in the following format: ospfd=“yes" ospfd_instances="1,2,3" assuming you want to enable 3 instances with those instance ids. 3. Create corresponding ospfd config files as ospfd-1.conf, ospfd-2.conf and ospfd-3.conf. 4. service quagga start/restart 5. Verify that the deamons are started as expected. You should see ospfd started with -n <instance-id> option. ps –ef | grep quagga With that /var/run/quagga/ should have ospfd-<instance-id>.pid and ospfd-<instance-id>/vty to each instance. 6. vtysh to work with instances as you would with any other deamons. 7. Overall most quagga semantics are the same working with the instance deamon, like it is for any other daemon. NOTE: To safeguard against errors leading to too many processes getting invoked, a hard limit on number of instance-ids is in place, currently its 5. Allowed instance-id range is <1-65535> Once daemons are up, show running from vtysh should show the instance-id of each daemon as 'router ospf <instance-id>’ (without needing explicit configuration) Instance-id can not be changed via vtysh, other router ospf configuration is allowed as before. Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com> Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com> Reviewed-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
2015-05-20 03:03:42 +02:00
return no_debug_ospf_nsm_common(vty, 5, argc, argv);
Multi-Instance OSPF Summary ——————————————------------- - etc/init.d/quagga is modified to support creating separate ospf daemon process for each instance. Each individual instance is monitored by watchquagga just like any protocol daemons.(requires initd-mi.patch). - Vtysh is modified to able to connect to multiple daemons of the same protocol (supported for OSPF only for now). - ospfd is modified to remember the Instance-ID that its invoked with. For the entire life of the process it caters to any command request that matches that instance-ID (unless its a non instance specific command). Routes/messages to zebra are tagged with instance-ID. - zebra route/redistribute mechanisms are modified to work with [protocol type + instance-id] - bgpd now has ability to have multiple instance specific redistribution for a protocol (OSPF only supported/tested for now). - zlog ability to display instance-id besides the protocol/daemon name. - Changes in other daemons are to because of the needed integration with some of the modified APIs/routines. (Didn’t prefer replicating too many separate instance specific APIs.) - config/show/debug commands are modified to take instance-id argument as appropriate. Guidelines to start using multi-instance ospf --------------------------------------------- The patch is backward compatible, i.e for any previous way of single ospf deamon(router ospf <cr>) will continue to work as is, including all the show commands etc. To enable multiple instances, do the following: 1. service quagga stop 2. Modify /etc/quagga/daemons to add instance-ids of each desired instance in the following format: ospfd=“yes" ospfd_instances="1,2,3" assuming you want to enable 3 instances with those instance ids. 3. Create corresponding ospfd config files as ospfd-1.conf, ospfd-2.conf and ospfd-3.conf. 4. service quagga start/restart 5. Verify that the deamons are started as expected. You should see ospfd started with -n <instance-id> option. ps –ef | grep quagga With that /var/run/quagga/ should have ospfd-<instance-id>.pid and ospfd-<instance-id>/vty to each instance. 6. vtysh to work with instances as you would with any other deamons. 7. Overall most quagga semantics are the same working with the instance deamon, like it is for any other daemon. NOTE: To safeguard against errors leading to too many processes getting invoked, a hard limit on number of instance-ids is in place, currently its 5. Allowed instance-id range is <1-65535> Once daemons are up, show running from vtysh should show the instance-id of each daemon as 'router ospf <instance-id>’ (without needing explicit configuration) Instance-id can not be changed via vtysh, other router ospf configuration is allowed as before. Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com> Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com> Reviewed-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
2015-05-20 03:03:42 +02:00
}
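
/*
 * Common handler for "debug ospf lsa [...]".  From CONFIG_NODE the option is
 * set with DEBUG_ON, so it becomes part of the saved configuration; from
 * ENABLE_NODE only the running debug state is changed via TERM_DEBUG_ON.
 * arg_base is the argv index of the optional LSA sub-option, which differs
 * between the plain and the instance-qualified command forms.
 */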
static int debug_ospf_lsa_common(struct vty *vty, int arg_base, int argc,
				 struct cmd_token **argv)
{
	if (vty->node == CONFIG_NODE) {
		if (argc == arg_base + 0)
			DEBUG_ON(lsa, LSA);
		else if (argc == arg_base + 1) {
			if (strmatch(argv[arg_base]->text, "generate"))
				DEBUG_ON(lsa, LSA_GENERATE);
			else if (strmatch(argv[arg_base]->text, "flooding"))
				DEBUG_ON(lsa, LSA_FLOODING);
			else if (strmatch(argv[arg_base]->text, "install"))
				DEBUG_ON(lsa, LSA_INSTALL);
			else if (strmatch(argv[arg_base]->text, "refresh"))
				DEBUG_ON(lsa, LSA_REFRESH);
		}

		return CMD_SUCCESS;
	}

	/* ENABLE_NODE. */
	if (argc == arg_base + 0)
		TERM_DEBUG_ON(lsa, LSA);
	else if (argc == arg_base + 1) {
		if (strmatch(argv[arg_base]->text, "generate"))
			TERM_DEBUG_ON(lsa, LSA_GENERATE);
		else if (strmatch(argv[arg_base]->text, "flooding"))
			TERM_DEBUG_ON(lsa, LSA_FLOODING);
		else if (strmatch(argv[arg_base]->text, "install"))
			TERM_DEBUG_ON(lsa, LSA_INSTALL);
		else if (strmatch(argv[arg_base]->text, "refresh"))
			TERM_DEBUG_ON(lsa, LSA_REFRESH);
	}

	return CMD_SUCCESS;
}
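
/* "debug ospf lsa ...": the optional LSA sub-option, if given, is argv[3]. */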
DEFUN (debug_ospf_lsa,
       debug_ospf_lsa_cmd,
       "debug ospf lsa [<generate|flooding|install|refresh>]",
       DEBUG_STR
       OSPF_STR
       "OSPF Link State Advertisement\n"
       "LSA Generation\n"
       "LSA Flooding\n"
       "LSA Install/Delete\n"
       "LSA Refresh\n")
{
	return debug_ospf_lsa_common(vty, 3, argc, argv);
}
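
/*
 * Instance-qualified variant: only the ospfd process running the given
 * instance ID acts on the command; any other instance returns
 * CMD_NOT_MY_INSTANCE.  The extra (1-65535) token shifts the optional
 * sub-option to argv[4].
 */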
DEFUN (debug_ospf_instance_lsa,
       debug_ospf_instance_lsa_cmd,
       "debug ospf (1-65535) lsa [<generate|flooding|install|refresh>]",
       DEBUG_STR
       OSPF_STR
       "Instance ID\n"
       "OSPF Link State Advertisement\n"
       "LSA Generation\n"
       "LSA Flooding\n"
       "LSA Install/Delete\n"
       "LSA Refresh\n")
{
	int idx_number = 2;
	unsigned short instance = 0;

	instance = strtoul(argv[idx_number]->arg, NULL, 10);

	if (!ospf_lookup_instance(instance))
		return CMD_NOT_MY_INSTANCE;

	return debug_ospf_lsa_common(vty, 4, argc, argv);
}
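
/*
 * Mirror of debug_ospf_lsa_common: clears the same LSA debug flags, with
 * DEBUG_OFF from CONFIG_NODE and TERM_DEBUG_OFF from ENABLE_NODE.
 */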
static int no_debug_ospf_lsa_common(struct vty *vty, int arg_base, int argc,
struct cmd_token **argv)
2002-12-13 21:15:29 +01:00
{
if (vty->node == CONFIG_NODE) {
Multi-Instance OSPF Summary ——————————————------------- - etc/init.d/quagga is modified to support creating separate ospf daemon process for each instance. Each individual instance is monitored by watchquagga just like any protocol daemons.(requires initd-mi.patch). - Vtysh is modified to able to connect to multiple daemons of the same protocol (supported for OSPF only for now). - ospfd is modified to remember the Instance-ID that its invoked with. For the entire life of the process it caters to any command request that matches that instance-ID (unless its a non instance specific command). Routes/messages to zebra are tagged with instance-ID. - zebra route/redistribute mechanisms are modified to work with [protocol type + instance-id] - bgpd now has ability to have multiple instance specific redistribution for a protocol (OSPF only supported/tested for now). - zlog ability to display instance-id besides the protocol/daemon name. - Changes in other daemons are to because of the needed integration with some of the modified APIs/routines. (Didn’t prefer replicating too many separate instance specific APIs.) - config/show/debug commands are modified to take instance-id argument as appropriate. Guidelines to start using multi-instance ospf --------------------------------------------- The patch is backward compatible, i.e for any previous way of single ospf deamon(router ospf <cr>) will continue to work as is, including all the show commands etc. To enable multiple instances, do the following: 1. service quagga stop 2. Modify /etc/quagga/daemons to add instance-ids of each desired instance in the following format: ospfd=“yes" ospfd_instances="1,2,3" assuming you want to enable 3 instances with those instance ids. 3. Create corresponding ospfd config files as ospfd-1.conf, ospfd-2.conf and ospfd-3.conf. 4. service quagga start/restart 5. Verify that the deamons are started as expected. You should see ospfd started with -n <instance-id> option. ps –ef | grep quagga With that /var/run/quagga/ should have ospfd-<instance-id>.pid and ospfd-<instance-id>/vty to each instance. 6. vtysh to work with instances as you would with any other deamons. 7. Overall most quagga semantics are the same working with the instance deamon, like it is for any other daemon. NOTE: To safeguard against errors leading to too many processes getting invoked, a hard limit on number of instance-ids is in place, currently its 5. Allowed instance-id range is <1-65535> Once daemons are up, show running from vtysh should show the instance-id of each daemon as 'router ospf <instance-id>’ (without needing explicit configuration) Instance-id can not be changed via vtysh, other router ospf configuration is allowed as before. Signed-off-by: Vipin Kumar <vipin@cumulusnetworks.com> Reviewed-by: Daniel Walton <dwalton@cumulusnetworks.com> Reviewed-by: Dinesh G Dutt <ddutt@cumulusnetworks.com>
2015-05-20 03:03:42 +02:00
if (argc == arg_base + 0)
2002-12-13 21:15:29 +01:00
DEBUG_OFF(lsa, LSA);
else if (argc == arg_base + 1) {
if (strmatch(argv[arg_base]->text, "generate"))
DEBUG_OFF(lsa, LSA_GENERATE);
else if (strmatch(argv[arg_base]->text, "flooding"))
DEBUG_OFF(lsa, LSA_FLOODING);
else if (strmatch(argv[arg_base]->text, "install"))
DEBUG_OFF(lsa, LSA_INSTALL);
else if (strmatch(argv[arg_base]->text, "refresh"))
DEBUG_OFF(lsa, LSA_REFRESH);
}
return CMD_SUCCESS;
}
/* ENABLE_NODE. */
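/* Issued from the enable node: clear only the terminal (running) debug
 * flags; the configured flags are only changed via the CONFIG_NODE
 * branch above. */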
if (argc == arg_base + 0)
TERM_DEBUG_OFF(lsa, LSA);
else if (argc == arg_base + 1) {
if (strmatch(argv[arg_base]->text, "generate"))
TERM_DEBUG_OFF(lsa, LSA_GENERATE);
else if (strmatch(argv[arg_base]->text, "flooding"))
TERM_DEBUG_OFF(lsa, LSA_FLOODING);
else if (strmatch(argv[arg_base]->text, "install"))
TERM_DEBUG_OFF(lsa, LSA_INSTALL);
else if (strmatch(argv[arg_base]->text, "refresh"))
TERM_DEBUG_OFF(lsa, LSA_REFRESH);
}
return CMD_SUCCESS;
}
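
/*
 * "no debug ospf lsa" is four fixed tokens, so an optional LSA sub-type
 * ("generate", "flooding", "install" or "refresh") is argv[4]; that is
 * why no_debug_ospf_lsa_common() is called with arg_base 4.  For
 * example, "no debug ospf lsa flooding" clears only the LSA flooding
 * debugging, while a bare "no debug ospf lsa" clears LSA debugging as a
 * whole.
 */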
DEFUN (no_debug_ospf_lsa,
no_debug_ospf_lsa_cmd,
"no debug ospf lsa [<generate|flooding|install|refresh>]",
NO_STR
DEBUG_STR
OSPF_STR
"OSPF Link State Advertisement\n"
"LSA Generation\n"
"LSA Flooding\n"
"LSA Install/Delete\n"
"LSA Refres\n")
{
return no_debug_ospf_lsa_common(vty, 4, argc, argv);
}
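
/*
 * Per-instance variant: the instance number is argv[3], and the extra
 * (1-65535) token shifts the optional LSA sub-type to argv[5], hence
 * arg_base 5.  If this ospfd process has no OSPF instance with the
 * given ID, it returns CMD_NOT_MY_INSTANCE and leaves its debug flags
 * alone.
 */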
DEFUN (no_debug_ospf_instance_lsa,
no_debug_ospf_instance_lsa_cmd,
"no debug ospf (1-65535) lsa [<generate|flooding|install|refresh>]",
NO_STR
DEBUG_STR
OSPF_STR
"Instance ID\n"
"OSPF Link State Advertisement\n"
"LSA Generation\n"
"LSA Flooding\n"
"LSA Install/Delete\n"
"LSA Refres\n")
{
int idx_number = 3;
unsigned short instance = 0;
instance = strtoul(argv[idx_number]->arg, NULL, 10);
if (!ospf_lookup_instance(instance))
return CMD_NOT_MY_INSTANCE;
return no_debug_ospf_lsa_common(vty, 5, argc, argv);
}
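
/*
 * Shared body for the "debug ospf zebra" commands.  From the
 * configuration node the configured debug flags are set via DEBUG_ON;
 * from the enable node only the terminal debug flags are set via
 * TERM_DEBUG_ON, so the configured flags are left untouched.
 */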
static int debug_ospf_zebra_common(struct vty *vty, int arg_base, int argc,
struct cmd_token **argv)
{
if (vty->node == CONFIG_NODE) {
if (argc == arg_base + 0)
DEBUG_ON(zebra, ZEBRA);
else if (argc == arg_base + 1) {
if (strmatch(argv[arg_base]->text, "interface"))
DEBUG_ON(zebra, ZEBRA_INTERFACE);
else if (strmatch(argv[arg_base]->text, "redistribute"))
DEBUG_ON(zebra, ZEBRA_REDISTRIBUTE);
}
return CMD_SUCCESS;
}
/* ENABLE_NODE. */
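/* Issued from the enable node: turn on only the terminal (running)
 * debug flags, leaving the configured flags untouched. */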
if (argc == arg_base + 0)
TERM_DEBUG_ON(zebra, ZEBRA);
else if (argc == arg_base + 1) {
if (strmatch(argv[arg_base]->text, "interface"))
TERM_DEBUG_ON(zebra, ZEBRA_INTERFACE);
else if (strmatch(argv[arg_base]->text, "redistribute"))
TERM_DEBUG_ON(zebra, ZEBRA_REDISTRIBUTE);
}
return CMD_SUCCESS;
}
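
/*
 * "debug ospf zebra" is three fixed tokens, so an optional "interface"
 * or "redistribute" keyword is argv[3]; hence arg_base 3.  For example,
 * "debug ospf zebra interface" limits debugging to zebra interface
 * messages, while a bare "debug ospf zebra" enables zebra debugging as
 * a whole.
 */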
DEFUN (debug_ospf_zebra,
debug_ospf_zebra_cmd,
"debug ospf zebra [<interface|redistribute>]",
DEBUG_STR
OSPF_STR
ZEBRA_STR
"Zebra interface\n"
"Zebra redistribute\n")
{
return debug_ospf_zebra_common(vty, 3, argc, argv);
}
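
/*
 * Per-instance variant of "debug ospf zebra": the instance number is
 * argv[2].  Like the other per-instance debug commands, it is serviced
 * only by the ospfd process that runs the given instance.
 */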
DEFUN (debug_ospf_instance_zebra,
debug_ospf_instance_zebra_cmd,
"debug ospf (1-65535) zebra [<interface|redistribute>]",
DEBUG_STR
OSPF_STR
"Instance ID\n"
ZEBRA_STR
"Zebra interface\n"
"Zebra redistribute\n")
{
int idx_number = 2;
unsigned short instance = 0;
instance = strtoul(argv[idx_number]->arg, NULL, 10);
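/* Only handle the command if this ospfd process serves the requested
 * instance; otherwise let vtysh know it belongs to another daemon. */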
if (!ospf_lookup_instance(instance))
return CMD_NOT_MY_INSTANCE;
return debug_ospf_zebra_common(vty, 4, argc, argv);
}
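/* Shared worker for the plain and per-instance "no debug ospf zebra"
 * commands.  arg_base is the number of fixed tokens in front of the
 * optional "interface"/"redistribute" keyword. */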
static int no_debug_ospf_zebra_common(struct vty *vty, int arg_base, int argc,
struct cmd_token **argv)
{
if (vty->node == CONFIG_NODE) {
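/* Clearing from CONFIG_NODE removes the setting from the configuration. */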
if (argc == arg_base + 0)
DEBUG_OFF(zebra, ZEBRA);
else if (argc == arg_base + 1) {
if (strmatch(argv[arg_base]->text, "interface"))
DEBUG_OFF(zebra, ZEBRA_INTERFACE);
else if (strmatch(argv[arg_base]->text, "redistribute"))
DEBUG_OFF(zebra, ZEBRA_REDISTRIBUTE);
}
return CMD_SUCCESS;
}
/* ENABLE_NODE. */
if (argc == arg_base + 0)
TERM_DEBUG_OFF(zebra, ZEBRA);
else if (argc == arg_base + 1) {
if (strmatch(argv[arg_base]->text, "interface"))
TERM_DEBUG_OFF(zebra, ZEBRA_INTERFACE);
else if (strmatch(argv[arg_base]->text, "redistribute"))
TERM_DEBUG_OFF(zebra, ZEBRA_REDISTRIBUTE);
}
return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_zebra,
no_debug_ospf_zebra_cmd,
"no debug ospf zebra [<interface|redistribute>]",
NO_STR
DEBUG_STR
OSPF_STR
ZEBRA_STR
"Zebra interface\n"
"Zebra redistribute\n")
{
return no_debug_ospf_zebra_common(vty, 4, argc, argv);
}
DEFUN (no_debug_ospf_instance_zebra,
no_debug_ospf_instance_zebra_cmd,
"no debug ospf (1-65535) zebra [<interface|redistribute>]",
NO_STR
DEBUG_STR
OSPF_STR
"Instance ID\n"
ZEBRA_STR
"Zebra interface\n"
"Zebra redistribute\n")
{
int idx_number = 3;
unsigned short instance = 0;
instance = strtoul(argv[idx_number]->arg, NULL, 10);
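/* A "no debug" for an instance this process does not serve is a no-op. */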
if (!ospf_lookup_instance(instance))
return CMD_SUCCESS;
return no_debug_ospf_zebra_common(vty, 5, argc, argv);
}
DEFUN (debug_ospf_event,
debug_ospf_event_cmd,
"debug ospf event",
DEBUG_STR
OSPF_STR
"OSPF event information\n")
{
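/* Record the setting in the configuration only when entered from
 * CONFIG_NODE; always enable it for the running session. */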
if (vty->node == CONFIG_NODE)
CONF_DEBUG_ON(event, EVENT);
TERM_DEBUG_ON(event, EVENT);
return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_event,
no_debug_ospf_event_cmd,
"no debug ospf event",
NO_STR
DEBUG_STR
OSPF_STR
"OSPF event information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_OFF(event, EVENT);
TERM_DEBUG_OFF(event, EVENT);
return CMD_SUCCESS;
}
DEFUN (debug_ospf_instance_event,
debug_ospf_instance_event_cmd,
"debug ospf (1-65535) event",
DEBUG_STR
OSPF_STR
"Instance ID\n"
"OSPF event information\n")
{
int idx_number = 2;
unsigned short instance = 0;
instance = strtoul(argv[idx_number]->arg, NULL, 10);
if (!ospf_lookup_instance(instance))
return CMD_SUCCESS;
if (vty->node == CONFIG_NODE)
CONF_DEBUG_ON(event, EVENT);
TERM_DEBUG_ON(event, EVENT);
return CMD_SUCCESS;
}
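/* Disable OSPF event debugging for a specific instance; commands for an
 * instance that is not running locally are accepted silently. */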
DEFUN (no_debug_ospf_instance_event,
no_debug_ospf_instance_event_cmd,
"no debug ospf (1-65535) event",
NO_STR
DEBUG_STR
OSPF_STR
"Instance ID\n"
"OSPF event information\n")
{
int idx_number = 3;
unsigned short instance = 0;
instance = strtoul(argv[idx_number]->arg, NULL, 10);
if (!ospf_lookup_instance(instance))
return CMD_SUCCESS;
if (vty->node == CONFIG_NODE)
CONF_DEBUG_OFF(event, EVENT);
TERM_DEBUG_OFF(event, EVENT);
return CMD_SUCCESS;
}
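/* NSSA debugging.  From CONFIG_NODE the configured debug flag is updated
 * as well; the terminal (runtime) flag always follows the command. */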
DEFUN (debug_ospf_nssa,
debug_ospf_nssa_cmd,
"debug ospf nssa",
DEBUG_STR
OSPF_STR
"OSPF nssa information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_ON(nssa, NSSA);
TERM_DEBUG_ON(nssa, NSSA);
return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_nssa,
no_debug_ospf_nssa_cmd,
"no debug ospf nssa",
NO_STR
DEBUG_STR
OSPF_STR
"OSPF nssa information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_OFF(nssa, NSSA);
TERM_DEBUG_OFF(nssa, NSSA);
return CMD_SUCCESS;
}
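/* Per-instance NSSA debugging, e.g. "debug ospf 1 nssa".  Commands for an
 * instance that is not running locally return success without effect. */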
DEFUN (debug_ospf_instance_nssa,
debug_ospf_instance_nssa_cmd,
"debug ospf (1-65535) nssa",
DEBUG_STR
OSPF_STR
"Instance ID\n"
"OSPF nssa information\n")
{
int idx_number = 2;
unsigned short instance = 0;
instance = strtoul(argv[idx_number]->arg, NULL, 10);
if (!ospf_lookup_instance(instance))
return CMD_SUCCESS;
if (vty->node == CONFIG_NODE)
CONF_DEBUG_ON(nssa, NSSA);
TERM_DEBUG_ON(nssa, NSSA);
return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_instance_nssa,
no_debug_ospf_instance_nssa_cmd,
"no debug ospf (1-65535) nssa",
NO_STR
DEBUG_STR
OSPF_STR
"Instance ID\n"
"OSPF nssa information\n")
{
int idx_number = 3;
unsigned short instance = 0;
instance = strtoul(argv[idx_number]->arg, NULL, 10);
if (!ospf_lookup_instance(instance))
return CMD_SUCCESS;
if (vty->node == CONFIG_NODE)
CONF_DEBUG_OFF(nssa, NSSA);
TERM_DEBUG_OFF(nssa, NSSA);
return CMD_SUCCESS;
}
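/* Traffic Engineering (OSPF-TE) debugging. */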
DEFUN (debug_ospf_te,
debug_ospf_te_cmd,
"debug ospf te",
DEBUG_STR
OSPF_STR
"OSPF-TE information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_ON(te, TE);
TERM_DEBUG_ON(te, TE);
return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_te,
no_debug_ospf_te_cmd,
"no debug ospf te",
NO_STR
DEBUG_STR
OSPF_STR
"OSPF-TE information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_OFF(te, TE);
TERM_DEBUG_OFF(te, TE);
return CMD_SUCCESS;
}
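/* Segment Routing (OSPF-SR) debugging. */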
DEFUN (debug_ospf_sr,
debug_ospf_sr_cmd,
"debug ospf sr",
DEBUG_STR
OSPF_STR
"OSPF-SR information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_ON(sr, SR);
TERM_DEBUG_ON(sr, SR);
return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_sr,
no_debug_ospf_sr_cmd,
"no debug ospf sr",
NO_STR
DEBUG_STR
OSPF_STR
"OSPF-SR information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_OFF(sr, SR);
TERM_DEBUG_OFF(sr, SR);
return CMD_SUCCESS;
}
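/* Default-information (default route origination) debugging. */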
DEFUN (debug_ospf_default_info,
debug_ospf_default_info_cmd,
"debug ospf default-information",
DEBUG_STR
OSPF_STR
"OSPF default information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_ON(defaultinfo, DEFAULTINFO);
TERM_DEBUG_ON(defaultinfo, DEFAULTINFO);
return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_default_info,
no_debug_ospf_default_info_cmd,
"no debug ospf default-information",
NO_STR
DEBUG_STR
OSPF_STR
"OSPF default information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_OFF(defaultinfo, DEFAULTINFO);
TERM_DEBUG_OFF(defaultinfo, DEFAULTINFO);
return CMD_SUCCESS;
}
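/* LDP-IGP synchronization (ldp-sync) debugging. */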
DEFUN (debug_ospf_ldp_sync,
debug_ospf_ldp_sync_cmd,
"debug ospf ldp-sync",
DEBUG_STR
OSPF_STR
"OSPF LDP-Sync information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_ON(ldp_sync, LDP_SYNC);
TERM_DEBUG_ON(ldp_sync, LDP_SYNC);
return CMD_SUCCESS;
}
DEFUN (no_debug_ospf_ldp_sync,
no_debug_ospf_ldp_sync_cmd,
"no debug ospf ldp-sync",
NO_STR
DEBUG_STR
OSPF_STR
"OSPF LDP-Sync information\n")
{
if (vty->node == CONFIG_NODE)
CONF_DEBUG_OFF(ldp_sync, LDP_SYNC);
TERM_DEBUG_OFF(ldp_sync, LDP_SYNC);
return CMD_SUCCESS;
}
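/* "no debug ospf": turn off all OSPF debugging at once, including every
 * packet type's send/receive/detail flags. */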
DEFUN (no_debug_ospf,
no_debug_ospf_cmd,
"no debug ospf",
NO_STR
DEBUG_STR
OSPF_STR)
{
int flag = OSPF_DEBUG_SEND | OSPF_DEBUG_RECV | OSPF_DEBUG_DETAIL;
int i;
if (vty->node == CONFIG_NODE) {
CONF_DEBUG_OFF(event, EVENT);
CONF_DEBUG_OFF(nssa, NSSA);
DEBUG_OFF(ism, ISM_EVENTS);
DEBUG_OFF(ism, ISM_STATUS);
DEBUG_OFF(ism, ISM_TIMERS);
DEBUG_OFF(lsa, LSA);
DEBUG_OFF(lsa, LSA_FLOODING);
DEBUG_OFF(lsa, LSA_GENERATE);
DEBUG_OFF(lsa, LSA_INSTALL);
DEBUG_OFF(lsa, LSA_REFRESH);
DEBUG_OFF(nsm, NSM);
DEBUG_OFF(nsm, NSM_EVENTS);
DEBUG_OFF(nsm, NSM_STATUS);
DEBUG_OFF(nsm, NSM_TIMERS);
DEBUG_OFF(zebra, ZEBRA);
DEBUG_OFF(zebra, ZEBRA_INTERFACE);
DEBUG_OFF(zebra, ZEBRA_REDISTRIBUTE);
DEBUG_OFF(defaultinfo, DEFAULTINFO);
DEBUG_OFF(ldp_sync, LDP_SYNC);
for (i = 0; i < 5; i++)
DEBUG_PACKET_OFF(i, flag);
}
for (i = 0; i < 5; i++)
TERM_DEBUG_PACKET_OFF(i, flag);
TERM_DEBUG_OFF(event, EVENT);
TERM_DEBUG_OFF(ism, ISM);
TERM_DEBUG_OFF(ism, ISM_EVENTS);
TERM_DEBUG_OFF(ism, ISM_STATUS);
TERM_DEBUG_OFF(ism, ISM_TIMERS);
TERM_DEBUG_OFF(lsa, LSA);
TERM_DEBUG_OFF(lsa, LSA_FLOODING);
TERM_DEBUG_OFF(lsa, LSA_GENERATE);
TERM_DEBUG_OFF(lsa, LSA_INSTALL);
TERM_DEBUG_OFF(lsa, LSA_REFRESH);
TERM_DEBUG_OFF(nsm, NSM);
TERM_DEBUG_OFF(nsm, NSM_EVENTS);
TERM_DEBUG_OFF(nsm, NSM_STATUS);
TERM_DEBUG_OFF(nsm, NSM_TIMERS);
TERM_DEBUG_OFF(nssa, NSSA);
TERM_DEBUG_OFF(zebra, ZEBRA);
TERM_DEBUG_OFF(zebra, ZEBRA_INTERFACE);
TERM_DEBUG_OFF(zebra, ZEBRA_REDISTRIBUTE);
TERM_DEBUG_OFF(defaultinfo, DEFAULTINFO);
TERM_DEBUG_OFF(ldp_sync, LDP_SYNC);
return CMD_SUCCESS;
}
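/* Print the current status of every OSPF debug category to the vty. */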
static int show_debugging_ospf_common(struct vty *vty, struct ospf *ospf)
{
int i;
if (ospf->instance)
vty_out(vty, "\nOSPF Instance: %d\n\n", ospf->instance);
vty_out(vty, "OSPF debugging status:\n");
/* Show debug status for events. */
if (IS_DEBUG_OSPF(event, EVENT))
vty_out(vty, " OSPF event debugging is on\n");
/* Show debug status for ISM. */
if (IS_DEBUG_OSPF(ism, ISM) == OSPF_DEBUG_ISM)
vty_out(vty, " OSPF ISM debugging is on\n");
else {
if (IS_DEBUG_OSPF(ism, ISM_STATUS))
vty_out(vty, " OSPF ISM status debugging is on\n");
if (IS_DEBUG_OSPF(ism, ISM_EVENTS))
vty_out(vty, " OSPF ISM event debugging is on\n");
if (IS_DEBUG_OSPF(ism, ISM_TIMERS))
vty_out(vty, " OSPF ISM timer debugging is on\n");
}
/* Show debug status for NSM. */
if (IS_DEBUG_OSPF(nsm, NSM) == OSPF_DEBUG_NSM)
vty_out(vty, " OSPF NSM debugging is on\n");
else {
if (IS_DEBUG_OSPF(nsm, NSM_STATUS))
vty_out(vty, " OSPF NSM status debugging is on\n");
if (IS_DEBUG_OSPF(nsm, NSM_EVENTS))
vty_out(vty, " OSPF NSM event debugging is on\n");
if (IS_DEBUG_OSPF(nsm, NSM_TIMERS))
vty_out(vty, " OSPF NSM timer debugging is on\n");
}
/* Show debug status for OSPF Packets. */
for (i = 0; i < 5; i++)
if (IS_DEBUG_OSPF_PACKET(i, SEND)
&& IS_DEBUG_OSPF_PACKET(i, RECV)) {
vty_out(vty, " OSPF packet %s%s debugging is on\n",
lookup_msg(ospf_packet_type_str, i + 1, NULL),
IS_DEBUG_OSPF_PACKET(i, DETAIL) ? " detail"
: "");
} else {
if (IS_DEBUG_OSPF_PACKET(i, SEND))
vty_out(vty,
" OSPF packet %s send%s debugging is on\n",
lookup_msg(ospf_packet_type_str, i + 1,
NULL),
IS_DEBUG_OSPF_PACKET(i, DETAIL)
? " detail"
: "");
if (IS_DEBUG_OSPF_PACKET(i, RECV))
vty_out(vty,
" OSPF packet %s receive%s debugging is on\n",
lookup_msg(ospf_packet_type_str, i + 1,
NULL),
IS_DEBUG_OSPF_PACKET(i, DETAIL)
? " detail"
: "");
}
/* Show debug status for OSPF LSAs. */
if (IS_DEBUG_OSPF(lsa, LSA) == OSPF_DEBUG_LSA)
vty_out(vty, " OSPF LSA debugging is on\n");
else {
if (IS_DEBUG_OSPF(lsa, LSA_GENERATE))
vty_out(vty, " OSPF LSA generation debugging is on\n");
if (IS_DEBUG_OSPF(lsa, LSA_FLOODING))
vty_out(vty, " OSPF LSA flooding debugging is on\n");
if (IS_DEBUG_OSPF(lsa, LSA_INSTALL))
vty_out(vty, " OSPF LSA install debugging is on\n");
if (IS_DEBUG_OSPF(lsa, LSA_REFRESH))
vty_out(vty, " OSPF LSA refresh debugging is on\n");
}
/* Show debug status for Zebra. */
if (IS_DEBUG_OSPF(zebra, ZEBRA) == OSPF_DEBUG_ZEBRA)
vty_out(vty, " OSPF Zebra debugging is on\n");
else {
if (IS_DEBUG_OSPF(zebra, ZEBRA_INTERFACE))
vty_out(vty,
" OSPF Zebra interface debugging is on\n");
if (IS_DEBUG_OSPF(zebra, ZEBRA_REDISTRIBUTE))
vty_out(vty,
" OSPF Zebra redistribute debugging is on\n");
}
/* Show debug status for default information. */
if (IS_DEBUG_OSPF(defaultinfo, DEFAULTINFO) == OSPF_DEBUG_DEFAULTINFO)
vty_out(vty, " OSPF default information debugging is on\n");
/* Show debug status for NSSA. */
if (IS_DEBUG_OSPF(nssa, NSSA) == OSPF_DEBUG_NSSA)
vty_out(vty, " OSPF NSSA debugging is on\n");
/* Show debug status for LDP-SYNC. */
if (IS_DEBUG_OSPF(ldp_sync, LDP_SYNC) == OSPF_DEBUG_LDP_SYNC)
vty_out(vty, " OSPF ldp-sync debugging is on\n");
vty_out(vty, "\n");
return CMD_SUCCESS;
}
DEFUN_NOSH (show_debugging_ospf,
show_debugging_ospf_cmd,
"show debugging [ospf]",
SHOW_STR
DEBUG_STR
OSPF_STR)
{
struct ospf *ospf = NULL;
ospf = ospf_lookup_by_vrf_id(VRF_DEFAULT);
if (ospf == NULL)
return CMD_SUCCESS;
return show_debugging_ospf_common(vty, ospf);
}
DEFUN_NOSH (show_debugging_ospf_instance,
show_debugging_ospf_instance_cmd,
"show debugging ospf (1-65535)",
SHOW_STR
DEBUG_STR
OSPF_STR
"Instance ID\n")
{
int idx_number = 3;
struct ospf *ospf;
unsigned short instance = 0;
instance = strtoul(argv[idx_number]->arg, NULL, 10);
if ((ospf = ospf_lookup_instance(instance)) == NULL)
return CMD_SUCCESS;
return show_debugging_ospf_common(vty, ospf);
}
static int config_write_debug(struct vty *vty);
/* Debug node. */
static struct cmd_node debug_node = {
.name = "debug",
.node = DEBUG_NODE,
.prompt = "",
.config_write = config_write_debug,
};
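/* DEBUG_NODE config_write callback: emits the configured "debug ospf ..."
 * lines when the running configuration is written out. */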
static int config_write_debug(struct vty *vty)
{
int write = 0;
int i, r;
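/* type_str lists the five OSPF packet types in order; detail_str is indexed
 * directly by the per-type flag byte in conf_debug_ospf_packet[] (assuming
 * the usual bit values from ospf_dump.h: send 0x01, recv 0x02, detail 0x04). */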
const char *type_str[] = {"hello", "dd", "ls-request", "ls-update",
"ls-ack"};
const char *detail_str[] = {
"", " send", " recv", "",
" detail", " send detail", " recv detail", " detail"};
struct ospf *ospf;
char str[16];
memset(str, 0, 16);
ospf = ospf_lookup_by_vrf_id(VRF_DEFAULT);
if (ospf == NULL)
return CMD_SUCCESS;
if (ospf->instance)
snprintf(str, sizeof(str), " %u", ospf->instance);
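/* Every line below is written as "debug ospf<str> ...", so a non-zero
 * instance ID is preserved in the saved configuration. */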
/* debug ospf ism (status|events|timers). */
if (IS_CONF_DEBUG_OSPF(ism, ISM) == OSPF_DEBUG_ISM)
vty_out(vty, "debug ospf%s ism\n", str);
else {
if (IS_CONF_DEBUG_OSPF(ism, ISM_STATUS))
vty_out(vty, "debug ospf%s ism status\n", str);
if (IS_CONF_DEBUG_OSPF(ism, ISM_EVENTS))
vty_out(vty, "debug ospf%s ism event\n", str);
if (IS_CONF_DEBUG_OSPF(ism, ISM_TIMERS))
vty_out(vty, "debug ospf%s ism timer\n", str);
}
/* debug ospf nsm (status|events|timers). */
if (IS_CONF_DEBUG_OSPF(nsm, NSM) == OSPF_DEBUG_NSM)
vty_out(vty, "debug ospf%s nsm\n", str);
else {
if (IS_CONF_DEBUG_OSPF(nsm, NSM_STATUS))
vty_out(vty, "debug ospf%s nsm status\n", str);
if (IS_CONF_DEBUG_OSPF(nsm, NSM_EVENTS))
vty_out(vty, "debug ospf%s nsm event\n", str);
if (IS_CONF_DEBUG_OSPF(nsm, NSM_TIMERS))
vty_out(vty, "debug ospf%s nsm timer\n", str);
}
/* debug ospf lsa (generate|flooding|install|refresh). */
if (IS_CONF_DEBUG_OSPF(lsa, LSA) == OSPF_DEBUG_LSA)
vty_out(vty, "debug ospf%s lsa\n", str);
else {
if (IS_CONF_DEBUG_OSPF(lsa, LSA_GENERATE))
vty_out(vty, "debug ospf%s lsa generate\n", str);
if (IS_CONF_DEBUG_OSPF(lsa, LSA_FLOODING))
vty_out(vty, "debug ospf%s lsa flooding\n", str);
if (IS_CONF_DEBUG_OSPF(lsa, LSA_INSTALL))
vty_out(vty, "debug ospf%s lsa install\n", str);
if (IS_CONF_DEBUG_OSPF(lsa, LSA_REFRESH))
vty_out(vty, "debug ospf%s lsa refresh\n", str);
write = 1;
}
/* debug ospf zebra (interface|redistribute). */
if (IS_CONF_DEBUG_OSPF(zebra, ZEBRA) == OSPF_DEBUG_ZEBRA)
vty_out(vty, "debug ospf%s zebra\n", str);
else {
if (IS_CONF_DEBUG_OSPF(zebra, ZEBRA_INTERFACE))
vty_out(vty, "debug ospf%s zebra interface\n", str);
if (IS_CONF_DEBUG_OSPF(zebra, ZEBRA_REDISTRIBUTE))
vty_out(vty, "debug ospf%s zebra redistribute\n", str);
write = 1;
}
/* debug ospf event. */
if (IS_CONF_DEBUG_OSPF(event, EVENT) == OSPF_DEBUG_EVENT) {
vty_out(vty, "debug ospf%s event\n", str);
write = 1;
}
/* debug ospf nssa. */
if (IS_CONF_DEBUG_OSPF(nssa, NSSA) == OSPF_DEBUG_NSSA) {
vty_out(vty, "debug ospf%s nssa\n", str);
write = 1;
}
/* debug ospf packet all detail. */
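/* r is AND-accumulated across all five packet types: the single "packet all
 * detail" line is emitted only if every type has send, recv and detail
 * enabled, in which case no per-type lines are needed. */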
r = OSPF_DEBUG_SEND_RECV | OSPF_DEBUG_DETAIL;
for (i = 0; i < 5; i++)
r &= conf_debug_ospf_packet[i]
& (OSPF_DEBUG_SEND_RECV | OSPF_DEBUG_DETAIL);
if (r == (OSPF_DEBUG_SEND_RECV | OSPF_DEBUG_DETAIL)) {
vty_out(vty, "debug ospf%s packet all detail\n", str);
return 1;
}
/* debug ospf packet all. */
r = OSPF_DEBUG_SEND_RECV;
for (i = 0; i < 5; i++)
r &= conf_debug_ospf_packet[i] & OSPF_DEBUG_SEND_RECV;
if (r == OSPF_DEBUG_SEND_RECV) {
vty_out(vty, "debug ospf%s packet all\n", str);
for (i = 0; i < 5; i++)
if (conf_debug_ospf_packet[i] & OSPF_DEBUG_DETAIL)
vty_out(vty, "debug ospf%s packet %s detail\n",
str, type_str[i]);
return 1;
}
/* debug ospf packet (hello|dd|ls-request|ls-update|ls-ack)
(send|recv) (detail). */
for (i = 0; i < 5; i++) {
if (conf_debug_ospf_packet[i] == 0)
continue;
vty_out(vty, "debug ospf%s packet %s%s\n", str, type_str[i],
detail_str[conf_debug_ospf_packet[i]]);
write = 1;
}
/* debug ospf te */
if (IS_CONF_DEBUG_OSPF(te, TE) == OSPF_DEBUG_TE) {
vty_out(vty, "debug ospf%s te\n", str);
write = 1;
}
/* debug ospf sr */
if (IS_CONF_DEBUG_OSPF(sr, SR) == OSPF_DEBUG_SR) {
vty_out(vty, "debug ospf%s sr\n", str);
write = 1;
}
/* debug ospf ldp-sync */
if (IS_CONF_DEBUG_OSPF(ldp_sync, LDP_SYNC) == OSPF_DEBUG_LDP_SYNC) {
vty_out(vty, "debug ospf%s ldp-sync\n", str);
write = 1;
}
return write;
}
/* Initialize debug commands. */
void ospf_debug_init(void)
{
install_node(&debug_node);
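/* Most debug commands are registered under both ENABLE_NODE (interactive
 * use) and CONFIG_NODE (so they can appear in the saved configuration);
 * the instance-specific variants below are ENABLE_NODE only. */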
install_element(ENABLE_NODE, &show_debugging_ospf_cmd);
install_element(ENABLE_NODE, &debug_ospf_ism_cmd);
install_element(ENABLE_NODE, &debug_ospf_nsm_cmd);
install_element(ENABLE_NODE, &debug_ospf_lsa_cmd);
install_element(ENABLE_NODE, &debug_ospf_zebra_cmd);
install_element(ENABLE_NODE, &debug_ospf_event_cmd);
install_element(ENABLE_NODE, &debug_ospf_nssa_cmd);
install_element(ENABLE_NODE, &debug_ospf_te_cmd);
install_element(ENABLE_NODE, &debug_ospf_sr_cmd);
install_element(ENABLE_NODE, &debug_ospf_default_info_cmd);
install_element(ENABLE_NODE, &debug_ospf_ldp_sync_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_ism_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_nsm_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_lsa_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_zebra_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_event_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_nssa_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_te_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_sr_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_default_info_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_ldp_sync_cmd);
install_element(ENABLE_NODE, &show_debugging_ospf_instance_cmd);
install_element(ENABLE_NODE, &debug_ospf_packet_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_packet_cmd);
install_element(ENABLE_NODE, &debug_ospf_instance_nsm_cmd);
install_element(ENABLE_NODE, &debug_ospf_instance_lsa_cmd);
install_element(ENABLE_NODE, &debug_ospf_instance_zebra_cmd);
install_element(ENABLE_NODE, &debug_ospf_instance_event_cmd);
install_element(ENABLE_NODE, &debug_ospf_instance_nssa_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_instance_nsm_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_instance_lsa_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_instance_zebra_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_instance_event_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_instance_nssa_cmd);
install_element(ENABLE_NODE, &no_debug_ospf_cmd);
install_element(CONFIG_NODE, &debug_ospf_packet_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_packet_cmd);
install_element(CONFIG_NODE, &debug_ospf_ism_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_ism_cmd);
install_element(CONFIG_NODE, &debug_ospf_nsm_cmd);
install_element(CONFIG_NODE, &debug_ospf_lsa_cmd);
install_element(CONFIG_NODE, &debug_ospf_zebra_cmd);
install_element(CONFIG_NODE, &debug_ospf_event_cmd);
install_element(CONFIG_NODE, &debug_ospf_nssa_cmd);
install_element(CONFIG_NODE, &debug_ospf_te_cmd);
install_element(CONFIG_NODE, &debug_ospf_sr_cmd);
install_element(CONFIG_NODE, &debug_ospf_default_info_cmd);
install_element(CONFIG_NODE, &debug_ospf_ldp_sync_cmd);
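	/* Matching "no debug ospf ..." commands at config node. */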
install_element(CONFIG_NODE, &no_debug_ospf_nsm_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_lsa_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_zebra_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_event_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_nssa_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_te_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_sr_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_default_info_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_ldp_sync_cmd);
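	/* Multi-instance variants ("[no] debug ospf <instance> ...") and the
	 * global "no debug ospf" command. */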
install_element(CONFIG_NODE, &debug_ospf_instance_nsm_cmd);
install_element(CONFIG_NODE, &debug_ospf_instance_lsa_cmd);
install_element(CONFIG_NODE, &debug_ospf_instance_zebra_cmd);
install_element(CONFIG_NODE, &debug_ospf_instance_event_cmd);
install_element(CONFIG_NODE, &debug_ospf_instance_nssa_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_instance_nsm_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_instance_lsa_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_instance_zebra_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_instance_event_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_instance_nssa_cmd);
install_element(CONFIG_NODE, &no_debug_ospf_cmd);
}
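
/*
 * Illustrative sketch only, not part of ospf_dump.c: each command
 * registered above is typically declared with DEFUN and flips a
 * per-feature debug flag when executed.  The names
 * "debug_ospf_example", "debug_ospf_example_cmd" and
 * "conf_debug_ospf_example" below are placeholders for illustration,
 * not identifiers defined by ospfd.
 */
#if 0
static unsigned long conf_debug_ospf_example;

DEFUN (debug_ospf_example,
       debug_ospf_example_cmd,
       "debug ospf example",
       DEBUG_STR
       OSPF_STR
       "OSPF example debugging information\n")
{
	/* Record that this debug category is enabled. */
	conf_debug_ospf_example = 1;
	return CMD_SUCCESS;
}

/* The command would then be made available by registering it, e.g.:
 *   install_element(CONFIG_NODE, &debug_ospf_example_cmd);
 */
#endif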