From 26ebe64b63f9608a811f2e1768ac1e20aaf9f880 Mon Sep 17 00:00:00 2001
From: isabelmsft
Date: Thu, 23 Mar 2023 08:26:50 +0000
Subject: [PATCH] Squashed commit of the following:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

commit 1d54781a1f90bda156b06b0734805babfba88b6d
Merge: 460c7f39 c704b71c
Author: isabelmsft
Date: Thu Mar 23 07:32:32 2023 +0000

    Merge branch 'mux_mclag' of https://github.com/isabelmsft/sonic-utilities into mux_mclag

commit 460c7f390d352b1a0090708fde2f5ca2ace99209
Author: isabelmsft
Date: Thu Mar 23 07:22:54 2023 +0000

    fix UT

commit d3e7f22a806d238b20e7e9db1cdfb1afc5d04ae1
Author: isabelmsft
Date: Thu Mar 23 05:32:03 2023 +0000

    fix UT

commit e2660efe7f6de2531d966a8bf207b04456747374
Author: isabelmsft
Date: Thu Mar 23 04:37:26 2023 +0000

    add UT

commit 68cc589f4d20e60461bf76cbe67cad931f10c7c2
Author: isabelmsft
Date: Thu Mar 23 00:55:15 2023 +0000

    add UT

commit f55ea00bb1fd1d4827c67110498de6d49990d4d1
Author: Mai Bui
Date: Tue Mar 21 00:25:39 2023 -0400

    Revert "Replace pickle by json (#2636)" (#2746)

    This reverts commit 54e26359fccf45d2e40800cf5598a725798634cd,
    due to https://github.com/sonic-net/sonic-buildimage/issues/14089

    Signed-off-by: Mai Bui

commit 3b842c1b215020b24e5934b618d8cb51542e4088
Author: abdosi <58047199+abdosi@users.noreply.github.com>
Date: Fri Mar 17 16:27:48 2023 -0700

    Fix `show interface counters` throwing exception on device with no external interfaces (#2703)

    Fix the `show interface counters` exception on devices that have no
    external ports and where all links are internal (Ethernet or fabric),
    which is possible in a chassis.

commit ce9245d90a3ccdf903d34ba6966224b29de5d15b
Author: Stepan Blyshchak <38952541+stepanblyschak@users.noreply.github.com>
Date: Fri Mar 17 09:10:47 2023 +0200

    [route_check] remove check-frr_patch mock (#2732)

    The test fails with python3.7 (works in 3.9) when stopping a patch
    which hasn't been started. We can always mock the check_output call
    and, if FRR_ROUTES is not defined, return an empty dictionary from
    the mock.

    #### What I did
    Removed the check_frr_patch mock to fix UT running on python3.7

    #### How I did it
    Removed the mock

    #### How to verify it
    Run the unit test in a stretch env

commit 370aa30fc3f51918d4d0c36c9dc2c79f54214e67
Author: Neetha John
Date: Thu Mar 16 17:31:49 2023 -0700

    Revert "Update load minigraph to load backend acl (#2236)" (#2735)

    This reverts commit 1518ca92df1e794222bf45100246c8ef956d7af6.

commit e4415b5ed4ea3100580ee9aaf8060587b8f96611
Author: Vivek
Date: Tue Mar 14 17:55:40 2023 -0700

    Update the ref guide to reflect the vlan brief output (#2731)

    What I did
    show vlan brief will only show DHCPv4 addresses, not the DHCPv6
    destination.

    Signed-off-by: Vivek Reddy Karri

commit 093c964c576e28188ddb0181af1fcc6b7a3adfc5
Author: Aryeh Feigin <101218333+arfeigin@users.noreply.github.com>
Date: Tue Mar 14 22:13:51 2023 +0200

    Fix fast-reboot DB migration (#2734)

    Fix the DB migrator logic for migrating the fast-reboot table,
    fixing the #2621 db_migrator issue.

    How I did it
    Check whether the fast-reboot table exists in the DB.

    How to verify it
    Verified manually: migrating after fast-reboot and after cold/warm
    reboot.
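A minimal sketch of the guard this fix describes ("check whether the fast-reboot table exists"), assuming a redis-py handle on STATE_DB; the key pattern and the migration body are illustrative, not the migrator's actual code:

```python
import redis

state_db = redis.Redis(db=6)  # STATE_DB index on a typical SONiC image

def migrate_fast_reboot_table() -> None:
    # Guard described by the fix: after a cold/warm reboot there is no
    # fast-reboot entry, so migrating unconditionally would misbehave.
    if not state_db.keys("FAST_REBOOT*"):  # hypothetical key pattern
        return
    # ... perform the actual fast-reboot table migration here ...
```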
commit 16baa1a1ddac85ab1db559a27d47b566b65d78e8
Author: Stephen Sun <5379172+stephenxs@users.noreply.github.com>
Date: Tue Mar 14 21:01:52 2023 +0800

    Enhance the logic to wait for all buffer tables to be removed in _clear_qos (#2720)

    - What I did
    This is an enhancement of PR #2503.

    - How I did it
    On top of waiting for BUFFER_POOL_TABLE to be cleared from APPL_DB,
    we need to wait for KEY_SET and DEL_SET as well. KEY_SET and DEL_SET
    are designed to accommodate the APPL_DB entries that were updated by
    manager daemons but have not yet been handled by the orchagent. In
    this case, even if the buffer tables are empty, entries in KEY_SET or
    DEL_SET will land in the buffer tables later on. So we need to wait
    for the key set tables as well.
    Do not delay for the traditional buffer manager, because it does not
    remove any buffer table.
    Provide a CLI option to print a detailed message if any table item
    still exists.

    - How to verify it
    Manual test and unit test.

    - Previous command output (if the output of a command-line utility has changed)
    Running command: /usr/local/bin/sonic-cfggen -d --write-to-db -t /usr/share/sonic/device/x86_64-mlnx_msn2410-r0/ACS-MSN2410/buffers_dynamic.json.j2,config-db -t /usr/share/sonic/device/x86_64-mlnx_msn2410-r0/ACS-MSN2410/qos.json.j2,config-db -y /etc/sonic/sonic_version.yml

    - New command output (if the output of a command-line utility has changed)
    New output appears only with the --verbose option; without it, the
    output is unchanged.
    admin@mtbc-sonic-01-2410:~$ sudo config qos reload --verbose
    Some entries matching BUFFER_*_TABLE:* still exist: BUFFER_QUEUE_TABLE:Ethernet108:0-2
    Some entries matching BUFFER_*_SET still exist: BUFFER_PG_TABLE_KEY_SET
    Some entries matching BUFFER_*_TABLE:* still exist: BUFFER_QUEUE_TABLE:Ethernet108:0-2
    Some entries matching BUFFER_*_SET still exist: BUFFER_PG_TABLE_KEY_SET
    Some entries matching BUFFER_*_TABLE:* still exist: BUFFER_QUEUE_TABLE:Ethernet108:0-2
    Running command: /usr/local/bin/sonic-cfggen -d --write-to-db -t /usr/share/sonic/device/x86_64-mlnx_msn2410-r0/ACS-MSN2410/buffers_dynamic.json.j2,config-db -t /usr/share/sonic/device/x86_64-mlnx_msn2410-r0/ACS-MSN2410/qos.json.j2,config-db -y /etc/sonic/sonic_version.yml
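A sketch of the waiting logic described above, assuming a redis-py handle bound to APPL_DB; the two patterns come straight from the --verbose output, while the polling loop and timeout are illustrative (the real script is part of `config qos reload`):

```python
import time
import redis

def wait_for_buffer_tables_cleared(appl_db: redis.Redis,
                                   timeout: float = 60.0,
                                   verbose: bool = False) -> bool:
    # Wait for the buffer tables AND the pending KEY_SET/DEL_SET entries,
    # since pending entries would repopulate the tables later on.
    patterns = ["BUFFER_*_TABLE:*", "BUFFER_*_SET"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        leftovers = [k for p in patterns for k in appl_db.keys(p)]
        if not leftovers:
            return True
        if verbose:
            for key in leftovers:
                print(f"Some entries matching buffer patterns still exist: {key!r}")
        time.sleep(1)
    return False
```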
Use "enable/disable" instead of "1" as the entry value. How to verify it Run fast-reboot and check that the state-db entry for fast-reboot is being deleted after finalizing fast-reboot and not by an expiring timer. commit 9693c990191143605c74fe98c5a0f099598238fe Author: Stepan Blyshchak <38952541+stepanblyschak@users.noreply.github.com> Date: Fri Mar 10 04:07:25 2023 +0200 [route_check] fix IPv6 address handling (#2722) *In case user has configured an IPv6 address on an interface in CONFIG DB in non simplified form like 2000:31:0:0::1/64 it is present in a simplified form in ASIC_DB. This leads to route_check failure since it just compares strings. commit e65ffce059fc4164a59c17774346764232f2c10d Author: jhli-cisco <93410383+jhli-cisco@users.noreply.github.com> Date: Wed Mar 8 18:03:50 2023 -0800 update fast-reboot (#2728) commit 4f24b1137a00f596bf520fdf159ac8c4c6bb63c6 Author: jingwenxie Date: Thu Mar 9 09:12:19 2023 +0800 [GCU] Add vlanintf-validator (#2697) What I did Fix the bug of GCU vlan interface modification. It should call ip neigh flush dev after removing interface ip. The fix is basically following config CLI's tradition. How I did it Add vlanintf service validator to check if extra step of ip neigh flush is needed. How to verify it GCU E2E test in dualtor testbed. commit 40f4254c87f33145c121fc182601702df7fceced Author: Liu Shilong Date: Thu Mar 9 06:57:05 2023 +0800 Check SONiC dependencies before installation. (#2716) #### What I did SONiC related packages shouldn't be intalled from Pypi. It is security compliance requirement. Check SONiC related packages when using setup.py. commit 793b14ac75042e86f9f38852b9c2eafdf981ab18 Author: bingwang-ms <66248323+bingwang-ms@users.noreply.github.com> Date: Wed Mar 8 13:28:59 2023 -0800 Improve show acl commands (#2667) * Add status for ACL_TABLE and ACL_RULE in STATE_DB commit 3d24b00fcf0159e77eab656f793e9267f323fcbb Author: isabelmsft <67024108+isabelmsft@users.noreply.github.com> Date: Wed Mar 8 00:19:03 2023 -0800 [GCU] Add PFC_WD RDMA validator (#2619) commit dcccec9df35cd76045f0c623d058d0c87fcc3fe6 Author: vdahiya12 <67608553+vdahiya12@users.noreply.github.com> Date: Tue Mar 7 15:19:53 2023 -0800 [show][muxcable] increase timeout for displaying HW_STATUS (#2712) What I did probe mux direction not always return success. Sample output of: while [ 1 ]; do date; show mux hwmode muxdirection; show mux status; sleep 1; done Mon 27 Feb 2023 03:12:25 PM UTC Port Direction Presence ----------- ----------- ---------- Ethernet16 unknown True PORT STATUS HEALTH HWSTATUS LAST_SWITCHOVER_TIME ----------- -------- -------- ------------ --------------------------- Ethernet16 standby healthy inconsistent 2023-Feb-25 07:55:18.269177 If we increase the timeout to 0.5 secs to get the values back from ycabled, this will remove the inconsistency issue, and display the consistent values, because while telemetry is going on, the time to get actual mux value takes significantly longer than 0.1 seconds. PORT STATUS HEALTH HWSTATUS LAST_SWITCHOVER_TIME ----------- -------- -------- ------------ --------------------------- Ethernet16 standby healthy consistent 2023-Feb-25 07:55:18.269177 How I did it How to verify it Manually run changes on setup worst-case CLI return time could be 16 seconds for 32 ports. on avg each port is 200 mSec if telemetry is going, but on average show command will return in < 1 sec for all 32 ports. 
commit 9693c990191143605c74fe98c5a0f099598238fe
Author: Stepan Blyshchak <38952541+stepanblyschak@users.noreply.github.com>
Date: Fri Mar 10 04:07:25 2023 +0200

    [route_check] fix IPv6 address handling (#2722)

    * In case the user has configured an IPv6 address on an interface in
    CONFIG_DB in a non-simplified form like 2000:31:0:0::1/64, it is
    present in a simplified form in ASIC_DB. This leads to a route_check
    failure, since the check just compares strings.
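The string-comparison pitfall above disappears once both sides are normalized before comparing. A small sketch using the standard ipaddress module (the fix in route_check may differ in detail):

```python
import ipaddress

def normalize_prefix(prefix: str) -> str:
    """Return the canonical (simplified) text form of an IP prefix."""
    return str(ipaddress.ip_interface(prefix))

# CONFIG_DB form vs. ASIC_DB form of the same address compare equal
# once both are canonicalized:
assert normalize_prefix("2000:31:0:0::1/64") == normalize_prefix("2000:31::1/64")
```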
commit e65ffce059fc4164a59c17774346764232f2c10d
Author: jhli-cisco <93410383+jhli-cisco@users.noreply.github.com>
Date: Wed Mar 8 18:03:50 2023 -0800

    update fast-reboot (#2728)

commit 4f24b1137a00f596bf520fdf159ac8c4c6bb63c6
Author: jingwenxie
Date: Thu Mar 9 09:12:19 2023 +0800

    [GCU] Add vlanintf-validator (#2697)

    What I did
    Fix a bug in GCU vlan interface modification: it should call
    `ip neigh flush dev` after removing an interface IP. The fix
    basically follows the config CLI's tradition.

    How I did it
    Add a vlanintf service validator to check whether the extra step of
    `ip neigh flush` is needed.

    How to verify it
    GCU E2E test in a dualtor testbed.
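The extra step the validator guards for looks roughly like this; a hedged sketch of the config CLI tradition the commit refers to (the wrapper function is illustrative):

```python
import subprocess

def flush_neighbors(ifname: str) -> None:
    # After an interface IP is removed from CONFIG_DB, stale kernel
    # neighbor entries for the old address must be flushed as well.
    subprocess.run(["ip", "neigh", "flush", "dev", ifname], check=True)
```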
commit 40f4254c87f33145c121fc182601702df7fceced
Author: Liu Shilong
Date: Thu Mar 9 06:57:05 2023 +0800

    Check SONiC dependencies before installation. (#2716)

    #### What I did
    SONiC related packages shouldn't be installed from PyPI. It is a
    security compliance requirement. Check SONiC related packages when
    using setup.py.

commit 793b14ac75042e86f9f38852b9c2eafdf981ab18
Author: bingwang-ms <66248323+bingwang-ms@users.noreply.github.com>
Date: Wed Mar 8 13:28:59 2023 -0800

    Improve show acl commands (#2667)

    * Add status for ACL_TABLE and ACL_RULE in STATE_DB

commit 3d24b00fcf0159e77eab656f793e9267f323fcbb
Author: isabelmsft <67024108+isabelmsft@users.noreply.github.com>
Date: Wed Mar 8 00:19:03 2023 -0800

    [GCU] Add PFC_WD RDMA validator (#2619)

commit dcccec9df35cd76045f0c623d058d0c87fcc3fe6
Author: vdahiya12 <67608553+vdahiya12@users.noreply.github.com>
Date: Tue Mar 7 15:19:53 2023 -0800

    [show][muxcable] increase timeout for displaying HW_STATUS (#2712)

    What I did
    Probing the mux direction does not always return success. Sample
    output of:
    while [ 1 ]; do date; show mux hwmode muxdirection; show mux status; sleep 1; done

    Mon 27 Feb 2023 03:12:25 PM UTC
    Port         Direction    Presence
    -----------  -----------  ----------
    Ethernet16   unknown      True

    PORT         STATUS    HEALTH    HWSTATUS      LAST_SWITCHOVER_TIME
    -----------  --------  --------  ------------  ---------------------------
    Ethernet16   standby   healthy   inconsistent  2023-Feb-25 07:55:18.269177

    If we increase the timeout to 0.5 sec for getting the values back
    from ycabled, this removes the inconsistency and displays consistent
    values, because while telemetry is in progress the time to get the
    actual mux value takes significantly longer than 0.1 seconds.

    PORT         STATUS    HEALTH    HWSTATUS     LAST_SWITCHOVER_TIME
    -----------  --------  --------  -----------  ---------------------------
    Ethernet16   standby   healthy   consistent   2023-Feb-25 07:55:18.269177

    How to verify it
    Manually run the changes on a setup. Worst-case CLI return time
    could be 16 seconds for 32 ports; on average each port takes 200 ms
    if telemetry is in progress, but the show command typically returns
    in < 1 sec for all 32 ports.

    Signed-off-by: vaibhav-dahiya

commit 75bb60fe4f22b2c0831e7b31e5675df0cd01ff7d
Author: isabelmsft <67024108+isabelmsft@users.noreply.github.com>
Date: Tue Mar 7 14:42:50 2023 -0800

    YANG validation for ConfigDB Updates: MIRROR_SESSION use case (#2430)

commit cf3f0ce86b3fd4f7b7548331aab8cc3337663e5d
Author: kellyyeh <42761586+kellyyeh@users.noreply.github.com>
Date: Tue Mar 7 10:47:13 2023 -0800

    Fix non-zero status exit on non secure boot system (#2715)

    What I did
    Warm-reboot fails on kvm due to a non-zero exit from the command
    bootctl status 2>/dev/null | grep -c "Secure Boot: enabled"

    How I did it
    Added || true to return 0 when the previous command fails. Added
    CHECK_SECURE_UPGRADE_ENABLED to check the output of the previous
    command. Added debug logs.

    How to verify it
    Run warm-reboot on kvm and a physical device with increased
    verbosity. Expect the debug log to indicate secure/non-secure boot,
    and a successful warm reboot.

commit 74d6d77c3ae6cc255bf18755bd902ff7d86ace67
Author: Stepan Blyshchak <38952541+stepanblyschak@users.noreply.github.com>
Date: Tue Mar 7 20:23:07 2023 +0200

    [route_check] implement a check for FRR routes not marked offloaded (#2531)

    * [route_check] implement a check for FRR routes not marked offloaded

    * Implemented route_check functionality that checks "show ip route
    json" output from FRR and ensures that all routes are marked as
    offloaded. If some routes are not offloaded for 15 sec, this is
    considered an issue and a mitigation logic is invoked.
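A condensed sketch of the offload check just described: parse FRR's `show ip route json` and collect prefixes lacking the offloaded flag. The field layout follows FRR's JSON output; the helper name is illustrative, and the real check retries for 15 seconds before flagging an issue:

```python
import json
import subprocess

def routes_not_offloaded() -> list:
    out = subprocess.check_output(["vtysh", "-c", "show ip route json"])
    routes = json.loads(out)  # {prefix: [route entries...]}
    missing = []
    for prefix, entries in routes.items():
        for entry in entries:
            # FRR marks hardware-programmed routes with "offloaded": true
            if not entry.get("offloaded", False):
                missing.append(prefix)
    return missing
```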
commit 36e98b3ddf584790a4f7e343c4fbe0895ef9bc85
Author: Vaibhav Hemant Dixit
Date: Mon Mar 6 10:56:51 2023 -0800

    [warm/fast-reboot] Backup logs from tmpfs to disk during fast/warm shutdown (#2714)

    Goal: Preserve logs during TOR upgrades and shutdown

    Need: The PRs below moved logs from disk to tmpfs for specific
    hwskus. Due to these changes, shutdown-path logs are now lost. The
    logs in the shutdown path are crucial for debug purposes.
    sonic-net/sonic-buildimage#13805
    sonic-net/sonic-buildimage#13587

    How I did it
    Check if logs are on tmpfs. If yes, back up logs from /var/log.

    How to verify it
    Verified on a physical device - logs on tmpfs are backed up for the
    past 30 minutes.

commit a1c3bd55eea983aae197282e10ac8099492a6194
Author: Vaibhav Hemant Dixit
Date: Fri Mar 3 12:45:40 2023 -0800

    [db_migrator] Add missing attribute 'weight' to route entries in APPL DB (#2691)

    Fixes: 201911 to 202205 warm upgrade failure in fpmsyncd
    reconciliation due to the missing weight attribute in routes
    (sonic-net/sonic-buildimage#12625).

    How I did it
    Check for the missing attribute weight in APPL_DB route entries. If
    found missing, the attribute is added with an empty value.

    How to verify it
    Verified on a physical device. The 201911 to 202205 upgrade worked
    fine.

commit 696da1878f2e275d8cf2fbb17881d63ca01df32a
Author: Liu Shilong
Date: Thu Mar 2 15:36:57 2023 +0800

    [ci] Fix pipeline issue caused by sonic-slave-* change. (#2709)

    What I did
    These 3 packages may be purged by default. Do not block the pipeline.
    Download deb/whl packages only, to accelerate the download process.

commit bf24267fddc95e8d83ef5908e0eab30ddd6c3ac1
Author: Yaqiang Zhu
Date: Wed Mar 1 10:05:04 2023 +0800

    [dhcp_relay] Fix dhcp_relay restart error while add/del vlan (#2688)

    Why I did it
    On devices that don't have the dhcp_relay service, restarting
    dhcp_relay after adding/deleting a vlan would fail.

    How I did it
    Add support to check whether the device supports the dhcp_relay
    service.

    How to verify it
    1. Unit test
    2. Build and install on a device

    Signed-off-by: Yaqiang Zhu
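The support check described above can be pictured as a FEATURE-table lookup; a sketch assuming a ConfigDBConnector-style handle (the actual helper in the fix may differ):

```python
def device_supports_dhcp_relay(config_db) -> bool:
    # Devices without the dhcp_relay container carry no such FEATURE
    # entry, so vlan add/del should skip restarting the service there.
    feature_table = config_db.get_table("FEATURE")
    return "dhcp_relay" in feature_table
```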
commit 484f5943931eef5ac1bd22467eca648aacbeabd3
Author: isabelmsft <67024108+isabelmsft@users.noreply.github.com>
Date: Mon Feb 27 23:49:01 2023 -0800

    [GCU] Add Sample Unit Test for RDMA Headroom Pool Size Tuning (#2692)

    * add rdma gcu unit test
    * fix comment
    * clean unused code
    * clean format
    * extend to mock patchapplier, in place of changeapplier
    * replace tabs with spaces

commit fa291e1078be3676130c99bcec840c88c221bf8e
Author: Junchao-Mellanox <57339448+Junchao-Mellanox@users.noreply.github.com>
Date: Mon Feb 27 17:49:34 2023 +0800

    Add begin logs to config reload/config minigraph/warm-reboot/fast-reboot (#2694)

    - What I did
    Add more logs for config reload/config minigraph/warm-reboot/
    fast-reboot to identify in the log (notice level) which executed
    command could cause a service impact.

    - How to verify it
    Manual test

commit d58c4fbcbb5dd3b1be004926bf0584c2594049d7
Author: StormLiangMS <89824293+StormLiangMS@users.noreply.github.com>
Date: Mon Feb 27 11:14:54 2023 +0800

    Revert "Secure upgrade (#2337)" (#2675)

    This reverts commit 6fe8599216afb1c302e77c52235c4849be6042b2.

commit 15a59c93093e779479a47e79f8bd4d5772d1fbdd
Author: vdahiya12 <67608553+vdahiya12@users.noreply.github.com>
Date: Fri Feb 24 12:46:36 2023 -0800

    [show][muxcable] add some new commands health, reset-cause, queue_info support for muxcable (#2414)

    This PR adds support for some utility commands for muxcable,
    including health, operationtime, queueinfo and resetcause.

    vdahiya@sonic:~$ show mux health Ethernet4
    PORT       ATTR          HEALTH
    ---------  ------------  --------
    Ethernet4  health_check  Ok

    vdahiya@sonic:~$ show mux health Ethernet4 --json
    {
        "health_check": "Ok"
    }

    vdahiya@sonic:~$ show mux operation Ethernet4 --json
    {
        "operation_time": "22:22"
    }

    vdahiya@sonic:~$ show mux operation Ethernet4
    PORT       ATTR            OPERATION_TIME
    ---------  --------------  ----------------
    Ethernet4  operation_time  22:22

    vdahiya@sonic:~$ show mux resetcause Ethernet4
    PORT       ATTR         RESETCAUSE
    ---------  -----------  ------------
    Ethernet4  reset_cause  0

    vdahiya@sonic:~$ show mux resetcause Ethernet4 --json
    {
        "reset_cause": "0"
    }

    vdahiya@sonic:~$ show mux queueinfo Ethernet4 --json
    {
        "Remote": "{'VSC': {'r_ptr': 0, 'w_ptr': 0, 'total_count': 0, 'free_count': 0, 'buff_addr': 0, 'node_size': 0}, 'UART1': {'r_ptr': 0, 'w_ptr': 0, 'total_count': 0, 'free_count': 0, 'buff_addr': 209870, 'node_size': 1682183}, 'UART2': {'r_ptr': 13262, 'w_ptr': 3, 'total_count': 0, 'free_count': 0, 'buff_addr': 12, 'node_size': 0}}",
        "Local": "{'VSC': {'r_ptr': 0, 'w_ptr': 0, 'total_count': 0, 'free_count': 0, 'buff_addr': 0, 'node_size': 0}, 'UART1': {'r_ptr': 0, 'w_ptr': 0, 'total_count': 0, 'free_count': 0, 'buff_addr': 209870, 'node_size': 1682183}, 'UART2': {'r_ptr': 13262, 'w_ptr': 3, 'total_count': 0, 'free_count': 0, 'buff_addr': 12, 'node_size': 0}}"
    }

commit 07675feb09544f095e9a867634a16d1dee825a69
Author: Mai Bui
Date: Fri Feb 24 12:26:32 2023 -0500

    Replace pickle by json (#2636)

    Signed-off-by: maipbui

    #### What I did
    `pickle` can lead to code execution vulnerabilities. Recommend
    serializing the relevant data as JSON.

    #### How I did it
    Replace `pickle` by `json`

    #### How to verify it
    Pass UT
    Manual test
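The substance of the (since-reverted) pickle-to-json change above is a direct swap: JSON carries the same plain dict/list/str/int data without pickle's ability to execute code on load. A minimal before/after sketch with an illustrative counter-cache payload:

```python
import json

counters = {"Ethernet0": {"rx_ok": 1200, "tx_ok": 980}}

# Before (unsafe): pickle.dump(counters, fd) / pickle.load(fd) can run
# arbitrary code when loading an untrusted cache file.

# After: JSON serializes the same plain data safely.
with open("/tmp/portstat-cache.json", "w") as fd:
    json.dump(counters, fd)

with open("/tmp/portstat-cache.json") as fd:
    restored = json.load(fd)

assert restored == counters
```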
commit 56a9d69bc79eda9d67953ed21fd42221b58ee04d
Author: Yaqiang Zhu
Date: Thu Feb 16 02:31:01 2023 +0800

    [dhcp_relay] Remove add field of vlanid to DHCP_RELAY table while add vlan (#2678)

    What I did
    Stop adding a vlanid field to the DHCP_RELAY table while adding a
    vlan, which would conflict with the YANG model.

    How to verify it
    By unit tests

    Signed-off-by: Yaqiang Zhu

commit 8f7f8bd1810328fc0faa85b23f2033aa3fc61191
Author: davidpil2002 <91657985+davidpil2002@users.noreply.github.com>
Date: Tue Feb 14 11:38:53 2023 +0200

    Add support of secure warm-boot (#2532)

    - What I did
    Add support of secure warm-boot to SONiC. Basically, warm-boot loads
    a new kernel without doing a full/cold boot, by loading the new
    kernel and exec'ing it with the kexec Linux command. As a result,
    even when the Secure Boot feature is enabled, a user or a malicious
    user could still load an unsigned kernel; to avoid that, we added
    support for secure warm-boot. More description of this feature can
    be found in the Secure Boot HLD: sonic-net/SONiC#1028

    - How I did it
    In general, Linux supports it, so I enabled this support with the
    following steps:
    I added some special flags to the Linux kernel when the user builds
    sonic-buildimage with the Secure Boot feature enabled.
    I added the "-s" flag to the kexec command.
    Note: more details in the HLD above.

    - How to verify it
    * Good flow: manually install (with sonic-installer) a new secure
    image (a SONiC image built with the Secure Boot flag enabled); after
    the secure image is installed, run warm-reboot. Check that the new
    kernel is really loaded and switched.
    * Bad flow: do the same steps 1-2 as the good flow, but with an
    insecure image (a SONiC image built without Secure Boot enabled).
    After the insecure image is installed and warm-boot is triggered,
    you should get an error that the new unsigned kernel from the
    unsecured image was not loaded.
    Automation test - TBD
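The `-s` flag mentioned above selects the kexec_file_load syscall, which lets the kernel verify the new image's signature. A hedged sketch of the load step; the warm-reboot script itself is bash, so the Python wrapper, paths and parameters here are illustrative:

```python
import subprocess

def load_kernel_for_secure_warm_boot(kernel: str, initrd: str, cmdline: str) -> None:
    # -s / --kexec-file-syscall: load via kexec_file_load so the signed
    # kernel is verified even with Secure Boot enabled; an unsigned
    # kernel is rejected at this step.
    subprocess.check_call([
        "kexec", "-s", "-l", kernel,
        f"--initrd={initrd}", f"--append={cmdline}",
    ])
```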
commit a05ce562e37463a7ff8d8c012aca347c8bb45e03
Author: Yaqiang Zhu
Date: Tue Feb 14 09:18:37 2023 +0800

    [doc] Add docs for dhcp_relay show/clear cli (#2649)

    What I did
    Add docs for the dhcp_relay show/clear cli.

    Signed-off-by: Yaqiang Zhu

commit 3228979b2aa0de90444f385a8f6f1c8c66fd0e09
Author: wenyiz2021 <91497961+wenyiz2021@users.noreply.github.com>
Date: Mon Feb 13 11:04:58 2023 -0800

    [portstat CLI] don't print reminder if use json format (#2670)

    * no print if using json format
    * add print for chassis

commit b741628f5f30283b40b75b784e1daf57671ae6d8
Author: Vadym Hlushko <62022266+vadymhlushko-mlnx@users.noreply.github.com>
Date: Mon Feb 13 13:03:12 2023 +0200

    [generate_dump] Revert "Revert generate_dump optimization PR's #2599", add fixes for empty /dump folder and symbolic links (#2645)

    - What I did
    0ee19e5 Revert "Revert the show-techsupport optimization PR's #2599"
    c8940ad Add a fix for the empty /dump folder inside the final tar
    archive generated by the show techsupport CLI command.
    8a8668c Add a fix to not follow symbolic links, to avoid duplicate
    files inside the final tar archive generated by the show techsupport
    CLI command.

    - How I did it
    Modify the scripts/generate_dump script.

    - How to verify it
    1. Manual verification: run the show techsupport CLI command and
    save the output original.tar.gz (with the original generate_dump
    script); run the show techsupport CLI command and save the output
    fixes.tar.gz (with the generate_dump script modified by this PR);
    unpack both archives and compare the directories with the ncdu and
    diff --brief --recursive original fixes Linux utilities.
    2. Run the community tests sonic-mgmt/tests/show_techsupport

    Signed-off-by: vadymhlushko-mlnx

commit 96d5c2d5fcc1967b0f5f517ccc490e3b95be3585
Author: Yaqiang Zhu
Date: Fri Feb 10 17:49:38 2023 +0800

    [vlan] Refresh dhcpv6_relay config while adding/deleting a vlan (#2660)

    What I did
    Currently, adding/deleting a vlan doesn't change the related
    dhcpv6_relay config, which is incorrect.

    How I did it
    1. Add a dhcp_relay table init entry while adding a vlan
    2. Delete dhcp_relay related config while deleting a vlan
    3. Add unit tests

    How to verify it
    1. By unit tests
    2. Install the whl and run the cli

    Signed-off-by: Yaqiang Zhu

commit a090523a9ef07eaab176893b7eaa660930fa5dbf
Author: jingwenxie
Date: Fri Feb 10 09:13:51 2023 +0800

    [GCU] protect loopback0 from deletion (#2638)

    What I did
    Refer to sonic-net/sonic-buildimage#11171; protect Loopback0 from
    deletion.

    How I did it
    Add a patch checker to fail the validation when removing Loopback0.

    How to verify it
    Unit test
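A sketch of the patch checker's core test, assuming GCU's JSON-patch representation (a list of op/path dicts); the function name is hypothetical:

```python
def validate_no_loopback0_removal(patch_ops: list) -> bool:
    """Fail validation when any operation removes Loopback0."""
    protected = "/LOOPBACK_INTERFACE/Loopback0"
    for op in patch_ops:
        if op.get("op") == "remove" and op.get("path", "").startswith(protected):
            return False
    return True

# A patch deleting Loopback0 is rejected:
assert not validate_no_loopback0_removal(
    [{"op": "remove", "path": "/LOOPBACK_INTERFACE/Loopback0"}]
)
```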
commit 18a3d00ad160fd7d890c3f8061cc84b96374f7a3
Author: Stepan Blyshchak <38952541+stepanblyschak@users.noreply.github.com>
Date: Thu Feb 9 05:20:11 2023 +0200

    [config/show] Add command to control pending FIB suppression (#2495)

    * [config/show] Add command to control pending FIB suppression

    What I did
    I added a command `config suppress-pending-fib` that allows the user
    to enable/disable this feature. Once it is enabled, BGP will wait
    for a route to be programmed to HW before announcing the route to
    the peers. I also added a corresponding show command that prints the
    status of this feature.

commit 5244e3b5cbc5d6708f56401219a4257d47b4b0f7
Author: mihirpat1 <112018033+mihirpat1@users.noreply.github.com>
Date: Wed Feb 8 16:39:00 2023 -0800

    Add transceiver info CLI support to show output from TRANSCEIVER_INFO for ZR (#2630)

    * Add transceiver info CLI support to show output from
    TRANSCEIVER_INFO for ZR
    * Added test case for info CLI
    * Updated command reference
    * Resolved merge conflicts
    * Made convert_sfp_info_to_output_string generic for CMIS and
    non-CMIS and added a test case to address a PR comment
    * Resolved test_multi_asic_interface_status_all failure
    * Addressed PR comments

    Signed-off-by: Mihir Patel

commit 05aedd558dbe901b873e2e2c8e11afc15a67db85
Author: vdahiya12 <67608553+vdahiya12@users.noreply.github.com>
Date: Tue Feb 7 12:30:18 2023 -0800

    [show] add support for gRPC show commands for `active-active` (#2629)

    Signed-off-by: vaibhav-dahiya vdahiya@microsoft.com

    This PR adds support for show mux hwmode muxdirection as well as
    show mux grpc muxdirection, to show the state of gRPC connected to
    the SoCs for the 'active-active' cable type.

    vdahiya@sonic:~$ show mux grpc muxdirection
    Port       Direction    Presence    PeerDirection    ConnectivityState
    ---------  -----------  ----------  ---------------  -------------------
    Ethernet0  active       False       active           READY

    vdahiya@sonic:~$ show mux grpc muxdirection --json
    {
        "HWMODE": {
            "Ethernet0": {
                "Direction": "active",
                "Presence": "False",
                "PeerDirection": "active",
                "ConnectivityState": "READY"
            }
        }
    }

    What I did
    Added support for the commands.

    How to verify it
    UT and running the changes on a testbed.

commit 9512ccd2d2863d7bcb5e7f42cf60b0be39c61c70
Author: Sudharsan Dhamal Gopalarathnam
Date: Tue Feb 7 12:14:49 2023 -0800

    [sai_failure_dump]Invoking dump during SAI failure (#2633)

    * Added logic in techsupport script to collect SAI failure dump

commit 4971b7b71067e86c7f86591efc86993aa0c0ce1d
Author: Stepan Blyshchak <38952541+stepanblyschak@users.noreply.github.com>
Date: Tue Feb 7 18:07:52 2023 +0200

    [db_migrator] make LOG_LEVEL_DB migration more robust (#2651)

    It could be that LOG_LEVEL_DB includes some invalid data and/or a
    KEY_SET that is not cleaned up due to an issue; for example, we
    observed a _gearsyncd_KEY_SET set included in the LOG_LEVEL_DB and
    preserved in warm reboot. However, this key is not of type hash,
    which leads to an exception and migration failure. The migration
    logic should be more robust, allowing users to upgrade even though
    some daemon has leftovers in the LOG_LEVEL_DB or invalid data is
    written.

    - What I did
    Fix a migration issue that leads to device configuration being lost.

    - How I did it
    Wrap the logic in try/except/finally.

    - How to verify it
    202205 -> 202211/master upgrade.

    Signed-off-by: Stepan Blyschak
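The robustness fix above boils down to not letting one malformed key abort the whole migration; a hedged sketch with a redis-py handle, where the per-key skip, the cleanup step and the helper name are illustrative rather than the migrator's actual code:

```python
import redis

def migrate_loglevel_db(loglevel_db: redis.Redis) -> dict:
    migrated = {}
    try:
        for key in loglevel_db.keys("*"):
            if loglevel_db.type(key) != b"hash":
                continue  # e.g. a leftover _gearsyncd_KEY_SET of type set
            migrated[key] = loglevel_db.hgetall(key)
    except Exception:
        pass  # invalid data must never abort the upgrade and lose config
    finally:
        loglevel_db.flushdb()  # illustrative: clear leftovers either way
    return migrated  # entries to be written to their new home
```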
commit d80ec9722880d7b8a6786a27696bff97ae30b903
Author: siqbal1986
Date: Mon Feb 6 12:00:09 2023 -0800

    Fixed a bug in "show vnet routes all" causing screen overrun. (#2644)

    Signed-off-by: siqbal1486

commit 6b567168bc971cac112681596d828c919d252bc8
Author: mihirpat1 <112018033+mihirpat1@users.noreply.github.com>
Date: Wed Feb 1 13:48:57 2023 -0800

    show logging CLI support for logs stored in tmpfs (#2641)

    * show logging CLI support for logs stored in tmpfs
    * Fixed testcase failures
    * Reverted unwanted change in a file
    * Added testcase for syslog.1 in log.tmpfs directory
    * mend

    Signed-off-by: Mihir Patel

commit 38e5caadb7caebedb9237a9cd87c927bd6637fe5
Author: jfeng-arista <98421150+jfeng-arista@users.noreply.github.com>
Date: Wed Feb 1 11:29:49 2023 -0800

    [chassis][voq] Add asic id for linecards so "show fabric counters queue/port" can work. (#2499)

    * Add asic id for linecards so "show fabric counters queue/port" can
    work.
    * Add test coverage

    Signed-off-by: Jie Feng

commit 78e5f179772fc951732a33191865efabea77c965
Author: longhuan-cisco <84595962+longhuan-cisco@users.noreply.github.com>
Date: Wed Feb 1 11:12:41 2023 -0800

    Add Transceiver PM basic CLI support to show output from TRANSCEIVER_PM table for ZR (#2615)

    * Transceiver PM basic CLI support to show output from
    TRANSCEIVER_PM table
    * Fix alert typo
    * Fix display format and add cd short link
    * Add doc for pm
    * Update Command-Reference.md

commit 8a7609930cae97934719609b42d61ad153c3350d
Author: wenyiz2021 <91497961+wenyiz2021@users.noreply.github.com>
Date: Wed Feb 1 09:33:14 2023 -0800

    [masic support] 'show run bgp' support for multi-asic (#2427)

    Support 'show run bgp' for multi-asics.
    Add mock tables and UTs for single-asic, multi-asic, and
    bgp-not-running cases.

commit 370fe81229f3fbea29d5bf5b9ee2347824056d80
Author: kartik-arista <61531803+kartik-arista@users.noreply.github.com>
Date: Tue Jan 31 10:19:26 2023 -0800

    Making 'show feature autorestart' more resilient to missing auto_restart config in CONFIG_DB (#2592)

    Fixes BUG 762723
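The resilience fix just above is the classic missing-key pattern; a sketch where the field name comes from the commit title and the default value is an assumption:

```python
def autorestart_state(feature_entry: dict) -> str:
    # A FEATURE entry with no auto_restart key should render as a sane
    # default instead of raising KeyError and crashing the whole listing.
    return feature_entry.get("auto_restart", "disabled")
```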
commit e6d880a0249f1f2e0b9d4ef2412e84e9a31b45a2
Author: Yaqiang Zhu
Date: Mon Jan 30 21:07:12 2023 -0800

    [doc] Update docs for dhcp_relay config cli (#2598)

    What I did
    Updated docs about the dhcp_relay config cli.

    Signed-off-by: Yaqiang Zhu

commit 9865dda9b7075bc9c788cba893cba329a0548e24
Author: abdosi <58047199+abdosi@users.noreply.github.com>
Date: Mon Jan 30 17:52:50 2023 -0800

    Skip saidump for Spine Router as this can take more than 5 sec (#2637)

    To address sonic-net/sonic-buildimage#13561, skip saidump on T2
    platforms for the time being.

commit 56d41f2581157c31a09da365515ac9df9ebb540b
Author: ycoheNvidia <99744138+ycoheNvidia@users.noreply.github.com>
Date: Mon Jan 30 23:28:15 2023 +0200

    Secure upgrade (#2337)

    #### What I did
    Added support for secure upgrade.

    #### How I did it
    It includes image signing during build (in the sonic-buildimage
    repo) and verification during image install (in sonic-utilities).
    The HLD can be found in the following PR:
    https://github.com/sonic-net/SONiC/pull/1024

    #### How to verify it
    The feature verifies that the image was not modified since it was
    built by the vendor. During installation, the image can be verified
    with a signature attached to it. For image verification, the image
    must be signed: a signing key and certificate (paths in
    SECURE_UPGRADE_DEV_SIGNING_KEY and SECURE_UPGRADE_DEV_SIGNING_CERT
    in rules/config) must be provided during build, and during image
    install the secure boot flag must be enabled in the BIOS and the
    signing certificate must be available in the BIOS.

    #### Feature dependencies
    For this feature to work smoothly, the Secure Boot feature needs to
    be implemented as well. The Secure Boot feature will be merged in
    the near future.
    sonic-buildimage PR:
    https://github.com/sonic-net/sonic-buildimage/pull/11862

commit 0744b19b7321aa33269ee7a76937f21e44c2750c
Author: Junchao-Mellanox <57339448+Junchao-Mellanox@users.noreply.github.com>
Date: Tue Jan 31 02:15:01 2023 +0800

    [system-health] Fix issue: show system-health CLI crashes (#2635)

    - What I did
    Fix issue: show system-health CLI crashes

    root@switch:/home/admin# show system-health summary
    Traceback (most recent call last):
      File "/usr/local/bin/show", line 8, in <module>
        sys.exit(cli())
      File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 764, in __call__
        return self.main(*args, **kwargs)
      File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 717, in main
        rv = self.invoke(ctx)
      File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1137, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1137, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 956, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 555, in invoke
        return callback(*args, **kwargs)
      File "/usr/local/lib/python3.9/dist-packages/show/system_health.py", line 113, in summary
        _, chassis, stat = get_system_health_status()
      File "/usr/local/lib/python3.9/dist-packages/show/system_health.py", line 10, in get_system_health_status
        if os.environ["UTILITIES_UNIT_TESTING"] == "1":
      File "/usr/lib/python3.9/os.py", line 679, in __getitem__
        raise KeyError(key) from None
    KeyError: 'UTILITIES_UNIT_TESTING'

    - How I did it
    Use dict.get instead of the [] operator.

    - How to verify it
    Manual test
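The one-line cause and cure from the traceback above:

```python
import os

# Before: raises KeyError when the variable is unset outside unit tests.
# if os.environ["UTILITIES_UNIT_TESTING"] == "1":

# After: dict.get returns None (no exception) when the key is missing.
if os.environ.get("UTILITIES_UNIT_TESTING") == "1":
    pass  # unit-test-only code path
```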
commit c3c4905bb1dac2fd201f4647b730b29424e20013
Author: anamehra <54692434+anamehra@users.noreply.github.com>
Date: Mon Jan 30 10:05:10 2023 -0800

    Fixed admin state config CLI for Backport interfaces (#2557)

    Fixes sonic-net/sonic-buildimage#13057

commit 3d53f9930084c87bec12c498d8af625ae04a2a05
Author: zhixzhu <44230426+zhixzhu@users.noreply.github.com>
Date: Tue Jan 31 02:02:33 2023 +0800

    support multi asic for show queue counter (#2439)

    Added option -n for both "show queue counter" and "queuestat", using
    the multi_asic module in queuestat to query the database of the
    specified namespace. Removed the function get_queue_port() to
    decrease the number of database connections.

commit 5556fafc85edaa1d16276e02e7b34959033ffb29
Author: Baorong Liu <96146196+baorliu@users.noreply.github.com>
Date: Fri Jan 27 11:19:23 2023 -0800

    [show_bfd] add local discriminator in show bfd command (#2625)

commit 17609919fd461521090113d1de6a77d5062905c9
Author: jingwenxie
Date: Fri Jan 27 15:48:15 2023 +0800

    [GCU] Ignore bgpraw table in GCU operation (#2628)

    What I did
    After the previous fix #2623, GCU still fails in the rollback
    operation. The bgpraw table should be discarded in all GCU
    operations. Thus, I changed the get_config_db_as_json function to
    crop out the "bgpraw" table.

    How I did it
    Pop the "bgpraw" table if it exists.

    How to verify it
    Unit test
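The crop-out described above is a single guarded pop; a sketch of the adjustment to get_config_db_as_json, with the surrounding accessor paraphrased:

```python
def get_config_db_as_json(config_db) -> dict:
    config = dict(config_db)          # illustrative: full ConfigDB as a dict
    config.pop("bgpraw", None)        # bgpraw is display-only; GCU must ignore it
    return config
```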
commit 3db8c009a87e246f6f2e16e5e9f22aca264d4c51
Author: Dante (Kuo-Jung) Su
Date: Thu Jan 26 01:30:55 2023 +0800

    Add interface link-training command into the CLI doc (#2257)

    * LT Admin/Oper: Use 'N/A' when the data is unavailable
    * fix test failure
    * fix coverage failure
    * [doc]: Update Command-Reference.md (#2257)
    Add the interface link-training command into the CLI doc.
    Use 'N/A' if the link-training attribute is not supported in the
    SAI.

    Signed-off-by: Dante Su

commit 28b255afedb04a9214ca8a7bf10c38c5c64d4c48
Author: jingwenxie
Date: Wed Jan 25 08:51:16 2023 +0800

    [GCU] Ignore bgpraw in GCU applier (#2623)

    What I did
    show run all output will include bgpraw for business needs. The GCU
    ipv6 test updates the BGP_NEIGHBOR table, which changes the bgpraw
    content and makes the apply-patch operation fail. The solution is to
    add bgpraw to the ignored tables.

    How I did it
    Add the new bgpraw table to the ignored backend tables.

    How to verify it
    Existing unit tests and a local E2E GCU test.

commit ff5167a1c4f2289b1c7b5cf23c802fa3ccde673a
Author: Jing Zhang
Date: Mon Jan 23 15:49:32 2023 -0800

    [muxcable][config] Add support to enable/disable ceasing to be an advertisement interface when `radv` service is stopped (#2622)

    This PR adds CLI support to enable or disable the feature to send
    out a good-bye packet when the radv service is stopped on
    active-active dualtor devices.

    sign-off: Jing Zhang zhangjing@microsoft.com

commit ed1d3c99b60bf8547342a1f98f349eac264fe887
Author: jfeng-arista <98421150+jfeng-arista@users.noreply.github.com>
Date: Mon Jan 23 13:23:31 2023 -0800

    [chassis][voq] Add "show fabric reachability" command. (#2528)

    What I did
    Added the "show fabric reachability" command. The output of this
    command:

    Local Link    Remote Module    Remote Link    Status
    ------------  ---------------  -------------  --------
    0             304              171            up
    1             304              156            up
    2             304              147            up

    Added a test for the change at tests/fabricstat_test.py. The test is
    at sonic-net/sonic-mgmt#6620.

commit 049bacf95babe50d32d90c68cf7b4825f5a64b46
Author: Vadym Hlushko <62022266+vadymhlushko-mlnx@users.noreply.github.com>
Date: Mon Jan 23 17:39:58 2023 +0200

    Revert (#2599)

    b34a540c [generate_dump] Fix for deletion flow for all secret files
    from show-techsupport dump (#2571)
    258ffa09 [generate_dump] Optimize the execution time of 'show
    techsupport' CLI by parallel function execution (#2512)
    572c8cff Optimize the execution time of the 'show techsupport'
    script to 5-10%, (#2504)

    This reverts commits
    b34a540cca5555ab3aa74e19e81f24c2a20d311b
    258ffa0928ce2c74ebdc180e13c6476dc2534983
    572c8cffdddb7683e158d36067398600a71512ea

commit fafb0dfef95607b5b7dc2da0307ebb2bcd4508bf
Author: Saikrishna Arcot
Date: Thu Jan 19 14:42:14 2023 -0800

    [warm-reboot] Use kexec_file_load instead of kexec_load when available (#2608)

    On some dev VMs, warm reboot on a VS image fails. Specifically,
    after kexec is called and the new kernel starts, the new kernel
    tries to load the initramfs, but fails to do so for whatever reason.
    There may be messages about gzip decompression failing and that it's
    corrupted.

    After some experimentation, it was found that when first loading the
    new kernel and initramfs into memory, using the `kexec_file_load`
    syscall (`-s` flag in kexec) worked fine, whereas using the default
    `kexec_load` syscall resulted in a failure. It's unknown why
    `kexec_file_load` worked fine when `kexec_load` didn't; there
    shouldn't be any difference for non-secure-boot kernels, as far as I
    can tell. What was seen, however, was that when taking a KVM dump in
    the failure case, the memory that stored the initramfs had
    differences compared to what was on disk. It's unknown what caused
    these differences.

    As a workaround (and as a bit of a feature enhancement), use the
    `-a` flag with kexec, which tells it to use `kexec_file_load` if
    available, and `kexec_load` if it's not available or otherwise
    fails. armhf doesn't support `kexec_file_load`, whereas arm64 gained
    support for `kexec_file_load` in the 5.19 kernel (we're currently on
    5.10). amd64 has supported `kexec_file_load` since 3.17. This also
    makes it possible to do kexec on secure boot systems, where the
    kernel image must be loaded via `kexec_file_load`.

    Signed-off-by: Saikrishna Arcot
commit 954d9e9f7b1678cc794af34ef1ef782bec8e2ee4
Author: pettershao-ragilenetworks <81281940+pettershao-ragilenetworks@users.noreply.github.com>
Date: Fri Jan 20 06:17:18 2023 +0800

    fix show techsupport error (#2597)

    * Modify the order of the "--allow-process-stop" option; it belongs
    to 'generate_dump'.

commit 3c8a9309e5a409dd008b84159ea3924209dbf0bf
Author: isabelmsft <67024108+isabelmsft@users.noreply.github.com>
Date: Thu Jan 19 14:01:17 2023 -0600

    [GCU] Prohibit removal of PFC_WD POLL_INTERVAL field (#2545)

commit bde706b846e0c47e748ed3491177b3d5ad054175
Author: Stepan Blyshchak <38952541+stepanblyschak@users.noreply.github.com>
Date: Thu Jan 19 17:33:38 2023 +0200

    [techsupport] include APPL_STATE_DB dump (#2607)

    - What I did
    I added APPL_STATE_DB to the techsupport dump.

    - How I did it
    Added a call to save APPL_STATE_DB.

    - How to verify it
    Run techsupport and verify dump/APPL_STATE_DB.json

    Signed-off-by: Stepan Blyschak

commit cb3d462db82894eb38f1f3f6edd7f39f5a09a060
Author: isabelmsft <67024108+isabelmsft@users.noreply.github.com>
Date: Tue Jan 17 15:31:46 2023 -0600

    YANG Validation for ConfigDB Updates: RADIUS_SERVER (#2604)

    #### What I did
    Add YANG validation using GCU for writes to the RADIUS_SERVER table
    in ConfigDB.

    #### How I did it
    Using the same method as
    https://github.com/sonic-net/sonic-utilities/pull/2190/files, extend
    to the RADIUS table.

    #### How to verify it
    Verified by testing on a virtual switch CLI, unit tests.

commit b01737974e227040e5c3f0e1c48a4b4e8839c4e3
Author: Lior Avramov <73036155+liorghub@users.noreply.github.com>
Date: Tue Jan 17 18:37:54 2023 +0200

    Remove TODO comment which is no longer relevant (#2600)

commit 521ecfd54317291014c584ecf7c11997381ab7c8
Author: jingwenxie
Date: Sat Jan 14 09:34:36 2023 +0800

    [show] Add bgpraw to show run all (#2537)

    #### What I did
    Add bgpraw output to `show runningconfiguration all`

    ```
    Requirements:
    1. The current `show runningconfig` prints all of ConfigDB in a JSON
       format; we need to add a new key-value into the JSON output:
       "bgpraw" with a long string value.
    2. The long string value should be the output of `vtysh -c "show
       run"`. It is normally a multiline string and may include special
       characters like \". Need to make sure the escaping is done
       properly.
    3. We do not need to insert the key-value into ConfigDB if it is not
       existing there.
    4. If ConfigDB already has the key-value, we do not need to override
       it with the vtysh command output.
    5. Not break multi-asic use
    ```

    #### How I did it
    Generate the bgpraw output, then append it to
    `show runningconfiguration all`'s output.

    #### How to verify it
    Manual test

    #### Previous command output (if the output of a command-line utility has changed)
    ```
    admin@vlab-01:~$ show run all
    {
        "ACL_TABLE": {
        ......
        "WRED_PROFILE": {
            "AZURE_LOSSLESS": {
                "ecn": "ecn_all",
                "green_drop_probability": "5",
                "green_max_threshold": "2097152",
                "green_min_threshold": "1048576",
                "red_drop_probability": "5",
                "red_max_threshold": "2097152",
                "red_min_threshold": "1048576",
                "wred_green_enable": "true",
                "wred_red_enable": "true",
                "wred_yellow_enable": "true",
                "yellow_drop_probability": "5",
                "yellow_max_threshold": "2097152",
                "yellow_min_threshold": "1048576"
            }
        }
    }
    ```

    #### New command output (if the output of a command-line utility has changed)
    ```
    admin@vlab-01:~$ show run all
    {
        "ACL_TABLE": {
        ......
        "WRED_PROFILE": {
            "AZURE_LOSSLESS": {
                "ecn": "ecn_all",
                "green_drop_probability": "5",
                "green_max_threshold": "2097152",
                "green_min_threshold": "1048576",
                "red_drop_probability": "5",
                "red_max_threshold": "2097152",
                "red_min_threshold": "1048576",
                "wred_green_enable": "true",
                "wred_red_enable": "true",
                "wred_yellow_enable": "true",
                "yellow_drop_probability": "5",
                "yellow_max_threshold": "2097152",
                "yellow_min_threshold": "1048576"
            }
        },
        "bgpraw": "Building configuration...\n\nCurrent configuration......end\n"
    }
    ```
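A sketch of the bgpraw injection per the requirements above: take the `vtysh -c "show run"` output verbatim and add it only when ConfigDB does not already carry the key. Function name is illustrative:

```python
import subprocess

def add_bgpraw(config: dict) -> dict:
    # Requirement 4: never override an existing bgpraw value from ConfigDB.
    if "bgpraw" not in config:
        config["bgpraw"] = subprocess.check_output(
            ["vtysh", "-c", "show run"], text=True
        )  # multiline string; json.dumps escapes quotes and newlines safely
    return config
```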
commit 83295189cab227d640839c0079207bf17b6442d8
Author: Aryeh Feigin <101218333+arfeigin@users.noreply.github.com>
Date: Fri Jan 13 05:47:22 2023 +0200

    Extend fast-reboot STATE_DB entry timer (#2577)

    * Due to an issue of falling back to cold-boot when using upgrade
    with fast-reboot combined with FW upgrade, a short-term solution is
    to extend the timer. The long-term solution, replacing the timer
    with a fast-reboot finalizer, is in the works.

commit 68a11e77212c09d87d98b4a4724f57e06e6442da
Author: Aryeh Feigin <101218333+arfeigin@users.noreply.github.com>
Date: Wed Jan 11 10:18:07 2023 +0200

    Preserve copp tables through DB migration (#2524)

    This PR should be merged together with sonic-net/sonic-swss#2548 and
    is required for 202205 and 202211. This PR implements the
    "[fastboot] Preserve CoPP table" HLD to improve the fastboot flow
    (sonic-net/SONiC#1107).

    - What I did
    Preserve COPP table contents through DB migration. (Mellanox only)

    - How I did it
    Skipped deleting of COPP tables in the DB migrator.

    - How to verify it
    Observe that COPP table contents are preserved right after reboot.

commit c236b83a7afea4c5479c7cb18555f301847f080c
Author: CliveNi
Date: Tue Jan 10 01:11:19 2023 +0800

    [sfputil] Firmware download/upgrade CLI support for QSFP-DD (#1947) (#2349)

    * [sfputil] Firmware download/upgrade CLI support for QSFP-DD (#1947)

    - Description
    Checking whether the running image is switched after CDB_run during
    the firmware upgrade process.

    - Motivation and Context
    CDB_run may cause several seconds of NACK or stretching on the I2C
    bus, depending on the module vendor's implementation; checking the
    status after CDB_run keeps compatibility with different
    implementations.

    * Update unit tests for sfputil.
    Test: Creating an "is_fw_switch_done" test; this function is
    expected to return 1 when 'status' == True and the running image
    ('result'[1, 5]) differs from the committed one ('result'[2, 6]);
    otherwise it returns -1.

    * [sfputil] Firmware download/upgrade CLI support for QSFP-DD (#1947)

    - Description
    Adding error judgements in the "is_fw_switch_done" function. Update
    unit tests for "is_fw_switch_done".

    - Motivation and Context
    Checking the status of images to avoid committing an image with a
    wrong status.

    * [sfputil] Firmware download/upgrade CLI support for QSFP-DD (#1947)
    Fixing: Comparing the error code with a wrong variable.
    Refactor: Renaming variables to better suit their purpose.
    Refactor: Removing an if case with low correlation to the function.
    Feat: Adding "echo" to display detailed results.

    * Update unit tests for sfputil.

    * [sfputil] Firmware download/upgrade CLI support for QSFP-DD (#1947)
    Feat: Reducing the frequency of checks during "is_fw_switch_done".
    Refactor: Removing a repeated line.

commit 5ac55f06fc3efcfc02450ff33410b1df2e290ddd
Author: Qi Luo
Date: Fri Jan 6 17:37:51 2023 -0800

    Revert "sonic-utilities: Update config reload() to verify formatting of an input file (#2529)" (#2586)

    This reverts commit 42f51c26d1d0017f3211904ca19c023b5d784463.
    Reverts sonic-net/sonic-utilities#2529
    Reason: There are use cases like `config reload /dev/stdin`, for
    example [L2 Switch mode · sonic-net/SONiC Wiki (github.com)]
    (https://github.com/sonic-net/SONiC/wiki/L2-Switch-mode). The
    original PR would read the input file twice, so /dev/stdin does not
    work.

commit 2dc17968b6fa95289aa98fa30ff57eb87afaf231
Author: wenyiz2021 <91497961+wenyiz2021@users.noreply.github.com>
Date: Fri Jan 6 15:24:02 2023 -0800

    [masic] 'show interfaces counters' reminds to use '-d all' option to check for internal links (#2466)

    Print a reminder to check internal links on multi-asic platforms.

    Signed-off-by: Wenyi Zhang

commit 551836f524504cbcf7e9066bfa64104912a545c1
Author: Jing Zhang
Date: Fri Jan 6 13:28:14 2023 -0800

    [storyteller] add link prober state change to story teller (#2585)

    What I did
    Add the linkprober category to story teller. It will reflect dualtor
    heartbeat events.

    How to verify it
    Tested on a dualtor device; was able to grep link prober state
    change events.

    sign-off: Jing Zhang zhangjing@microsoft.com

commit bfe85fdbd6f4244a0c4d5903a3e6cf75e87f68e6
Author: Vadym Hlushko <62022266+vadymhlushko-mlnx@users.noreply.github.com>
Date: Tue Jan 3 11:21:52 2023 +0200

    [generate_dump] Fix for deletion flow for all secret files from show-techsupport dump (#2571)

    - What I did
    Fixed the deletion flow for all secret files in the tech support
    dump.

    - How I did it
    Delete files by using the find and rm Linux utilities.

    - How to verify it
    Run show_techsupport/test_techsupport_no_secret.py

    Signed-off-by: Vadym Hlushko

commit 80162b0bf02d6dff88c503a7c7310a7b0a287531
Author: Stepan Blyshchak <38952541+stepanblyschak@users.noreply.github.com>
Date: Mon Jan 2 15:01:09 2023 +0200

    [sonic_installer] use /etc/resolv.conf from the host when migrating packages (#2573)

    - What I did
    SONiC package migration has been failing due to the lack of DNS
    configuration for the registries' domain names. I used
    /etc/resolv.conf from the host OS when migrating.

    - How I did it
    Copy /etc/resolv.conf into the new image filesystem during
    migration, then restore it back.

    - How to verify it
    Run sonic-installer install.

    Signed-off-by: Stepan Blyschak
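A sketch of the copy-then-restore dance just described, written as a context manager; the mount-point handling and backup name are illustrative, not sonic-installer's actual code:

```python
import os
import shutil
from contextlib import contextmanager

@contextmanager
def host_resolv_conf(new_image_root: str):
    """Lend the host's /etc/resolv.conf to the new image's filesystem."""
    target = os.path.join(new_image_root, "etc/resolv.conf")
    backup = target + ".bak"
    if os.path.exists(target):
        shutil.move(target, backup)           # preserve the image's own file
    shutil.copy2("/etc/resolv.conf", target)  # host DNS config for migration
    try:
        yield
    finally:
        os.remove(target)
        if os.path.exists(backup):
            shutil.move(backup, target)       # restore it back
```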
commit f22d6b0067d570b550b43ed98693cf23bf82a35b
Author: Stephen Sun <5379172+stephenxs@users.noreply.github.com>
Date: Thu Dec 29 15:37:38 2022 +0800

    [Mellanox] Change severity to NOTICE in Mellanox buffer migrator when unable to fetch DEVICE_METADATA due to empty CONFIG_DB during initialization (#2569)

    - What I did
    It is expected that db_migrator is unable to fetch DEVICE_METADATA
    when it is invoked before CONFIG_DB is initialized. In this case, we
    should not use ERROR to log the message, since it's not an error.
    Change the severity to NOTICE.

    - How I did it
    Change the severity.

    - How to verify it
    Manual test.

    Signed-off-by: Stephen Sun

commit 18f9ae1b0e1b4a02646f389b696553074867dcbc
Author: Stephen Sun <5379172+stephenxs@users.noreply.github.com>
Date: Mon Dec 26 16:00:31 2022 +0800

    Fix issue: unconfigured PGs are displayed in watermarkstat (#2556)

    - What I did
    All the PGs between the minimal and maximal indexes are displayed,
    regardless of whether they are configured. Originally, watermark
    counters were enabled for all PGs, so there was no issue. Now,
    watermark counters are enabled only for PGs with a buffer
    configured; e.g. if PG 0/2/3/4/6 is configured, PG 0-6 will be
    displayed, which is confusing and gives users the impression that
    PG 7 is lost.

    - How I did it
    Display valid PGs only.

    - How to verify it
    Manual test and unit test.

    - Previous command output (if the output of a command-line utility has changed)
    Port        PG0    PG1    PG2    PG3    PG4
    ----------  -----  -----  -----  -----  -----
    Ethernet0   0      0      0      0      0
    Ethernet2   0      0      0      0      0
    Ethernet8   0      0      0      0      0
    Ethernet10  0      0      0      0      0
    Ethernet16  0      0      0      0      0
    Ethernet18  0      0      0      0      0
    Ethernet32  0      0      0      0      0

    - New command output (if the output of a command-line utility has changed)
    PG1 won't be displayed if it is not configured:
    Port        PG0    PG3    PG4
    ----------  -----  -----  -----
    Ethernet0   0      0      0
    Ethernet2   0      0      0
    Ethernet8   0      0      0
    Ethernet10  0      0      0
    Ethernet16  0      0      0
    Ethernet18  0      0      0
    Ethernet32  0      0      0

    Signed-off-by: Stephen Sun
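The "valid PGs only" filter above can be derived from the configured BUFFER_PG keys, which take the form "<port>|<index>" or "<port>|<start>-<end>"; a parsing sketch (the real watermarkstat feeds this into its table renderer):

```python
def configured_pg_indexes(buffer_pg_keys) -> set:
    """E.g. ["Ethernet0|0", "Ethernet0|3-4"] -> {0, 3, 4}."""
    pgs = set()
    for key in buffer_pg_keys:
        _, pg_range = key.split("|", 1)
        if "-" in pg_range:
            start, end = map(int, pg_range.split("-"))
            pgs.update(range(start, end + 1))
        else:
            pgs.add(int(pg_range))
    return pgs

# Only the configured PG columns would be rendered:
assert configured_pg_indexes(["Ethernet0|0", "Ethernet0|3-4"]) == {0, 3, 4}
```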
#### What I did Removed check_frr_patch mock to fix UT running on python3.7 #### How I did it Removed the mock #### How to verify it Run unit test in stretch env commit 2d95529dce9ef3f23b859ec09135c40d87d4f4d5 Author: Neetha John Date: Thu Mar 16 17:31:49 2023 -0700 Revert "Update load minigraph to load backend acl (#2236)" (#2735) This reverts commit 1518ca92df1e794222bf45100246c8ef956d7af6. commit c869c9707a7622f31da7f92a338d7af358461f8a Author: Vivek Date: Tue Mar 14 17:55:40 2023 -0700 Update the ref guide to reflect the vlan brief output (#2731) What I did show vlan brief will only be showing dhcpv4 addresses and not dhcpv6 destination Signed-off-by: Vivek Reddy Karri commit 76457141db02b80abc003d00261e4c4635b83676 Author: Aryeh Feigin <101218333+arfeigin@users.noreply.github.com> Date: Tue Mar 14 22:13:51 2023 +0200 Fix fast-reboot DB migration (#2734) Fix DB migrator logic for migrating fast-reboot table, fixing #2621 db_migrator. How I did it Checking if fast-reboot table exists in DB. How to verify it Verified manually, migrating after fast-reboot and after cold/warm reboot. commit f7f783bce46a4383260e229dea90834672f03b6f Author: Stephen Sun <5379172+stephenxs@users.noreply.github.com> Date: Tue Mar 14 21:01:52 2023 +0800 Enhance the logic to wait for all buffer tables to be removed in _clear_qos (#2720) - What I did This is an enhancement of PR #2503 - How I did it On top of waiting for BUFFER_POOL_TABLE to be cleared from APPL_DB, we need to wait for KEY_SET and DEL_SET as well. KEY_SET and DEL_SET are designed to accommodate the APPL_DB entries that were updated by manager daemons but have not yet been handled by the orchagent. In this case, even if the buffer tables are empty, entries in KEY_SET or DEL_SET will be in the buffer tables later on. So, we need to wait for key set tables as well. Do not delay for traditional buffer manager because it does not remove any buffer table. Provide a CLI option to print the detailed message if there is any table item which still exists - How to verify it Manually test and unit test - Previous command output (if the output of a command-line utility has changed) Running command: /usr/local/bin/sonic-cfggen -d --write-to-db -t /usr/share/sonic/device/x86_64-mlnx_msn2410-r0/ACS-MSN2410/buffers_dynamic.json.j2,config-db -t /usr/share/sonic/device/x86_64-mlnx_msn2410-r0/ACS-MSN2410/qos.json.j2,config-db -y /etc/sonic/sonic_version.yml - New command output (if the output of a command-line utility has changed) Only with option --verbose there are new output. Without the option, the output is the same as it is. 
admin@mtbc-sonic-01-2410:~$ sudo config qos reload --verbose Some entries matching BUFFER_*_TABLE:* still exist: BUFFER_QUEUE_TABLE:Ethernet108:0-2 Some entries matching BUFFER_*_SET still exist: BUFFER_PG_TABLE_KEY_SET Some entries matching BUFFER_*_TABLE:* still exist: BUFFER_QUEUE_TABLE:Ethernet108:0-2 Some entries matching BUFFER_*_SET still exist: BUFFER_PG_TABLE_KEY_SET Some entries matching BUFFER_*_TABLE:* still exist: BUFFER_QUEUE_TABLE:Ethernet108:0-2 Running command: /usr/local/bin/sonic-cfggen -d --write-to-db -t /usr/share/sonic/device/x86_64-mlnx_msn2410-r0/ACS-MSN2410/buffers_dynamic.json.j2,config-db -t /usr/share/sonic/device/x86_64-mlnx_msn2410-r0/ACS-MSN2410/qos.json.j2,config-db -y /etc/sonic/sonic_version.yml commit e6179afa8771bfa1643243f7ef166dd1dc256b24 Author: Aryeh Feigin <101218333+arfeigin@users.noreply.github.com> Date: Fri Mar 10 18:41:30 2023 +0200 Remove timer from FAST_REBOOT STATE_DB entry and use finalizer (#2621) This should come along with sonic-buildimage PR (sonic-net/sonic-buildimage#13484) implementing fast-reboot finalizing logic in finalize-warmboot script and other submodules PRs utilizing the change. This PR should come along with the following PRs as well: sonic-net/sonic-swss-common#742 sonic-net/sonic-platform-daemons#335 sonic-net/sonic-sairedis#1196 This set of PRs solves the issue sonic-net/sonic-buildimage#13251 What I did Remove the timer used to clear fast-reboot entry from state-db, instead it will be cleared by fast-reboot finalize function implemented inside finalize-warmboot script (which will be invoked since fast-reboot is using warm-reboot infrastructure). As well instead of having "1" as the value for fast-reboot entry in state-db and deleting it when done it is now modified to set enable/disable according to the context. As well all scripts reading this entry should be modified to the new value options. How I did it Removed the timer usage in the fast-reboot script and adding fast-reboot finalize logic to warm-reboot in the linked PR. Use "enable/disable" instead of "1" as the entry value. How to verify it Run fast-reboot and check that the state-db entry for fast-reboot is being deleted after finalizing fast-reboot and not by an expiring timer. commit ff6883233a3c86e993add50453c3152745eaff0d Author: Stepan Blyshchak <38952541+stepanblyschak@users.noreply.github.com> Date: Fri Mar 10 04:07:25 2023 +0200 [route_check] fix IPv6 address handling (#2722) *In case user has configured an IPv6 address on an interface in CONFIG DB in non simplified form like 2000:31:0:0::1/64 it is present in a simplified form in ASIC_DB. This leads to route_check failure since it just compares strings. commit 7a604c51671a85470db3d15aaa83b6b39a01531a Author: jhli-cisco <93410383+jhli-cisco@users.noreply.github.com> Date: Wed Mar 8 18:03:50 2023 -0800 update fast-reboot (#2728) commit 9f83ace943bc4938a0bfc75239ecdac8600e5b56 Author: jingwenxie Date: Thu Mar 9 09:12:19 2023 +0800 [GCU] Add vlanintf-validator (#2697) What I did Fix the bug of GCU vlan interface modification. It should call ip neigh flush dev after removing interface ip. The fix is basically following config CLI's tradition. How I did it Add vlanintf service validator to check if extra step of ip neigh flush is needed. How to verify it GCU E2E test in dualtor testbed. commit 338d1c05bf067447bdc29013b419d2c51da5c086 Author: Liu Shilong Date: Thu Mar 9 06:57:05 2023 +0800 Check SONiC dependencies before installation. 
(#2716) #### What I did SONiC-related packages shouldn't be installed from PyPI. This is a security compliance requirement. Check SONiC-related packages when using setup.py. commit 64d2efd20b528f67c22dcfaf42e6ca5081aba416 Author: bingwang-ms <66248323+bingwang-ms@users.noreply.github.com> Date: Wed Mar 8 13:28:59 2023 -0800 Improve show acl commands (#2667) * Add status for ACL_TABLE and ACL_RULE in STATE_DB commit 2ef5b31e807048e39fc125370b40b58ba8db8b03 Author: isabelmsft <67024108+isabelmsft@users.noreply.github.com> Date: Wed Mar 8 00:19:03 2023 -0800 [GCU] Add PFC_WD RDMA validator (#2619) commit c09ede411ebfdea28e952c8cb78c1ce64ed2cff0 Author: isabelmsft Date: Tue Feb 7 02:17:25 2023 +0000 add sflow collector commit c6a777dd097f65d1d83327cac4e2605530d198d0 Author: isabelmsft Date: Mon Feb 6 21:54:44 2023 +0000 fix UT commit 8b21e8904c3cdeac25e5b69726be026c6f1ee00c Author: isabelmsft Date: Sat Feb 4 09:06:35 2023 +0000 fix UT commit 4a9aa93d4e31f0bc455374e74914f32d0b69ed99 Author: isabelmsft Date: Wed Feb 1 02:38:19 2023 +0000 fix UT commit 4214fbc77ebc2f8942df9db3580ee80e64332d94 Merge: cc7412b3 9dae74be Author: isabelmsft Date: Wed Feb 1 02:36:08 2023 +0000 Merge branch 'mux_mclag' of https://github.com/isabelmsft/sonic-utilities into mux_mclag commit cc7412b3d2efcc7329c27cf1ab814a89a0785812 Author: isl Date: Wed Feb 1 01:57:20 2023 +0000 fix UT commit 9dae74becaa5e581110c555b90a6c76b733f2ccd Author: isl Date: Wed Feb 1 01:57:20 2023 +0000 fix UT commit b4342fffe4042df3998f7ce22f257aeec4ce3fae Author: isabelmsft Date: Wed Feb 1 00:25:09 2023 +0000 add UT commit 4f9cecf738559920c26a79919890b7d3a6b98f97 Author: isabelmsft Date: Mon Jan 30 23:02:50 2023 +0000 add UT commit d7da9224b36060822538c2b013a9df9570f64a16 Author: isabelmsft Date: Fri Jan 27 21:29:28 2023 +0000 fix UT commit a769202db72ab73e444f64fef40ad1190f395fa0 Author: isabelmsft Date: Mon Jan 16 05:09:15 2023 +0000 fix UT commit e9873b0f5a6926c8fac5cbe3777cf9d27fa50127 Author: isabelmsft Date: Sat Jan 14 06:42:26 2023 +0000 add CONSOLE commit faf67e3e09185985d2bccdf02691fbbbc489310f Author: isabelmsft Date: Tue Jan 10 08:06:00 2023 +0000 add muxcable, mclag, etc --- acl_loader/main.py | 75 ++++- config/console.py | 62 +++- config/kube.py | 15 +- config/main.py | 137 ++++---- config/mclag.py | 135 +++++--- config/muxcable.py | 30 +- config/nat.py | 315 ++++++++++++------ doc/Command-Reference.md | 2 +- generic_config_updater/change_applier.py | 2 +- .../field_operation_validators.py | 26 ++ .../gcu_field_operation_validators.conf.json | 20 ++ ....json => gcu_services_validator.conf.json} | 6 + generic_config_updater/gu_common.py | 36 ++ generic_config_updater/services_validator.py | 21 ++ scripts/db_migrator.py | 23 +- scripts/dropstat | 14 +- scripts/fast-reboot | 11 +- scripts/flow_counters_stat | 10 +- scripts/intfstat | 64 ++-- scripts/pfcstat | 62 ++-- scripts/pg-drop | 8 +- scripts/portstat | 265 +++++++-------- scripts/queuestat | 34 +- scripts/route_check.py | 5 +- scripts/tunnelstat | 40 +-- setup.py | 33 +- .../templates/service_mgmt.sh.j2 | 3 +- tests/aclshow_test.py | 7 +- tests/config_test.py | 115 ++++--- tests/console_test.py | 101 ++++++ .../state_db/fast_reboot_expected.json | 5 + .../state_db/fast_reboot_input.json | 2 + tests/db_migrator_test.py | 32 ++ .../generic_config_updater/gu_common_test.py | 26 +- .../service_validator_test.py | 51 ++- tests/kube_test.py | 24 ++ tests/mclag_test.py | 128 ++++++- tests/mock_tables/asic0/config_db.json | 11 + tests/mock_tables/asic0/state_db.json | 6 +
tests/mock_tables/asic2/config_db.json | 11 + tests/mock_tables/asic2/state_db.json | 6 + tests/mock_tables/config_db.json | 11 + tests/mock_tables/state_db.json | 6 + tests/nat_test.py | 267 +++++++++++++++ tests/route_check_test.py | 15 +- tests/route_check_test_data.py | 18 + tests/sflow_test.py | 20 ++ tests/show_acl_test.py | 95 ++++++ 48 files changed, 1801 insertions(+), 610 deletions(-) create mode 100644 generic_config_updater/field_operation_validators.py create mode 100644 generic_config_updater/gcu_field_operation_validators.conf.json rename generic_config_updater/{generic_config_updater.conf.json => gcu_services_validator.conf.json} (91%) create mode 100644 tests/db_migrator_input/state_db/fast_reboot_expected.json create mode 100644 tests/db_migrator_input/state_db/fast_reboot_input.json create mode 100644 tests/nat_test.py create mode 100644 tests/show_acl_test.py diff --git a/acl_loader/main.py b/acl_loader/main.py index c50efec032..2eab089c21 100644 --- a/acl_loader/main.py +++ b/acl_loader/main.py @@ -72,6 +72,10 @@ class AclLoader(object): ACL_TABLE = "ACL_TABLE" ACL_RULE = "ACL_RULE" + CFG_ACL_TABLE = "ACL_TABLE" + STATE_ACL_TABLE = "ACL_TABLE_TABLE" + CFG_ACL_RULE = "ACL_RULE" + STATE_ACL_RULE = "ACL_RULE_TABLE" ACL_TABLE_TYPE_MIRROR = "MIRROR" ACL_TABLE_TYPE_CTRLPLANE = "CTRLPLANE" CFG_MIRROR_SESSION_TABLE = "MIRROR_SESSION" @@ -117,11 +121,16 @@ def __init__(self): self.tables_db_info = {} self.rules_db_info = {} self.rules_info = {} + self.tables_state_info = None + self.rules_state_info = None # Load database config files load_db_config() self.sessions_db_info = {} + self.acl_table_status = {} + self.acl_rule_status = {} + self.configdb = ConfigDBConnector() self.configdb.connect() self.statedb = SonicV2Connector(host="127.0.0.1") @@ -156,6 +165,8 @@ def __init__(self): self.read_rules_info() self.read_sessions_info() self.read_policers_info() + self.acl_table_status = self.read_acl_object_status_info(self.CFG_ACL_TABLE, self.STATE_ACL_TABLE) + self.acl_rule_status = self.read_acl_object_status_info(self.CFG_ACL_RULE, self.STATE_ACL_RULE) def read_tables_info(self): """ @@ -210,7 +221,7 @@ def read_sessions_info(self): for key in self.sessions_db_info: if self.per_npu_statedb: # For multi-npu platforms we will read from all front asic name space - # statedb as the monitor port will be differnt for each asic + # statedb as the monitor port will be different for each asic # and it's status also might be different (ideally should not happen) # We will store them as dict of 'asic' : value self.sessions_db_info[key]["status"] = {} @@ -224,6 +235,35 @@ def read_sessions_info(self): self.sessions_db_info[key]["status"] = state_db_info.get("status", "inactive") if state_db_info else "error" self.sessions_db_info[key]["monitor_port"] = state_db_info.get("monitor_port", "") if state_db_info else "" + def read_acl_object_status_info(self, cfg_db_table_name, state_db_table_name): + """ + Read ACL_TABLE status or ACL_RULE status from STATE_DB + """ + if self.per_npu_configdb: + namespace_configdb = list(self.per_npu_configdb.values())[0] + keys = namespace_configdb.get_table(cfg_db_table_name).keys() + else: + keys = self.configdb.get_table(cfg_db_table_name).keys() + + status = {} + for key in keys: + # For ACL_RULE, the key is (acl_table_name, acl_rule_name) + if isinstance(key, tuple): + state_db_key = key[0] + "|" + key[1] + else: + state_db_key = key + status[key] = {} + if self.per_npu_statedb: + status[key]['status'] = {} + for namespace_key, namespace_statedb in 
self.per_npu_statedb.items(): + state_db_info = namespace_statedb.get_all(self.statedb.STATE_DB, "{}|{}".format(state_db_table_name, state_db_key)) + status[key]['status'][namespace_key] = state_db_info.get("status", "N/A") if state_db_info else "N/A" + else: + state_db_info = self.statedb.get_all(self.statedb.STATE_DB, "{}|{}".format(state_db_table_name, state_db_key)) + status[key]['status'] = state_db_info.get("status", "N/A") if state_db_info else "N/A" + + return status + def get_sessions_db_info(self): return self.sessions_db_info @@ -786,32 +826,36 @@ def show_table(self, table_name): :param table_name: Optional. ACL table name. Filter tables by specified name. :return: """ - header = ("Name", "Type", "Binding", "Description", "Stage") + header = ("Name", "Type", "Binding", "Description", "Stage", "Status") data = [] for key, val in self.get_tables_db_info().items(): if table_name and key != table_name: continue - + stage = val.get("stage", Stage.INGRESS).lower() - + # Get ACL table status from STATE_DB + if key in self.acl_table_status: + status = self.acl_table_status[key]['status'] + else: + status = 'N/A' if val["type"] == AclLoader.ACL_TABLE_TYPE_CTRLPLANE: services = natsorted(val["services"]) - data.append([key, val["type"], services[0], val["policy_desc"], stage]) + data.append([key, val["type"], services[0], val["policy_desc"], stage, status]) if len(services) > 1: for service in services[1:]: - data.append(["", "", service, "", ""]) + data.append(["", "", service, "", "", ""]) else: if not val["ports"]: - data.append([key, val["type"], "", val["policy_desc"], stage]) + data.append([key, val["type"], "", val["policy_desc"], stage, status]) else: ports = natsorted(val["ports"]) - data.append([key, val["type"], ports[0], val["policy_desc"], stage]) + data.append([key, val["type"], ports[0], val["policy_desc"], stage, status]) if len(ports) > 1: for port in ports[1:]: - data.append(["", "", port, "", ""]) + data.append(["", "", port, "", "", ""]) print(tabulate.tabulate(data, headers=header, tablefmt="simple", missingval="")) @@ -873,7 +917,7 @@ def show_rule(self, table_name, rule_id): :param rule_id: Optional. ACL rule name. Filter rule by specified rule name. 
:return: """ - header = ("Table", "Rule", "Priority", "Action", "Match") + header = ("Table", "Rule", "Priority", "Action", "Match", "Status") def pop_priority(val): priority = "N/A" @@ -919,11 +963,16 @@ def pop_matches(val): priority = pop_priority(val) action = pop_action(val) matches = pop_matches(val) - - rule_data = [[tname, rid, priority, action, matches[0]]] + # Get ACL rule status from STATE_DB + status_key = (tname, rid) + if status_key in self.acl_rule_status: + status = self.acl_rule_status[status_key]['status'] + else: + status = "N/A" + rule_data = [[tname, rid, priority, action, matches[0], status]] if len(matches) > 1: for m in matches[1:]: - rule_data.append(["", "", "", "", m]) + rule_data.append(["", "", "", "", m, ""]) raw_data.append([priority, rule_data]) diff --git a/config/console.py b/config/console.py index b28aeda672..0a263d4136 100644 --- a/config/console.py +++ b/config/console.py @@ -1,6 +1,8 @@ import click +import jsonpatch import utilities_common.cli as clicommon - +from .validated_config_db_connector import ValidatedConfigDBConnector +from jsonpatch import JsonPatchConflict # # 'console' group ('config console ...') # @@ -16,14 +18,18 @@ def console(): @clicommon.pass_db def enable_console_switch(db): """Enable console switch""" - config_db = db.cfgdb + config_db = ValidatedConfigDBConnector(db.cfgdb) table = "CONSOLE_SWITCH" dataKey1 = 'console_mgmt' dataKey2 = 'enabled' data = { dataKey2 : "yes" } - config_db.mod_entry(table, dataKey1, data) + try: + config_db.mod_entry(table, dataKey1, data) + except ValueError as e: + ctx = click.get_current_context() + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'console disable' group ('config console disable') @@ -32,14 +38,18 @@ def enable_console_switch(db): @clicommon.pass_db def disable_console_switch(db): """Disable console switch""" - config_db = db.cfgdb + config_db = ValidatedConfigDBConnector(db.cfgdb) table = "CONSOLE_SWITCH" dataKey1 = 'console_mgmt' dataKey2 = 'enabled' data = { dataKey2 : "no" } - config_db.mod_entry(table, dataKey1, data) + try: + config_db.mod_entry(table, dataKey1, data) + except ValueError as e: + ctx = click.get_current_context() + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'console add' group ('config console add ...') @@ -52,7 +62,7 @@ def disable_console_switch(db): @click.option('--devicename', '-d', metavar='', required=False) def add_console_setting(db, linenum, baud, flowcontrol, devicename): """Add Console-realted configuration tasks""" - config_db = db.cfgdb + config_db = ValidatedConfigDBConnector(db.cfgdb) table = "CONSOLE_PORT" dataKey1 = 'baud_rate' @@ -72,7 +82,10 @@ def add_console_setting(db, linenum, baud, flowcontrol, devicename): ctx.fail("Given device name {} has been used. Please enter a valid device name or remove the existing one !!".format(devicename)) console_entry[dataKey3] = devicename - config_db.set_entry(table, linenum, console_entry) + try: + config_db.set_entry(table, linenum, console_entry) + except ValueError as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) # @@ -83,15 +96,18 @@ def add_console_setting(db, linenum, baud, flowcontrol, devicename): @click.argument('linenum', metavar='', required=True, type=click.IntRange(0, 65535)) def remove_console_setting(db, linenum): """Remove Console-related configuration tasks""" - config_db = db.cfgdb + config_db = ValidatedConfigDBConnector(db.cfgdb) + ctx = click.get_current_context() table = "CONSOLE_PORT" data = config_db.get_entry(table, linenum) if data: - config_db.mod_entry(table, linenum, None) + try: + config_db.set_entry(table, linenum, None) + except JsonPatchConflict as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) else: - ctx = click.get_current_context() ctx.fail("Trying to delete console port setting, which is not present.") # @@ -103,7 +119,7 @@ def remove_console_setting(db, linenum): @click.argument('devicename', metavar='', required=False) def upate_console_remote_device_name(db, linenum, devicename): """Update remote device name for a console line""" - config_db = db.cfgdb + config_db = ValidatedConfigDBConnector(db.cfgdb) ctx = click.get_current_context() table = "CONSOLE_PORT" @@ -117,12 +133,18 @@ def upate_console_remote_device_name(db, linenum, devicename): elif not devicename: # remove configuration key from console setting if user not give a remote device name data.pop(dataKey, None) - config_db.mod_entry(table, linenum, data) + try: + config_db.mod_entry(table, linenum, data) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) elif isExistingSameDevice(config_db, devicename, table): ctx.fail("Given device name {} has been used. Please enter a valid device name or remove the existing one !!".format(devicename)) else: data[dataKey] = devicename - config_db.mod_entry(table, linenum, data) + try: + config_db.mod_entry(table, linenum, data) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) else: ctx.fail("Trying to update console port setting, which is not present.") @@ -135,7 +157,7 @@ def upate_console_remote_device_name(db, linenum, devicename): @click.argument('baud', metavar='', required=True, type=click.INT) def update_console_baud(db, linenum, baud): """Update baud for a console line""" - config_db = db.cfgdb + config_db = ValidatedConfigDBConnector(db.cfgdb) ctx = click.get_current_context() table = "CONSOLE_PORT" @@ -149,7 +171,10 @@ def update_console_baud(db, linenum, baud): return else: data[dataKey] = baud - config_db.mod_entry(table, linenum, data) + try: + config_db.mod_entry(table, linenum, data) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) else: ctx.fail("Trying to update console port setting, which is not present.") @@ -162,7 +187,7 @@ def update_console_baud(db, linenum, baud): @click.argument('linenum', metavar='', required=True, type=click.IntRange(0, 65535)) def update_console_flow_control(db, mode, linenum): """Update flow control setting for a console line""" - config_db = db.cfgdb + config_db = ValidatedConfigDBConnector(db.cfgdb) ctx = click.get_current_context() table = "CONSOLE_PORT" @@ -177,7 +202,10 @@ def update_console_flow_control(db, mode, linenum): return else: data[dataKey] = innerMode - config_db.mod_entry(table, linenum, data) + try: + config_db.mod_entry(table, linenum, data) + except ValueError as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) else: ctx.fail("Trying to update console port setting, which is not present.") diff --git a/config/kube.py b/config/kube.py index 706a5ab260..526a4dd028 100644 --- a/config/kube.py +++ b/config/kube.py @@ -1,6 +1,7 @@ import click from utilities_common.cli import AbbreviationGroup, pass_db +from .validated_config_db_connector import ValidatedConfigDBConnector from .utils import log @@ -21,22 +22,30 @@ KUBE_LABEL_SET_KEY = "SET" def _update_kube_server(db, field, val): - db_data = db.cfgdb.get_entry(KUBE_SERVER_TABLE_NAME, KUBE_SERVER_TABLE_KEY) + config_db = ValidatedConfigDBConnector(db.cfgdb) + db_data = config_db.get_entry(KUBE_SERVER_TABLE_NAME, KUBE_SERVER_TABLE_KEY) def_data = { KUBE_SERVER_IP: "", KUBE_SERVER_PORT: "6443", KUBE_SERVER_INSECURE: "True", KUBE_SERVER_DISABLE: "False" } + ctx = click.get_current_context() for f in def_data: if db_data and f in db_data: if f == field and db_data[f] != val: - db.cfgdb.mod_entry(KUBE_SERVER_TABLE_NAME, KUBE_SERVER_TABLE_KEY, {field: val}) + try: + config_db.mod_entry(KUBE_SERVER_TABLE_NAME, KUBE_SERVER_TABLE_KEY, {field: val}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) log.log_info("modify kubernetes server entry {}={}".format(field,val)) else: # Missing field. Set to default or given value v = val if f == field else def_data[f] - db.cfgdb.mod_entry(KUBE_SERVER_TABLE_NAME, KUBE_SERVER_TABLE_KEY, {f: v}) + try: + config_db.mod_entry(KUBE_SERVER_TABLE_NAME, KUBE_SERVER_TABLE_KEY, {f: v}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) log.log_info("set kubernetes server entry {}={}".format(f,v)) diff --git a/config/main.py b/config/main.py index 384e6f9f68..970f930f03 100644 --- a/config/main.py +++ b/config/main.py @@ -15,6 +15,7 @@ import copy from jsonpatch import JsonPatchConflict +from jsonpointer import JsonPointerException from collections import OrderedDict from generic_config_updater.generic_updater import GenericUpdater, ConfigFormat from minigraph import parse_device_desc_xml, minigraph_encoder @@ -743,24 +744,28 @@ def storm_control_delete_entry(port_name, storm_type): return True -def _wait_until_clear(table, interval=0.5, timeout=30): +def _wait_until_clear(tables, interval=0.5, timeout=30, verbose=False): start = time.time() empty = False app_db = SonicV2Connector(host='127.0.0.1') app_db.connect(app_db.APPL_DB) while not empty and time.time() - start < timeout: - current_profiles = app_db.keys(app_db.APPL_DB, table) - if not current_profiles: - empty = True - else: - time.sleep(interval) + non_empty_table_count = 0 + for table in tables: + keys = app_db.keys(app_db.APPL_DB, table) + if keys: + non_empty_table_count += 1 + if verbose: + click.echo("Some entries matching {} still exist: {}".format(table, keys[0])) + time.sleep(interval) + empty = (non_empty_table_count == 0) if not empty: click.echo("Operation not completed successfully, please save and reload configuration.") return empty -def _clear_qos(delay = False): +def _clear_qos(delay=False, verbose=False): QOS_TABLE_NAMES = [ 'PORT_QOS_MAP', 'QUEUE', @@ -797,7 +802,10 @@ def _clear_qos(delay = False): for qos_table in QOS_TABLE_NAMES: config_db.delete_table(qos_table) if delay: - _wait_until_clear("BUFFER_POOL_TABLE:*",interval=0.5, timeout=30) + device_metadata = config_db.get_entry('DEVICE_METADATA', 'localhost') + # Traditional buffer manager do not remove buffer tables in any case, no need to wait. 
+ timeout = 120 if device_metadata and device_metadata.get('buffer_model') == 'dynamic' else 0 + _wait_until_clear(["BUFFER_*_TABLE:*", "BUFFER_*_SET"], interval=0.5, timeout=timeout, verbose=verbose) def _get_sonic_generated_services(num_asic): if not os.path.isfile(SONIC_GENERATED_SERVICE_PATH): @@ -1155,41 +1163,6 @@ def validate_gre_type(ctx, _, value): except ValueError: raise click.UsageError("{} is not a valid GRE type".format(value)) -def _is_storage_device(cfg_db): - """ - Check if the device is a storage device or not - """ - device_metadata = cfg_db.get_entry("DEVICE_METADATA", "localhost") - return device_metadata.get("storage_device", "Unknown") == "true" - -def _is_acl_table_present(cfg_db, acl_table_name): - """ - Check if acl table exists - """ - return acl_table_name in cfg_db.get_keys("ACL_TABLE") - -def load_backend_acl(cfg_db, device_type): - """ - Load acl on backend storage device - """ - - BACKEND_ACL_TEMPLATE_FILE = os.path.join('/', "usr", "share", "sonic", "templates", "backend_acl.j2") - BACKEND_ACL_FILE = os.path.join('/', "etc", "sonic", "backend_acl.json") - - if device_type and device_type == "BackEndToRRouter" and _is_storage_device(cfg_db) and _is_acl_table_present(cfg_db, "DATAACL"): - if os.path.isfile(BACKEND_ACL_TEMPLATE_FILE): - clicommon.run_command( - "{} -d -t {},{}".format( - SONIC_CFGGEN_PATH, - BACKEND_ACL_TEMPLATE_FILE, - BACKEND_ACL_FILE - ), - display_cmd=True - ) - if os.path.isfile(BACKEND_ACL_FILE): - clicommon.run_command("acl-loader update incremental {}".format(BACKEND_ACL_FILE), display_cmd=True) - - # This is our main entrypoint - the main 'config' command @click.group(cls=clicommon.AbbreviationGroup, context_settings=CONTEXT_SETTINGS) @click.pass_context @@ -1767,12 +1740,6 @@ def load_minigraph(db, no_service_restart, traffic_shift_away, override_config, if os.path.isfile('/etc/sonic/acl.json'): clicommon.run_command("acl-loader update full /etc/sonic/acl.json", display_cmd=True) - # get the device type - device_type = _get_device_type() - - # Load backend acl - load_backend_acl(db.cfgdb, device_type) - # Load port_config.json try: load_port_config(db.cfgdb, '/etc/sonic/port_config.json') @@ -1782,6 +1749,8 @@ def load_minigraph(db, no_service_restart, traffic_shift_away, override_config, # generate QoS and Buffer configs clicommon.run_command("config qos reload --no-dynamic-buffer --no-delay", display_cmd=True) + # get the device type + device_type = _get_device_type() if device_type != 'MgmtToRRouter' and device_type != 'MgmtTsToR' and device_type != 'BmcMgmtToRRouter' and device_type != 'EPMS': clicommon.run_command("pfcwd start_default", display_cmd=True) @@ -2644,10 +2613,11 @@ def qos(ctx): pass @qos.command('clear') -def clear(): +@click.option('--verbose', is_flag=True, help="Enable verbose output") +def clear(verbose): """Clear QoS configuration""" log.log_info("'qos clear' executing...") - _clear_qos() + _clear_qos(verbose=verbose) def _update_buffer_calculation_model(config_db, model): """Update the buffer calculation model into CONFIG_DB""" @@ -2664,6 +2634,7 @@ def _update_buffer_calculation_model(config_db, model): @click.option('--ports', is_flag=False, required=False, help="List of ports that needs to be updated") @click.option('--no-dynamic-buffer', is_flag=True, help="Disable dynamic buffer calculation") @click.option('--no-delay', is_flag=True, hidden=True) +@click.option('--verbose', is_flag=True, help="Enable verbose output") @click.option( '--json-data', type=click.STRING, help="json string with additional data, 
valid with --dry-run option" @@ -2672,7 +2643,7 @@ def _update_buffer_calculation_model(config_db, model): '--dry_run', type=click.STRING, help="Dry run, writes config to the given file" ) -def reload(ctx, no_dynamic_buffer, no_delay, dry_run, json_data, ports): +def reload(ctx, no_dynamic_buffer, no_delay, dry_run, json_data, ports, verbose): """Reload QoS configuration""" if ports: log.log_info("'qos reload --ports {}' executing...".format(ports)) @@ -2681,7 +2652,7 @@ def reload(ctx, no_dynamic_buffer, no_delay, dry_run, json_data, ports): log.log_info("'qos reload' executing...") if not dry_run: - _clear_qos(delay = not no_delay) + _clear_qos(delay = not no_delay, verbose=verbose) _, hwsku_path = device_info.get_paths_to_platform_and_hwsku_dirs() sonic_version_file = device_info.get_sonic_version_file() @@ -4193,7 +4164,7 @@ def breakout(ctx, interface_name, mode, verbose, force_remove_dependencies, load raise click.Abort() # Get the config_db connector - config_db = ctx.obj['config_db'] + config_db = ValidatedConfigDBConnector(ctx.obj['config_db']) target_brkout_mode = mode @@ -4272,7 +4243,10 @@ def breakout(ctx, interface_name, mode, verbose, force_remove_dependencies, load if interface_name not in brkout_cfg_keys: click.secho("[ERROR] {} is not present in 'BREAKOUT_CFG' Table!".format(interface_name), fg='red') raise click.Abort() - config_db.set_entry("BREAKOUT_CFG", interface_name, {'brkout_mode': target_brkout_mode}) + try: + config_db.set_entry("BREAKOUT_CFG", interface_name, {'brkout_mode': target_brkout_mode}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) click.secho("Breakout process got successfully completed." .format(interface_name), fg="cyan", underline=True) click.echo("Please note loaded setting will be lost after system reboot. To preserve setting, run `config save`.") @@ -6419,15 +6393,19 @@ def ntp(ctx): @click.pass_context def add_ntp_server(ctx, ntp_ip_address): """ Add NTP server IP """ - if not clicommon.is_ipaddress(ntp_ip_address): - ctx.fail('Invalid ip address') - db = ctx.obj['db'] + if ADHOC_VALIDATION: + if not clicommon.is_ipaddress(ntp_ip_address): + ctx.fail('Invalid IP address') + db = ValidatedConfigDBConnector(ctx.obj['db']) ntp_servers = db.get_table("NTP_SERVER") if ntp_ip_address in ntp_servers: click.echo("NTP server {} is already configured".format(ntp_ip_address)) return else: - db.set_entry('NTP_SERVER', ntp_ip_address, {'NULL': 'NULL'}) + try: + db.set_entry('NTP_SERVER', ntp_ip_address, {'NULL': 'NULL'}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) click.echo("NTP server {} added to configuration".format(ntp_ip_address)) try: click.echo("Restarting ntp-config service...") @@ -6440,12 +6418,16 @@ def add_ntp_server(ctx, ntp_ip_address): @click.pass_context def del_ntp_server(ctx, ntp_ip_address): """ Delete NTP server IP """ - if not clicommon.is_ipaddress(ntp_ip_address): - ctx.fail('Invalid IP address') - db = ctx.obj['db'] + if ADHOC_VALIDATION: + if not clicommon.is_ipaddress(ntp_ip_address): + ctx.fail('Invalid IP address') + db = ValidatedConfigDBConnector(ctx.obj['db']) ntp_servers = db.get_table("NTP_SERVER") if ntp_ip_address in ntp_servers: - db.set_entry('NTP_SERVER', '{}'.format(ntp_ip_address), None) + try: + db.set_entry('NTP_SERVER', '{}'.format(ntp_ip_address), None) + except JsonPatchConflict as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) click.echo("NTP server {} removed from configuration".format(ntp_ip_address)) else: ctx.fail("NTP server {} is not configured.".format(ntp_ip_address)) @@ -6698,16 +6680,19 @@ def add(ctx, name, ipaddr, port, vrf): if not is_valid_collector_info(name, ipaddr, port, vrf): return - config_db = ctx.obj['db'] + config_db = ValidatedConfigDBConnector(ctx.obj['db']) collector_tbl = config_db.get_table('SFLOW_COLLECTOR') if (collector_tbl and name not in collector_tbl and len(collector_tbl) == 2): click.echo("Only 2 collectors can be configured, please delete one") return - - config_db.mod_entry('SFLOW_COLLECTOR', name, - {"collector_ip": ipaddr, "collector_port": port, - "collector_vrf": vrf}) + + try: + config_db.mod_entry('SFLOW_COLLECTOR', name, + {"collector_ip": ipaddr, "collector_port": port, + "collector_vrf": vrf}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) return # @@ -6718,14 +6703,18 @@ def add(ctx, name, ipaddr, port, vrf): @click.pass_context def del_collector(ctx, name): """Delete a sFlow collector""" - config_db = ctx.obj['db'] - collector_tbl = config_db.get_table('SFLOW_COLLECTOR') + config_db = ValidatedConfigDBConnector(ctx.obj['db']) + if ADHOC_VALIDATION: + collector_tbl = config_db.get_table('SFLOW_COLLECTOR') - if name not in collector_tbl: - click.echo("Collector: {} not configured".format(name)) - return + if name not in collector_tbl: + click.echo("Collector: {} not configured".format(name)) + return - config_db.mod_entry('SFLOW_COLLECTOR', name, None) + try: + config_db.set_entry('SFLOW_COLLECTOR', name, None) + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'sflow agent-id' group diff --git a/config/mclag.py b/config/mclag.py index 589bb61a20..c62f8309a5 100644 --- a/config/mclag.py +++ b/config/mclag.py @@ -1,8 +1,12 @@ import click from swsscommon.swsscommon import ConfigDBConnector +from .validated_config_db_connector import ValidatedConfigDBConnector import ipaddress - +import jsonpatch +from jsonpatch import JsonPatchConflict +from jsonpointer import JsonPointerException +ADHOC_VALIDATION = False CFG_PORTCHANNEL_PREFIX = "PortChannel" CFG_PORTCHANNEL_PREFIX_LEN = 11 CFG_PORTCHANNEL_MAX_VAL = 9999 @@ -86,8 +90,7 @@ def is_ipv4_addr_valid(addr): def check_if_interface_is_valid(db, interface_name): from .main import interface_name_is_valid - if interface_name_is_valid(db,interface_name) is False: - ctx.fail("Interface name is invalid. 
Please enter a valid interface name!!") + return interface_name_is_valid(db,interface_name) def get_intf_vrf_bind_unique_ip(db, interface_name, interface_type): intfvrf = db.get_table(interface_type) @@ -121,34 +124,42 @@ def mclag(ctx): @click.pass_context def add_mclag_domain(ctx, domain_id, source_ip_addr, peer_ip_addr, peer_ifname): """Add MCLAG Domain""" - - if not mclag_domain_id_valid(domain_id): - ctx.fail("{} invalid domain ID, valid range is 1 to 4095".format(domain_id)) - if not is_ipv4_addr_valid(source_ip_addr): - ctx.fail("{} invalid local ip address".format(source_ip_addr)) - if not is_ipv4_addr_valid(peer_ip_addr): - ctx.fail("{} invalid peer ip address".format(peer_ip_addr)) - - db = ctx.obj['db'] + if ADHOC_VALIDATION: + if not mclag_domain_id_valid(domain_id): + ctx.fail("{} invalid domain ID, valid range is 1 to 4095".format(domain_id)) + if not is_ipv4_addr_valid(source_ip_addr): + ctx.fail("{} invalid local ip address".format(source_ip_addr)) + if not is_ipv4_addr_valid(peer_ip_addr): + ctx.fail("{} invalid peer ip address".format(peer_ip_addr)) + + db = ValidatedConfigDBConnector(ctx.obj['db']) fvs = {} fvs['source_ip'] = str(source_ip_addr) fvs['peer_ip'] = str(peer_ip_addr) - if peer_ifname is not None: - if (peer_ifname.startswith("Ethernet") is False) and (peer_ifname.startswith("PortChannel") is False): - ctx.fail("peer interface is invalid, should be Ethernet interface or portChannel !!") - if (peer_ifname.startswith("Ethernet") is True) and (check_if_interface_is_valid(db, peer_ifname) is False): - ctx.fail("peer Ethernet interface name is invalid. it is not present in port table of configDb!!") - if (peer_ifname.startswith("PortChannel")) and (is_portchannel_name_valid(peer_ifname) is False): - ctx.fail("peer PortChannel interface name is invalid !!") - fvs['peer_link'] = str(peer_ifname) + if ADHOC_VALIDATION: + if peer_ifname is not None: + if (peer_ifname.startswith("Ethernet") is False) and (peer_ifname.startswith("PortChannel") is False): + ctx.fail("peer interface is invalid, should be Ethernet interface or portChannel !!") + if (peer_ifname.startswith("Ethernet") is True) and (check_if_interface_is_valid(db, peer_ifname) is False): + ctx.fail("peer Ethernet interface name is invalid. it is not present in port table of configDb!!") + if (peer_ifname.startswith("PortChannel")) and (is_portchannel_name_valid(peer_ifname) is False): + ctx.fail("peer PortChannel interface name is invalid !!") + fvs['peer_link'] = str(peer_ifname) mclag_domain_keys = db.get_table('MCLAG_DOMAIN').keys() if len(mclag_domain_keys) == 0: - db.set_entry('MCLAG_DOMAIN', domain_id, fvs) + try: + db.set_entry('MCLAG_DOMAIN', domain_id, fvs) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) else: + domain_id = str(domain_id) if domain_id in mclag_domain_keys: - db.mod_entry('MCLAG_DOMAIN', domain_id, fvs) - else: - ctx.fail("only one mclag Domain can be configured. Already one domain {} configured ".format(mclag_domain_keys[0])) + try: + db.mod_entry('MCLAG_DOMAIN', domain_id, fvs) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) + else: + ctx.fail("only one mclag Domain can be configured. 
Already one domain {} configured ".format(list(mclag_domain_keys)[0])) #mclag domain delete @@ -158,15 +169,16 @@ def add_mclag_domain(ctx, domain_id, source_ip_addr, peer_ip_addr, peer_ifname): @click.pass_context def del_mclag_domain(ctx, domain_id): """Delete MCLAG Domain""" - - if not mclag_domain_id_valid(domain_id): - ctx.fail("{} invalid domain ID, valid range is 1 to 4095".format(domain_id)) - - db = ctx.obj['db'] - entry = db.get_entry('MCLAG_DOMAIN', domain_id) - if entry is None: - ctx.fail("MCLAG Domain {} not configured ".format(domain_id)) - return + + db = ValidatedConfigDBConnector(ctx.obj['db']) + + if ADHOC_VALIDATION: + if not mclag_domain_id_valid(domain_id): + ctx.fail("{} invalid domain ID, valid range is 1 to 4095".format(domain_id)) + + entry = db.get_entry('MCLAG_DOMAIN', domain_id) + if entry is None: + ctx.fail("MCLAG Domain {} not configured ".format(domain_id)) click.echo("MCLAG Domain delete takes care of deleting all associated MCLAG Interfaces") @@ -175,11 +187,17 @@ def del_mclag_domain(ctx, domain_id): #delete associated mclag interfaces for iface_domain_id, iface_name in interface_table_keys: - if (int(iface_domain_id) == domain_id): - db.set_entry('MCLAG_INTERFACE', (iface_domain_id, iface_name), None ) + if (int(iface_domain_id) == domain_id): + try: + db.set_entry('MCLAG_INTERFACE', (iface_domain_id, iface_name), None ) + except (JsonPointerException, JsonPatchConflict) as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) #delete mclag domain - db.set_entry('MCLAG_DOMAIN', domain_id, None) + try: + db.set_entry('MCLAG_DOMAIN', domain_id, None) + except (JsonPointerException, JsonPatchConflict) as e: + ctx.fail("Invalid ConfigDB. Error: MCLAG_DOMAIN {} failed to be deleted".format(domain_id)) #keepalive timeout config @@ -260,16 +278,21 @@ def mclag_member(ctx): @click.pass_context def add_mclag_member(ctx, domain_id, portchannel_names): """Add member MCLAG interfaces from MCLAG Domain""" - db = ctx.obj['db'] - entry = db.get_entry('MCLAG_DOMAIN', domain_id) - if len(entry) == 0: - ctx.fail("MCLAG Domain " + domain_id + " not configured, configure mclag domain first") + db = ValidatedConfigDBConnector(ctx.obj['db']) + if ADHOC_VALIDATION: + entry = db.get_entry('MCLAG_DOMAIN', domain_id) + if len(entry) == 0: + ctx.fail("MCLAG Domain " + domain_id + " not configured, configure mclag domain first") portchannel_list = portchannel_names.split(",") for portchannel_name in portchannel_list: - if is_portchannel_name_valid(portchannel_name) != True: - ctx.fail("{} is invalid!, name should have prefix '{}' and suffix '{}'" .format(portchannel_name, CFG_PORTCHANNEL_PREFIX, CFG_PORTCHANNEL_NO)) - db.set_entry('MCLAG_INTERFACE', (domain_id, portchannel_name), {'if_type':"PortChannel"} ) + if ADHOC_VALIDATION: + if is_portchannel_name_valid(portchannel_name) != True: + ctx.fail("{} is invalid!, name should have prefix '{}' and suffix '{}'" .format(portchannel_name, CFG_PORTCHANNEL_PREFIX, CFG_PORTCHANNEL_NO)) + try: + db.set_entry('MCLAG_INTERFACE', (domain_id, portchannel_name), {'if_type':"PortChannel"} ) + except ValueError as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) @mclag_member.command('del') @click.argument('domain_id', metavar='', required=True) @@ -277,13 +300,17 @@ def add_mclag_member(ctx, domain_id, portchannel_names): @click.pass_context def del_mclag_member(ctx, domain_id, portchannel_names): """Delete member MCLAG interfaces from MCLAG Domain""" - db = ctx.obj['db'] + db = ValidatedConfigDBConnector(ctx.obj['db']) #split comma seperated portchannel names portchannel_list = portchannel_names.split(",") for portchannel_name in portchannel_list: - if is_portchannel_name_valid(portchannel_name) != True: - ctx.fail("{} is invalid!, name should have prefix '{}' and suffix '{}'" .format(portchannel_name, CFG_PORTCHANNEL_PREFIX, CFG_PORTCHANNEL_NO)) - db.set_entry('MCLAG_INTERFACE', (domain_id, portchannel_name), None ) + if ADHOC_VALIDATION: + if is_portchannel_name_valid(portchannel_name) != True: + ctx.fail("{} is invalid!, name should have prefix '{}' and suffix '{}'" .format(portchannel_name, CFG_PORTCHANNEL_PREFIX, CFG_PORTCHANNEL_NO)) + try: + db.set_entry('MCLAG_INTERFACE', (domain_id, portchannel_name), None ) + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Failed to delete mclag member {} from mclag domain {}".format(portchannel_name, domain_id)) #mclag unique ip config @mclag.group('unique-ip') @@ -297,7 +324,7 @@ def mclag_unique_ip(ctx): @click.pass_context def add_mclag_unique_ip(ctx, interface_names): """Add Unique IP on MCLAG Vlan interface""" - db = ctx.obj['db'] + db = ValidatedConfigDBConnector(ctx.obj['db']) mclag_domain_keys = db.get_table('MCLAG_DOMAIN').keys() if len(mclag_domain_keys) == 0: ctx.fail("MCLAG not configured. MCLAG should be configured.") @@ -318,14 +345,17 @@ def add_mclag_unique_ip(ctx, interface_names): (intf_name, ip) = k if intf_name == interface_name and ip != 0: ctx.fail("%s is configured with IP %s, remove the IP configuration and reconfigure after enabling unique IP configuration."%(str(intf_name), str(ip))) - db.set_entry('MCLAG_UNIQUE_IP', (interface_name), {'unique_ip':"enable"} ) + try: + db.set_entry('MCLAG_UNIQUE_IP', (interface_name), {'unique_ip':"enable"} ) + except ValueError as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) @mclag_unique_ip.command('del') @click.argument('interface_names', metavar='', required=True) @click.pass_context def del_mclag_unique_ip(ctx, interface_names): """Delete Unique IP from MCLAG Vlan interface""" - db = ctx.obj['db'] + db = ValidatedConfigDBConnector(ctx.obj['db']) #split comma seperated interface names interface_list = interface_names.split(",") for interface_name in interface_list: @@ -341,7 +371,10 @@ def del_mclag_unique_ip(ctx, interface_names): (intf_name, ip) = k if intf_name == interface_name and ip != 0: ctx.fail("%s is configured with IP %s, remove the IP configuration and reconfigure after disabling unique IP configuration."%(str(intf_name), str(ip))) - db.set_entry('MCLAG_UNIQUE_IP', (interface_name), None ) + try: + db.set_entry('MCLAG_UNIQUE_IP', (interface_name), None ) + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Failed to delete mclag unique IP from Vlan interface {}".format(interface_name)) ####### diff --git a/config/muxcable.py b/config/muxcable.py index f53eae22e3..ba80cb02af 100644 --- a/config/muxcable.py +++ b/config/muxcable.py @@ -246,7 +246,7 @@ def lookup_statedb_and_update_configdb(db, per_npu_statedb, config_db, port, sta ipv6_value = get_value_for_key_in_config_tbl(config_db, port, "server_ipv6", "MUX_CABLE") soc_ipv4_value = get_optional_value_for_key_in_config_tbl(config_db, port, "soc_ipv4", "MUX_CABLE") cable_type = get_optional_value_for_key_in_config_tbl(config_db, port, "cable_type", "MUX_CABLE") - + ctx = click.get_current_context() state = get_value_for_key_in_dict(muxcable_statedb_dict, port, "state", "MUX_CABLE_TABLE") port_name = platform_sfputil_helper.get_interface_alias(port, db) @@ -255,15 +255,21 @@ def lookup_statedb_and_update_configdb(db, per_npu_statedb, config_db, port, sta port_status_dict[port_name] = 'OK' else: if cable_type is not None or soc_ipv4_value is not None: - config_db.set_entry("MUX_CABLE", port, {"state": state_cfg_val, - "server_ipv4": ipv4_value, - "server_ipv6": ipv6_value, - "soc_ipv4":soc_ipv4_value, - "cable_type": cable_type}) + try: + config_db.set_entry("MUX_CABLE", port, {"state": state_cfg_val, + "server_ipv4": ipv4_value, + "server_ipv6": ipv6_value, + "soc_ipv4":soc_ipv4_value, + "cable_type": cable_type}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) else: - config_db.set_entry("MUX_CABLE", port, {"state": state_cfg_val, - "server_ipv4": ipv4_value, - "server_ipv6": ipv6_value}) + try: + config_db.set_entry("MUX_CABLE", port, {"state": state_cfg_val, + "server_ipv4": ipv4_value, + "server_ipv6": ipv6_value}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) if (str(state_cfg_val) == 'active' and str(state) != 'active') or (str(state_cfg_val) == 'standby' and str(state) != 'standby'): port_status_dict[port_name] = 'INPROGRESS' else: @@ -274,9 +280,13 @@ def update_configdb_pck_loss_data(config_db, port, val): ipv4_value = get_value_for_key_in_config_tbl(config_db, port, "server_ipv4", "MUX_CABLE") ipv6_value = get_value_for_key_in_config_tbl(config_db, port, "server_ipv6", "MUX_CABLE") - config_db.set_entry("MUX_CABLE", port, {"state": configdb_state, + try: + config_db.set_entry("MUX_CABLE", port, {"state": configdb_state, "server_ipv4": ipv4_value, "server_ipv6": ipv6_value, "pck_loss_data_reset": val}) + except ValueError as e: + ctx = click.get_current_context() + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) # 'muxcable' command ("config muxcable mode active|auto") @muxcable.command() diff --git a/config/nat.py b/config/nat.py index 99e21b2750..fd121b01d1 100644 --- a/config/nat.py +++ b/config/nat.py @@ -1,8 +1,14 @@ import ipaddress import click +import jsonpatch +import jsonpointer +from jsonpatch import JsonPatchConflict +from jsonpointer import JsonPointerException from swsscommon.swsscommon import SonicV2Connector, ConfigDBConnector +from .validated_config_db_connector import ValidatedConfigDBConnector +ADHOC_VALIDATION = True def is_valid_ipv4_address(address): """Check if the given ipv4 address is valid""" @@ -243,15 +249,15 @@ def static(): @click.option('-twice_nat_id', metavar='', required=False, type=click.IntRange(1, 9999), help="Set the twice nat id") def add_basic(ctx, global_ip, local_ip, nat_type, twice_nat_id): """Add Static NAT-related configutation""" + if ADHOC_VALIDATION: + # Verify the ip address format + if is_valid_ipv4_address(local_ip) is False: + ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - # Verify the ip address format - if is_valid_ipv4_address(local_ip) is False: - ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - - if is_valid_ipv4_address(global_ip) is False: - ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) + if is_valid_ipv4_address(global_ip) is False: + ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() entryFound = False @@ -304,13 +310,25 @@ def add_basic(ctx, global_ip, local_ip, nat_type, twice_nat_id): ctx.fail("Same Twice nat id is not allowed for more than 2 entries!!") if nat_type is not None and twice_nat_id is not None: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: nat_type, dataKey3: twice_nat_id}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: nat_type, dataKey3: twice_nat_id}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) elif nat_type is not None: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: nat_type}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: nat_type}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) elif twice_nat_id is not None: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey3: twice_nat_id}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey3: twice_nat_id}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) else: - config_db.set_entry(table, key, {dataKey1: local_ip}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) # # 'nat add static tcp' command ('config nat add static tcp ') @@ -325,15 +343,15 @@ def add_basic(ctx, global_ip, local_ip, nat_type, twice_nat_id): @click.option('-twice_nat_id', metavar='', required=False, type=click.IntRange(1, 9999), help="Set the twice nat id") def add_tcp(ctx, global_ip, global_port, local_ip, local_port, nat_type, twice_nat_id): """Add Static TCP Protocol NAPT-related configutation""" + if ADHOC_VALIDATION: + # Verify the ip address format + if is_valid_ipv4_address(local_ip) is False: + ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - # Verify the ip address format - if is_valid_ipv4_address(local_ip) is False: - ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - - if is_valid_ipv4_address(global_ip) is False: - ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) + if is_valid_ipv4_address(global_ip) is False: + ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() entryFound = False @@ -384,13 +402,25 @@ def add_tcp(ctx, global_ip, global_port, local_ip, local_port, nat_type, twice_n ctx.fail("Same Twice nat id is not allowed for more than 2 entries!!") if nat_type is not None and twice_nat_id is not None: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey3: nat_type, dataKey4: twice_nat_id}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey3: nat_type, dataKey4: twice_nat_id}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) elif nat_type is not None: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey3: nat_type}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey3: nat_type}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) elif twice_nat_id is not None: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey4: twice_nat_id}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey4: twice_nat_id}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) else: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat add static udp' command ('config nat add static udp ') @@ -405,15 +435,16 @@ def add_tcp(ctx, global_ip, global_port, local_ip, local_port, nat_type, twice_n @click.option('-twice_nat_id', metavar='', required=False, type=click.IntRange(1, 9999), help="Set the twice nat id") def add_udp(ctx, global_ip, global_port, local_ip, local_port, nat_type, twice_nat_id): """Add Static UDP Protocol NAPT-related configutation""" + + if ADHOC_VALIDATION: + # Verify the ip address format + if is_valid_ipv4_address(local_ip) is False: + ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - # Verify the ip address format - if is_valid_ipv4_address(local_ip) is False: - ctx.fail("Given local ip address {} is invalid. 
Please enter a valid local ip address !!".format(local_ip)) - - if is_valid_ipv4_address(global_ip) is False: - ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) + if is_valid_ipv4_address(global_ip) is False: + ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() entryFound = False @@ -464,13 +495,25 @@ def add_udp(ctx, global_ip, global_port, local_ip, local_port, nat_type, twice_n ctx.fail("Same Twice nat id is not allowed for more than 2 entries!!") if nat_type is not None and twice_nat_id is not None: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey3: nat_type, dataKey4: twice_nat_id}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey3: nat_type, dataKey4: twice_nat_id}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) elif nat_type is not None: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey3: nat_type}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey3: nat_type}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) elif twice_nat_id is not None: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey4: twice_nat_id}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port, dataKey4: twice_nat_id}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) else: - config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port}) + try: + config_db.set_entry(table, key, {dataKey1: local_ip, dataKey2: local_port}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat remove static' group ('config nat remove static ...') @@ -489,15 +532,16 @@ def static(): @click.argument('local_ip', metavar='', required=True) def remove_basic(ctx, global_ip, local_ip): """Remove Static NAT-related configutation""" + + if ADHOC_VALIDATION: + # Verify the ip address format + if is_valid_ipv4_address(local_ip) is False: + ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - # Verify the ip address format - if is_valid_ipv4_address(local_ip) is False: - ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - - if is_valid_ipv4_address(global_ip) is False: - ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) + if is_valid_ipv4_address(global_ip) is False: + ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() entryFound = False @@ -508,8 +552,11 @@ def remove_basic(ctx, global_ip, local_ip): data = config_db.get_entry(table, key) if data: if data[dataKey] == local_ip: - config_db.set_entry(table, key, None) - entryFound = True + try: + config_db.set_entry(table, key, None) + entryFound = True + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) if entryFound is False: click.echo("Trying to delete static nat entry, which is not present.") @@ -526,15 +573,16 @@ def remove_basic(ctx, global_ip, local_ip): @click.argument('local_port', metavar='', type=click.IntRange(1, 65535), required=True) def remove_tcp(ctx, global_ip, global_port, local_ip, local_port): """Remove Static TCP Protocol NAPT-related configutation""" + + if ADHOC_VALIDATION: + # Verify the ip address format + if is_valid_ipv4_address(local_ip) is False: + ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - # Verify the ip address format - if is_valid_ipv4_address(local_ip) is False: - ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - - if is_valid_ipv4_address(global_ip) is False: - ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) + if is_valid_ipv4_address(global_ip) is False: + ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() entryFound = False @@ -544,8 +592,11 @@ def remove_tcp(ctx, global_ip, global_port, local_ip, local_port): data = config_db.get_entry(table, key) if data: if data['local_ip'] == local_ip and data['local_port'] == str(local_port): - config_db.set_entry(table, key, None) - entryFound = True + try: + config_db.set_entry(table, key, None) + entryFound = True + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) if entryFound is False: click.echo("Trying to delete static napt entry, which is not present.") @@ -561,15 +612,16 @@ def remove_tcp(ctx, global_ip, global_port, local_ip, local_port): @click.argument('local_port', metavar='', type=click.IntRange(1, 65535), required=True) def remove_udp(ctx, global_ip, global_port, local_ip, local_port): """Remove Static UDP Protocol NAPT-related configutation""" + + if ADHOC_VALIDATION: + # Verify the ip address format + if is_valid_ipv4_address(local_ip) is False: + ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - # Verify the ip address format - if is_valid_ipv4_address(local_ip) is False: - ctx.fail("Given local ip address {} is invalid. Please enter a valid local ip address !!".format(local_ip)) - - if is_valid_ipv4_address(global_ip) is False: - ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) + if is_valid_ipv4_address(global_ip) is False: + ctx.fail("Given global ip address {} is invalid. Please enter a valid global ip address !!".format(global_ip)) - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() entryFound = False @@ -581,8 +633,11 @@ def remove_udp(ctx, global_ip, global_port, local_ip, local_port): data = config_db.get_entry(table, key) if data: if data[dataKey1] == local_ip and data[dataKey2] == str(local_port): - config_db.set_entry(table, key, None) - entryFound = True + try: + config_db.set_entry(table, key, None) + entryFound = True + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) if entryFound is False: click.echo("Trying to delete static napt entry, which is not present.") @@ -595,7 +650,7 @@ def remove_udp(ctx, global_ip, global_port, local_ip, local_port): def remove_static_all(ctx): """Remove all Static related configutation""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() tables = ['STATIC_NAT', 'STATIC_NAPT'] @@ -604,7 +659,10 @@ def remove_static_all(ctx): table_dict = config_db.get_table(table_name) if table_dict: for table_key_name in table_dict: - config_db.set_entry(table_name, table_key_name, None) + try: + config_db.set_entry(table_name, table_key_name, None) + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat add pool' command ('config nat add pool ') @@ -664,7 +722,7 @@ def add_pool(ctx, pool_name, global_ip_range, global_port_range): else: global_port_range = "NULL" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() entryFound = False @@ -711,10 +769,13 @@ def add_pool(ctx, pool_name, global_ip_range, global_port_range): ctx.fail("Given Ip address entry is overlapping with existing Static NAT entry !!") if entryFound == False: - config_db.set_entry(table, key, {dataKey1: global_ip_range, dataKey2 : global_port_range}) + try: + config_db.set_entry(table, key, {dataKey1: global_ip_range, dataKey2 : global_port_range}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # -# 'nat add binding' command ('config nat add binding ') +# 'nat add binding' command ('config nat add binding ') # @add.command('binding') @click.pass_context @@ -740,7 +801,7 @@ def add_binding(ctx, binding_name, pool_name, acl_name, nat_type, twice_nat_id): if len(binding_name) > 32: ctx.fail("Invalid binding name. Maximum allowed binding name is 32 characters !!") - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() data = config_db.get_entry(table, key) @@ -773,7 +834,10 @@ def add_binding(ctx, binding_name, pool_name, acl_name, nat_type, twice_nat_id): if count > 1: ctx.fail("Same Twice nat id is not allowed for more than 2 entries!!") - config_db.set_entry(table, key, {dataKey1: acl_name, dataKey2: pool_name, dataKey3: nat_type, dataKey4: twice_nat_id}) + try: + config_db.set_entry(table, key, {dataKey1: acl_name, dataKey2: pool_name, dataKey3: nat_type, dataKey4: twice_nat_id}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat remove pool' command ('config nat remove pool ') @@ -791,7 +855,7 @@ def remove_pool(ctx, pool_name): if len(pool_name) > 32: ctx.fail("Invalid pool name. Maximum allowed pool name is 32 characters !!") - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() data = config_db.get_entry(table, key) @@ -808,7 +872,10 @@ def remove_pool(ctx, pool_name): break if entryFound == False: - config_db.set_entry(table, key, None) + try: + config_db.set_entry(table, key, None) + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) # # 'nat remove pools' command ('config nat remove pools') @@ -818,7 +885,7 @@ def remove_pool(ctx, pool_name): def remove_pools(ctx): """Remove all Pools for Dynamic configutation""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() entryFound = False @@ -835,8 +902,11 @@ def remove_pools(ctx): entryFound = True break - if entryFound == False: - config_db.set_entry(pool_table_name, pool_key_name, None) + if entryFound == False: + try: + config_db.set_entry(pool_table_name, pool_key_name, None) + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat remove binding' command ('config nat remove binding ') @@ -854,7 +924,7 @@ def remove_binding(ctx, binding_name): if len(binding_name) > 32: ctx.fail("Invalid binding name. Maximum allowed binding name is 32 characters !!") - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() data = config_db.get_entry(table, key) @@ -863,7 +933,10 @@ def remove_binding(ctx, binding_name): entryFound = True if entryFound == False: - config_db.set_entry(table, key, None) + try: + config_db.set_entry(table, key, None) + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat remove bindings' command ('config nat remove bindings') @@ -873,14 +946,17 @@ def remove_binding(ctx, binding_name): def remove_bindings(ctx): """Remove all Bindings for Dynamic configutation""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigBConnector(ConfigDBConnector()) config_db.connect() binding_table_name = 'NAT_BINDINGS' binding_dict = config_db.get_table(binding_table_name) if binding_dict: for binding_key_name in binding_dict: - config_db.set_entry(binding_table_name, binding_key_name, None) + try: + config_db.set_entry(binding_table_name, binding_key_name, None) + except (JsonPatchConflict, JsonPointerException) as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat add interface' command ('config nat add interface -nat_zone ') @@ -892,7 +968,7 @@ def remove_bindings(ctx): def add_interface(ctx, interface_name, nat_zone): """Add interface related nat configuration""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() if nat_interface_name_is_valid(interface_name) is False: @@ -912,7 +988,10 @@ def add_interface(ctx, interface_name, nat_zone): if not interface_table_dict or interface_name not in interface_table_dict: ctx.fail("Interface table is not present. Please configure ip-address on {} and apply the nat zone !!".format(interface_name)) - config_db.mod_entry(interface_table_type, interface_name, {"nat_zone": nat_zone}) + try: + config_db.mod_entry(interface_table_type, interface_name, {"nat_zone": nat_zone}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) # # 'nat remove interface' command ('config nat remove interface ') @@ -922,7 +1001,7 @@ def add_interface(ctx, interface_name, nat_zone): @click.argument('interface_name', metavar='', required=True) def remove_interface(ctx, interface_name): """Remove interface related NAT configuration""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() if nat_interface_name_is_valid(interface_name) is False: @@ -942,7 +1021,10 @@ def remove_interface(ctx, interface_name): if not interface_table_dict or interface_name not in interface_table_dict: ctx.fail("Interface table is not present. Ignoring the nat zone configuration") - config_db.mod_entry(interface_table_type, interface_name, {"nat_zone": "0"}) + try: + config_db.mod_entry(interface_table_type, interface_name, {"nat_zone": "0"}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat remove interfaces' command ('config nat remove interfaces') @@ -951,7 +1033,7 @@ def remove_interface(ctx, interface_name): @click.pass_context def remove_interfaces(ctx): """Remove all interface related NAT configuration""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() tables = ['INTERFACE', 'PORTCHANNEL_INTERFACE', 'VLAN_INTERFACE', 'LOOPBACK_INTERFACE'] @@ -964,7 +1046,10 @@ def remove_interfaces(ctx): if isinstance(table_key_name, str) is False: continue - config_db.set_entry(table_name, table_key_name, nat_config) + try: + config_db.set_entry(table_name, table_key_name, nat_config) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat feature' group ('config nat feature ') @@ -982,9 +1067,12 @@ def feature(): def enable(ctx): """Enbale the NAT feature """ - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() - config_db.mod_entry("NAT_GLOBAL", "Values", {"admin_mode": "enabled"}) + try: + config_db.mod_entry("NAT_GLOBAL", "Values", {"admin_mode": "enabled"}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat feature disable' command ('config nat feature disable>') @@ -993,9 +1081,12 @@ def enable(ctx): @click.pass_context def disable(ctx): """Disable the NAT feature """ - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() - config_db.mod_entry("NAT_GLOBAL", "Values", {"admin_mode": "disabled"}) + try: + config_db.mod_entry("NAT_GLOBAL", "Values", {"admin_mode": "disabled"}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat set timeout' command ('config nat set timeout ') @@ -1005,10 +1096,13 @@ def disable(ctx): @click.argument('seconds', metavar='', type=click.IntRange(300, 432000), required=True) def timeout(ctx, seconds): """Set NAT timeout configuration""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() - - config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_timeout": seconds}) + + try: + config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_timeout": seconds}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. 
Error: {}".format(e)) # # 'nat set tcp-timeout' command ('config nat set tcp-timeout ') @@ -1018,10 +1112,13 @@ def timeout(ctx, seconds): @click.argument('seconds', metavar='', type=click.IntRange(300, 432000), required=True) def tcp_timeout(ctx, seconds): """Set NAT TCP timeout configuration""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() - - config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_tcp_timeout": seconds}) + + try: + config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_tcp_timeout": seconds}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat set udp-timeout' command ('config nat set udp-timeout ') @@ -1031,10 +1128,13 @@ def tcp_timeout(ctx, seconds): @click.argument('seconds', metavar='', type=click.IntRange(120, 600), required=True) def udp_timeout(ctx, seconds): """Set NAT UDP timeout configuration""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() - config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_udp_timeout": seconds}) + try: + config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_udp_timeout": seconds}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat reset timeout' command ('config nat reset timeout') @@ -1043,11 +1143,14 @@ def udp_timeout(ctx, seconds): @click.pass_context def timeout(ctx): """Reset NAT timeout configuration to default value (600 seconds)""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() seconds = 600 - - config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_timeout": seconds}) + + try: + config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_timeout": seconds}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat reset tcp-timeout' command ('config nat reset tcp-timeout') @@ -1056,11 +1159,14 @@ def timeout(ctx): @click.pass_context def tcp_timeout(ctx): """Reset NAT TCP timeout configuration to default value (86400 seconds)""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() seconds = 86400 - - config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_tcp_timeout": seconds}) + + try: + config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_tcp_timeout": seconds}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) # # 'nat reset udp-timeout' command ('config nat reset udp-timeout') @@ -1069,8 +1175,11 @@ def tcp_timeout(ctx): @click.pass_context def udp_timeout(ctx): """Reset NAT UDP timeout configuration to default value (300 seconds)""" - config_db = ConfigDBConnector() + config_db = ValidatedConfigDBConnector(ConfigDBConnector()) config_db.connect() seconds = 300 - - config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_udp_timeout": seconds}) + + try: + config_db.mod_entry("NAT_GLOBAL", "Values", {"nat_udp_timeout": seconds}) + except ValueError as e: + ctx.fail("Invalid ConfigDB. Error: {}".format(e)) diff --git a/doc/Command-Reference.md b/doc/Command-Reference.md index 69f282ccbb..494773b83c 100644 --- a/doc/Command-Reference.md +++ b/doc/Command-Reference.md @@ -9793,7 +9793,7 @@ Go Back To [Beginning of the document](#) or [Beginning of this section](#System **show vlan brief** -This command displays brief information about all the vlans configured in the device. 
It displays the vlan ID, IP address (if configured for the vlan), list of vlan member ports, whether the port is tagged or in untagged mode, the DHCP Helper Address, and the proxy ARP status +This command displays brief information about all the vlans configured in the device. It displays the vlan ID, IP address (if configured for the vlan), list of vlan member ports, whether the port is tagged or in untagged mode, the DHCPv4 Helper Address, and the proxy ARP status - Usage: ``` diff --git a/generic_config_updater/change_applier.py b/generic_config_updater/change_applier.py index f5a365d59f..d0818172f8 100644 --- a/generic_config_updater/change_applier.py +++ b/generic_config_updater/change_applier.py @@ -9,7 +9,7 @@ from .gu_common import genericUpdaterLogging SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) -UPDATER_CONF_FILE = f"{SCRIPT_DIR}/generic_config_updater.conf.json" +UPDATER_CONF_FILE = f"{SCRIPT_DIR}/gcu_services_validator.conf.json" logger = genericUpdaterLogging.get_logger(title="Change Applier") print_to_console = False diff --git a/generic_config_updater/field_operation_validators.py b/generic_config_updater/field_operation_validators.py new file mode 100644 index 0000000000..befd4b8749 --- /dev/null +++ b/generic_config_updater/field_operation_validators.py @@ -0,0 +1,26 @@ +from sonic_py_common import device_info +import re + +def rdma_config_update_validator(): + version_info = device_info.get_sonic_version_info() + build_version = version_info.get('build_version') + asic_type = version_info.get('asic_type') + + if (asic_type != 'mellanox' and asic_type != 'broadcom' and asic_type != 'cisco-8000'): + return False + + version_substrings = build_version.split('.') + branch_version = None + + for substring in version_substrings: + if substring.isdigit() and re.match(r'^\d{8}$', substring): + branch_version = substring + break + + if branch_version is None: + return False + + if asic_type == 'cisco-8000': + return branch_version >= "20201200" + else: + return branch_version >= "20181100" diff --git a/generic_config_updater/gcu_field_operation_validators.conf.json b/generic_config_updater/gcu_field_operation_validators.conf.json new file mode 100644 index 0000000000..f12a14d8eb --- /dev/null +++ b/generic_config_updater/gcu_field_operation_validators.conf.json @@ -0,0 +1,20 @@ +{ + "README": [ + "field_operation_validators provides, module & method name as ", + " .", + "NOTE: module name could have '.'", + " ", + "The last element separated by '.' is considered as ", + "method name", + "", + "e.g. 
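rdma_config_update_validator above derives the branch version by scanning the '.'-separated tokens of build_version for the first token of exactly eight digits. A small self-check of that parsing logic (sketch, not part of the patch):

    # Mirrors the branch-version parsing in rdma_config_update_validator.
    import re

    def extract_branch_version(build_version):
        # First '.'-separated token made of exactly 8 digits, e.g. "20191130".
        for token in build_version.split('.'):
            if token.isdigit() and re.match(r'^\d{8}$', token):
                return token
        return None

    assert extract_branch_version("20191130.52") == "20191130"
    assert extract_branch_version("master.123") is None

Comparing branch_version >= "20201200" as strings is sound here because both operands are eight-digit strings, so lexicographic order coincides with numeric order.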
'show.acl.test_acl'", + "", + "field_operation_validators for a given table defines a list of validators that all must pass for modification to the specified field and table to be allowed", + "" + ], + "tables": { + "PFC_WD": { + "field_operation_validators": [ "generic_config_updater.field_operation_validators.rdma_config_update_validator" ] + } + } +} diff --git a/generic_config_updater/generic_config_updater.conf.json b/generic_config_updater/gcu_services_validator.conf.json similarity index 91% rename from generic_config_updater/generic_config_updater.conf.json rename to generic_config_updater/gcu_services_validator.conf.json index 907b5a6863..852b587286 100644 --- a/generic_config_updater/generic_config_updater.conf.json +++ b/generic_config_updater/gcu_services_validator.conf.json @@ -48,6 +48,9 @@ }, "NTP_SERVER": { "services_to_validate": [ "ntp-service" ] + }, + "VLAN_INTERFACE": { + "services_to_validate": [ "vlanintf-service" ] } }, "services": { @@ -71,6 +74,9 @@ }, "ntp-service": { "validate_commands": [ "generic_config_updater.services_validator.ntp_validator" ] + }, + "vlanintf-service": { + "validate_commands": [ "generic_config_updater.services_validator.vlanintf_validator" ] } } } diff --git a/generic_config_updater/gu_common.py b/generic_config_updater/gu_common.py index 0d7a5281bb..e8c66fcbbe 100644 --- a/generic_config_updater/gu_common.py +++ b/generic_config_updater/gu_common.py @@ -1,5 +1,6 @@ import json import jsonpatch +import importlib from jsonpointer import JsonPointer import sonic_yang import sonic_yang_ext @@ -7,11 +8,14 @@ import yang as ly import copy import re +import os from sonic_py_common import logger from enum import Enum YANG_DIR = "/usr/local/yang-models" SYSLOG_IDENTIFIER = "GenericConfigUpdater" +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) +GCU_FIELD_OP_CONF_FILE = f"{SCRIPT_DIR}/gcu_field_operation_validators.conf.json" class GenericConfigUpdaterError(Exception): pass @@ -162,6 +166,38 @@ def validate_field_operation(self, old_config, target_config): if any(op['op'] == operation and field == op['path'] for op in patch): raise IllegalPatchOperationError("Given patch operation is invalid. Operation: {} is illegal on field: {}".format(operation, field)) + def _invoke_validating_function(cmd): + # cmd is in the format as . + method_name = cmd.split(".")[-1] + module_name = ".".join(cmd.split(".")[0:-1]) + if module_name != "generic_config_updater.field_operation_validators" or "validator" not in method_name: + raise GenericConfigUpdaterError("Attempting to call invalid method {} in module {}. 
Module must be generic_config_updater.field_operation_validators, and method must be a defined validator".format(method_name, module_name)) + module = importlib.import_module(module_name, package=None) + method_to_call = getattr(module, method_name) + return method_to_call() + + if os.path.exists(GCU_FIELD_OP_CONF_FILE): + with open(GCU_FIELD_OP_CONF_FILE, "r") as s: + gcu_field_operation_conf = json.load(s) + else: + raise GenericConfigUpdaterError("GCU field operation validators config file not found") + + for element in patch: + path = element["path"] + match = re.search(r'\/([^\/]+)(\/|$)', path) # This matches the table name in the path, e.g. if path is /PFC_WD/GLOBAL, the match would be PFC_WD + if match is not None: + table = match.group(1) + else: + raise GenericConfigUpdaterError("Invalid jsonpatch path: {}".format(path)) + validating_functions = set() + tables = gcu_field_operation_conf["tables"] + validating_functions.update(tables.get(table, {}).get("field_operation_validators", [])) + + for function in validating_functions: + if not _invoke_validating_function(function): + raise IllegalPatchOperationError("Modification of {} table is illegal - validating function {} returned False".format(table, function)) + + def validate_lanes(self, config_db): if "PORT" not in config_db: return True, None diff --git a/generic_config_updater/services_validator.py b/generic_config_updater/services_validator.py index 44a9e095eb..5d8c1f0d51 100644 --- a/generic_config_updater/services_validator.py +++ b/generic_config_updater/services_validator.py @@ -101,3 +101,24 @@ def caclmgrd_validator(old_config, upd_config, keys): def ntp_validator(old_config, upd_config, keys): return _service_restart("ntp-config") + +def vlanintf_validator(old_config, upd_config, keys): + old_vlan_intf = old_config.get("VLAN_INTERFACE", {}) + upd_vlan_intf = upd_config.get("VLAN_INTERFACE", {}) + + # Get the tuple with format (iface, iface_ip) then check deleted tuple + # Example: + # old_keys = [("Vlan1000", "192.168.0.1")] + # upd_keys = [("Vlan1000", "192.168.0.2")] + old_keys = [ tuple(key.split("|")) + for key in old_vlan_intf if len(key.split("|")) == 2 ] + upd_keys = [ tuple(key.split("|")) + for key in upd_vlan_intf if len(key.split("|")) == 2 ] + + deleted_keys = list(set(old_keys) - set(upd_keys)) + for key in deleted_keys: + iface, iface_ip = key + rc = os.system(f"ip neigh flush dev {iface} {iface_ip}") + if rc: + return False + return True diff --git a/scripts/db_migrator.py b/scripts/db_migrator.py index 5c946bbb9f..64fddea290 100755 --- a/scripts/db_migrator.py +++ b/scripts/db_migrator.py @@ -45,7 +45,7 @@ def __init__(self, namespace, socket=None): none-zero values. build: sequentially increase within a minor version domain. """ - self.CURRENT_VERSION = 'version_4_0_0' + self.CURRENT_VERSION = 'version_4_0_1' self.TABLE_NAME = 'VERSIONS' self.TABLE_KEY = 'DATABASE' @@ -867,9 +867,28 @@ def version_3_0_6(self): def version_4_0_0(self): """ Version 4_0_0. - This is the latest version for master branch """ log.log_info('Handling version_4_0_0') + # Update the state-db fast-reboot entry to enable, if it was set, so the fast-reboot finalizer is enabled when upgrading with fast-reboot, + # since when upgrading from a previous version the FAST_REBOOT table will be deleted when its timer expires. + # Reading the FAST_REBOOT table can't be done with stateDB.get as it uses hget behind the scenes and the table structure is + # not a hash, so it won't work. + # FAST_REBOOT table exists only if fast-reboot was triggered. 
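For the vlanintf_validator added above, the interesting step is computing which (interface, ip) pairs disappeared between the old and updated VLAN_INTERFACE tables; only those get an 'ip neigh flush'. A worked example of that set arithmetic (values are illustrative):

    # Keys without a "|" (the bare "Vlan1000" row) are filtered out;
    # only deleted (iface, ip) pairs survive the set difference.
    old = {"Vlan1000": {}, "Vlan1000|192.168.0.1/21": {}}
    upd = {"Vlan1000": {}}
    old_keys = [tuple(k.split("|")) for k in old if len(k.split("|")) == 2]
    upd_keys = [tuple(k.split("|")) for k in upd if len(k.split("|")) == 2]
    assert list(set(old_keys) - set(upd_keys)) == [("Vlan1000", "192.168.0.1/21")]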
+ keys = self.stateDB.keys(self.stateDB.STATE_DB, "FAST_REBOOT|system") + if keys: + enable_state = 'true' + else: + enable_state = 'false' + self.stateDB.set(self.stateDB.STATE_DB, 'FAST_RESTART_ENABLE_TABLE|system', 'enable', enable_state) + self.set_version('version_4_0_1') + return 'version_4_0_1' + + def version_4_0_1(self): + """ + Version 4_0_1. + This is the latest version for master branch + """ + log.log_info('Handling version_4_0_1') return None def get_version(self): diff --git a/scripts/dropstat b/scripts/dropstat index 4e9f5bb4d0..f98fc29197 100755 --- a/scripts/dropstat +++ b/scripts/dropstat @@ -11,7 +11,7 @@ # - Refactor calls to COUNTERS_DB to reduce redundancy # - Cache DB queries to reduce # of expensive queries -import json +import _pickle as pickle import argparse import os import socket @@ -117,10 +117,10 @@ class DropStat(object): """ try: - json.dump(self.get_counts_table(self.gather_counters(std_port_rx_counters + std_port_tx_counters, DEBUG_COUNTER_PORT_STAT_MAP), COUNTERS_PORT_NAME_MAP), - open(self.port_drop_stats_file, 'w+')) - json.dump(self.get_counts(self.gather_counters([], DEBUG_COUNTER_SWITCH_STAT_MAP), self.get_switch_id()), - open(self.switch_drop_stats_file, 'w+')) + pickle.dump(self.get_counts_table(self.gather_counters(std_port_rx_counters + std_port_tx_counters, DEBUG_COUNTER_PORT_STAT_MAP), COUNTERS_PORT_NAME_MAP), + open(self.port_drop_stats_file, 'wb+')) + pickle.dump(self.get_counts(self.gather_counters([], DEBUG_COUNTER_SWITCH_STAT_MAP), self.get_switch_id()), + open(self.switch_drop_stats_file, 'wb+')) except IOError as e: print(e) sys.exit(e.errno) @@ -135,7 +135,7 @@ class DropStat(object): # Grab the latest clear checkpoint, if it exists if os.path.isfile(self.port_drop_stats_file): - port_drop_ckpt = json.load(open(self.port_drop_stats_file, 'r')) + port_drop_ckpt = pickle.load(open(self.port_drop_stats_file, 'rb')) counters = self.gather_counters(std_port_rx_counters + std_port_tx_counters, DEBUG_COUNTER_PORT_STAT_MAP, group, counter_type) headers = std_port_description_header + self.gather_headers(counters, DEBUG_COUNTER_PORT_STAT_MAP) @@ -162,7 +162,7 @@ class DropStat(object): # Grab the latest clear checkpoint, if it exists if os.path.isfile(self.switch_drop_stats_file): - switch_drop_ckpt = json.load(open(self.switch_drop_stats_file, 'r')) + switch_drop_ckpt = pickle.load(open(self.switch_drop_stats_file, 'rb')) counters = self.gather_counters([], DEBUG_COUNTER_SWITCH_STAT_MAP, group, counter_type) headers = std_switch_description_header + self.gather_headers(counters, DEBUG_COUNTER_SWITCH_STAT_MAP) diff --git a/scripts/fast-reboot b/scripts/fast-reboot index defde666ee..fb162ae180 100755 --- a/scripts/fast-reboot +++ b/scripts/fast-reboot @@ -23,6 +23,7 @@ PLATFORM=$(sonic-cfggen -H -v DEVICE_METADATA.localhost.platform) PLATFORM_PLUGIN="${REBOOT_TYPE}_plugin" LOG_SSD_HEALTH="/usr/local/bin/log_ssd_health" PLATFORM_FWUTIL_AU_REBOOT_HANDLE="platform_fw_au_reboot_handle" +PLATFORM_REBOOT_PRE_CHECK="platform_reboot_pre_check" SSD_FW_UPDATE="ssd-fw-upgrade" SSD_FW_UPDATE_BOOT_OPTION=no TAG_LATEST=yes @@ -148,7 +149,7 @@ function clear_boot() #clear_fast_boot if [[ "$REBOOT_TYPE" = "fast-reboot" ]]; then - sonic-db-cli STATE_DB DEL "FAST_REBOOT|system" &>/dev/null || /bin/true + sonic-db-cli STATE_DB HSET "FAST_RESTART_ENABLE_TABLE|system" "enable" "false" &>/dev/null || /bin/true fi } @@ -179,6 +180,10 @@ function initialize_pre_shutdown() function request_pre_shutdown() { + if [ -x ${DEVPATH}/${PLATFORM}/${PLATFORM_REBOOT_PRE_CHECK} 
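The migration above writes the new flag as a proper hash, which is what makes field reads work where the old plain FAST_REBOOT|system key could not be read through stateDB.get's hget-based lookup. A sketch of how a consumer might read it, assuming the SonicV2Connector usage seen elsewhere in these scripts:

    # Sketch only (not part of the patch): read the migrated flag.
    from swsscommon.swsscommon import SonicV2Connector

    db = SonicV2Connector(use_unix_socket_path=True)
    db.connect(db.STATE_DB)
    # FAST_RESTART_ENABLE_TABLE|system is a hash with an "enable" field.
    enabled = db.get(db.STATE_DB, "FAST_RESTART_ENABLE_TABLE|system", "enable")
    print(enabled == "true")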
]; then + debug "Requesting platform reboot pre-check ..." + ${DEVPATH}/${PLATFORM}/${PLATFORM_REBOOT_PRE_CHECK} ${REBOOT_TYPE} + fi debug "Requesting pre-shutdown ..." STATE=$(timeout 5s docker exec syncd /usr/bin/syncd_request_shutdown --pre &> /dev/null; if [[ $? == 124 ]]; then echo "timed out"; fi) if [[ x"${STATE}" == x"timed out" ]]; then @@ -265,7 +270,7 @@ function backup_database() and not string.match(k, 'WARM_RESTART_ENABLE_TABLE|') \ and not string.match(k, 'VXLAN_TUNNEL_TABLE|') \ and not string.match(k, 'BUFFER_MAX_PARAM_TABLE|') \ - and not string.match(k, 'FAST_REBOOT|') then + and not string.match(k, 'FAST_RESTART_ENABLE_TABLE|') then redis.call('del', k) end end @@ -544,7 +549,7 @@ case "$REBOOT_TYPE" in check_warm_restart_in_progress BOOT_TYPE_ARG=$REBOOT_TYPE trap clear_boot EXIT HUP INT QUIT TERM KILL ABRT ALRM - sonic-db-cli STATE_DB SET "FAST_REBOOT|system" "1" "EX" "210" &>/dev/null + sonic-db-cli STATE_DB HSET "FAST_RESTART_ENABLE_TABLE|system" "enable" "true" &>/dev/null config warm_restart enable system ;; "warm-reboot") diff --git a/scripts/flow_counters_stat b/scripts/flow_counters_stat index 49b97e335b..ac5ef94beb 100755 --- a/scripts/flow_counters_stat +++ b/scripts/flow_counters_stat @@ -2,7 +2,7 @@ import argparse import os -import json +import _pickle as pickle import sys from natsort import natsorted @@ -185,8 +185,8 @@ class FlowCounterStats(object): if os.path.exists(self.data_file): os.remove(self.data_file) - with open(self.data_file, 'w') as f: - json.dump(data, f) + with open(self.data_file, 'wb') as f: + pickle.dump(data, f) except IOError as e: print('Failed to save statistic - {}'.format(repr(e))) @@ -200,8 +200,8 @@ class FlowCounterStats(object): return None try: - with open(self.data_file, 'r') as f: - data = json.load(f) + with open(self.data_file, 'rb') as f: + data = pickle.load(f) except IOError as e: print('Failed to load statistic - {}'.format(repr(e))) return None diff --git a/scripts/intfstat b/scripts/intfstat index b4a770adeb..30cfbf084d 100755 --- a/scripts/intfstat +++ b/scripts/intfstat @@ -6,7 +6,7 @@ # ##################################################################### -import json +import _pickle as pickle import argparse import datetime import sys @@ -28,7 +28,7 @@ from collections import namedtuple, OrderedDict from natsort import natsorted from tabulate import tabulate from utilities_common.netstat import ns_diff, table_as_json, STATUS_NA, format_brate, format_prate -from utilities_common.cli import json_serial, UserCache +from utilities_common.cli import UserCache from swsscommon.swsscommon import SonicV2Connector nstat_fields = ( @@ -96,7 +96,7 @@ class Intfstat(object): counter_data = self.db.get(self.db.COUNTERS_DB, full_table_id, counter_name) if counter_data: fields[pos] = str(counter_data) - cntr = NStats._make(fields)._asdict() + cntr = NStats._make(fields) return cntr def get_rates(table_id): @@ -153,14 +153,14 @@ class Intfstat(object): rates = ratestat_dict.get(key, RateStats._make([STATUS_NA] * len(rates_key_list))) table.append((key, - data['rx_p_ok'], + data.rx_p_ok, format_brate(rates.rx_bps), format_prate(rates.rx_pps), - data['rx_p_err'], - data['tx_p_ok'], + data.rx_p_err, + data.tx_p_ok, format_brate(rates.tx_bps), format_prate(rates.tx_pps), - data['tx_p_err'])) + data.tx_p_err)) if use_json: print(table_as_json(table, header)) @@ -186,24 +186,24 @@ class Intfstat(object): if old_cntr is not None: table.append((key, - ns_diff(cntr['rx_p_ok'], old_cntr['rx_p_ok']), + ns_diff(cntr.rx_p_ok, 
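The json-to-pickle switch in dropstat, flow_counters_stat, intfstat and the other stat scripts goes hand in hand with keeping counters as namedtuples (the _asdict() calls are dropped): pickle round-trips a namedtuple with attribute access intact, whereas json.dump needed the json_serial helper and loaded back plain lists/dicts. A small illustration (the cache path is made up):

    # Illustration only; /tmp/cnstat_demo is a made-up path.
    import _pickle as pickle
    from collections import namedtuple

    NStats = namedtuple("NStats", "rx_p_ok tx_p_ok")
    snapshot = {"Ethernet0": NStats("100", "42"), "time": "2023-03-23"}

    with open("/tmp/cnstat_demo", "wb") as f:
        pickle.dump(snapshot, f)
    with open("/tmp/cnstat_demo", "rb") as f:
        restored = pickle.load(f)
    assert restored["Ethernet0"].rx_p_ok == "100"  # still a namedtuple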
old_cntr.rx_p_ok), format_brate(rates.rx_bps), format_prate(rates.rx_pps), - ns_diff(cntr['rx_p_err'], old_cntr['rx_p_err']), - ns_diff(cntr['tx_p_ok'], old_cntr['tx_p_ok']), + ns_diff(cntr.rx_p_err, old_cntr.rx_p_err), + ns_diff(cntr.tx_p_ok, old_cntr.tx_p_ok), format_brate(rates.tx_bps), format_prate(rates.tx_pps), - ns_diff(cntr['tx_p_err'], old_cntr['tx_p_err']))) + ns_diff(cntr.tx_p_err, old_cntr.tx_p_err))) else: table.append((key, - cntr['rx_p_ok'], + cntr.rx_p_ok, format_brate(rates.rx_bps), format_prate(rates.rx_pps), - cntr['rx_p_err'], - cntr['tx_p_ok'], + cntr.rx_p_err, + cntr.tx_p_ok, format_brate(rates.tx_bps), format_prate(rates.tx_pps), - cntr['tx_p_err'])) + cntr.tx_p_err)) if use_json: print(table_as_json(table, header)) @@ -229,17 +229,17 @@ class Intfstat(object): if cnstat_old_dict and cnstat_old_dict.get(rif): old_cntr = cnstat_old_dict.get(rif) - body = body % (ns_diff(cntr['rx_p_ok'], old_cntr['rx_p_ok']), - ns_diff(cntr['rx_b_ok'], old_cntr['rx_b_ok']), - ns_diff(cntr['rx_p_err'], old_cntr['rx_p_err']), - ns_diff(cntr['rx_b_err'], old_cntr['rx_b_err']), - ns_diff(cntr['tx_p_ok'], old_cntr['tx_p_ok']), - ns_diff(cntr['tx_b_ok'], old_cntr['tx_b_ok']), - ns_diff(cntr['tx_p_err'], old_cntr['tx_p_err']), - ns_diff(cntr['tx_b_err'], old_cntr['tx_b_err'])) + body = body % (ns_diff(cntr.rx_p_ok, old_cntr.rx_p_ok), + ns_diff(cntr.rx_b_ok, old_cntr.rx_b_ok), + ns_diff(cntr.rx_p_err, old_cntr.rx_p_err), + ns_diff(cntr.rx_b_err, old_cntr.rx_b_err), + ns_diff(cntr.tx_p_ok, old_cntr.tx_p_ok), + ns_diff(cntr.tx_b_ok, old_cntr.tx_b_ok), + ns_diff(cntr.tx_p_err, old_cntr.tx_p_err), + ns_diff(cntr.tx_b_err, old_cntr.tx_b_err)) else: - body = body % (cntr['rx_p_ok'], cntr['rx_b_ok'], cntr['rx_p_err'],cntr['rx_b_err'], - cntr['tx_p_ok'], cntr['tx_b_ok'], cntr['tx_p_err'], cntr['tx_b_err']) + body = body % (cntr.rx_p_ok, cntr.rx_b_ok, cntr.rx_p_err,cntr.rx_b_err, + cntr.tx_p_ok, cntr.tx_b_ok, cntr.tx_p_err, cntr.tx_b_err) print(header) print(body) @@ -305,20 +305,20 @@ def main(): if tag_name is not None: if os.path.isfile(cnstat_fqn_general_file): try: - general_data = json.load(open(cnstat_fqn_general_file, 'r')) + general_data = pickle.load(open(cnstat_fqn_general_file, 'rb')) for key, val in cnstat_dict.items(): general_data[key] = val - json.dump(general_data, open(cnstat_fqn_general_file, 'w')) + pickle.dump(general_data, open(cnstat_fqn_general_file, 'wb')) except IOError as e: sys.exit(e.errno) # Add the information also to tag specific file if os.path.isfile(cnstat_fqn_file): - data = json.load(open(cnstat_fqn_file, 'r')) + data = pickle.load(open(cnstat_fqn_file, 'rb')) for key, val in cnstat_dict.items(): data[key] = val - json.dump(data, open(cnstat_fqn_file, 'w')) + pickle.dump(data, open(cnstat_fqn_file, 'wb')) else: - json.dump(cnstat_dict, open(cnstat_fqn_file, 'w'), default=json_serial) + pickle.dump(cnstat_dict, open(cnstat_fqn_file, 'wb')) except IOError as e: sys.exit(e.errno) else: @@ -330,9 +330,9 @@ def main(): try: cnstat_cached_dict = {} if os.path.isfile(cnstat_fqn_file): - cnstat_cached_dict = json.load(open(cnstat_fqn_file, 'r')) + cnstat_cached_dict = pickle.load(open(cnstat_fqn_file, 'rb')) else: - cnstat_cached_dict = json.load(open(cnstat_fqn_general_file, 'r')) + cnstat_cached_dict = pickle.load(open(cnstat_fqn_general_file, 'rb')) print("Last cached time was " + str(cnstat_cached_dict.get('time'))) if interface_name: diff --git a/scripts/pfcstat b/scripts/pfcstat index 094c6e9380..fb7e6018b6 100755 --- a/scripts/pfcstat +++ b/scripts/pfcstat @@ -6,7 
+6,7 @@ # ##################################################################### -import json +import _pickle as pickle import argparse import datetime import os.path @@ -37,7 +37,7 @@ except KeyError: from utilities_common.netstat import ns_diff, STATUS_NA, format_number_with_comma from utilities_common import multi_asic as multi_asic_util from utilities_common import constants -from utilities_common.cli import json_serial, UserCache +from utilities_common.cli import UserCache PStats = namedtuple("PStats", "pfc0, pfc1, pfc2, pfc3, pfc4, pfc5, pfc6, pfc7") @@ -101,7 +101,7 @@ class Pfcstat(object): fields[pos] = STATUS_NA else: fields[pos] = str(int(counter_data)) - cntr = PStats._make(fields)._asdict() + cntr = PStats._make(fields) return cntr # Get the info from database @@ -144,14 +144,14 @@ class Pfcstat(object): if key == 'time': continue table.append((key, - format_number_with_comma(data['pfc0']), - format_number_with_comma(data['pfc1']), - format_number_with_comma(data['pfc2']), - format_number_with_comma(data['pfc3']), - format_number_with_comma(data['pfc4']), - format_number_with_comma(data['pfc5']), - format_number_with_comma(data['pfc6']), - format_number_with_comma(data['pfc7']))) + format_number_with_comma(data.pfc0), + format_number_with_comma(data.pfc1), + format_number_with_comma(data.pfc2), + format_number_with_comma(data.pfc3), + format_number_with_comma(data.pfc4), + format_number_with_comma(data.pfc5), + format_number_with_comma(data.pfc6), + format_number_with_comma(data.pfc7))) if rx: print(tabulate(table, header_Rx, tablefmt='simple', stralign='right')) @@ -173,24 +173,24 @@ class Pfcstat(object): if old_cntr is not None: table.append((key, - ns_diff(cntr['pfc0'], old_cntr['pfc0']), - ns_diff(cntr['pfc1'], old_cntr['pfc1']), - ns_diff(cntr['pfc2'], old_cntr['pfc2']), - ns_diff(cntr['pfc3'], old_cntr['pfc3']), - ns_diff(cntr['pfc4'], old_cntr['pfc4']), - ns_diff(cntr['pfc5'], old_cntr['pfc5']), - ns_diff(cntr['pfc6'], old_cntr['pfc6']), - ns_diff(cntr['pfc7'], old_cntr['pfc7']))) + ns_diff(cntr.pfc0, old_cntr.pfc0), + ns_diff(cntr.pfc1, old_cntr.pfc1), + ns_diff(cntr.pfc2, old_cntr.pfc2), + ns_diff(cntr.pfc3, old_cntr.pfc3), + ns_diff(cntr.pfc4, old_cntr.pfc4), + ns_diff(cntr.pfc5, old_cntr.pfc5), + ns_diff(cntr.pfc6, old_cntr.pfc6), + ns_diff(cntr.pfc7, old_cntr.pfc7))) else: table.append((key, - format_number_with_comma(cntr['pfc0']), - format_number_with_comma(cntr['pfc1']), - format_number_with_comma(cntr['pfc2']), - format_number_with_comma(cntr['pfc3']), - format_number_with_comma(cntr['pfc4']), - format_number_with_comma(cntr['pfc5']), - format_number_with_comma(cntr['pfc6']), - format_number_with_comma(cntr['pfc7']))) + format_number_with_comma(cntr.pfc0), + format_number_with_comma(cntr.pfc1), + format_number_with_comma(cntr.pfc2), + format_number_with_comma(cntr.pfc3), + format_number_with_comma(cntr.pfc4), + format_number_with_comma(cntr.pfc5), + format_number_with_comma(cntr.pfc6), + format_number_with_comma(cntr.pfc7))) if rx: print(tabulate(table, header_Rx, tablefmt='simple', stralign='right')) @@ -256,8 +256,8 @@ Examples: if save_fresh_stats: try: - json.dump(cnstat_dict_rx, open(cnstat_fqn_file_rx, 'w'), default=json_serial) - json.dump(cnstat_dict_tx, open(cnstat_fqn_file_tx, 'w'), default=json_serial) + pickle.dump(cnstat_dict_rx, open(cnstat_fqn_file_rx, 'wb')) + pickle.dump(cnstat_dict_tx, open(cnstat_fqn_file_tx, 'wb')) except IOError as e: print(e.errno, e) sys.exit(e.errno) @@ -271,7 +271,7 @@ Examples: """ if 
os.path.isfile(cnstat_fqn_file_rx): try: - cnstat_cached_dict = json.load(open(cnstat_fqn_file_rx, 'r')) + cnstat_cached_dict = pickle.load(open(cnstat_fqn_file_rx, 'rb')) print("Last cached time was " + str(cnstat_cached_dict.get('time'))) pfcstat.cnstat_diff_print(cnstat_dict_rx, cnstat_cached_dict, True) except IOError as e: @@ -286,7 +286,7 @@ Examples: """ if os.path.isfile(cnstat_fqn_file_tx): try: - cnstat_cached_dict = json.load(open(cnstat_fqn_file_tx, 'r')) + cnstat_cached_dict = pickle.load(open(cnstat_fqn_file_tx, 'rb')) print("Last cached time was " + str(cnstat_cached_dict.get('time'))) pfcstat.cnstat_diff_print(cnstat_dict_tx, cnstat_cached_dict, False) except IOError as e: diff --git a/scripts/pg-drop b/scripts/pg-drop index 7741593081..40b4e863d3 100755 --- a/scripts/pg-drop +++ b/scripts/pg-drop @@ -5,7 +5,7 @@ # pg-drop is a tool for show/clear ingress pg dropped packet stats. # ##################################################################### -import json +import _pickle as pickle import argparse import os import sys @@ -144,7 +144,7 @@ class PgDropStat(object): port_drop_ckpt = {} # Grab the latest clear checkpoint, if it exists if os.path.isfile(self.port_drop_stats_file): - port_drop_ckpt = json.load(open(self.port_drop_stats_file, 'r')) + port_drop_ckpt = pickle.load(open(self.port_drop_stats_file, 'rb')) # Header list contains the port name followed by the PGs. Fields is used to populate the pg values fields = ["0"]* (len(self.header_list) - 1) @@ -216,10 +216,10 @@ class PgDropStat(object): counter_pg_drop_array = [ "SAI_INGRESS_PRIORITY_GROUP_STAT_DROPPED_PACKETS"] try: - json.dump(self.get_counts_table( + pickle.dump(self.get_counts_table( counter_pg_drop_array, COUNTERS_PG_NAME_MAP), - open(self.port_drop_stats_file, 'w+')) + open(self.port_drop_stats_file, 'wb+')) except IOError as e: print(e) sys.exit(e.errno) diff --git a/scripts/portstat b/scripts/portstat index 09ad88b08d..399733f69c 100755 --- a/scripts/portstat +++ b/scripts/portstat @@ -6,7 +6,7 @@ # ##################################################################### -import json +import _pickle as pickle import argparse import datetime import os.path @@ -40,7 +40,7 @@ from utilities_common.intf_filter import parse_interface_in_filter import utilities_common.multi_asic as multi_asic_util from utilities_common.netstat import ns_diff, table_as_json, format_brate, format_prate, format_util, format_number_with_comma -from utilities_common.cli import json_serial, UserCache +from utilities_common.cli import UserCache """ The order and count of statistics mentioned below needs to be in sync with the values in portstat script @@ -181,7 +181,7 @@ class Portstat(object): elif fields[pos] != STATUS_NA: fields[pos] = str(int(fields[pos]) + int(fvs[counter_name])) - cntr = NStats._make(fields)._asdict() + cntr = NStats._make(fields) return cntr def get_rates(table_id): @@ -278,68 +278,69 @@ class Portstat(object): if print_all: header = header_all table.append((key, self.get_port_state(key), - format_number_with_comma(data['rx_ok']), + format_number_with_comma(data.rx_ok), format_brate(rates.rx_bps), format_prate(rates.rx_pps), format_util(rates.rx_bps, port_speed), - format_number_with_comma(data['rx_err']), - format_number_with_comma(data['rx_drop']), - format_number_with_comma(data['rx_ovr']), - format_number_with_comma(data['tx_ok']), + format_number_with_comma(data.rx_err), + format_number_with_comma(data.rx_drop), + format_number_with_comma(data.rx_ovr), + format_number_with_comma(data.tx_ok), 
format_brate(rates.tx_bps), format_prate(rates.tx_pps), format_util(rates.tx_bps, port_speed), - format_number_with_comma(data['tx_err']), - format_number_with_comma(data['tx_drop']), - format_number_with_comma(data['tx_ovr']))) + format_number_with_comma(data.tx_err), + format_number_with_comma(data.tx_drop), + format_number_with_comma(data.tx_ovr))) elif errors_only: header = header_errors_only table.append((key, self.get_port_state(key), - format_number_with_comma(data['rx_err']), - format_number_with_comma(data['rx_drop']), - format_number_with_comma(data['rx_ovr']), - format_number_with_comma(data['tx_err']), - format_number_with_comma(data['tx_drop']), - format_number_with_comma(data['tx_ovr']))) + format_number_with_comma(data.rx_err), + format_number_with_comma(data.rx_drop), + format_number_with_comma(data.rx_ovr), + format_number_with_comma(data.tx_err), + format_number_with_comma(data.tx_drop), + format_number_with_comma(data.tx_ovr))) elif fec_stats_only: header = header_fec_only table.append((key, self.get_port_state(key), - format_number_with_comma(data['fec_corr']), - format_number_with_comma(data['fec_uncorr']), - format_number_with_comma(data['fec_symbol_err']))) + format_number_with_comma(data.fec_corr), + format_number_with_comma(data.fec_uncorr), + format_number_with_comma(data.fec_symbol_err))) elif rates_only: header = header_rates_only table.append((key, self.get_port_state(key), - format_number_with_comma(data['rx_ok']), + format_number_with_comma(data.rx_ok), format_brate(rates.rx_bps), format_prate(rates.rx_pps), format_util(rates.rx_bps, port_speed), - format_number_with_comma(data['tx_ok']), + format_number_with_comma(data.tx_ok), format_brate(rates.tx_bps), format_prate(rates.tx_pps), format_util(rates.tx_bps, port_speed))) else: header = header_std table.append((key, self.get_port_state(key), - format_number_with_comma(data['rx_ok']), + format_number_with_comma(data.rx_ok), format_brate(rates.rx_bps), format_util(rates.rx_bps, port_speed), - format_number_with_comma(data['rx_err']), - format_number_with_comma(data['rx_drop']), - format_number_with_comma(data['rx_ovr']), - format_number_with_comma(data['tx_ok']), + format_number_with_comma(data.rx_err), + format_number_with_comma(data.rx_drop), + format_number_with_comma(data.rx_ovr), + format_number_with_comma(data.tx_ok), format_brate(rates.tx_bps), format_util(rates.tx_bps, port_speed), - format_number_with_comma(data['tx_err']), - format_number_with_comma(data['tx_drop']), - format_number_with_comma(data['tx_ovr']))) - if use_json: - print(table_as_json(table, header)) - else: - print(tabulate(table, header, tablefmt='simple', stralign='right')) - if multi_asic.is_multi_asic() or device_info.is_chassis(): - print("\nReminder: Please execute 'show interface counters -d all' to include internal links\n") + format_number_with_comma(data.tx_err), + format_number_with_comma(data.tx_drop), + format_number_with_comma(data.tx_ovr))) + if table: + if use_json: + print(table_as_json(table, header)) + else: + print(tabulate(table, header, tablefmt='simple', stralign='right')) + if (multi_asic.is_multi_asic() or device_info.is_chassis()) and not use_json: + print("\nReminder: Please execute 'show interface counters -d all' to include internal links\n") def cnstat_intf_diff_print(self, cnstat_new_dict, cnstat_old_dict, intf_list): """ @@ -353,51 +354,51 @@ class Portstat(object): if key in cnstat_old_dict: old_cntr = cnstat_old_dict.get(key) else: - old_cntr = 
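One detail in the reminder guard above: 'and' binds tighter than 'or' in Python, so the multi-asic/chassis test needs explicit parentheses to be evaluated before the 'and not use_json' suppression. A quick truth-table check (illustrative):

    # Without parentheses the condition would read A or (B and not use_json),
    # printing the reminder on a multi-asic device even with --json.
    multi_asic_dev, chassis, use_json = True, False, True
    assert not ((multi_asic_dev or chassis) and not use_json)  # suppressed under json
    assert multi_asic_dev or (chassis and not use_json)        # the unintended reading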
NStats._make([0] * BUCKET_NUM) if intf_list and key not in intf_list: continue - print("Packets Received 64 Octets..................... {}".format(ns_diff(cntr['rx_64'], old_cntr['rx_64']))) - print("Packets Received 65-127 Octets................. {}".format(ns_diff(cntr['rx_65_127'], old_cntr['rx_65_127']))) - print("Packets Received 128-255 Octets................ {}".format(ns_diff(cntr['rx_128_255'], old_cntr['rx_128_255']))) - print("Packets Received 256-511 Octets................ {}".format(ns_diff(cntr['rx_256_511'], old_cntr['rx_256_511']))) - print("Packets Received 512-1023 Octets............... {}".format(ns_diff(cntr['rx_512_1023'], old_cntr['rx_512_1023']))) - print("Packets Received 1024-1518 Octets.............. {}".format(ns_diff(cntr['rx_1024_1518'], old_cntr['rx_1024_1518']))) - print("Packets Received 1519-2047 Octets.............. {}".format(ns_diff(cntr['rx_1519_2047'], old_cntr['rx_1519_2047']))) - print("Packets Received 2048-4095 Octets.............. {}".format(ns_diff(cntr['rx_2048_4095'], old_cntr['rx_2048_4095']))) - print("Packets Received 4096-9216 Octets.............. {}".format(ns_diff(cntr['rx_4096_9216'], old_cntr['rx_4096_9216']))) - print("Packets Received 9217-16383 Octets............. {}".format(ns_diff(cntr['rx_9217_16383'], old_cntr['rx_9217_16383']))) + print("Packets Received 64 Octets..................... {}".format(ns_diff(cntr.rx_64, old_cntr.rx_64))) + print("Packets Received 65-127 Octets................. {}".format(ns_diff(cntr.rx_65_127, old_cntr.rx_65_127))) + print("Packets Received 128-255 Octets................ {}".format(ns_diff(cntr.rx_128_255, old_cntr.rx_128_255))) + print("Packets Received 256-511 Octets................ {}".format(ns_diff(cntr.rx_256_511, old_cntr.rx_256_511))) + print("Packets Received 512-1023 Octets............... {}".format(ns_diff(cntr.rx_512_1023, old_cntr.rx_512_1023))) + print("Packets Received 1024-1518 Octets.............. {}".format(ns_diff(cntr.rx_1024_1518, old_cntr.rx_1024_1518))) + print("Packets Received 1519-2047 Octets.............. {}".format(ns_diff(cntr.rx_1519_2047, old_cntr.rx_1519_2047))) + print("Packets Received 2048-4095 Octets.............. {}".format(ns_diff(cntr.rx_2048_4095, old_cntr.rx_2048_4095))) + print("Packets Received 4096-9216 Octets.............. {}".format(ns_diff(cntr.rx_4096_9216, old_cntr.rx_4096_9216))) + print("Packets Received 9217-16383 Octets............. {}".format(ns_diff(cntr.rx_9217_16383, old_cntr.rx_9217_16383))) print("") - print("Total Packets Received Without Errors.......... {}".format(ns_diff(cntr['rx_all'], old_cntr['rx_all']))) - print("Unicast Packets Received....................... {}".format(ns_diff(cntr['rx_uca'], old_cntr['rx_uca']))) - print("Multicast Packets Received..................... {}".format(ns_diff(cntr['rx_mca'], old_cntr['rx_mca']))) - print("Broadcast Packets Received..................... {}".format(ns_diff(cntr['rx_bca'], old_cntr['rx_bca']))) + print("Total Packets Received Without Errors.......... {}".format(ns_diff(cntr.rx_all, old_cntr.rx_all))) + print("Unicast Packets Received....................... {}".format(ns_diff(cntr.rx_uca, old_cntr.rx_uca))) + print("Multicast Packets Received..................... {}".format(ns_diff(cntr.rx_mca, old_cntr.rx_mca))) + print("Broadcast Packets Received..................... {}".format(ns_diff(cntr.rx_bca, old_cntr.rx_bca))) print("") - print("Jabbers Received............................... 
{}".format(ns_diff(cntr['rx_jbr'], old_cntr['rx_jbr']))) - print("Fragments Received............................. {}".format(ns_diff(cntr['rx_frag'], old_cntr['rx_frag']))) - print("Undersize Received............................. {}".format(ns_diff(cntr['rx_usize'], old_cntr['rx_usize']))) - print("Overruns Received.............................. {}".format(ns_diff(cntr['rx_ovrrun'], old_cntr['rx_ovrrun']))) + print("Jabbers Received............................... {}".format(ns_diff(cntr.rx_jbr, old_cntr.rx_jbr))) + print("Fragments Received............................. {}".format(ns_diff(cntr.rx_frag, old_cntr.rx_frag))) + print("Undersize Received............................. {}".format(ns_diff(cntr.rx_usize, old_cntr.rx_usize))) + print("Overruns Received.............................. {}".format(ns_diff(cntr.rx_ovrrun, old_cntr.rx_ovrrun))) print("") - print("Packets Transmitted 64 Octets.................. {}".format(ns_diff(cntr['tx_64'], old_cntr['tx_64']))) - print("Packets Transmitted 65-127 Octets.............. {}".format(ns_diff(cntr['tx_65_127'], old_cntr['tx_65_127']))) - print("Packets Transmitted 128-255 Octets............. {}".format(ns_diff(cntr['tx_128_255'], old_cntr['tx_128_255']))) - print("Packets Transmitted 256-511 Octets............. {}".format(ns_diff(cntr['tx_256_511'], old_cntr['tx_256_511']))) - print("Packets Transmitted 512-1023 Octets............ {}".format(ns_diff(cntr['tx_512_1023'], old_cntr['tx_512_1023']))) - print("Packets Transmitted 1024-1518 Octets........... {}".format(ns_diff(cntr['tx_1024_1518'], old_cntr['tx_1024_1518']))) - print("Packets Transmitted 1519-2047 Octets........... {}".format(ns_diff(cntr['tx_1519_2047'], old_cntr['tx_1519_2047']))) - print("Packets Transmitted 2048-4095 Octets........... {}".format(ns_diff(cntr['tx_2048_4095'], old_cntr['tx_2048_4095']))) - print("Packets Transmitted 4096-9216 Octets........... {}".format(ns_diff(cntr['tx_4096_9216'], old_cntr['tx_4096_9216']))) - print("Packets Transmitted 9217-16383 Octets.......... {}".format(ns_diff(cntr['tx_9217_16383'], old_cntr['tx_9217_16383']))) + print("Packets Transmitted 64 Octets.................. {}".format(ns_diff(cntr.tx_64, old_cntr.tx_64))) + print("Packets Transmitted 65-127 Octets.............. {}".format(ns_diff(cntr.tx_65_127, old_cntr.tx_65_127))) + print("Packets Transmitted 128-255 Octets............. {}".format(ns_diff(cntr.tx_128_255, old_cntr.tx_128_255))) + print("Packets Transmitted 256-511 Octets............. {}".format(ns_diff(cntr.tx_256_511, old_cntr.tx_256_511))) + print("Packets Transmitted 512-1023 Octets............ {}".format(ns_diff(cntr.tx_512_1023, old_cntr.tx_512_1023))) + print("Packets Transmitted 1024-1518 Octets........... {}".format(ns_diff(cntr.tx_1024_1518, old_cntr.tx_1024_1518))) + print("Packets Transmitted 1519-2047 Octets........... {}".format(ns_diff(cntr.tx_1519_2047, old_cntr.tx_1519_2047))) + print("Packets Transmitted 2048-4095 Octets........... {}".format(ns_diff(cntr.tx_2048_4095, old_cntr.tx_2048_4095))) + print("Packets Transmitted 4096-9216 Octets........... {}".format(ns_diff(cntr.tx_4096_9216, old_cntr.tx_4096_9216))) + print("Packets Transmitted 9217-16383 Octets.......... {}".format(ns_diff(cntr.tx_9217_16383, old_cntr.tx_9217_16383))) print("") - print("Total Packets Transmitted Successfully......... {}".format(ns_diff(cntr['tx_all'], old_cntr['tx_all']))) - print("Unicast Packets Transmitted.................... 
{}".format(ns_diff(cntr['tx_uca'], old_cntr['tx_uca']))) - print("Multicast Packets Transmitted.................. {}".format(ns_diff(cntr['tx_mca'], old_cntr['tx_mca']))) - print("Broadcast Packets Transmitted.................. {}".format(ns_diff(cntr['tx_bca'], old_cntr['tx_bca']))) + print("Total Packets Transmitted Successfully......... {}".format(ns_diff(cntr.tx_all, old_cntr.tx_all))) + print("Unicast Packets Transmitted.................... {}".format(ns_diff(cntr.tx_uca, old_cntr.tx_uca))) + print("Multicast Packets Transmitted.................. {}".format(ns_diff(cntr.tx_mca, old_cntr.tx_mca))) + print("Broadcast Packets Transmitted.................. {}".format(ns_diff(cntr.tx_bca, old_cntr.tx_bca))) print("Time Since Counters Last Cleared............... " + str(cnstat_old_dict.get('time'))) @@ -434,88 +435,88 @@ class Portstat(object): header = header_all if old_cntr is not None: table.append((key, self.get_port_state(key), - ns_diff(cntr['rx_ok'], old_cntr['rx_ok']), + ns_diff(cntr.rx_ok, old_cntr.rx_ok), format_brate(rates.rx_bps), format_prate(rates.rx_pps), format_util(rates.rx_bps, port_speed), - ns_diff(cntr['rx_err'], old_cntr['rx_err']), - ns_diff(cntr['rx_drop'], old_cntr['rx_drop']), - ns_diff(cntr['rx_ovr'], old_cntr['rx_ovr']), - ns_diff(cntr['tx_ok'], old_cntr['tx_ok']), + ns_diff(cntr.rx_err, old_cntr.rx_err), + ns_diff(cntr.rx_drop, old_cntr.rx_drop), + ns_diff(cntr.rx_ovr, old_cntr.rx_ovr), + ns_diff(cntr.tx_ok, old_cntr.tx_ok), format_brate(rates.tx_bps), format_prate(rates.tx_pps), format_util(rates.tx_bps, port_speed), - ns_diff(cntr['tx_err'], old_cntr['tx_err']), - ns_diff(cntr['tx_drop'], old_cntr['tx_drop']), - ns_diff(cntr['tx_ovr'], old_cntr['tx_ovr']))) + ns_diff(cntr.tx_err, old_cntr.tx_err), + ns_diff(cntr.tx_drop, old_cntr.tx_drop), + ns_diff(cntr.tx_ovr, old_cntr.tx_ovr))) else: table.append((key, self.get_port_state(key), - format_number_with_comma(cntr['rx_ok']), + format_number_with_comma(cntr.rx_ok), format_brate(rates.rx_bps), format_prate(rates.rx_pps), format_util(rates.rx_bps, port_speed), - format_number_with_comma(cntr['rx_err']), - format_number_with_comma(cntr['rx_drop']), - format_number_with_comma(cntr['rx_ovr']), - format_number_with_comma(cntr['tx_ok']), + format_number_with_comma(cntr.rx_err), + format_number_with_comma(cntr.rx_drop), + format_number_with_comma(cntr.rx_ovr), + format_number_with_comma(cntr.tx_ok), format_brate(rates.tx_bps), format_prate(rates.tx_pps), format_util(rates.tx_bps, port_speed), - format_number_with_comma(cntr['tx_err']), - format_number_with_comma(cntr['tx_drop']), - format_number_with_comma(cntr['tx_ovr']))) + format_number_with_comma(cntr.tx_err), + format_number_with_comma(cntr.tx_drop), + format_number_with_comma(cntr.tx_ovr))) elif errors_only: header = header_errors_only if old_cntr is not None: table.append((key, self.get_port_state(key), - ns_diff(cntr['rx_err'], old_cntr['rx_err']), - ns_diff(cntr['rx_drop'], old_cntr['rx_drop']), - ns_diff(cntr['rx_ovr'], old_cntr['rx_ovr']), - ns_diff(cntr['tx_err'], old_cntr['tx_err']), - ns_diff(cntr['tx_drop'], old_cntr['tx_drop']), - ns_diff(cntr['tx_ovr'], old_cntr['tx_ovr']))) + ns_diff(cntr.rx_err, old_cntr.rx_err), + ns_diff(cntr.rx_drop, old_cntr.rx_drop), + ns_diff(cntr.rx_ovr, old_cntr.rx_ovr), + ns_diff(cntr.tx_err, old_cntr.tx_err), + ns_diff(cntr.tx_drop, old_cntr.tx_drop), + ns_diff(cntr.tx_ovr, old_cntr.tx_ovr))) else: table.append((key, self.get_port_state(key), - format_number_with_comma(cntr['rx_err']), - 
format_number_with_comma(cntr['rx_drop']), - format_number_with_comma(cntr['rx_ovr']), - format_number_with_comma(cntr['tx_err']), - format_number_with_comma(cntr['tx_drop']), - format_number_with_comma(cntr['tx_ovr']))) + format_number_with_comma(cntr.rx_err), + format_number_with_comma(cntr.rx_drop), + format_number_with_comma(cntr.rx_ovr), + format_number_with_comma(cntr.tx_err), + format_number_with_comma(cntr.tx_drop), + format_number_with_comma(cntr.tx_ovr))) elif fec_stats_only: header = header_fec_only if old_cntr is not None: table.append((key, self.get_port_state(key), - ns_diff(cntr['fec_corr'], old_cntr['fec_corr']), - ns_diff(cntr['fec_uncorr'], old_cntr['fec_uncorr']), - ns_diff(cntr['fec_symbol_err'], old_cntr['fec_symbol_err']))) + ns_diff(cntr.fec_corr, old_cntr.fec_corr), + ns_diff(cntr.fec_uncorr, old_cntr.fec_uncorr), + ns_diff(cntr.fec_symbol_err, old_cntr.fec_symbol_err))) else: table.append((key, self.get_port_state(key), - format_number_with_comma(cntr['fec_corr']), - format_number_with_comma(cntr['fec_uncorr']), - format_number_with_comma(cntr['fec_symbol_err']))) + format_number_with_comma(cntr.fec_corr), + format_number_with_comma(cntr.fec_uncorr), + format_number_with_comma(cntr.fec_symbol_err))) elif rates_only: header = header_rates_only if old_cntr is not None: table.append((key, self.get_port_state(key), - ns_diff(cntr['rx_ok'], old_cntr['rx_ok']), + ns_diff(cntr.rx_ok, old_cntr.rx_ok), format_brate(rates.rx_bps), format_prate(rates.rx_pps), format_util(rates.rx_bps, port_speed), - ns_diff(cntr['tx_ok'], old_cntr['tx_ok']), + ns_diff(cntr.tx_ok, old_cntr.tx_ok), format_brate(rates.tx_bps), format_prate(rates.tx_pps), format_util(rates.tx_bps, port_speed))) else: table.append((key, self.get_port_state(key), - format_number_with_comma(cntr['rx_ok']), + format_number_with_comma(cntr.rx_ok), format_brate(rates.rx_bps), format_prate(rates.rx_pps), format_util(rates.rx_bps, port_speed), - format_number_with_comma(cntr['tx_ok']), + format_number_with_comma(cntr.tx_ok), format_brate(rates.tx_bps), format_prate(rates.tx_pps), format_util(rates.tx_bps, port_speed))) @@ -524,40 +525,40 @@ class Portstat(object): if old_cntr is not None: table.append((key, self.get_port_state(key), - ns_diff(cntr['rx_ok'], old_cntr['rx_ok']), + ns_diff(cntr.rx_ok, old_cntr.rx_ok), format_brate(rates.rx_bps), format_util(rates.rx_bps, port_speed), - ns_diff(cntr['rx_err'], old_cntr['rx_err']), - ns_diff(cntr['rx_drop'], old_cntr['rx_drop']), - ns_diff(cntr['rx_ovr'], old_cntr['rx_ovr']), - ns_diff(cntr['tx_ok'], old_cntr['tx_ok']), + ns_diff(cntr.rx_err, old_cntr.rx_err), + ns_diff(cntr.rx_drop, old_cntr.rx_drop), + ns_diff(cntr.rx_ovr, old_cntr.rx_ovr), + ns_diff(cntr.tx_ok, old_cntr.tx_ok), format_brate(rates.tx_bps), format_util(rates.tx_bps, port_speed), - ns_diff(cntr['tx_err'], old_cntr['tx_err']), - ns_diff(cntr['tx_drop'], old_cntr['tx_drop']), - ns_diff(cntr['tx_ovr'], old_cntr['tx_ovr']))) + ns_diff(cntr.tx_err, old_cntr.tx_err), + ns_diff(cntr.tx_drop, old_cntr.tx_drop), + ns_diff(cntr.tx_ovr, old_cntr.tx_ovr))) else: table.append((key, self.get_port_state(key), - format_number_with_comma(cntr['rx_ok']), + format_number_with_comma(cntr.rx_ok), format_brate(rates.rx_bps), format_util(rates.rx_bps, port_speed), - format_number_with_comma(cntr['rx_err']), - format_number_with_comma(cntr['rx_drop']), - format_number_with_comma(cntr['rx_ovr']), - format_number_with_comma(cntr['tx_ok']), + format_number_with_comma(cntr.rx_err), + format_number_with_comma(cntr.rx_drop), + 
format_number_with_comma(cntr.rx_ovr), + format_number_with_comma(cntr.tx_ok), format_brate(rates.tx_bps), format_util(rates.tx_bps, port_speed), - format_number_with_comma(cntr['tx_err']), - format_number_with_comma(cntr['tx_drop']), - format_number_with_comma(cntr['tx_ovr']))) - - if use_json: - print(table_as_json(table, header)) - else: - print(tabulate(table, header, tablefmt='simple', stralign='right')) - if multi_asic.is_multi_asic() or device_info.is_chassis(): - print("\nReminder: Please execute 'show interface counters -d all' to include internal links\n") + format_number_with_comma(cntr.tx_err), + format_number_with_comma(cntr.tx_drop), + format_number_with_comma(cntr.tx_ovr))) + if table: + if use_json: + print(table_as_json(table, header)) + else: + print(tabulate(table, header, tablefmt='simple', stralign='right')) + if (multi_asic.is_multi_asic() or device_info.is_chassis()) and not use_json: + print("\nReminder: Please execute 'show interface counters -d all' to include internal links\n") def main(): parser = argparse.ArgumentParser(description='Display the ports state and counters', @@ -641,7 +642,7 @@ Examples: if save_fresh_stats: try: - json.dump(cnstat_dict, open(cnstat_fqn_file, 'w'), default=json_serial) + pickle.dump(cnstat_dict, open(cnstat_fqn_file, 'wb')) except IOError as e: sys.exit(e.errno) else: @@ -652,7 +653,7 @@ Examples: cnstat_cached_dict = OrderedDict() if os.path.isfile(cnstat_fqn_file): try: - cnstat_cached_dict = json.load(open(cnstat_fqn_file, 'r')) + cnstat_cached_dict = pickle.load(open(cnstat_fqn_file, 'rb')) if not detail: print("Last cached time was " + str(cnstat_cached_dict.get('time'))) portstat.cnstat_diff_print(cnstat_dict, cnstat_cached_dict, ratestat_dict, intf_list, use_json, print_all, errors_only, fec_stats_only, rates_only, detail) diff --git a/scripts/queuestat b/scripts/queuestat index d82e7e4a6a..96a24b51a3 100755 --- a/scripts/queuestat +++ b/scripts/queuestat @@ -6,7 +6,7 @@ # ##################################################################### -import json +import _pickle as pickle import argparse import datetime import os.path @@ -33,7 +33,7 @@ except KeyError: pass from swsscommon.swsscommon import SonicV2Connector -from utilities_common.cli import json_serial, UserCache +from utilities_common.cli import UserCache from utilities_common import constants import utilities_common.multi_asic as multi_asic_util @@ -186,7 +186,7 @@ class Queuestat(object): fields[pos] = STATUS_NA elif fields[pos] != STATUS_NA: fields[pos] = str(int(counter_data)) - cntr = QueueStats._make(fields)._asdict() + cntr = QueueStats._make(fields) return cntr # Build a dictionary of the stats @@ -211,9 +211,9 @@ class Queuestat(object): if json_opt: json_output[port][key] = data continue - table.append((port, data['queuetype'] + str(data['queueindex']), - data['totalpacket'], data['totalbytes'], - data['droppacket'], data['dropbytes'])) + table.append((port, data.queuetype + str(data.queueindex), + data.totalpacket, data.totalbytes, + data.droppacket, data.dropbytes)) if json_opt: json_output[port].update(build_json(port, table)) @@ -241,15 +241,15 @@ class Queuestat(object): old_cntr = cnstat_old_dict.get(key) if old_cntr is not None: - table.append((port, cntr['queuetype'] + str(cntr['queueindex']), - ns_diff(cntr['totalpacket'], old_cntr['totalpacket']), - ns_diff(cntr['totalbytes'], old_cntr['totalbytes']), - ns_diff(cntr['droppacket'], old_cntr['droppacket']), - ns_diff(cntr['dropbytes'], old_cntr['dropbytes']))) + table.append((port, cntr.queuetype + 
str(cntr.queueindex), + ns_diff(cntr.totalpacket, old_cntr.totalpacket), + ns_diff(cntr.totalbytes, old_cntr.totalbytes), + ns_diff(cntr.droppacket, old_cntr.droppacket), + ns_diff(cntr.dropbytes, old_cntr.dropbytes))) else: - table.append((port, cntr['queuetype'] + str(cntr['queueindex']), - cntr['totalpacket'], cntr['totalbytes'], - cntr['droppacket'], cntr['dropbytes'])) + table.append((port, cntr.queuetype + str(cntr.queueindex), + cntr.totalpacket, cntr.totalbytes, + cntr.droppacket, cntr.dropbytes)) if json_opt: json_output[port].update(build_json(port, table)) @@ -273,7 +273,7 @@ class Queuestat(object): cnstat_fqn_file_name = cnstat_fqn_file + port if os.path.isfile(cnstat_fqn_file_name): try: - cnstat_cached_dict = json.load(open(cnstat_fqn_file_name, 'r')) + cnstat_cached_dict = pickle.load(open(cnstat_fqn_file_name, 'rb')) if json_opt: json_output[port].update({"cached_time":cnstat_cached_dict.get('time')}) json_output.update(self.cnstat_diff_print(port, cnstat_dict, cnstat_cached_dict, json_opt)) @@ -307,7 +307,7 @@ class Queuestat(object): json_output[port] = {} if os.path.isfile(cnstat_fqn_file_name): try: - cnstat_cached_dict = json.load(open(cnstat_fqn_file_name, 'r')) + cnstat_cached_dict = pickle.load(open(cnstat_fqn_file_name, 'rb')) if json_opt: json_output[port].update({"cached_time":cnstat_cached_dict.get('time')}) json_output.update(self.cnstat_diff_print(port, cnstat_dict, cnstat_cached_dict, json_opt)) @@ -330,7 +330,7 @@ class Queuestat(object): for port in natsorted(self.counter_port_name_map): cnstat_dict = self.get_cnstat(self.port_queues_map[port]) try: - json.dump(cnstat_dict, open(cnstat_fqn_file + port, 'w'), default=json_serial) + pickle.dump(cnstat_dict, open(cnstat_fqn_file + port, 'wb')) except IOError as e: print(e.errno, e) sys.exit(e.errno) diff --git a/scripts/route_check.py b/scripts/route_check.py index 4db3f399a2..c832b2c6ea 100755 --- a/scripts/route_check.py +++ b/scripts/route_check.py @@ -47,6 +47,7 @@ import traceback import subprocess +from ipaddress import ip_network from swsscommon import swsscommon from utilities_common import chassis @@ -145,7 +146,7 @@ def add_prefix(ip): ip = ip + PREFIX_SEPARATOR + "32" else: ip = ip + PREFIX_SEPARATOR + "128" - return ip + return str(ip_network(ip)) def add_prefix_ifnot(ip): @@ -154,7 +155,7 @@ def add_prefix_ifnot(ip): :param ip: IP to add prefix as string. 
:return ip with prefix """ - return ip if ip.find(PREFIX_SEPARATOR) != -1 else add_prefix(ip) + return str(ip_network(ip)) if ip.find(PREFIX_SEPARATOR) != -1 else add_prefix(ip) def is_local(ip): diff --git a/scripts/tunnelstat b/scripts/tunnelstat index 3d7423e86b..8b045ec684 100755 --- a/scripts/tunnelstat +++ b/scripts/tunnelstat @@ -6,7 +6,7 @@ # ##################################################################### -import json +import _pickle as pickle import argparse import datetime import sys @@ -29,7 +29,7 @@ from collections import namedtuple, OrderedDict from natsort import natsorted from tabulate import tabulate from utilities_common.netstat import ns_diff, table_as_json, STATUS_NA, format_prate -from utilities_common.cli import json_serial, UserCache +from utilities_common.cli import UserCache from swsscommon.swsscommon import SonicV2Connector @@ -80,7 +80,7 @@ class Tunnelstat(object): counter_data = self.db.get(self.db.COUNTERS_DB, full_table_id, counter_name) if counter_data: fields[pos] = str(counter_data) - cntr = NStats._make(fields)._asdict() + cntr = NStats._make(fields) return cntr def get_rates(table_id): @@ -149,8 +149,8 @@ class Tunnelstat(object): continue rates = ratestat_dict.get(key, RateStats._make([STATUS_NA] * len(rates_key_list))) - table.append((key, data['rx_p_ok'], data['rx_b_ok'], format_prate(rates.rx_pps), - data['tx_p_ok'], data['tx_b_ok'], format_prate(rates.tx_pps))) + table.append((key, data.rx_p_ok, data.rx_b_ok, format_prate(rates.rx_pps), + data.tx_p_ok, data.tx_b_ok, format_prate(rates.tx_pps))) if use_json: print(table_as_json(table, header)) @@ -175,19 +175,19 @@ class Tunnelstat(object): rates = ratestat_dict.get(key, RateStats._make([STATUS_NA] * len(rates_key_list))) if old_cntr is not None: table.append((key, - ns_diff(cntr['rx_p_ok'], old_cntr['rx_p_ok']), - ns_diff(cntr['rx_b_ok'], old_cntr['rx_b_ok']), + ns_diff(cntr.rx_p_ok, old_cntr.rx_p_ok), + ns_diff(cntr.rx_b_ok, old_cntr.rx_b_ok), format_prate(rates.rx_pps), - ns_diff(cntr['tx_p_ok'], old_cntr['tx_p_ok']), - ns_diff(cntr['tx_b_ok'], old_cntr['tx_b_ok']), + ns_diff(cntr.tx_p_ok, old_cntr.tx_p_ok), + ns_diff(cntr.tx_b_ok, old_cntr.tx_b_ok), format_prate(rates.tx_pps))) else: table.append((key, - cntr['rx_p_ok'], - cntr['rx_b_ok'], + cntr.rx_p_ok, + cntr.rx_b_ok, format_prate(rates.rx_pps), - cntr['tx_p_ok'], - cntr['tx_b_ok'], + cntr.tx_p_ok, + cntr.tx_b_ok, format_prate(rates.tx_pps))) if use_json: print(table_as_json(table, header)) @@ -210,12 +210,12 @@ class Tunnelstat(object): if cnstat_old_dict: old_cntr = cnstat_old_dict.get(tunnel) if old_cntr: - body = body % (ns_diff(cntr['rx_p_ok'], old_cntr['rx_p_ok']), - ns_diff(cntr['rx_b_ok'], old_cntr['rx_b_ok']), - ns_diff(cntr['tx_p_ok'], old_cntr['tx_p_ok']), - ns_diff(cntr['tx_b_ok'], old_cntr['tx_b_ok'])) + body = body % (ns_diff(cntr.rx_p_ok, old_cntr.rx_p_ok), + ns_diff(cntr.rx_b_ok, old_cntr.rx_b_ok), + ns_diff(cntr.tx_p_ok, old_cntr.tx_p_ok), + ns_diff(cntr.tx_b_ok, old_cntr.tx_b_ok)) else: - body = body % (cntr['rx_p_ok'], cntr['rx_b_ok'], cntr['tx_p_ok'], cntr['tx_b_ok']) + body = body % (cntr.rx_p_ok, cntr.rx_b_ok, cntr.tx_p_ok, cntr.tx_b_ok) print(header) print(body) @@ -273,7 +273,7 @@ def main(): if save_fresh_stats: try: - json.dump(cnstat_dict, open(cnstat_fqn_file, 'w'), default=json_serial) + pickle.dump(cnstat_dict, open(cnstat_fqn_file, 'wb')) except IOError as e: sys.exit(e.errno) else: @@ -283,7 +283,7 @@ def main(): if wait_time_in_seconds == 0: if os.path.isfile(cnstat_fqn_file): try: - cnstat_cached_dict = 
json.load(open(cnstat_fqn_file, 'r')) + cnstat_cached_dict = pickle.load(open(cnstat_fqn_file, 'rb')) print("Last cached time was " + str(cnstat_cached_dict.get('time'))) if tunnel_name: tunnelstat.cnstat_single_tunnel(tunnel_name, cnstat_dict, cnstat_cached_dict) diff --git a/setup.py b/setup.py index 70d7473bd7..f071797280 100644 --- a/setup.py +++ b/setup.py @@ -5,9 +5,34 @@ # under scripts/. Consider stop using scripts and use console_scripts instead # # https://stackoverflow.com/questions/18787036/difference-between-entry-points-console-scripts-and-scripts-in-setup-py +from __future__ import print_function +import sys import fastentrypoints from setuptools import setup +import pkg_resources +from packaging import version + +# sonic_dependencies, version requirement only supports '>=' +sonic_dependencies = [ + 'sonic-config-engine', + 'sonic-platform-common', + 'sonic-py-common', + 'sonic-yang-mgmt', +] + +for package in sonic_dependencies: + try: + package_dist = pkg_resources.get_distribution(package.split(">=")[0]) + except pkg_resources.DistributionNotFound: + print(package + " is not found!", file=sys.stderr) + print("Please build and install SONiC python wheels dependencies from sonic-buildimage", file=sys.stderr) + exit(1) + if ">=" in package: + if version.parse(package_dist.version) >= version.parse(package.split(">=")[1]): + continue + print(package + " version does not match!", file=sys.stderr) + exit(1) setup( name='sonic-utilities', @@ -64,7 +89,7 @@ 'sonic_cli_gen', ], package_data={ - 'generic_config_updater': ['generic_config_updater.conf.json'], + 'generic_config_updater': ['gcu_services_validator.conf.json', 'gcu_field_operation_validators.conf.json'], 'show': ['aliases.ini'], 'sonic_installer': ['aliases.ini'], 'tests': ['acl_input/*', @@ -211,16 +236,12 @@ 'prettyprinter>=0.18.0', 'pyroute2>=0.5.14, <0.6.1', 'requests>=2.25.0', - 'sonic-config-engine', - 'sonic-platform-common', - 'sonic-py-common', - 'sonic-yang-mgmt', 'tabulate==0.8.2', 'toposort==1.6', 'www-authenticate==0.9.2', 'xmltodict==0.12.0', 'lazy-object-proxy', - ], + ] + sonic_dependencies, setup_requires= [ 'pytest-runner', 'wheel' diff --git a/sonic-utilities-data/templates/service_mgmt.sh.j2 b/sonic-utilities-data/templates/service_mgmt.sh.j2 index d206049015..5c8f4e4974 100644 --- a/sonic-utilities-data/templates/service_mgmt.sh.j2 +++ b/sonic-utilities-data/templates/service_mgmt.sh.j2 @@ -51,7 +51,8 @@ function check_warm_boot() function check_fast_boot() { - if [[ $($SONIC_DB_CLI STATE_DB GET "FAST_REBOOT|system") == "1" ]]; then + SYSTEM_FAST_REBOOT=`$SONIC_DB_CLI STATE_DB hget "FAST_RESTART_ENABLE_TABLE|system" enable` + if [[ x"${SYSTEM_FAST_REBOOT}" == x"true" ]]; then FAST_BOOT="true" else FAST_BOOT="false" diff --git a/tests/aclshow_test.py b/tests/aclshow_test.py index 90fe46f683..0abe509aad 100644 --- a/tests/aclshow_test.py +++ b/tests/aclshow_test.py @@ -46,6 +46,7 @@ RULE_9 DATAACL 9991 901 900 RULE_10 DATAACL 9989 1001 1000 DEFAULT_RULE DATAACL 1 2 1 +RULE_1 DATAACL_5 9999 N/A N/A RULE_NO_COUNTER DATAACL_NO_COUNTER 9995 N/A N/A RULE_6 EVERFLOW 9994 601 600 RULE_08 EVERFLOW 9992 0 0 @@ -89,8 +90,8 @@ # Expected output for aclshow -r RULE_4,RULE_6 -vv rule4_rule6_verbose_output = '' + \ """Reading ACL info...
-Total number of ACL Tables: 11 -Total number of ACL Rules: 20 +Total number of ACL Tables: 12 +Total number of ACL Rules: 21 RULE NAME TABLE NAME PRIO PACKETS COUNT BYTES COUNT ----------- ------------ ------ --------------- ------------- @@ -136,6 +137,7 @@ RULE_9 DATAACL 9991 0 0 RULE_10 DATAACL 9989 0 0 DEFAULT_RULE DATAACL 1 0 0 +RULE_1 DATAACL_5 9999 N/A N/A RULE_NO_COUNTER DATAACL_NO_COUNTER 9995 N/A N/A RULE_6 EVERFLOW 9994 0 0 RULE_08 EVERFLOW 9992 0 0 @@ -161,6 +163,7 @@ RULE_9 DATAACL 9991 0 0 RULE_10 DATAACL 9989 0 0 DEFAULT_RULE DATAACL 1 0 0 +RULE_1 DATAACL_5 9999 N/A N/A RULE_NO_COUNTER DATAACL_NO_COUNTER 9995 100 100 RULE_6 EVERFLOW 9994 0 0 RULE_08 EVERFLOW 9992 0 0 diff --git a/tests/config_test.py b/tests/config_test.py index 5fa50abd00..cef84e5441 100644 --- a/tests/config_test.py +++ b/tests/config_test.py @@ -8,6 +8,7 @@ import unittest import ipaddress from unittest import mock +from jsonpatch import JsonPatchConflict import click from click.testing import CliRunner @@ -354,49 +355,6 @@ def test_load_minigraph_with_port_config(self, get_cmd_module, setup_single_broa port_config = [{"PORT": {"Ethernet0": {"admin_status": "up"}}}] self.check_port_config(db, config, port_config, "config interface startup Ethernet0") - def test_load_backend_acl(self, get_cmd_module, setup_single_broadcom_asic): - db = Db() - db.cfgdb.set_entry("DEVICE_METADATA", "localhost", {"storage_device": "true"}) - self.check_backend_acl(get_cmd_module, db, device_type='BackEndToRRouter', condition=True) - - def test_load_backend_acl_not_storage(self, get_cmd_module, setup_single_broadcom_asic): - db = Db() - self.check_backend_acl(get_cmd_module, db, device_type='BackEndToRRouter', condition=False) - - def test_load_backend_acl_storage_leaf(self, get_cmd_module, setup_single_broadcom_asic): - db = Db() - db.cfgdb.set_entry("DEVICE_METADATA", "localhost", {"storage_device": "true"}) - self.check_backend_acl(get_cmd_module, db, device_type='BackEndLeafRouter', condition=False) - - def test_load_backend_acl_storage_no_dataacl(self, get_cmd_module, setup_single_broadcom_asic): - db = Db() - db.cfgdb.set_entry("DEVICE_METADATA", "localhost", {"storage_device": "true"}) - db.cfgdb.set_entry("ACL_TABLE", "DATAACL", None) - self.check_backend_acl(get_cmd_module, db, device_type='BackEndToRRouter', condition=False) - - def check_backend_acl(self, get_cmd_module, db, device_type='BackEndToRRouter', condition=True): - def is_file_side_effect(filename): - return True if 'backend_acl' in filename else False - with mock.patch('os.path.isfile', mock.MagicMock(side_effect=is_file_side_effect)): - with mock.patch('config.main._get_device_type', mock.MagicMock(return_value=device_type)): - with mock.patch( - "utilities_common.cli.run_command", - mock.MagicMock(side_effect=mock_run_command_side_effect)) as mock_run_command: - (config, show) = get_cmd_module - runner = CliRunner() - result = runner.invoke(config.config.commands["load_minigraph"], ["-y"], obj=db) - print(result.exit_code) - expected_output = ['Running command: acl-loader update incremental /etc/sonic/backend_acl.json', - 'Running command: /usr/local/bin/sonic-cfggen -d -t /usr/share/sonic/templates/backend_acl.j2,/etc/sonic/backend_acl.json' - ] - print(result.output) - assert result.exit_code == 0 - output = result.output.split('\n') - if condition: - assert set(expected_output).issubset(set(output)) - else: - assert not(set(expected_output).issubset(set(output))) - def check_port_config(self, db, config, port_config, expected_output): def 
read_json_file_side_effect(filename): return port_config @@ -693,7 +651,7 @@ def test_qos_wait_until_clear_empty(self): with mock.patch('swsscommon.swsscommon.SonicV2Connector.keys', side_effect=TestConfigQos._keys): TestConfigQos._keys_counter = 1 - empty = _wait_until_clear("BUFFER_POOL_TABLE:*", 0.5,2) + empty = _wait_until_clear(["BUFFER_POOL_TABLE:*"], 0.5,2) assert empty def test_qos_wait_until_clear_not_empty(self): @@ -701,9 +659,15 @@ def test_qos_wait_until_clear_not_empty(self): with mock.patch('swsscommon.swsscommon.SonicV2Connector.keys', side_effect=TestConfigQos._keys): TestConfigQos._keys_counter = 10 - empty = _wait_until_clear("BUFFER_POOL_TABLE:*", 0.5,2) + empty = _wait_until_clear(["BUFFER_POOL_TABLE:*"], 0.5,2) assert not empty + @mock.patch('config.main._wait_until_clear') + def test_qos_clear_no_wait(self, _wait_until_clear): + from config.main import _clear_qos + _clear_qos(True, False) + _wait_until_clear.assert_called_with(['BUFFER_*_TABLE:*', 'BUFFER_*_SET'], interval=0.5, timeout=0, verbose=False) + def test_qos_reload_single( self, get_cmd_module, setup_qos_mock_apis, setup_single_broadcom_asic @@ -1910,3 +1874,64 @@ def test_add_loopback_adhoc_validation(self): @classmethod def teardown_class(cls): print("TEARDOWN") + + +class TestConfigNtp(object): + @classmethod + def setup_class(cls): + print("SETUP") + import config.main + importlib.reload(config.main) + + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=ValueError)) + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + def test_add_ntp_server_failed_yang_validation(self): + config.ADHOC_VALIDATION = False + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + + result = runner.invoke(config.config.commands["ntp"], ["add", "10.10.10.x"], obj=obj) + print(result.exit_code) + print(result.output) + assert "Invalid ConfigDB. Error" in result.output + + def test_add_ntp_server_invalid_ip(self): + config.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + + result = runner.invoke(config.config.commands["ntp"], ["add", "10.10.10.x"], obj=obj) + print(result.exit_code) + print(result.output) + assert "Invalid IP address" in result.output + + def test_del_ntp_server_invalid_ip(self): + config.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + + result = runner.invoke(config.config.commands["ntp"], ["del", "10.10.10.x"], obj=obj) + print(result.exit_code) + print(result.output) + assert "Invalid IP address" in result.output + + @patch("config.main.ConfigDBConnector.get_table", mock.Mock(return_value="10.10.10.10")) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=JsonPatchConflict)) + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + def test_del_ntp_server_invalid_ip_yang_validation(self): + config.ADHOC_VALIDATION = False + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + + result = runner.invoke(config.config.commands["ntp"], ["del", "10.10.10.10"], obj=obj) + print(result.exit_code) + print(result.output) + assert "Invalid ConfigDB. 
Error" in result.output + + @classmethod + def teardown_class(cls): + print("TEARDOWN") diff --git a/tests/console_test.py b/tests/console_test.py index 8161eda7dd..528f5f4ba8 100644 --- a/tests/console_test.py +++ b/tests/console_test.py @@ -1,8 +1,10 @@ import os import sys import subprocess +import jsonpatch import pexpect from unittest import mock +from mock import patch import pytest @@ -14,6 +16,7 @@ from utilities_common.db import Db from consutil.lib import * from sonic_py_common import device_info +from jsonpatch import JsonPatchConflict class TestConfigConsoleCommands(object): @classmethod @@ -28,6 +31,16 @@ def test_enable_console_switch(self): print(result.exit_code) print(sys.stderr, result.output) assert result.exit_code == 0 + + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_enable_console_switch_yang_validation(self): + runner = CliRunner() + db = Db() + + result = runner.invoke(config.config.commands["console"].commands["enable"]) + print(result.exit_code) + assert "Invalid ConfigDB. Error" in result.output def test_disable_console_switch(self): runner = CliRunner() @@ -38,6 +51,17 @@ def test_disable_console_switch(self): print(sys.stderr, result.output) assert result.exit_code == 0 + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_disable_console_switch_yang_validation(self): + runner = CliRunner() + db = Db() + + result = runner.invoke(config.config.commands["console"].commands["disable"]) + print(result.exit_code) + print(sys.stderr, result.output) + assert "Invalid ConfigDB. Error" in result.output + def test_console_add_exists(self): runner = CliRunner() db = Db() @@ -95,6 +119,18 @@ def test_console_add_success(self): print(sys.stderr, result.output) assert result.exit_code == 0 + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=ValueError)) + def test_console_add_yang_validation(self): + runner = CliRunner() + db = Db() + + # add a console setting without flow control option + result = runner.invoke(config.config.commands["console"].commands["add"], ["0", '--baud', "9600"], obj=db) + print(result.exit_code) + print(sys.stderr, result.output) + assert "Invalid ConfigDB. 
Error" in result.output + def test_console_del_non_exists(self): runner = CliRunner() db = Db() @@ -117,6 +153,19 @@ def test_console_del_success(self): print(sys.stderr, result.output) assert result.exit_code == 0 + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=JsonPatchConflict)) + def test_console_del_yang_validation(self): + runner = CliRunner() + db = Db() + db.cfgdb.set_entry("CONSOLE_PORT", "1", { "baud_rate" : "9600" }) + + # add a console setting which the port exists + result = runner.invoke(config.config.commands["console"].commands["del"], ["1"], obj=db) + print(result.exit_code) + print(sys.stderr, result.output) + assert "Invalid ConfigDB. Error" in result.output + def test_update_console_remote_device_name_non_exists(self): runner = CliRunner() db = Db() @@ -163,6 +212,19 @@ def test_update_console_remote_device_name_reset(self): print(sys.stderr, result.output) assert result.exit_code == 0 + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_update_console_remote_device_name_reset_yang_validation(self): + runner = CliRunner() + db = Db() + db.cfgdb.set_entry("CONSOLE_PORT", 2, { "remote_device" : "switch1" }) + + # trying to reset a console line remote device configuration which is not exists + result = runner.invoke(config.config.commands["console"].commands["remote_device"], ["2"], obj=db) + print(result.exit_code) + print(sys.stderr, result.output) + assert "Invalid ConfigDB. Error" in result.output + def test_update_console_remote_device_name_success(self): runner = CliRunner() db = Db() @@ -174,6 +236,19 @@ def test_update_console_remote_device_name_success(self): print(sys.stderr, result.output) assert result.exit_code == 0 + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_update_console_remote_device_name_yang_validation(self): + runner = CliRunner() + db = Db() + db.cfgdb.set_entry("CONSOLE_PORT", "1", { "baud_rate" : "9600" }) + + # trying to set a console line remote device configuration + result = runner.invoke(config.config.commands["console"].commands["remote_device"], ["1", "switch1"], obj=db) + print(result.exit_code) + print(sys.stderr, result.output) + assert "Invalid ConfigDB. 
Error" in result.output + def test_update_console_baud_no_change(self): runner = CliRunner() db = Db() @@ -207,6 +282,19 @@ def test_update_console_baud_success(self): print(sys.stderr, result.output) assert result.exit_code == 0 + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_update_console_baud_yang_validation(self): + runner = CliRunner() + db = Db() + db.cfgdb.set_entry("CONSOLE_PORT", "1", { "baud_rate" : "9600" }) + + # trying to set a console line baud + result = runner.invoke(config.config.commands["console"].commands["baud"], ["1", "115200"], obj=db) + print(result.exit_code) + print(sys.stderr, result.output) + assert "Invalid ConfigDB. Error" in result.output + def test_update_console_flow_control_no_change(self): runner = CliRunner() db = Db() @@ -240,6 +328,19 @@ def test_update_console_flow_control_success(self): print(sys.stderr, result.output) assert result.exit_code == 0 + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_update_console_flow_control_yang_validation(self): + runner = CliRunner() + db = Db() + db.cfgdb.set_entry("CONSOLE_PORT", "1", { "baud_rate" : "9600", "flow_control" : "0" }) + + # trying to set a console line flow control option + result = runner.invoke(config.config.commands["console"].commands["flow_control"], ["enable", "1"], obj=db) + print(result.exit_code) + print(sys.stderr, result.output) + assert "Invalid ConfigDB. 
Error" in result.output + class TestConsutilLib(object): @classmethod def setup_class(cls): diff --git a/tests/db_migrator_input/state_db/fast_reboot_expected.json b/tests/db_migrator_input/state_db/fast_reboot_expected.json new file mode 100644 index 0000000000..e3a7a5fa14 --- /dev/null +++ b/tests/db_migrator_input/state_db/fast_reboot_expected.json @@ -0,0 +1,5 @@ +{ + "FAST_RESTART_ENABLE_TABLE|system": { + "enable": "false" + } +} \ No newline at end of file diff --git a/tests/db_migrator_input/state_db/fast_reboot_input.json b/tests/db_migrator_input/state_db/fast_reboot_input.json new file mode 100644 index 0000000000..7a73a41bfd --- /dev/null +++ b/tests/db_migrator_input/state_db/fast_reboot_input.json @@ -0,0 +1,2 @@ +{ +} \ No newline at end of file diff --git a/tests/db_migrator_test.py b/tests/db_migrator_test.py index b5c70fce8e..e9c184d160 100644 --- a/tests/db_migrator_test.py +++ b/tests/db_migrator_test.py @@ -451,6 +451,38 @@ def test_move_logger_tables_in_warm_upgrade(self): diff = DeepDiff(resulting_table, expected_table, ignore_order=True) assert not diff +class TestFastRebootTableModification(object): + @classmethod + def setup_class(cls): + os.environ['UTILITIES_UNIT_TESTING'] = "2" + + @classmethod + def teardown_class(cls): + os.environ['UTILITIES_UNIT_TESTING'] = "0" + dbconnector.dedicated_dbs['STATE_DB'] = None + + def mock_dedicated_state_db(self): + dbconnector.dedicated_dbs['STATE_DB'] = os.path.join(mock_db_path, 'state_db') + + def test_rename_fast_reboot_table_check_enable(self): + device_info.get_sonic_version_info = get_sonic_version_info_mlnx + dbconnector.dedicated_dbs['STATE_DB'] = os.path.join(mock_db_path, 'state_db', 'fast_reboot_input') + dbconnector.dedicated_dbs['CONFIG_DB'] = os.path.join(mock_db_path, 'config_db', 'empty-config-input') + + import db_migrator + dbmgtr = db_migrator.DBMigrator(None) + dbmgtr.migrate() + + dbconnector.dedicated_dbs['STATE_DB'] = os.path.join(mock_db_path, 'state_db', 'fast_reboot_expected') + expected_db = SonicV2Connector(host='127.0.0.1') + expected_db.connect(expected_db.STATE_DB) + + resulting_table = dbmgtr.stateDB.get_all(dbmgtr.stateDB.STATE_DB, 'FAST_RESTART_ENABLE_TABLE|system') + expected_table = expected_db.get_all(expected_db.STATE_DB, 'FAST_RESTART_ENABLE_TABLE|system') + + diff = DeepDiff(resulting_table, expected_table, ignore_order=True) + assert not diff + class TestWarmUpgrade_to_2_0_2(object): @classmethod def setup_class(cls): diff --git a/tests/generic_config_updater/gu_common_test.py b/tests/generic_config_updater/gu_common_test.py index 7fa471ee3b..a319a25ead 100644 --- a/tests/generic_config_updater/gu_common_test.py +++ b/tests/generic_config_updater/gu_common_test.py @@ -3,8 +3,10 @@ import jsonpatch import sonic_yang import unittest -from unittest.mock import MagicMock, Mock, patch +import mock +from unittest.mock import MagicMock, Mock +from mock import patch from .gutest_helpers import create_side_effect_dict, Files import generic_config_updater.gu_common as gu_common @@ -69,11 +71,25 @@ def setUp(self): self.config_wrapper_mock = gu_common.ConfigWrapper() self.config_wrapper_mock.get_config_db_as_json=MagicMock(return_value=Files.CONFIG_DB_AS_JSON) + @patch("sonic_py_common.device_info.get_sonic_version_info", mock.Mock(return_value={"asic_type": "mellanox", "build_version": "SONiC.20181131"})) def test_validate_field_operation_legal__pfcwd(self): old_config = {"PFC_WD": {"GLOBAL": {"POLL_INTERVAL": "60"}}} target_config = {"PFC_WD": {"GLOBAL": {"POLL_INTERVAL": "40"}}} 
config_wrapper = gu_common.ConfigWrapper() config_wrapper.validate_field_operation(old_config, target_config) + + def test_validate_field_operation_illegal__pfcwd(self): + old_config = {"PFC_WD": {"GLOBAL": {"POLL_INTERVAL": "60"}}} + target_config = {"PFC_WD": {"GLOBAL": {}}} + config_wrapper = gu_common.ConfigWrapper() + self.assertRaises(gu_common.IllegalPatchOperationError, config_wrapper.validate_field_operation, old_config, target_config) + + @patch("sonic_py_common.device_info.get_sonic_version_info", mock.Mock(return_value={"asic_type": "invalid-asic", "build_version": "SONiC.20181131"})) + def test_validate_field_modification_illegal__pfcwd(self): + old_config = {"PFC_WD": {"GLOBAL": {"POLL_INTERVAL": "60"}}} + target_config = {"PFC_WD": {"GLOBAL": {"POLL_INTERVAL": "80"}}} + config_wrapper = gu_common.ConfigWrapper() + self.assertRaises(gu_common.IllegalPatchOperationError, config_wrapper.validate_field_operation, old_config, target_config) def test_validate_field_operation_legal__rm_loopback1(self): old_config = { @@ -92,13 +108,7 @@ def test_validate_field_operation_legal__rm_loopback1(self): } config_wrapper = gu_common.ConfigWrapper() config_wrapper.validate_field_operation(old_config, target_config) - - def test_validate_field_operation_illegal__pfcwd(self): - old_config = {"PFC_WD": {"GLOBAL": {"POLL_INTERVAL": 60}}} - target_config = {"PFC_WD": {"GLOBAL": {}}} - config_wrapper = gu_common.ConfigWrapper() - self.assertRaises(gu_common.IllegalPatchOperationError, config_wrapper.validate_field_operation, old_config, target_config) - + def test_validate_field_operation_illegal__rm_loopback0(self): old_config = { "LOOPBACK_INTERFACE": { diff --git a/tests/generic_config_updater/service_validator_test.py b/tests/generic_config_updater/service_validator_test.py index 2f51771d33..f14a3ad7b0 100644 --- a/tests/generic_config_updater/service_validator_test.py +++ b/tests/generic_config_updater/service_validator_test.py @@ -6,7 +6,7 @@ from collections import defaultdict from unittest.mock import patch -from generic_config_updater.services_validator import vlan_validator, rsyslog_validator, caclmgrd_validator +from generic_config_updater.services_validator import vlan_validator, rsyslog_validator, caclmgrd_validator, vlanintf_validator import generic_config_updater.gu_common @@ -152,6 +152,46 @@ def mock_time_sleep_call(sleep_time): { "cmd": "systemctl restart rsyslog", "rc": 1 }, # restart again; fails ] +test_vlanintf_data = [ + { "old": {}, "upd": {}, "cmd": "" }, + { + "old": { "VLAN_INTERFACE": { + "Vlan1000": {}, + "Vlan1000|192.168.0.1/21": {} } }, + "upd": { "VLAN_INTERFACE": { + "Vlan1000": {}, + "Vlan1000|192.168.0.1/21": {} } }, + "cmd": "" + }, + { + "old": { "VLAN_INTERFACE": { + "Vlan1000": {}, + "Vlan1000|192.168.0.1/21": {} } }, + "upd": { "VLAN_INTERFACE": { + "Vlan1000": {}, + "Vlan1000|192.168.0.2/21": {} } }, + "cmd": "ip neigh flush dev Vlan1000 192.168.0.1/21" + }, + { + "old": { "VLAN_INTERFACE": { + "Vlan1000": {}, + "Vlan1000|192.168.0.1/21": {} } }, + "upd": { "VLAN_INTERFACE": { + "Vlan1000": {}, + "Vlan1000|192.168.0.1/21": {}, + "Vlan1000|192.168.0.2/21": {} } }, + "cmd": "" + }, + { + "old": { "VLAN_INTERFACE": { + "Vlan1000": {}, + "Vlan1000|192.168.0.1/21": {} } }, + "upd": {}, + "cmd": "ip neigh flush dev Vlan1000 192.168.0.1/21" + } + ] + + class TestServiceValidator(unittest.TestCase): @patch("generic_config_updater.change_applier.os.system") @@ -177,6 +217,15 @@ def test_change_apply_os_system(self, mock_os_sys): rc = rsyslog_validator("", "", "") 
assert not rc, "rsyslog_validator expected to fail" + os_system_calls = [] + os_system_call_index = 0 + for entry in test_vlanintf_data: + if entry["cmd"]: + os_system_calls.append({"cmd": entry["cmd"], "rc": 0 }) + msg = "case failed: {}".format(str(entry)) + + vlanintf_validator(entry["old"], entry["upd"], None) + @patch("generic_config_updater.services_validator.time.sleep") def test_change_apply_time_sleep(self, mock_time_sleep): global time_sleep_calls, time_sleep_call_index diff --git a/tests/kube_test.py b/tests/kube_test.py index e49a2a55f8..72392b9775 100644 --- a/tests/kube_test.py +++ b/tests/kube_test.py @@ -1,5 +1,7 @@ from click.testing import CliRunner from utilities_common.db import Db +import mock +from mock import patch show_no_server_output="""\ Kubernetes server is not configured @@ -110,8 +112,30 @@ def test_no_kube_server(self, get_cmd_module): result = runner.invoke(show.cli.commands["kubernetes"].commands["server"].commands["config"], [], obj=db) self.__check_res(result, "config command default value", show_server_output_5) + + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_no_kube_server_yang_validation(self, get_cmd_module): + (config, show) = get_cmd_module + runner = CliRunner() + db = Db() + + db.cfgdb.delete_table("KUBERNETES_MASTER") + # Check server not configured + result = runner.invoke(show.cli.commands["kubernetes"].commands["server"].commands["config"], [], obj=db) + self.__check_res(result, "null server config test", show_no_server_output) + # Add IP when not configured + result = runner.invoke(config.config.commands["kubernetes"].commands["server"], ["ip", "10.10.10.11"], obj=db) + assert "Invalid ConfigDB. Error" in result.output + + db.cfgdb.mod_entry("KUBERNETES_MASTER", "SERVER", {"ip": "10.10.10.11"}) + # Add IP when already configured + result = runner.invoke(config.config.commands["kubernetes"].commands["server"], ["ip", "10.10.10.12"], obj=db) + assert "Invalid ConfigDB. 
Error" in result.output + + def test_only_kube_server(self, get_cmd_module): (config, show) = get_cmd_module runner = CliRunner() diff --git a/tests/mclag_test.py b/tests/mclag_test.py index a653174000..2401978e97 100644 --- a/tests/mclag_test.py +++ b/tests/mclag_test.py @@ -1,14 +1,19 @@ import os import traceback +import mock +import jsonpatch from click.testing import CliRunner import config.main as config +import config.mclag as mclag import show.main as show from utilities_common.db import Db - +from mock import patch +from jsonpatch import JsonPatchConflict MCLAG_DOMAIN_ID = "123" +MCLAG_NONEXISTENT_DOMAIN_ID = "234" MCLAG_INVALID_DOMAIN_ID1 = "-1" MCLAG_INVALID_DOMAIN_ID2 = "5000" MCLAG_DOMAIN_ID2 = "500" @@ -87,6 +92,7 @@ def verify_mclag_interface(self, db, domain_id, intf_str): return False def test_add_mclag_with_invalid_src_ip(self): + mclag.ADHOC_VALIDATION = True runner = CliRunner() db = Db() obj = {'db':db.cfgdb} @@ -223,9 +229,33 @@ def test_add_invalid_mclag_domain(self): result = runner.invoke(config.config.commands["mclag"].commands["add"], [5000, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_PEER_LINK], obj=obj) assert result.exit_code != 0, "mclag invalid domain test case with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=ValueError)) + def test_add_mclag_domain_invalid_yang_validation(self): + mclag.ADHOC_VALIDATION = False + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + # add invalid mclag domain + result = runner.invoke(config.config.commands["mclag"].commands["add"], [MCLAG_DOMAIN_ID, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_INVALID_PEER_LINK4], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + @patch("config.main.ConfigDBConnector.get_table", mock.Mock(return_value={"123": "xyz"})) + def test_add_mclag_domain_invalid_yang_validation_override(self): + mclag.ADHOC_VALIDATION = False + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + + # add invalid mclag domain + result = runner.invoke(config.config.commands["mclag"].commands["add"], [MCLAG_DOMAIN_ID, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_INVALID_PEER_LINK4], obj=obj) + assert "Invalid ConfigDB. 
Error" in result.output + def test_add_mclag_domain(self): + mclag.ADHOC_VALIDATION = True runner = CliRunner() db = Db() obj = {'db':db.cfgdb} @@ -378,10 +408,29 @@ def test_mclag_add_invalid_member(self): result = runner.invoke(config.config.commands["mclag"].commands["member"].commands["add"], [MCLAG_DOMAIN_ID, MCLAG_INVALID_PORTCHANNEL4], obj=obj) assert result.exit_code != 0, "mclag invalid member add case failed with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) + + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=ValueError)) + def test_mclag_add_invalid_member_yang_validation(self): + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + mclag.ADHOC_VALIDATION = False + + # add valid mclag domain + db.cfgdb.set_entry("MCLAG_DOMAIN", MCLAG_DOMAIN_ID, {"source_ip": MCLAG_SRC_IP, "peer_ip": MCLAG_PEER_IP, "peer_link": MCLAG_PEER_LINK}) + + with mock.patch('validated_config_db_connector.device_info.is_yang_config_validation_enabled', mock.Mock(return_value=True)): + result = runner.invoke(config.config.commands["mclag"].commands["member"].commands["add"], [MCLAG_DOMAIN_ID, MCLAG_INVALID_MCLAG_MEMBER], obj=obj) + print(result.exit_code) + print(result.output) + assert "Invalid ConfigDB. Error" in result.output + + def test_mclag_add_member(self): runner = CliRunner() db = Db() obj = {'db':db.cfgdb} + mclag.ADHOC_VALIDATION = True # add valid mclag domain @@ -447,6 +496,29 @@ def test_mclag_add_member(self): assert result.exit_code != 0, "mclag invalid member del case failed with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=ValueError)) + def test_mclag_add__unique_ip_yang_validation(self): + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + db.cfgdb.set_entry("MCLAG_DOMAIN", MCLAG_DOMAIN_ID, {"source_ip": MCLAG_SRC_IP}) + + with mock.patch('validated_config_db_connector.device_info.is_yang_config_validation_enabled', return_value=True): + result = runner.invoke(config.config.commands["mclag"].commands["unique-ip"].commands["add"], [MCLAG_UNIQUE_IP_VLAN], obj=obj) + assert "Invalid ConfigDB. 
Error" in result.output + + + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=JsonPatchConflict)) + def test_mclag_del_unique_ip_yang_validation(self): + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + db.cfgdb.set_entry("MCLAG_DOMAIN", MCLAG_DOMAIN_ID, {"source_ip": MCLAG_SRC_IP}) + + with mock.patch('validated_config_db_connector.device_info.is_yang_config_validation_enabled', return_value=True): + result = runner.invoke(config.config.commands["mclag"].commands["unique-ip"].commands["del"], [MCLAG_UNIQUE_IP_VLAN], obj=obj) + assert "Failed to delete mclag unique IP" in result.output + def test_mclag_add_unique_ip(self, mock_restart_dhcp_relay_service): runner = CliRunner() @@ -544,12 +616,18 @@ def test_add_mclag_with_invalid_domain_id(self): result = runner.invoke(config.config.commands["mclag"].commands["add"], [MCLAG_INVALID_DOMAIN_ID2, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_PEER_LINK], obj=obj) assert result.exit_code != 0, "mclag invalid src ip test caase with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) - + def test_del_mclag_with_invalid_domain_id(self): + mclag.ADHOC_VALIDATION = True runner = CliRunner() db = Db() obj = {'db':db.cfgdb} + with mock.patch('config.main.ConfigDBConnector.get_entry', return_value=None): + # del mclag nonexistent domain_id + result = runner.invoke(config.config.commands["mclag"].commands["del"], [MCLAG_NONEXISTENT_DOMAIN_ID], obj=obj) + assert result.exit_code != 0, "mclag invalid domain id test case with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) + # del mclag with invalid domain_id result = runner.invoke(config.config.commands["mclag"].commands["del"], [MCLAG_INVALID_DOMAIN_ID1], obj=obj) assert result.exit_code != 0, "mclag invalid domain id test case with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) @@ -557,10 +635,10 @@ def test_del_mclag_with_invalid_domain_id(self): result = runner.invoke(config.config.commands["mclag"].commands["del"], [MCLAG_INVALID_DOMAIN_ID2], obj=obj) assert result.exit_code != 0, "mclag invalid domain id test case with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) result = runner.invoke(config.config.commands["mclag"].commands["del"], [MCLAG_DOMAIN_ID3], obj=obj) + print(result.output) assert result.exit_code == 0, "mclag invalid domain id test case with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) - def test_modify_mclag_domain(self): runner = CliRunner() db = Db() @@ -568,15 +646,14 @@ def test_modify_mclag_domain(self): # add mclag domain entry in db db.cfgdb.set_entry("MCLAG_DOMAIN", MCLAG_DOMAIN_ID, {"source_ip": MCLAG_SRC_IP}) - result = runner.invoke(config.config.commands["mclag"].commands["add"], [MCLAG_DOMAIN_ID, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_PEER_LINK], obj=obj) - assert result.exit_code != 0, "mclag add domain peer ip test caase with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) + assert result.exit_code == 0, "mclag add domain peer ip test caase with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) assert self.verify_mclag_domain_cfg(db, MCLAG_DOMAIN_ID, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_PEER_LINK) == True, "mclag config not found" - + print(result.output) # modify mclag config - result = runner.invoke(config.config.commands["mclag"].commands["add"], 
[MCLAG_DOMAIN_ID, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_PEER_LINK], obj=obj) - assert result.exit_code != 0, "mclag add domain peer ip test case with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) + assert result.exit_code == 0, "mclag add domain peer ip test case with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) assert self.verify_mclag_domain_cfg(db, MCLAG_DOMAIN_ID, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_PEER_LINK) == True, "mclag config not found" - + print(result.output) # modify mclag config - result = runner.invoke(config.config.commands["mclag"].commands["add"], [MCLAG_DOMAIN_ID, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_PEER_LINK], obj=obj) - assert result.exit_code != 0, "test_mclag_domain_add_again with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) + result = runner.invoke(config.config.commands["mclag"].commands["add"], [MCLAG_DOMAIN_ID, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_PEER_LINK2], obj=obj) + assert result.exit_code == 0, "test_mclag_domain_add_again with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) assert self.verify_mclag_domain_cfg(db, MCLAG_DOMAIN_ID, MCLAG_SRC_IP, MCLAG_PEER_IP, MCLAG_PEER_LINK2) == True, "mclag config not modified" @@ -590,6 +667,7 @@ def test_add_mclag_domain_no_peer_link(self): assert result.exit_code != 0, "mclag add domain peer ip test case with code {}:{} Output:{}".format(type(result.exit_code), result.exit_code, result.output) assert self.verify_mclag_domain_cfg(db, MCLAG_DOMAIN_ID, MCLAG_SRC_IP, MCLAG_PEER_IP) == False, "mclag config not found" + def test_del_mclag_domain_with_members(self): runner = CliRunner() db = Db() @@ -617,11 +695,45 @@ def test_del_mclag_domain_with_members(self): assert self.verify_mclag_interface(db, MCLAG_DOMAIN_ID, MCLAG_MEMBER_PO) == False, "mclag member not deleted" assert self.verify_mclag_domain_cfg(db, MCLAG_DOMAIN_ID) == False, "mclag domain not present" + + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=JsonPatchConflict)) + def test_del_mclag_domain_with_members_invalid_yang_validation(self): + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + mclag.ADHOC_VALIDATION = False + + db.cfgdb.set_entry("MCLAG_DOMAIN", MCLAG_DOMAIN_ID, {"source_ip": MCLAG_SRC_IP, "peer_ip": MCLAG_PEER_IP, "peer_link": MCLAG_PEER_LINK}) + db.cfgdb.set_entry('MCLAG_INTERFACE', (MCLAG_DOMAIN_ID, MCLAG_MEMBER_PO), {'if_type':"PortChannel"} ) + db.cfgdb.set_entry('MCLAG_INTERFACE', (MCLAG_DOMAIN_ID, MCLAG_MEMBER_PO2), {'if_type':"PortChannel"} ) + + with mock.patch('validated_config_db_connector.device_info.is_yang_config_validation_enabled', return_value=True): + result = runner.invoke(config.config.commands["mclag"].commands["member"].commands["del"], [MCLAG_DOMAIN_ID, MCLAG_MEMBER_PO2], obj=obj) + assert "Failed to delete mclag member" in result.output + + with mock.patch('validated_config_db_connector.device_info.is_yang_config_validation_enabled', return_value=True): + result = runner.invoke(config.config.commands["mclag"].commands["del"], [MCLAG_DOMAIN_ID], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=JsonPatchConflict)) + def test_del_mclag_domain_invalid_yang_validation(self): + runner = CliRunner() + db = Db() + obj = {'db':db.cfgdb} + mclag.ADHOC_VALIDATION = False + + db.cfgdb.set_entry("MCLAG_DOMAIN", MCLAG_DOMAIN_ID, {"source_ip": MCLAG_SRC_IP, "peer_ip": MCLAG_PEER_IP, "peer_link": MCLAG_PEER_LINK}) + with mock.patch('validated_config_db_connector.device_info.is_yang_config_validation_enabled', return_value=True): + result = runner.invoke(config.config.commands["mclag"].commands["del"], [MCLAG_DOMAIN_ID], obj=obj) + assert "Invalid ConfigDB.
Error" in result.output + def test_mclag_keepalive_for_non_existent_domain(self): runner = CliRunner() db = Db() obj = {'db':db.cfgdb} + mclag.ADHOC_VALIDATION = True # configure keepalive timer for non-existing domain result = runner.invoke(config.config.commands["mclag"].commands["keepalive-interval"], [MCLAG_DOMAIN_ID, MCLAG_INVALID_KEEPALIVE_TIMER], obj=obj) diff --git a/tests/mock_tables/asic0/config_db.json b/tests/mock_tables/asic0/config_db.json index 66b51f4ccb..de20194a64 100644 --- a/tests/mock_tables/asic0/config_db.json +++ b/tests/mock_tables/asic0/config_db.json @@ -246,5 +246,16 @@ "holdtime": "10", "asn": "65200", "keepalive": "3" + }, + "ACL_RULE|DATAACL_5|RULE_1": { + "IP_PROTOCOL": "126", + "PACKET_ACTION": "FORWARD", + "PRIORITY": "9999" + }, + "ACL_TABLE|DATAACL_5": { + "policy_desc": "DATAACL_5", + "ports@": "Ethernet124", + "type": "L3", + "stage": "ingress" } } diff --git a/tests/mock_tables/asic0/state_db.json b/tests/mock_tables/asic0/state_db.json index 2756404971..559af04826 100644 --- a/tests/mock_tables/asic0/state_db.json +++ b/tests/mock_tables/asic0/state_db.json @@ -286,5 +286,11 @@ "STATUS": "up", "REMOTE_MOD": "0", "REMOTE_PORT": "93" + }, + "ACL_TABLE_TABLE|DATAACL_5" : { + "status": "Active" + }, + "ACL_RULE_TABLE|DATAACL_5|RULE_1" : { + "status": "Active" } } diff --git a/tests/mock_tables/asic2/config_db.json b/tests/mock_tables/asic2/config_db.json index 532d85bcbb..bfda10a0d5 100644 --- a/tests/mock_tables/asic2/config_db.json +++ b/tests/mock_tables/asic2/config_db.json @@ -124,5 +124,16 @@ "state": "disabled", "auto_restart": "disabled", "high_mem_alert": "disabled" + }, + "ACL_RULE|DATAACL_5|RULE_1": { + "IP_PROTOCOL": "126", + "PACKET_ACTION": "FORWARD", + "PRIORITY": "9999" + }, + "ACL_TABLE|DATAACL_5": { + "policy_desc": "DATAACL_5", + "ports@": "Ethernet124", + "type": "L3", + "stage": "ingress" } } diff --git a/tests/mock_tables/asic2/state_db.json b/tests/mock_tables/asic2/state_db.json index f6e3eee4cf..c6c8c88898 100644 --- a/tests/mock_tables/asic2/state_db.json +++ b/tests/mock_tables/asic2/state_db.json @@ -207,5 +207,11 @@ "speed_target": "50", "led_status": "green", "timestamp": "20200813 01:32:30" + }, + "ACL_TABLE_TABLE|DATAACL_5" : { + "status": "Active" + }, + "ACL_RULE_TABLE|DATAACL_5|RULE_1" : { + "status": "Active" } } diff --git a/tests/mock_tables/config_db.json b/tests/mock_tables/config_db.json index 51af58e86d..3a2b681a6e 100644 --- a/tests/mock_tables/config_db.json +++ b/tests/mock_tables/config_db.json @@ -496,6 +496,11 @@ "PACKET_ACTION": "FORWARD", "PRIORITY": "9995" }, + "ACL_RULE|DATAACL_5|RULE_1": { + "IP_PROTOCOL": "126", + "PACKET_ACTION": "FORWARD", + "PRIORITY": "9999" + }, "ACL_TABLE|NULL_ROUTE_V4": { "policy_desc": "DATAACL", "ports@": "PortChannel0002,PortChannel0005,PortChannel0008,PortChannel0011,PortChannel0014,PortChannel0017,PortChannel0020,PortChannel0023", @@ -533,6 +538,12 @@ "type": "L3V6", "stage": "egress" }, + "ACL_TABLE|DATAACL_5": { + "policy_desc": "DATAACL_5", + "ports@": "Ethernet124", + "type": "L3", + "stage": "ingress" + }, "ACL_TABLE|EVERFLOW": { "policy_desc": "EVERFLOW", "ports@": "PortChannel0002,PortChannel0005,PortChannel0008,PortChannel0011,PortChannel0014,PortChannel0017,PortChannel0020,PortChannel0023,Ethernet100,Ethernet104,Ethernet92,Ethernet96,Ethernet84,Ethernet88,Ethernet76,Ethernet80,Ethernet108,Ethernet112,Ethernet64,Ethernet120,Ethernet116,Ethernet124,Ethernet72,Ethernet68", diff --git a/tests/mock_tables/state_db.json b/tests/mock_tables/state_db.json index 
4cdda56bc8..cd1a194ba8 100644 --- a/tests/mock_tables/state_db.json +++ b/tests/mock_tables/state_db.json @@ -1210,5 +1210,11 @@ "STATUS": "up", "REMOTE_MOD": "0", "REMOTE_PORT": "93" + }, + "ACL_TABLE_TABLE|DATAACL_5" : { + "status": "Active" + }, + "ACL_RULE_TABLE|DATAACL_5|RULE_1" : { + "status": "Active" } } diff --git a/tests/nat_test.py b/tests/nat_test.py new file mode 100644 index 0000000000..e37f13bc71 --- /dev/null +++ b/tests/nat_test.py @@ -0,0 +1,267 @@ +import mock + +from click.testing import CliRunner +from utilities_common.db import Db +from mock import patch +from jsonpatch import JsonPatchConflict +import config.main as config +import config.nat as nat +import config.validated_config_db_connector as validated_config_db_connector + +class TestNat(object): + @classmethod + def setup_class(cls): + print("SETUP") + + def test_add_basic_invalid(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["basic", "65.66.45.1", "12.12.12.14x", "-nat_type", "dnat"], obj=obj) + assert "Please enter a valid local ip address" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["basic", "65.66.45.1x", "12.12.12.14", "-nat_type", "dnat"], obj=obj) + assert "Please enter a valid global ip address" in result.output + + @patch("config.nat.SonicV2Connector.get_all", mock.Mock(return_value={"MAX_NAT_ENTRIES": "9999"})) + @patch("config.nat.SonicV2Connector.exists", mock.Mock(return_value="True")) + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=ValueError)) + def test_add_basic_yang_validation(self): + nat.ADHOC_VALIDATION = False + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["basic", "65.66.45.1", "12.12.12.14", "-nat_type", "dnat", "-twice_nat_id", "3"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["basic", "65.66.45.1", "12.12.12.14", "-nat_type", "dnat"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["basic", "65.66.45.1", "12.12.12.14", "-twice_nat_id", "3"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["basic", "65.66.45.1", "12.12.12.14"], obj=obj) + assert "Invalid ConfigDB. 
Error" in result.output + + def test_add_tcp_invalid(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["tcp", "65.66.45.1", "100", "12.12.12.14x", "200", "-nat_type", "dnat"], obj=obj) + assert "Please enter a valid local ip address" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["tcp", "65.66.45.1x", "100", "12.12.12.14", "200", "-nat_type", "dnat"], obj=obj) + assert "Please enter a valid global ip address" in result.output + + @patch("config.nat.SonicV2Connector.get_all", mock.Mock(return_value={"MAX_NAT_ENTRIES": "9999"})) + @patch("config.nat.SonicV2Connector.exists", mock.Mock(return_value="True")) + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=ValueError)) + def test_add_tcp_yang_validation(self): + nat.ADHOC_VALIDATION = False + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["tcp", "65.66.45.1", "100", "12.12.12.14", "200", "-nat_type", "dnat", "-twice_nat_id", "3"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["tcp", "65.66.45.1", "100", "12.12.12.14", "200", "-nat_type", "dnat"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["tcp", "65.66.45.1", "100", "12.12.12.14", "200", "-twice_nat_id", "3"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["tcp", "65.66.45.1", "100", "12.12.12.14", "200"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + def test_add_udp_invalid(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["udp", "65.66.45.1", "100", "12.12.12.14x", "200", "-nat_type", "dnat"], obj=obj) + assert "Please enter a valid local ip address" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["udp", "65.66.45.1x", "100", "12.12.12.14", "200", "-nat_type", "dnat"], obj=obj) + assert "Please enter a valid global ip address" in result.output + + @patch("config.nat.SonicV2Connector.get_all", mock.Mock(return_value={"MAX_NAT_ENTRIES": "9999"})) + @patch("config.nat.SonicV2Connector.exists", mock.Mock(return_value="True")) + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=ValueError)) + def test_add_udp_yang_validation(self): + nat.ADHOC_VALIDATION = False + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["udp", "65.66.45.1", "100", "12.12.12.14", "200", "-nat_type", "dnat", "-twice_nat_id", "3"], obj=obj) + assert "Invalid ConfigDB. 
Error" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["udp", "65.66.45.1", "100", "12.12.12.14", "200", "-nat_type", "dnat"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["udp", "65.66.45.1", "100", "12.12.12.14", "200", "-twice_nat_id", "3"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["add"].commands["static"], ["udp", "65.66.45.1", "100", "12.12.12.14", "200"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + def test_remove_basic(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["remove"].commands["static"].commands["basic"], ["65.66.45.1", "12.12.12.14x"], obj=obj) + assert "Please enter a valid local ip address" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["remove"].commands["static"].commands["basic"], ["65.66.45.1x", "12.12.12.14"], obj=obj) + assert "Please enter a valid global ip address" in result.output + + @patch("config.nat.ConfigDBConnector.get_entry", mock.Mock(return_value={"local_ip": "12.12.12.14"})) + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=JsonPatchConflict)) + def test_remove_basic_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["remove"].commands["static"].commands["basic"], ["65.66.45.1", "12.12.12.14"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + def test_remove_udp(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["remove"].commands["static"].commands["udp"], ["65.66.45.1", "100", "12.12.12.14x", "200"], obj=obj) + assert "Please enter a valid local ip address" in result.output + + result = runner.invoke(config.config.commands["nat"].commands["remove"].commands["static"].commands["udp"], ["65.66.45.1x", "100", "12.12.12.14", "200"], obj=obj) + assert "Please enter a valid global ip address" in result.output + + @patch("config.nat.ConfigDBConnector.get_entry", mock.Mock(return_value={"local_ip": "12.12.12.14", "local_port": "200"})) + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=JsonPatchConflict)) + def test_remove_udp_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["remove"].commands["static"].commands["udp"], ["65.66.45.1", "100", "12.12.12.14", "200"], obj=obj) + assert "Invalid ConfigDB. 
Error" in result.output + + @patch("config.nat.ConfigDBConnector.get_table", mock.Mock(return_value={"sample_table_key": "sample_table_value"})) + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=JsonPatchConflict)) + def test_remove_static_all_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["remove"].commands["static"].commands["all"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_enable_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["feature"].commands["enable"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_disable_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["feature"].commands["disable"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_timeout_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["set"].commands["timeout"], ["301"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_tcp_timeout_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["set"].commands["tcp-timeout"], ["301"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_udp_timeout_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["set"].commands["udp-timeout"], ["301"], obj=obj) + assert "Invalid ConfigDB. 
Error" in result.output + + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_reset_timeout_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["reset"].commands["timeout"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_reset_tcp_timeout_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["reset"].commands["tcp-timeout"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output + + @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True)) + @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError)) + def test_reset_udp_timeout_yang_validation(self): + nat.ADHOC_VALIDATION = True + runner = CliRunner() + db = Db() + obj = {'config_db':db.cfgdb} + + result = runner.invoke(config.config.commands["nat"].commands["reset"].commands["udp-timeout"], obj=obj) + assert "Invalid ConfigDB. Error" in result.output diff --git a/tests/route_check_test.py b/tests/route_check_test.py index 4d93c74e2d..118e9eab56 100644 --- a/tests/route_check_test.py +++ b/tests/route_check_test.py @@ -277,17 +277,12 @@ def test_route_check(self, mock_dbs, test_num): with patch('sys.argv', ct_data[ARGS].split()), \ patch('route_check.subprocess.check_output') as mock_check_output: - check_frr_patch = patch('route_check.check_frr_pending_routes', lambda: []) + routes = ct_data.get(FRR_ROUTES, {}) - if FRR_ROUTES in ct_data: - routes = ct_data[FRR_ROUTES] + def side_effect(*args, **kwargs): + return json.dumps(routes) - def side_effect(*args, **kwargs): - return json.dumps(routes) - - mock_check_output.side_effect = side_effect - else: - check_frr_patch.start() + mock_check_output.side_effect = side_effect ret, res = route_check.main() expect_ret = ct_data[RET] if RET in ct_data else 0 @@ -299,8 +294,6 @@ def side_effect(*args, **kwargs): assert ret == expect_ret assert res == expect_res - check_frr_patch.stop() - def test_timeout(self, mock_dbs, force_hang): # Test timeout ex_raised = False diff --git a/tests/route_check_test_data.py b/tests/route_check_test_data.py index b8ba9c521a..7ed1eee41f 100644 --- a/tests/route_check_test_data.py +++ b/tests/route_check_test_data.py @@ -462,4 +462,22 @@ }, RET: -1, }, + "10": { + DESCR: "basic good one with IPv6 address", + ARGS: "route_check -m INFO -i 1000", + PRE: { + APPL_DB: { + ROUTE_TABLE: { + }, + INTF_TABLE: { + "PortChannel1013:2000:31:0:0::1/64": {}, + } + }, + ASIC_DB: { + RT_ENTRY_TABLE: { + RT_ENTRY_KEY_PREFIX + "2000:31::1/128" + RT_ENTRY_KEY_SUFFIX: {}, + } + } + } + }, } diff --git a/tests/sflow_test.py b/tests/sflow_test.py index 226e52ae5e..da03ff396e 100644 --- a/tests/sflow_test.py +++ b/tests/sflow_test.py @@ -3,6 +3,7 @@ import pytest from unittest import mock +from jsonpatch import 
diff --git a/tests/sflow_test.py b/tests/sflow_test.py
index 226e52ae5e..da03ff396e 100644
--- a/tests/sflow_test.py
+++ b/tests/sflow_test.py
@@ -3,6 +3,7 @@
 import pytest
 from unittest import mock
+from jsonpatch import JsonPatchConflict
 from click.testing import CliRunner
 from utilities_common.db import Db
 from mock import patch
@@ -193,6 +194,25 @@ def test_config_sflow_collector(self):
         assert result.output == show_sflow_output
         return
 
+    @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True))
+    @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError))
+    @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_set_entry", mock.Mock(side_effect=JsonPatchConflict))
+    def test_config_sflow_collector_invalid_yang_validation(self):
+        db = Db()
+        runner = CliRunner()
+        obj = {'db':db.cfgdb}
+
+        config.ADHOC_VALIDATION = False
+        result = runner.invoke(config.config.commands["sflow"].
+                               commands["collector"].commands["del"], ["prod"], obj=obj)
+        print(result.exit_code, result.output)
+        assert "Invalid ConfigDB. Error" in result.output
+
+        result = runner.invoke(config.config.commands["sflow"].
+                               commands["collector"].commands["add"],
+                               ["prod", "fe80::6e82:6aff:fe1e:cd8e", "--vrf", "mgmt"], obj=obj)
+        assert "Invalid ConfigDB. Error" in result.output
 
     @patch("validated_config_db_connector.device_info.is_yang_config_validation_enabled", mock.Mock(return_value=True))
     @patch("config.validated_config_db_connector.ValidatedConfigDBConnector.validated_mod_entry", mock.Mock(side_effect=ValueError))
diff --git a/tests/show_acl_test.py b/tests/show_acl_test.py
new file mode 100644
index 0000000000..1b2cdc60a9
--- /dev/null
+++ b/tests/show_acl_test.py
@@ -0,0 +1,95 @@
+import os
+import pytest
+from click.testing import CliRunner
+
+import acl_loader.main as acl_loader_show
+from acl_loader import *
+from acl_loader.main import *
+from importlib import reload
+
+root_path = os.path.dirname(os.path.abspath(__file__))
+modules_path = os.path.dirname(root_path)
+scripts_path = os.path.join(modules_path, "scripts")
+
+
+@pytest.fixture()
+def setup_teardown_single_asic():
+    os.environ["PATH"] += os.pathsep + scripts_path
+    os.environ["UTILITIES_UNIT_TESTING"] = "2"
+    os.environ["UTILITIES_UNIT_TESTING_TOPOLOGY"] = ""
+    yield
+    os.environ["UTILITIES_UNIT_TESTING"] = "0"
+
+
+@pytest.fixture(scope="class")
+def setup_teardown_multi_asic():
+    os.environ["PATH"] += os.pathsep + scripts_path
+    os.environ["UTILITIES_UNIT_TESTING"] = "2"
+    os.environ["UTILITIES_UNIT_TESTING_TOPOLOGY"] = "multi_asic"
+    from .mock_tables import mock_multi_asic_3_asics
+    reload(mock_multi_asic_3_asics)
+    from .mock_tables import dbconnector
+    dbconnector.load_namespace_config()
+    yield
+    os.environ["UTILITIES_UNIT_TESTING"] = "0"
+    os.environ["UTILITIES_UNIT_TESTING_TOPOLOGY"] = ""
+    from .mock_tables import mock_single_asic
+    reload(mock_single_asic)
+
+
+class TestShowACLSingleASIC(object):
+    def test_show_acl_table(self, setup_teardown_single_asic):
+        runner = CliRunner()
+        aclloader = AclLoader()
+        context = {
+            "acl_loader": aclloader
+        }
+        result = runner.invoke(acl_loader_show.cli.commands['show'].commands['table'], ['DATAACL_5'], obj=context)
+        assert result.exit_code == 0
+        # We only care about the third line, which contains the 'Active' status
+        result_top = result.output.split('\n')[2]
+        expected_output = "DATAACL_5 L3 Ethernet124 DATAACL_5 ingress Active"
+        assert result_top == expected_output
+
+    def test_show_acl_rule(self, setup_teardown_single_asic):
+        runner = CliRunner()
+        aclloader = AclLoader()
+        context = {
+            "acl_loader": aclloader
+        }
+        result = runner.invoke(acl_loader_show.cli.commands['show'].commands['rule'], ['DATAACL_5'], obj=context)
+        assert result.exit_code == 0
+        # We only care about the third line, which contains the 'Active' status
+        result_top = result.output.split('\n')[2]
+        expected_output = "DATAACL_5 RULE_1 9999 FORWARD IP_PROTOCOL: 126 Active"
+        assert result_top == expected_output
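+
+# The class below repeats the two checks against the three-ASIC mock tables;
+# there the status column renders a per-namespace dict such as
+# {'asic0': 'Active', 'asic2': 'Active'} instead of the single 'Active'
+# asserted above.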
+
+class TestShowACLMultiASIC(object):
+    def test_show_acl_table(self, setup_teardown_multi_asic):
+        runner = CliRunner()
+        aclloader = AclLoader()
+        context = {
+            "acl_loader": aclloader
+        }
+        result = runner.invoke(acl_loader_show.cli.commands['show'].commands['table'], ['DATAACL_5'], obj=context)
+        assert result.exit_code == 0
+        # We only care about the third line, which contains the 'Active' status
+        result_top = result.output.split('\n')[2]
+        expected_output = "DATAACL_5 L3 Ethernet124 DATAACL_5 ingress {'asic0': 'Active', 'asic2': 'Active'}"
+        assert result_top == expected_output
+
+    def test_show_acl_rule(self, setup_teardown_multi_asic):
+        runner = CliRunner()
+        aclloader = AclLoader()
+        context = {
+            "acl_loader": aclloader
+        }
+        result = runner.invoke(acl_loader_show.cli.commands['show'].commands['rule'], ['DATAACL_5'], obj=context)
+        assert result.exit_code == 0
+        # We only care about the third line, which contains the 'Active' status
+        result_top = result.output.split('\n')[2]
+        expected_output = "DATAACL_5 RULE_1 9999 FORWARD IP_PROTOCOL: 126 {'asic0': 'Active', 'asic2': 'Active'}"
+        assert result_top == expected_output
+
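Taken together, the yang-validation tests added throughout this patch follow a
single pattern: force the YANG-validation gate on, make the validated ConfigDB
write raise, and assert that the CLI surfaces the failure. A minimal,
self-contained sketch of that pattern (the command and helper below are
hypothetical stand-ins, not sonic-utilities code):

    import click
    from click.testing import CliRunner
    from unittest import mock

    def validated_mod_entry(table, key, data):
        # Stand-in for a validated ConfigDB write; the real helper talks to ConfigDB.
        pass

    @click.command()
    def enable():
        try:
            validated_mod_entry("NAT_GLOBAL", "Values", {"admin_mode": "enabled"})
        except ValueError as e:
            click.echo("Invalid ConfigDB. Error: {}".format(e))

    def test_enable_surfaces_validation_error():
        runner = CliRunner()
        # Force the write helper to fail, as the tests above do via @patch.
        with mock.patch(__name__ + ".validated_mod_entry", side_effect=ValueError):
            result = runner.invoke(enable)
        assert "Invalid ConfigDB. Error" in result.output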