Verify buffer priority groups for all platforms #4482

Merged
merged 5 commits on Oct 26, 2021

Conversation

stephenxs
Contributor

@stephenxs commented Oct 14, 2021

Description of PR

Summary:
Fixes # (issue)

Type of change

  • Bug fix
  • Testbed and Framework(new/improvement)
  • Test case(new/improvement)

Back port request

  • 201911

Approach

What is the motivation for this PR?

Verify buffer priority groups for non-Mellanox platforms as well.

Signed-off-by: Stephen Sun [email protected]

How did you do it?

  • On Mellanox platforms, the lossless buffer profile is applied on PG 3-4 only if the port is admin up (the buffer of admin-down ports is reclaimed)
  • On non-Mellanox platforms, the lossless buffer profile is applied on PG 3-4 of all interfaces regardless of their admin status (see the sketch after this list)
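A minimal sketch of how this per-platform expectation can be expressed in a pytest-style check. The `duthost` fixture and the redis-cli table/field names follow sonic-mgmt and SONiC CONFIG_DB conventions; the helper name and the `asic_type` argument are illustrative assumptions, not the PR's actual code:

```python
def lossless_pg_expected(duthost, port, asic_type):
    """Return True if PG 3-4 of `port` is expected to carry the lossless profile."""
    # CONFIG_DB is redis DB 4; admin_status lives in the PORT table.
    admin_status = duthost.shell(
        'redis-cli -n 4 hget "PORT|{}" admin_status'.format(port))['stdout']
    if asic_type == 'mellanox':
        # Mellanox reclaims the buffer of admin-down ports, so the lossless
        # profile is expected only when the port is admin up.
        return admin_status == 'up'
    # Other platforms keep the lossless profile regardless of admin status.
    return True
```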

How did you verify/test it?

It was verified on a Mellanox platform. An old image without the buffer-reclaiming feature was used to simulate a non-Mellanox platform.

Any platform specific information?

Supported testbed topology if it's a new test case?

Documentation

stephenxs and others added 2 commits September 27, 2021 01:42
…anager in 201911 (#3889)

### Description of PR
Summary:
Add test case for traditional buffer manager for fundamental functionalities and reclaiming unused buffer.

### Approach
#### What is the motivation for this PR?
Add test case for traditional buffer manager for fundamental functionalities and reclaiming unused buffer.

#### How did you do it?
1. For all ports in the config_db:
   - Check that no lossless buffer PG is configured on an admin-down port (a sketch of this check follows after this list)
   - Check whether the lossless PG aligns with the port's speed and cable length
   - If the name-to-OID maps exist for port and PG, check whether the information in ASIC_DB aligns with that in CONFIG_DB
   - If a lossless profile hasn't been checked yet, check whether the lossless profile in CONFIG_DB aligns with
     - pg_profile_lookup.ini according to speed and cable length
     - the information in ASIC_DB
2. Shut down a port and check whether the lossless buffer PG has been removed
3. Start up the port and check whether the lossless PG has been re-added.
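A hedged sketch of the first check, assuming the sonic-mgmt `duthost` fixture; the helper name and the way the expected profile is passed in are illustrative, not the test's actual code:

```python
def check_port_lossless_pg(duthost, port, expected_profile):
    """Verify the CONFIG_DB lossless PG for one port against its admin status."""
    admin_status = duthost.shell(
        'redis-cli -n 4 hget "PORT|{}" admin_status'.format(port))['stdout']
    pg_profile = duthost.shell(
        'redis-cli -n 4 hget "BUFFER_PG|{}|3-4" profile'.format(port))['stdout']
    if admin_status != 'up':
        # An admin-down port should have no lossless buffer PG configured.
        assert not pg_profile, \
            "Unexpected lossless PG on admin-down port {}".format(port)
    else:
        # The expected profile is derived from the port's speed and cable
        # length, e.g. a name like pg_lossless_100000_5m_profile (illustrative).
        assert pg_profile == expected_profile, \
            "Lossless PG profile mismatch on {}: {} != {}".format(
                port, pg_profile, expected_profile)
```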

#### How did you verify/test it?
Run regression test.
@stephenxs requested a review from a team as a code owner October 14, 2021 00:58
Signed-off-by: Stephen Sun <[email protected]>
@stephenxs closed this Oct 14, 2021
@stephenxs reopened this Oct 14, 2021
@stephenxs changed the title from "[201911] Verify buffer priority groups for all platforms" to "Verify buffer priority groups for all platforms" on Oct 15, 2021
@stephenxs
Contributor Author

/azpw run

@neethajohn
Contributor

/azp run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

Signed-off-by: Stephen Sun <[email protected]>
port_to_shutdown = admin_up_ports.pop()
expected_profile = duthost.shell('redis-cli -n 4 hget "BUFFER_PG|{}|3-4" profile'.format(port_to_shutdown))['stdout']
try:
    # Shutdown the port and check whether the lossless PG has been removed
Contributor
update comment.. lossless PG has been removed on Mellanox

Contributor Author
Fixed. Adjusted comment in other places as well.
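For context, a hedged sketch of the shutdown check discussed in this thread. It only reuses what the snippet above already shows (CONFIG_DB queries via `duthost.shell`); the helper name, the polling loop, and the commented usage lines are illustrative assumptions, not the test's actual code:

```python
import time

def wait_for_lossless_pg_removed(duthost, port, timeout=30, interval=2):
    """Poll CONFIG_DB until the lossless PG of `port` disappears (Mellanox case)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        pg = duthost.shell(
            'redis-cli -n 4 hget "BUFFER_PG|{}|3-4" profile'.format(port))['stdout']
        if not pg:
            return True
        time.sleep(interval)
    return False

# Illustrative usage mirroring the snippet above:
# duthost.shell('config interface shutdown {}'.format(port_to_shutdown))
# assert wait_for_lossless_pg_removed(duthost, port_to_shutdown)
```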

@neethajohn
Contributor

neethajohn commented Oct 21, 2021

Depends on sonic-net/sonic-swss#1837. Will merge after that PR is in

Signed-off-by: Stephen Sun <[email protected]>