
Harbor FAQs


Installation

  1. When installing Harbor, it reports the following error:

    Traceback (most recent call last):
      File "./prepare", line 110, in <module>
        validate(rcp, args)
      File "./prepare", line 31, in validate
        raise Exception("Error: The path for certificate: %s is invalid" % cert_path)
    Exception: Error: The path for certificate: /data/cert/server.crt is invalid
    Root cause: This error occurs when you enable https in harbor.cfg but the "/data" directory does not exist or the Harbor CA certificate and server certificate have not been created.

    Solution: Make sure you have a /data directory and follow the steps in [enable https](https://github.com/vmware/harbor/blob/master/docs/configure_https.md)
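    A minimal sketch of preparing the certificate path, assuming the file names from the error message above (adjust them to whatever paths you configured in harbor.cfg):

    # Create the directory Harbor expects and place the server certificate and key there
    mkdir -p /data/cert
    cp server.crt server.key /data/cert/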

  2. How to customize the port that Harbor listens on?

    [A] Please refer to the [installation guide](https://github.com/vmware/harbor/blob/master/docs/installation_guide.md#configuring-harbor-listening-on-a-customized-port).

  3. How to initialize the Harbor DB when using an external database?

    [A] There is no need to initialize the database manually.

  4. Can't find the download certificate button

    [A] Copy the CA certificate to /data/ca_download/ca.crt, then the download link is visible in the web console.
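    A minimal sketch, assuming the default /data volume and that your CA file is named ca.crt:

    # Place the CA certificate where the portal looks for the download link
    mkdir -p /data/ca_download
    cp /path/to/ca.crt /data/ca_download/ca.crt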

  5. Harbor with internal certs enabled fails to start up after upgrading from a version before v2.2.0 to a newer version.

    [A] This may be caused by the Golang version being upgraded to 1.15, which requires the SAN extension to be included in the certificate file. You need to add a SAN to your cert files. For more information, please refer to this.
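    A minimal sketch of regenerating a self-signed certificate that contains a SAN entry, assuming OpenSSL 1.1.1+ and placeholder host names:

    # Issue a certificate whose SAN matches the Harbor hostname/IP
    openssl req -new -x509 -days 365 -key server.key -out server.crt \
      -subj "/CN=harbor.example.com" \
      -addext "subjectAltName=DNS:harbor.example.com,IP:192.168.0.1"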

Upgrade

  1. Harbor upgrade failed with the error message: failed to migrate the database, error: Dirty database version 80. Fix and force version.

[A] First, reset the dirty flag to false and roll the schema version back to the previous version. For example, if you are upgrading from 2.6.0 to 2.7.0, you should run the following command against the registry database:

psql -U postgres -c "update schema_migrations set version = 90, dirty = false where 1 = 1" registry

Where 90 is the schema version number for 2.6.0. You can find the version number in the make/migrations/postgresql folder, in the <schema_version>_<harbor_version>_schema.up.sql file names.

Then, you could try the following steps to upgrade manually:

If Harbor is deployed with Helm, you could try to upgrade Harbor with helm upgrade and enableMigrateHelmHook=true; it will run the migration job before the upgrade, as shown in the sketch below.
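A minimal sketch, assuming a release named harbor, the goharbor chart repo added as "harbor", and your existing values.yaml:

# Run the database migration job as a pre-upgrade hook, then upgrade
helm upgrade harbor harbor/harbor -n harbor -f values.yaml --set enableMigrateHelmHook=true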

If Harbor is deployed with docker-compose, you could try to run the migration scripts manually with a SQL CLI tool.

To upgrade the database schema, copy and execute the content of the upgrade SQL files, named <schema_version>_<harbor_version>_schema.up.sql; the versions must cover all versions between the current version and the target version. For example, if you are upgrading from 2.6.0 to 2.7.0, you should run the SQL in the following files, one after another (a sketch of applying a file is shown below): https://github.com/goharbor/harbor/tree/main/make/migrations/postgresql/0091_2.6.2_schema.up.sql and https://github.com/goharbor/harbor/tree/main/make/migrations/postgresql/0100_2.7.0_schema.up.sql
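A minimal sketch of applying one of those files against the built-in database, assuming the default harbor-db container, the registry database, and the 0100_2.7.0 file from the example above:

# Copy the migration file into the database container and execute it
docker cp 0100_2.7.0_schema.up.sql harbor-db:/tmp/0100_2.7.0_schema.up.sql
docker exec -it harbor-db psql -U postgres -d registry -f /tmp/0100_2.7.0_schema.up.sql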

  2. Harbor upgrade failed with the following error message in core.log: database "registry" does not exist (SQLSTATE 3D000)

[A] Please check whether the database upgrade was successful by comparing the data folders /data/database/pg13 and /data/database/pg14; if it was successful, their sizes should be on the same scale. When Harbor upgrades to a new version, it may also upgrade the database to a new version, for example migrating /data/database/pg13 to /data/database/pg14. If there is not enough free disk space, the migration fails and the sizes of /data/database/pg13 and /data/database/pg14 differ greatly. To solve this issue, stop the Harbor instance, resize the disk, remove the /data/database/pg14 folder, and start the Harbor instance again; the built-in migrator will migrate /data/database/pg13 to /data/database/pg14 again. A quick size check is sketched below.
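A minimal sketch of the size check, assuming the default /data volume:

# Compare the on-disk sizes of the old and new data directories
du -sh /data/database/pg13 /data/database/pg14
# Confirm there is enough free space before retrying the migration
df -h /data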

Usage

Modify the max connections of the database

For versions less than v2.0.2

Postgres is the database used by Harbor; its default max_connections is 100. This value is sometimes too small and may cause the "pq: sorry, too many clients already" issue. The following is a workaround to manually raise max_connections to 1000.

Installed by installer (docker-compose)

  1. Go to the directory of the installer
  2. Modify the max_connections parameter in /data/database/postgresql.conf
max_connections=1000
  3. Restart the postgresql container
sudo docker-compose restart postgresql
  4. Ensure that max_connections has changed to 1000
sudo docker-compose exec postgresql psql -c "SHOW max_connections"
 max_connections
-----------------
 1000
(1 row)

Installed by Helm

  1. Fetch the postgresql.conf from the database pod
NAMESPACE=the-namespace-of-harbor
POD_NAME=`kubectl -n $NAMESPACE get pod -l "component=database,app=harbor" -o name`
kubectl -n $NAMESPACE exec $POD_NAME -- cat /var/lib/postgresql/data/postgresql.conf > postgresql.conf
  2. Modify the max_connections parameter in the postgresql.conf
max_connections=1000
  3. Write the postgresql.conf back to the database pod
NAMESPACE=the-namespace-of-harbor
POD_NAME=`kubectl -n $NAMESPACE get pod -l "component=database,app=harbor" -o name`
cat postgresql.conf | kubectl -n $NAMESPACE exec $POD_NAME -i -- tee /var/lib/postgresql/data/postgresql.conf
  4. Delete the pod of the database and wait for Kubernetes to schedule a new pod
kubectl -n $NAMESPACE delete $POD_NAME
kubectl -n $NAMESPACE wait --for=condition=ready $POD_NAME --timeout=60s
  5. Ensure that max_connections has changed to 1000
kubectl -n $NAMESPACE exec $POD_NAME -- psql -c "SHOW max_connections"
 max_connections
-----------------
 1000
(1 row)

For versions >= v2.0.2

Set an environment variable for harbor-db

POSTGRES_MAX_CONNECTIONS = 1000

Then restart the harbor-db container or pod; it will pick up the configuration from the environment variable.
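A minimal sketch of verifying the change after the restart, assuming a docker-compose deployment with the default postgresql service name:

# The database should now report the value set in POSTGRES_MAX_CONNECTIONS
sudo docker-compose exec postgresql psql -c "SHOW max_connections"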

Duplicate repository name in the same project

As described in issue https://github.com/goharbor/harbor/issues/17544, a user found two repositories with the same name in a project, even though there is a unique constraint on the repository name in the database.

[Root cause]: PostgreSQL uses glibc to sort string data and build indexes, and the unique constraint relies on the index to check whether the current key exists. glibc 2.28 introduced a change in sorting order. The upgrade from Harbor v2.2.x to v2.3.0 happens to span this change, which means indexes created before v2.3.0 may be invalid after upgrading to v2.3.0, and Postgres will keep using the invalid indexes until you delete them and reindex manually. An invalid index makes the unique constraint ineffective, especially when the string contains special characters such as -, _, $, #, % mixed with letters. To verify whether a repository name is duplicated, log in to the Harbor database and run the following SQL:

		SELECT name, count(*) cnt
		FROM repository
		GROUP BY name
		HAVING COUNT(*) > 1

To solve this issue, clean up the duplicated records with this workaround: https://github.com/goharbor/harbor/issues/17544#issuecomment-1257385150

This may impact the latest Harbor version and all tables with unique-constraint columns, because the invalid indexes persist until you delete them and reindex manually, as sketched below.
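After the duplicated records are cleaned up, a minimal sketch of the manual reindex, assuming the default harbor-db container and the registry database (REINDEX blocks writes to the table, so run it in a maintenance window):

# Rebuild the indexes that back the unique constraint on repository names
docker exec -it harbor-db psql -U postgres -d registry -c "REINDEX TABLE repository;"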

Replication

  1. What happens if I push an image with the same name to Harbor with replication enabled?

    [A] It will overwrite the images on both the source and destination Harbor servers.

  2. Got 504 Gateway Time-out error when replicating big images.

    [A] Please refer to issue 3446.

  3. Can Harbor send webhook notifications like https://docs.docker.com/docker-hub/webhooks/ shows?

    [A] No, Harbor does not support webhook notification of registry yet.

  4. If some replication tasks are hung (stuck in the pending/running status), how can I fix them?

    [A] There are several approaches you can try:

    • Via the Harbor portal: Log in to Harbor as an administrator and, under the replication section, select the numeric ID of the replication execution that has hung tasks. Click the Stop button to stop the tasks.
    • If the above approach cannot resolve the "hung" issue, you can connect to the database and directly update the status of the hung tasks.
    # Make sure harbor is still running
    # Enter the db container
    docker exec -it harbor-db /bin/bash
    # Connect to the database
    psql -U postgres
    # Change database to registry
    \c registry
    # Update the database records
    UPDATE replication_task SET status = 'Stopped' WHERE status = 'InProgress';

Vulnerability scan

  1. Can I use the scan functionality when Harbor has no internet access?

    [A] You can, but you need to manually update the vulnerability database. Please refer to this wiki: [Import Clair vulnerability data](https://github.com/vmware/harbor/blob/master/docs/import_vulnerability_data.md)

  2. "Scan All" has been issued for a long while, but some scanning tasks are still not completed. How can I fix it?

    [A] If the "Scan All" button is not disabled, you can click it again; the new scanning processes will override the previous ones. If the "Scan All" button is disabled, you can also trigger a new scan-all process by issuing an API call:

    curl -X POST -u <USER>:<PASSWORD> -H "Content-type: application/json" -H "X-Xsrftoken:xtuwrDBPMSbkNR0r7rchHdpjX57o26By" -k -i -d '{"schedule":{"type":"Manual"}}'  https://<HOST>/api/system/scanAll/schedule
    
  3. Can I disable vulnerability scanning of language-specific packages and/or associated library packages?

    [A] If you deploy a Harbor instance via harbor-helm, you can set trivy.vulnType to os only, instead of the default value os,library, so that language-specific packages and associated library packages are not scanned for vulnerabilities, as sketched below.
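    A minimal sketch, assuming a Helm release named harbor in namespace harbor and that all other values should be kept:

    # Restrict Trivy to OS package vulnerabilities only
    helm upgrade harbor harbor/harbor -n harbor --reuse-values --set trivy.vulnType="os"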

  4. How to configure Trivy scan when Harbor is behind a proxy?

    [A] You may need to add the following URLs, which Trivy scanning depends on, to the proxy's allow list:

    • ghcr.io
    • search.maven.org; alternatively, set offline_scan: true to avoid sending API requests to identify dependencies.

Pulling and pushing images

  1. Why can't I push the image 192.168.0.1/hello-world:latest to Harbor?

    [A] A repository name in Harbor needs at least two path components (project/repository), so tagging the image as 192.168.0.1/project_name/hello-world:latest should fix this; create the project in the web console first. See the sketch below.
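    A minimal sketch, assuming an existing project named library on the registry 192.168.0.1:

    # Tag the image under a project namespace, then push it
    docker tag hello-world:latest 192.168.0.1/library/hello-world:latest
    docker push 192.168.0.1/library/hello-world:latest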

API

  1. How to access the APIs of the Docker registry?
    [A] First you need to request a token:

     curl -i -k -u <username>:<password> "https://<harbor_host_or_ip>/service/token?service=harbor-registry&scope=repository:library/mysql:pull,push"
    

    Then you can use the token to issue registry API:

     curl -i -k -H "Content-Type: application/json" -H "Authorization: Bearer longlongtokenxxxxx" -X GET https://10.192.212.107/v2/library/mysql/5.6.35/manifests/latest
    

    For details about the token, please refer to the guide https://github.com/docker/distribution/blob/master/docs/spec/auth/token.md.

  2. How to issue an API call in Harbor with version >= 1.10?

[A] XSRF protection is enabled since Harbor v1.10. From this version on, when you issue an API call you can proceed as follows:

# Send API call
curl -X POST -u <USER>:<PASSWORD> -H "Content-type: application/json" -k -i -d '{"data": "example"}'  https://<HOST>/api/system/scanAll/schedule

# If 403 code is returned, extract the "_xsrf" value from the "Set-Cookie" header
# e.g: Set-Cookie: _xsrf=eHR1d3JEQlBNU2JrTlIwcjdyY2hIZHBqWDU3bzI2Qnk=|1576117983208824582|c861b53e3e4fdcfc2f8f5b6c38cc14ed0a1272a8; Expires=Thu, 12 Dec 2019 03:33:03 UTC; Max-Age=3600; Path=/; Secure

# As the _xsrf value is encoded with base64, decode it with base64 and get the raw value
# e.g: base64-decode "eHR1d3JEQlBNU2JrTlIwcjdyY2hIZHBqWDU3bzI2Qnk=" to xtuwrDBPMSbkNR0r7rchHdpjX57o26By

# Add "X-Xsrftoken" header with the decoded value and issue the API call again
curl -X POST -u <USER>:<PASSWORD> -H "Content-type: application/json" -H "X-Xsrftoken:xtuwrDBPMSbkNR0r7rchHdpjX57o26By" -k -i -d '{"data": "example"}'  https://<HOST>/api/system/scanAll/schedule
  3. My Harbor is configured to use OIDC for authentication, how do I access Harbor's API?

[A] In this case, you have to use the OIDC ID token as a bearer token to access Harbor's API.
Keycloak example, please refer to:
https://github.com/goharbor/harbor/issues/10597#issuecomment-603159112
Azure AD example, please refer to:
https://github.com/goharbor/harbor/issues/9193#issuecomment-1317557916
Dex example, please refer to:
https://dexidp.io/docs/using-dex/
Or set the Harbor log level to debug and get the bearer token from core.log, where it appears as:

Raw ID token for verification: <bearer token>
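A minimal sketch of calling the API with that token, assuming a Harbor v2.x API path and a placeholder token value:

# Use the OIDC ID token as a bearer token
curl -k -H "Authorization: Bearer <ID_TOKEN>" https://<HOST>/api/v2.0/projects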

Authentication

  1. How to change auth mode when the auth_mode is not editable?

    [A] Execute the following commands to make the auth_mode editable:

    docker exec -it harbor-db bash
    psql -U postgres
    \c registry
    select * from harbor_user;
    delete from harbor_user where user_id > 2;
    

    Refresh the Harbor web console -> Configuration, and then you can change the auth_mode.

  2. How to reset admin password?

    [A] The initial admin password can be found in harbor.yml,

    harbor_admin_password: <initial_admin_password>
    

    If the administrator has updated the password in the web console and forgotten it:

    Make sure the Harbor server is running, then clear the stored password in the database:

    docker exec -it harbor-db bash
    psql -U postgres
    \c registry
    update harbor_user set salt='', password='' where user_id = 1; 
    

    Restart Harbor

    docker-compose down -v
    docker-compose up -d
    

    Then you can log in with the initial admin password.

    • If Harbor is installed in Kubernetes, you can reset the password with the same approach; just replace the docker commands with kubectl commands and restart the harbor-core pod after updating the harbor_user table.
  3. How to configure Azure Active Directory as the OIDC provider?

    [A] Refer to this issue: https://github.com/goharbor/harbor/issues/9193#issuecomment-1317557916

  4. How to configure VMware Workspace One as the OIDC provider?

    [A] Refer to this post: https://www.catbird.se/2023/08/harbor-authentication-oidc-with-vmware-workspace-one-access/

LDAP

  1. When the auth mode is changed to ldap_auth, all LDAP users can log in to Harbor. How can I allow only users in a specific group to log in?

    [A] You can add an LDAP filter like this:

    (&(objectclass=person)(memberof=CN=harbor_users,OU=sample,OU=vmware,DC=harbor,DC=com))

    CN=harbor_users,OU=sample,OU=vmware,DC=harbor,DC=com is the LDAP group DN; with this filter, only LDAP users in the group harbor_users can log in.

  2. When the LDAP UID setting is changed, some LDAP users cannot log in

    [A] Because those LDAP users previously logged in with a different UID, some user information is cached in the Harbor DB. You can clean up the cached user information and try to log in again.

    docker exec -it harbor-db bash
    psql -U postgres
    \c registry
    select * from harbor_user;
     delete from harbor_user where user_id > 2;
    
  3. How to add a CA cert for the LDAP server or another Harbor server?

    [A] After installing Harbor, there is a directory common/config/shared/trust-certificates. Copy the LDAP certificate, for example ldap_ca.crt, to this directory and restart Harbor. The certificate is added to the trust store of the core container, and then you can enable "Verify Cert" in the LDAP configuration. A sketch is shown below.
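    A minimal sketch, assuming a docker-compose installation and a CA file named ldap_ca.crt (run from the Harbor installation directory):

    # Add the CA to the shared trust store and restart Harbor so the core container picks it up
    cp ldap_ca.crt common/config/shared/trust-certificates/
    docker-compose restart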

CVE-2019-16097

  1. How can I workaround the CVE-2019-16097?

    [A] The system admin can disable self-registration via either the UI or the API.

    • UI: Configuration -> Authentication -> Allow Self-Registration (uncheck the checkbox)
    • API: Use the configuration API to update self-registration, as sketched below.
    PUT /api/configurations
    {"self_registration":false}
    

Stuck in read-only mode

Before the Harbor v2.1 release, Harbor is put into read-only mode when a GC task starts and changed back to normal mode when GC completes. The GC task can also be set as a scheduled task that is executed periodically. Sometimes the GC task running in jobservice fails to report the status change to the Harbor core, and then the read-only mode cannot be changed back. The whole Harbor instance stays stuck in read-only mode while the GC task keeps running periodically.

  1. A quick workaround solution

Disable the read-only mode in the system configuration page immediately.

  2. Workaround solution at the jobservice side

   1. Enter your Redis container: docker exec -ti redis /bin/bash

   2. Connect to Redis: redis-cli -n 2 (2 is the db index for jobservice; you can find the setting in the installation configuration file harbor.yml)

   3. Find all the schedule policies related to IMAGE_GC with the command zrange {harbor_job_service_namespace}:period:policies 0 -1

     • Delete them with the command ZREMRANGEBYLEX {harbor_job_service_namespace}:period:policies [INDEX] [INDEX] NOTE: [INDEX] is the entry you found at step 3
   4. Find all the tasks that have been scheduled for execution and are related to IMAGE_GC with the command zrange {harbor_job_service_namespace}:scheduled 0 -1

     • Delete them with the command ZREMRANGEBYLEX {harbor_job_service_namespace}:scheduled [INDEX] [INDEX] NOTE: [INDEX] is the entry you found at step 4
   5. Reset your GC schedule at the GC management page if necessary

Notary Key Not found

If you're experiencing the key not found error described in https://github.com/goharbor/harbor/issues/14932, in the particular path mentioned by https://github.com/goharbor/harbor/commit/4017e995b7ada3bcb54ba30b2a86a7559cbcf1a9, please follow these steps to resolve it:

  1. Remove gun from notary DB.
docker exec -it harbor-db bash
psql -U postgres
\c notarysigner
delete from private_keys where gun = '${host}/${project}/${repository}';

\c notaryserver
delete from tuf_files where gun = '${host}/${project}/${repository}';
  2. Remove the local tuf cache
rm -rf ${notary_cache_directory}/tuf/${host}/${project}/${repository}/*

By default, notary is using ~/.docker/trust as the local cache directory.

Trivy in Air-Gapped Environment

Trivy Version >= v0.23.0

  1. Install Harbor with skip_update=true and offline_scan=true in the trivy configuration.
  2. Download the vulnerability database
TRIVY_TEMP_DIR=$(mktemp -d)
trivy --cache-dir $TRIVY_TEMP_DIR image --download-db-only
chmod o+r $TRIVY_TEMP_DIR/db/metadata.json
chmod o+r $TRIVY_TEMP_DIR/db/trivy.db
  3. Put the DB files in Trivy's cache directory

For the harbor installed by the Harbor offline-installer

docker exec -u scanner trivy-adapter mkdir -p /home/scanner/.cache/trivy/db/
docker cp $TRIVY_TEMP_DIR/db/metadata.json trivy-adapter:/tmp/metadata.json
docker cp $TRIVY_TEMP_DIR/db/trivy.db trivy-adapter:/tmp/trivy.db
docker exec -u scanner trivy-adapter cp /tmp/metadata.json /home/scanner/.cache/trivy/db/metadata.json
docker exec -u scanner trivy-adapter cp /tmp/trivy.db /home/scanner/.cache/trivy/db/trivy.db

For the harbor installed by the harbor-helm

NAMESPACE=the-namespace-of-harbor
POD_NAME=`kubectl -n $NAMESPACE get pod -l "component=trivy" -o name`
kubectl -n $NAMESPACE exec $POD_NAME -- mkdir -p /home/scanner/.cache/trivy/db/ 
cat $TRIVY_TEMP_DIR/db/metadata.json | kubectl -n $NAMESPACE exec $POD_NAME -i -- tee /home/scanner/.cache/trivy/db/metadata.json 1>/dev/null
cat $TRIVY_TEMP_DIR/db/trivy.db | kubectl -n $NAMESPACE exec $POD_NAME -i -- tee /home/scanner/.cache/trivy/db/trivy.db 1>/dev/null

Trivy Version <= v0.22.0

The Trivy scanner can be used in air-gapped environments. The steps are as follows.

  1. Install Harbor with skip_update=true and offline_scan=true in the trivy configuration.
  2. Download the vulnerability database
$ wget https://github.com/aquasecurity/trivy-db/releases/download/v1-2023020812/trivy-offline.db.tgz
$ tar xvf trivy-offline.db.tgz
x trivy.db
x metadata.json
$ chmod o+r trivy.db
$ chmod o+r metadata.json
  3. Put the DB files in Trivy's cache directory

For the harbor installed by the installer

$ docker exec -u scanner trivy-adapter mkdir -p /home/scanner/.cache/trivy/db/
$ docker cp metadata.json trivy-adapter:/tmp/metadata.json
$ docker cp trivy.db trivy-adapter:/tmp/trivy.db
$ docker exec -u scanner trivy-adapter cp /tmp/metadata.json /home/scanner/.cache/trivy/db/metadata.json
$ docker exec -u scanner trivy-adapter cp /tmp/trivy.db /home/scanner/.cache/trivy/db/trivy.db

For the harbor installed by the harbor-helm

$ NAMESPACE=the-namespace-of-harbor
$ POD_NAME=`kubectl -n $NAMESPACE get pod -l "component=trivy" -o name`
$ kubectl -n $NAMESPACE exec $POD_NAME -- mkdir -p /home/scanner/.cache/trivy/db/ 
$ cat metadata.json | kubectl -n $NAMESPACE exec $POD_NAME -i -- tee /home/scanner/.cache/trivy/db/metadata.json 1>/dev/null
$ cat trivy.db | kubectl -n $NAMESPACE exec $POD_NAME -i -- tee /home/scanner/.cache/trivy/db/trivy.db 1>/dev/null

Chartmuseum

  1. Failed to delete the existing chart

When deleting an existing chart, it fails with the error: "fail to get chart version: improper constraint: xxx-xxx-xxx". This issue often happens when the chart version doesn't follow semver; such a chart can be uploaded, but can't be removed. The workaround to remove these charts:

# Stop Harbor

cd /data/chart_storage/<project the chart was uploaded to>
# Remove the chart files
rm <chart files>.tgz
# Remove the index-cache.yaml; it will be regenerated on the next start
rm -rf /data/chart_storage/<project the chart was uploaded to>/index-cache.yaml
# Clean up the redis cache
rm /data/redis/*

# Start Harbor
# The deleted charts should now be gone from Harbor; the remaining charts should still be visible.

Robot Account

  1. Failed to edit a System Robot Account with all-projects coverage.

A regression introduced in Harbor v2.5.3 causes updates to a System Robot Account with all-projects access to fail. You can use the following workaround to recreate your System Robot Account without impacting its usage.

   1. Note the name and secret of the System Robot Account you want to edit.
   2. Delete this System Robot Account.
   3. Create a new System Robot Account, enter your previous System Robot Account name, and set the fields (expiration time, description) you want to edit.
   4. Refresh the secret of this System Robot Account with the previous secret.
   5. This does not affect your System Robot Account usage because the name and secret are unchanged.

Create index for Harbor tables to improve performance

-- task
CREATE INDEX IF NOT EXISTS idx_task_job_id ON task (job_id);
-- execution
CREATE INDEX IF NOT EXISTS idx_execution_vendor_type_vendor_id ON execution (vendor_type, vendor_id);
CREATE INDEX IF NOT EXISTS idx_execution_start_time ON execution (start_time);
-- audit_log
CREATE INDEX IF NOT EXISTS idx_audit_log_project_id ON audit_log (project_id);
-- artifact
-- CREATE INDEX IF NOT EXISTS idx_artifact_repository_name ON artifact (repository_name);
-- or
CREATE INDEX CONCURRENTLY idx_artifact_repository_name ON artifact USING hash (repository_name);
CREATE INDEX IF NOT EXISTS idx_artifact_repository_id ON artifact (repository_id);
CREATE INDEX IF NOT EXISTS idx_artifact_project_id ON artifact (project_id);
-- removing this index has drawbacks
DROP INDEX IF EXISTS idx_audit_log_op_time;

Job Service

  1. There are too many jobs in the pending/running state, how can I clean them up?
   1. Stop all the jobs that have a schedule, such as tag retention, image scan and GC jobs.
   2. Mark pending and running tasks as Error:
      > docker exec -it harbor-db bash
      > psql -U postgres -d registry
      > update task set status = 'Error', status_code = 3 where status in ('Pending', 'Running') and vendor_type = '<job_type>';
      > update execution set status = 'Stopped' where status = 'Running' and vendor_type = '<job_type>';
   3. Back up the redis files to another directory and delete them, then restart Harbor:
      > docker-compose down -v
      > cp -r /data/redis/* /data/redis_backup/
      > rm -rf /data/redis/*
      > docker-compose up -d
  2. Why does the GC job fail when the execution time exceeds 24 hours?

Because the job service is based on the gocraft/work framework, which expires the job status every 24 hours. The job keeps running after it exceeds 24 hours, but it becomes invisible in the job service dashboard. To avoid this issue, you need to extend the limit, for example from 24 hours to 168 hours.

Update common/config/jobservice/env to add the following environment variable

MAX_JOB_DURATION_SECONDS=604800

Update the common/config/jobservice/config.yml

reaper:
  max_update_hours: 168

Proxy Cache

Why can't I create a tag after pulling an image from the proxy cache?

Because the Harbor proxy relies on a HEAD request to link the digest and the tag. If the tag and its digest information are already cached locally, the client won't send the HEAD request to the server, so Harbor cannot link the tag and the digest.

Why doesn't it count the pull requests from Kubernetes?

Sometimes when deploying an application in Kubernetes, why is the repository's pull count not updated?

[A] It is due to the implementation of the container runtime: it first sends a HEAD request to the registry, gets the digest of the manifest from the response, and if there is an image in the local cache with the same digest, it uses the cached image and skips the pull; that is why the pull count is not updated. Only GET requests for the manifest update the pull count; HEAD requests have no impact on it. The difference is sketched below.
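A minimal sketch of the difference with curl, using placeholder credentials, host, and repository:

# HEAD request for the manifest: returns the digest, does not increase the pull count
curl -I -k -u <USER>:<PASSWORD> -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://<HOST>/v2/<project>/<repo>/manifests/<tag>
# GET request for the manifest: this is what increases the pull count
curl -k -u <USER>:<PASSWORD> -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://<HOST>/v2/<project>/<repo>/manifests/<tag>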

External Redis username-password usage

There's a known issue that the upstream distribution (v2.8.2) does not support Redis auth via username and password. See more details in issue 18892.

This issue does not affect Harbor functionality such as image pull/push, scan, or GC, but it has a minor impact on distribution push/pull performance, especially with cloud storage like S3, when you are using an external Redis and have configured a username. If you care about registry performance, follow the guidance below to set up your Redis server so that you can bypass this limitation; otherwise you can ignore it.

You need to guarantee that the requirepass set in redis.conf is the same as the password set for the ACL user, and that you can connect to the Redis server successfully.

## redis.conf
requirepass foobar

## set user password same as requirepass
acl setuser virginia >foobar

## test connection
redis-cli -h <host> -p 6379 -a foobar
10.**.**.**:6379> ping
PONG

redis-cli -h <host> -p 6379 --user virginia --pass foobar
10.**.**.**:6379> ping
PONG

Cosign Signature Accessory

There is a known issue 19788 before Harbor v2.9.1 (fixed in v2.9.2): during replication, the cosign signature may land before its subject artifact, which leaves the signature unrelated to its subject and not showing on the UI properly. This has been fixed in v2.9.2 and later releases, which support pushing the accessory and subject in either order (PR-19906). You need to check and manually correct the data if the issue occurred in your instance.

  • Step 1: Check the artifact_accessory table in the database for rows whose subject_artifact_id is 0
docker exec -it <harbor-db-container> /bin/bash

postgres [ / ]$ psql -d registry

registry=# select artifact_id, subject_artifact_id, digest, subject_artifact_digest from artifact_accessory where subject_artifact_id=0;
 artifact_id | subject_artifact_id |                                 digest                                  |                         subject_artifact_digest
-------------+---------------------+-------------------------------------------------------------------------+-------------------------------------------------------------------------
          51 |                   0 | sha256:797c18b8b1cc884c958a66083db7a61e76d879a2f13e1b2c325ef789c2966490 | sha256:af6e931e64717b56b1dbedfabe1a55b7847565535db1de6be106de9396777209
          52 |                   0 | sha256:8de19060454181e8357d3810a30cd091e7c5b7c01873606847719a0da2288da9 | sha256:21e877678da29c1ad4411143639e0d3bcfe5dfd243b97ed5e1a9c31bae6aa11d
(2 rows)

  • Step 2: Run the following SQL to correct the data
DO $$
DECLARE
    acc RECORD;
    art RECORD;
BEGIN
    FOR acc IN SELECT * FROM artifact_accessory where subject_artifact_id = 0
    LOOP
        SELECT * INTO art from artifact where digest = acc.subject_artifact_digest and repository_name = acc.subject_artifact_repo;
        UPDATE artifact_accessory SET subject_artifact_id=art.id where id = acc.id;
    END LOOP;
END $$;
  • Step 3: Confirm in the database that subject_artifact_id is no longer 0; the signatures should now show on the UI properly
registry=# select artifact_id, subject_artifact_id, digest, subject_artifact_digest from artifact_accessory;
 artifact_id | subject_artifact_id |                                 digest                                  |                         subject_artifact_digest
-------------+---------------------+-------------------------------------------------------------------------+-------------------------------------------------------------------------
          51 |                  53 | sha256:797c18b8b1cc884c958a66083db7a61e76d879a2f13e1b2c325ef789c2966490 | sha256:af6e931e64717b56b1dbedfabe1a55b7847565535db1de6be106de9396777209
          52 |                  54 | sha256:8de19060454181e8357d3810a30cd091e7c5b7c01873606847719a0da2288da9 | sha256:21e877678da29c1ad4411143639e0d3bcfe5dfd243b97ed5e1a9c31bae6aa11d
(2 rows)

Notation Signature Accessory

Here are some tips for your awareness: as of Harbor v2.11.0 and Notation v1.2.0, both fully support distribution-spec v1.1.

  • Harbor recommends explicitly setting the flag --force-referrers-tag=false while signing, since Harbor supports the distribution-spec referrers API. This avoids generating an unnecessary signature index in Harbor and hence gives a smoother experience for signature signing/verification, image copy, and replication.
$ notation-v1.2 --force-referrers-tag=false sign xx.xx.xx.xxx/library/hello-world:latest
  • If you choose to use the notation v1.2 default behavior or forget to disable force-referrers-tag, be aware that this pushes an extra signature index while signing. Mishandling that index, such as deleting it on its own, or copying/replicating it without the associated images, can cause trouble for subsequent signing operations.
~$ notation-v1.2 sign xx.xx.xx.xxx/library/hello-world:latest