Commit
provide env variable for configuring custom s3 hosts
schaschjan authored and ifox committed May 26, 2020
1 parent 267eec0 commit 5894ccc
Showing 3 changed files with 17 additions and 0 deletions.
1 change: 1 addition & 0 deletions config/disks.php
```diff
@@ -22,6 +22,7 @@
     'bucket' => env('S3_BUCKET', env('AWS_BUCKET')),
     'root' => env('S3_ROOT', env('AWS_ROOT', '')),
     'use_https' => env('S3_UPLOADER_USE_HTTPS', env('S3_USE_HTTPS', env('AWS_USE_HTTPS', true))),
+    'endpoint' => env("S3_ENDPOINT")
 ];

 $azureConfig = [
```
9 changes: 9 additions & 0 deletions docs/.sections/getting-started/configuration.md
````diff
@@ -244,6 +244,15 @@ S3_BUCKET=bucket-name

 Optionally, you can use the `S3_REGION` variable to specify a region other than S3's default region (`us-east-1`).

+If you prefer to use another S3 Compliant Storage such as Minio, provide your application with the following environment variables:
+
+```bash
+S3_KEY=S3_KEY
+S3_SECRET=S3_SECRET
+S3_BUCKET=bucket-name
+S3_ENDPOINT=https://YOUR_S3_DOMAIN
+```
+
 When uploading images to S3, Twill sets the `acl` parameter to `private`. This is because images in your bucket should not be publicly accessible when using a service like [Imgix](https://imgix.com) on top of it. Only Imgix should have read-only access to your bucket, while your application obviously needs to have write access. If you intend to access images uploaded to S3 directly, set the `MEDIA_LIBRARY_ACL` variable or `acl` configuration option to `public-read`.

 **Azure endpoint**
````
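For quick reference, the `MEDIA_LIBRARY_ACL` override mentioned in the documentation change above is a plain environment setting; a minimal `.env` fragment (the variable name and value come from the docs, the placement is illustrative):

```shell
# Serve uploaded media directly from S3 rather than through a proxy like Imgix
MEDIA_LIBRARY_ACL=public-read
```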
7 changes: 7 additions & 0 deletions src/Helpers/media_library_helpers.php
```diff
@@ -9,6 +9,13 @@
  */
 function s3Endpoint($disk = 'libraries')
 {
+    // if a custom s3 endpoint is configured explicitly, return it
+    $customEndpoint = config("filesystems.disks.{$disk}.endpoint");
+
+    if ($customEndpoint) {
+        return $customEndpoint;
+    }
+
     $scheme = config("filesystems.disks.{$disk}.use_https") ? 'https://' : '';
     return $scheme . config("filesystems.disks.{$disk}.bucket") . '.' . Storage::disk($disk)->getAdapter()->getClient()->getEndpoint()->getHost();
 }
```

**@talvbansal** (Contributor) commented on Jul 21, 2020, on the early-return block:

> This change breaks compatibility with other S3-compliant storage: DigitalOcean Spaces, for example, expects the endpoint to be defined as `https://{region}.digitaloceanspaces.com`, without the bucket name in the endpoint. When determining the asset path, the bucket name should be interpolated into the endpoint URL, as it is below. See PR #703.

**@kirkbushell** commented on Apr 29, 2021:

> I think this could be resolved by checking whether the bucket already exists as part of the endpoint. If it does, use the full endpoint; if not, concatenate them together.
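A minimal sketch of kirkbushell's suggestion, under stated assumptions: the function name, its signature, and the prefix-matching rule are all hypothetical, not Twill's actual implementation.

```php
<?php

// Hypothetical helper illustrating the comment above: only interpolate the
// bucket into the endpoint host when the host does not already start with it.
function resolveCustomEndpoint(string $endpoint, string $bucket): string
{
    $host = parse_url($endpoint, PHP_URL_HOST);

    // Malformed or host-less endpoint: fall back to returning it untouched.
    if ($host === null || $host === false) {
        return $endpoint;
    }

    // Endpoint already carries the bucket as a virtual-host prefix
    // (Minio-style, e.g. https://bucket.minio.example.com): use it as-is.
    if (strpos($host, $bucket . '.') === 0) {
        return $endpoint;
    }

    // Otherwise interpolate the bucket into the host (DigitalOcean-style,
    // e.g. https://{region}.digitaloceanspaces.com).
    $scheme = parse_url($endpoint, PHP_URL_SCHEME) ?? 'https';

    return $scheme . '://' . $bucket . '.' . $host;
}
```

The `$bucket . '.'` prefix check narrows false positives (a region name merely containing the bucket string would not match), but a bucket whose name equals a region's first label would still be misclassified, so this remains a sketch rather than a robust fix.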
