
Integration & Usage Guides

How to Transfer Data to Stook with Cyberduck

Upload and migrate files to Medianova Stook Object Storage using Cyberduck.

Use Cyberduck to connect to Medianova Stook Object Storage and transfer files from your computer or migrate data from AWS S3.

Cyberduck is available for macOS and Windows and provides a simple drag-and-drop interface.


Prerequisites

  • Cyberduck installed on your computer

  • Medianova Stook credentials:

    • Server: xxxxx.mncdn.com

    • Access Key

    • Secret Key

  • (Optional) AWS S3 credentials for migration

⚠️ Credentials are private. Do not share them.


Data Transfer from Your Computer to Stook

  1. Open Cyberduck and click Open Connection.

     Caption: Cyberduck main screen

  2. Click More Options and select Amazon S3 (Deprecated path style requests).

     Caption: Selecting Amazon S3 (Deprecated path style requests) option

  3. Enter your Server, Access Key, and Secret Key, then click Connect.

     Caption: Entering Stook connection credentials in Cyberduck

  4. From the menu bar, select Action → New Folder to create a folder.

     Caption: Creating a new folder in Stook via Cyberduck

  5. Drag and drop files from your computer into this folder.

     Caption: Uploading files to Stook using drag-and-drop

⚠️ Uploaded files are not directly accessible via HTTP/HTTPS. To publish them, create a CDN zone linked to your storage.


Data Transfer from AWS S3 to Stook

  1. Connect to Stook using the steps above.

  2. Open a new Cyberduck connection for AWS S3.

  3. Enter your AWS credentials and connect.

     Caption: Connecting Cyberduck to AWS S3

  4. Open AWS S3 in one pane and Stook in another, then drag and drop files between them.

     Caption: Migrating files between AWS S3 and Stook with drag-and-drop

⚠️ For large-scale migrations, contact Medianova Support.


Troubleshooting / FAQ

  • Cannot connect: Verify credentials and server endpoint.

  • Slow transfer: Check internet connection; for bulk migration, request support.

  • Files not accessible: A CDN zone must be created to serve files.


References

  • Cyberduck Official Website

  • Support: [email protected] | +90 212 275 54 56

How to Migrate Data to Stook Using Rclone

Step-by-step guide to migrate data to Stook using Rclone.

This guide explains how to migrate data from any S3-compatible storage provider to Medianova’s Stook Object Storage using Rclone. You will learn how to configure Rclone, copy data, monitor the transfer, and verify the integrity of migrated files.

Prerequisites

  • Rclone installed (download from https://rclone.org/downloads/)

  • Access credentials (Access Key, Secret Key) for both the source and Stook buckets

  • Endpoint URL of your source and target storage


Step-by-Step Instructions

    Step 1: Configure Rclone

    1. Open your terminal or command prompt.

    2. Run the following command:

       rclone config

    3. Follow the interactive wizard:

      • Choose n for a new remote.

      • Enter a name for your remote (e.g., s3-source).

      • Select your S3-compatible provider.

      • Enter configuration details (endpoint URL, access key, secret key).

    4. Repeat the process to configure your Stook target bucket (e.g., stook-target).

      • When asked for S3 type, choose Ceph Object Storage.


    Step 2: Copy Data

    To copy all data from the source bucket to the target bucket:

       rclone copy s3-source:source-bucket-name stook-target:target-bucket-name

    To copy specific files or directories:

       rclone copy s3-source:source-bucket-name/path/to/source-data \
         stook-target:target-bucket-name/path/to/target-location


    Step 3: Monitor Progress

    Add the --progress flag to monitor transfer status in real time:

       rclone copy --progress s3-source:source-bucket-name stook-target:target-bucket-name


    Step 4: Verify Data

    After the transfer, verify integrity using the rclone check command:

       rclone check s3-source:source-bucket-name stook-target:target-bucket-name

    This compares file sizes and hashes (MD5 or SHA1) and reports any mismatches.


    Troubleshooting / FAQ

    • Problem: Files missing in target bucket.

      • Solution: Re-run the rclone copy command with --progress.

    • Problem: Authentication error.

      • Solution: Verify your access key, secret key, and endpoint URL.

    • Problem: Slow transfer speed.

      • Solution: Use the --transfers=N option to increase the number of parallel transfers; see the example after this list.
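    A minimal example (the value 16 is illustrative; tune it to your bandwidth and any rate limits on the source provider):

       rclone copy --progress --transfers=16 s3-source:source-bucket-name stook-target:target-bucket-name  # 16 parallel transfers (illustrative)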


    References

    • Rclone Downloads: https://rclone.org/downloads/

    How to Use Pre-Signed URL in PHP with Stook

    Generate and use pre-signed URLs in PHP for accessing objects in Medianova Stook Object Storage.

    You can generate pre-signed URLs with the AWS SDK for PHP to securely access or share objects stored in Medianova Stook Object Storage. Pre-signed URLs allow time-limited access without exposing your credentials.


    Prerequisites

    • PHP installed on your system

    • AWS SDK for PHP included in your project

    • Medianova Stook credentials (Access Key, Secret Key, Endpoint)


    Example: Generate a Pre-Signed URL

    <?php
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;
    use Aws\S3\Exception\S3Exception;

    $bucket = "bucket";
    $key = "mykey";

    // Create an S3Client pointed at the Stook endpoint
    $s3 = new Aws\S3\S3Client([
        'endpoint' => 'https://customername.mncdn.com',
        'profile' => 'medianova',
        'version' => 'latest',
        'region' => 'us-east-1',
        'use_path_style_endpoint' => true
    ]);

    // Describe the operation to pre-sign
    $cmd = $s3->getCommand('GetObject', [
        'Bucket' => $bucket,
        'Key'    => $key,
    ]);

    // Pre-sign the request; the URL stays valid for 20 minutes
    $request = $s3->createPresignedRequest($cmd, '+20 minutes');

    $presignedUrl = (string) $request->getUri();
    echo $presignedUrl;


    How It Works

    1. Define your bucket and object key.

    2. Create an S3Client with your Stook endpoint and credentials.

    3. Use the getCommand method to specify the operation (e.g., GetObject).

    4. Generate a pre-signed URL with createPresignedRequest.

    5. Share or use the generated URL to access the object for a limited time, as illustrated below.
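    For illustration, a client can then fetch the object with the generated URL alone, without credentials. The query string below shows the general SigV4 shape; the actual values are filled in by the SDK:

       curl "https://customername.mncdn.com/bucket/mykey?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Date=...&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=..."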


    Troubleshooting / FAQ

    • Expired URL: Check the validity time (+20 minutes, +1 hour, etc.); see the snippet after this list.

    • Access denied: Verify bucket policy and Stook credentials.

    • Invalid endpoint: Ensure your Stook endpoint matches your account.
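    If URLs expire too quickly, pass a longer validity to createPresignedRequest (an illustrative variant of the example above; the method accepts relative time strings):

       $request = $s3->createPresignedRequest($cmd, '+1 hour'); // illustrative: one-hour validity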


    References

    • AWS SDK for PHP Documentation

    How to use the AWS SDK for PHP with Stook?

    Integrate Medianova Stook Object Storage into your PHP applications using the AWS SDK.

    You can connect PHP applications to Medianova Stook Object Storage using the AWS SDK for PHP. This allows you to create buckets, upload files, and retrieve objects from Stook directly in your PHP projects.


    Prerequisites

    • PHP installed on your system


  • Medianova Stook credentials:

    • Access Key

    • Secret Key

    • Endpoint


Installation

    Follow the AWS SDK for PHP installation guide (see References).


    Example Code

    Create a PHP file and configure your Stook connection:

    Caption: Example PHP code for connecting to Stook, creating a bucket, uploading and retrieving objects

    <?php
    require 'aws/aws-autoloader.php';

    $s3 = new Aws\S3\S3Client([
        'version' => 'latest',
        'region'  => 'us-east-1',
        'endpoint' => 'Enter server info',
        'use_path_style_endpoint' => true,
        'credentials' => [
            'key'    => 'Enter Access Key',
            'secret' => 'Enter Secret Key',
        ],
        'http'    => [
            'verify' => false // disables TLS certificate verification; avoid in production
        ]
    ]);

    // Create the bucket (fails if the name is already taken)
    try {
        $s3->createBucket(array('Bucket' => 'testbucket'));
    }
    catch (Exception $e){
        echo "There is a bucket with that name!<br>";
    }

    // Send a PutObject request
    $insert = $s3->putObject([
        'Bucket' => 'testbucket',
        'Key'    => 'testkey',
        'Body'   => 'Hello world'
    ]);

    // Download the contents of the object and save a local copy
    $retrieve = $s3->getObject([
        'Bucket' => 'testbucket',
        'Key'    => 'testkey',
        'SaveAs' => 'testkey_local'
    ]);

    // Print the object body
    echo $retrieve['Body'];
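    To confirm the upload, the bucket contents can be listed with the same client (a minimal sketch; listObjectsV2 is part of the AWS SDK for PHP):

       // List keys in the bucket (illustrative check)
       $objects = $s3->listObjectsV2(['Bucket' => 'testbucket']);
       foreach ($objects['Contents'] as $object) {
           echo $object['Key'] . "\n";
       }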


    Troubleshooting / FAQ

    • Error: “There is a bucket with that name!”

      • Solution: Use a unique bucket name.

    • Error: Connection issues

      • Solution: Ensure endpoint URL is correct and matches Stook.

    • Error: Permission denied

      • Solution: Validate Access Key and Secret Key.


    References

    • AWS SDK for PHP Installation Guide

    • Medianova Stook Storage User Guide for AWS CLI

How do I use Stook with the AWS Java SDK?

    Integrate Medianova Stook Object Storage into your Java applications using the AWS SDK for Java.

    You can connect Java applications to Medianova Stook Object Storage using the AWS SDK for Java. This allows you to create buckets, upload files, and manage objects directly from your Java code.


    Prerequisites

    • JDK installed on your system

  • Medianova Stook credentials:

    • Access Key

    • Secret Key

    • Endpoint


Installation

    Download the AWS SDK for Java from the official site (see References).


    Example: Create a Bucket and Upload File

    Caption: Example Java code for connecting to Stook, creating a bucket, and uploading a file

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.client.builder.AwsClientBuilder;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.CannedAccessControlList;
    import com.amazonaws.services.s3.model.ObjectMetadata;
    import com.amazonaws.services.s3.model.PutObjectRequest;
    import com.amazonaws.services.s3.model.CreateBucketRequest;
    import com.amazonaws.services.s3.model.GetBucketLocationRequest;

    public class s3Example {

       private static final String SERVICE_ENDPOINT = "https://******.mncdn.com";
       private static final String REGION = "us-east-1";
       private static final String ACCESS_KEY = "******";
       private static final String SECRET_KEY = "******";

       private static final String BUCKET_NAME = "bucket";

       // Build an S3 client against the Stook endpoint with path-style access
       private static final AmazonS3 AMAZON_S3_CLIENT =
           AmazonS3ClientBuilder.standard()
           .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(SERVICE_ENDPOINT, REGION))
           .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY)))
           .withPathStyleAccessEnabled(true)
           .build();

       // Upload a byte array as a publicly readable object and return its URL
       public static String uploadFile(byte[] data) throws IOException {
           try (InputStream inputStream = new ByteArrayInputStream(data)) {
               String filename = "test.txt";

               ObjectMetadata metadata = new ObjectMetadata();
               metadata.setContentLength(data.length);

               PutObjectRequest putObjectRequest = new PutObjectRequest(BUCKET_NAME, filename, inputStream, metadata)
                       .withCannedAcl(CannedAccessControlList.PublicRead);

               AMAZON_S3_CLIENT.putObject(putObjectRequest);

               return AMAZON_S3_CLIENT.getUrl(BUCKET_NAME, filename).toString();
           }
       }

       public static void main(String[] args) {
           // Create the bucket only if it does not already exist
           if (!AMAZON_S3_CLIENT.doesBucketExistV2(BUCKET_NAME)) {
               AMAZON_S3_CLIENT.createBucket(new CreateBucketRequest(BUCKET_NAME));

               String bucketLocation = AMAZON_S3_CLIENT.getBucketLocation(new GetBucketLocationRequest(BUCKET_NAME));
               System.out.println("Bucket location: " + bucketLocation);
           }

           try {
               uploadFile("Hello World".getBytes());
           } catch (IOException e) {
               e.printStackTrace();
           }
           System.out.println("Hello, World");
       }
    }
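    Downloading the object back follows the same pattern (a minimal sketch reusing AMAZON_S3_CLIENT and BUCKET_NAME from the example above; IOUtils is the SDK's bundled helper):

       import com.amazonaws.services.s3.model.S3Object;
       import com.amazonaws.util.IOUtils;

       // Fetch an object and return its contents as a string (illustrative)
       public static String downloadFile(String key) throws IOException {
           try (S3Object object = AMAZON_S3_CLIENT.getObject(BUCKET_NAME, key)) {
               return IOUtils.toString(object.getObjectContent());
           }
       }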


    Troubleshooting / FAQ

    • Bucket already exists error: Use a unique bucket name.

    • Connection error: Verify endpoint URL and credentials.

    • Access denied: Check Access Key and Secret Key.


    References

    • AWS SDK for Java Official Documentation


    How to Use AWS CLI with Stook

    Transfer and manage data in Medianova Stook Object Storage using the AWS CLI.

    The AWS CLI is a command-line tool that can be used to transfer data to Medianova Stook Object Storage, which is fully compatible with S3 storage services. You can install AWS CLI on Windows, macOS, and Linux to configure Stook, create buckets, and manage objects.


    Prerequisites

    • AWS CLI installed

    • Medianova Stook credentials:

      • Server

      • Access Key

      • Secret Key

    • Python (for macOS/Linux installations with pip)

    ⚠️ Your account information is private. Do not share it.


    Installation

    Windows

    1. Download the AWS CLI installer from the AWS CLI official website (see References).

    2. Run the setup file (choose 32-bit or 64-bit version).

    3. Accept the license agreement, click Next, then Install.

    4. Finish the installation with the Finish button.

    Caption: Windows installation wizard for AWS CLI

    macOS / Linux

    Install AWS CLI with pip:

       pip install awscli


    Configuration

    Run the following command to configure credentials, pressing Enter to leave the region name and output format empty:

       aws configure

       AWS Access Key ID [None]: xxxxxxxxxxxxxxxxxx
       AWS Secret Access Key [None]: yyyyyyyyyyyyyyyyyyy
       Default region name [None]: ENTER
       Default output format [None]: ENTER

    Example credentials:

    • Server: xxxxx.mncdn.com

    • Access Key: xxxxxxxxxxxxxxxxxxxx

    • Secret Key: yyyyyyyyyyyyyyyyyyyy


    Commands

    List Buckets

       aws --endpoint-url https://xxxxx.mncdn.com s3 ls

    List Contents in a Bucket

       aws --endpoint-url https://xxxxx.mncdn.com s3 ls s3://testbucket1

    Create a Bucket

       aws --endpoint-url https://xxxxx.mncdn.com s3 mb s3://bucket2

    Add Objects to a Bucket

       aws --endpoint-url https://xxxxx.mncdn.com s3 cp hello_world.txt s3://mybucket

    Delete Objects

       aws --endpoint-url https://xxxxx.mncdn.com s3 rm s3://mybucket/hello_world.txt

    Remove Buckets

       aws --endpoint-url https://xxxxx.mncdn.com s3 rb s3://mybucket
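    To copy a whole local directory in one step, the sync subcommand can also be used (a minimal sketch; the local path and bucket name are illustrative):

       aws --endpoint-url https://xxxxx.mncdn.com s3 sync ./local-folder s3://mybucket  # uploads new/changed files only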


    Troubleshooting / FAQ

    • Command not found: Ensure AWS CLI is installed and added to PATH.

    • Authentication error: Verify Access Key, Secret Key, and endpoint URL.

    • Permission denied: Confirm you are using correct credentials.


    References

    • AWS CLI Official Website

    How to use AWS SDK for JavaScript with Stook?

    Integrate Medianova Stook Object Storage into your JavaScript applications using the AWS SDK.

    You can use the AWS SDK for JavaScript to connect your applications to Medianova Stook Object Storage. With the SDK, you can set credentials, create buckets, and upload files directly to Stook.


    Prerequisites

    • Node.js installed on your system

    • AWS SDK for JavaScript installed

    • Medianova Stook credentials:

      • Access Key

      • Secret Key

      • Endpoint


    Installation

    Install the AWS SDK for JavaScript:

       npm install aws-sdk


    Example: Create Bucket and Upload File

    sample.js:

       var AWS = require('aws-sdk');

       // Stook requires path-style addressing
       var config = { s3ForcePathStyle: true };
       var credentials = new AWS.SharedIniFileCredentials({profile: 'medianova'});
       AWS.config.credentials = credentials;
       AWS.config.update(config);

       var ep = new AWS.Endpoint('*******.mncdn.com');
       var s3 = new AWS.S3({endpoint: ep});

       var bucketName = 'medianovatest';
       var keyName = 'hello_world.txt';

       // Create the bucket, then upload a small text object into it
       s3.createBucket({Bucket: bucketName}, function() {
           var params = {Bucket: bucketName, Key: keyName, Body: 'Hello World!'};
           s3.putObject(params, function(err, data) {
               if (err) console.log(err);
               else console.log("Successfully uploaded data to " + bucketName + "/" + keyName);
           });
       });
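    To verify the client is wired up correctly, existing buckets can be listed with the same s3 object (a minimal sketch):

       // Print the name of every bucket visible to these credentials (illustrative check)
       s3.listBuckets(function(err, data) {
           if (err) console.log(err);
           else data.Buckets.forEach(function(b) { console.log(b.Name); });
       });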


    Config.json Example

    You can configure credentials in config.json or under ~/.aws/credentials.

    config.json (load it in code with AWS.config.loadFromPath('./config.json')):

       {
         "accessKeyId": "your_medianova_access_key",
         "secretAccessKey": "your_medianova_secret_key",
         "region": "us-east-1"
       }


    Running the Script

    If credentials are saved under ~/.aws/credentials with profile medianova:

       AWS_PROFILE=medianova node sample.js

    Expected output:

       Successfully uploaded data to medianovatest/hello_world.txt


    Troubleshooting / FAQ

    • Error: Credentials not found

      • Solution: Ensure config.json or ~/.aws/credentials contains valid keys.

    • Error: Bucket already exists

      • Solution: Use a unique bucket name.

    • Error: Connection issues

      • Solution: Verify endpoint URL and network connectivity.


    References

    • AWS SDK for JavaScript Documentation

    How to use the AWS SDK for Laravel with Stook?

    Integrate Medianova Stook Object Storage into your Laravel application using the AWS SDK.

    You can connect your Laravel application to Medianova Stook Object Storage by using the AWS SDK. This allows you to store and manage files directly in Stook through Laravel.


    Prerequisites

    • Laravel project installed

    • Composer available on your system

    • Medianova Stook credentials:

      • Access Key

      • Secret Key

      • Endpoint


    Installation

    Install the AWS SDK for Laravel with Composer:

       composer require aws/aws-sdk-php-laravel


    Package Registration

    Open config/app.php and register the package:

    Caption: Registering AWS Service Provider and AWS alias in Laravel

       'providers' => [
           Aws\Laravel\AwsServiceProvider::class,
       ],

       'aliases' => [
           'AWS' => Aws\Laravel\AwsFacade::class,
       ],


    Publish Configuration

    Run the following command to publish the AWS config file:

       php artisan vendor:publish --provider="Aws\Laravel\AwsServiceProvider"

    This creates config/aws.php.


    Configure Stook in Laravel

    Update config/aws.php with your Stook credentials:

       use Aws\Laravel\AwsServiceProvider;

       return [
           'credentials' => [
               'key'    => env('YOUR_STOOK_ACCESS_KEY_ID'),
               'secret' => env('YOUR_STOOK_SECRET_ACCESS_KEY'),
           ],
           'region' => env('YOUR_STOOK_REGION', 'us-east-1'),
           'version' => 'latest',
           'ua_append' => [
               'L5MOD/' . AwsServiceProvider::VERSION,
           ],
           'endpoint' => env('YOUR_STOOK_ENDPOINT'),
           'use_path_style_endpoint' => true,
           'http' => [
               'verify' => false // disables TLS certificate verification; avoid in production
           ]
       ];


    Define ENV Variables

    Set your Stook credentials in .env:

       YOUR_STOOK_ACCESS_KEY_ID=xxxxxxxxxxxxxxxx
       YOUR_STOOK_SECRET_ACCESS_KEY=yyyyyyyyyyyyyyyyyyyy
       YOUR_STOOK_ENDPOINT=https://xxxxx.mncdn.com


    Example: Upload File to Stook

    Example controller method to upload files:

       public function uploadStookFile(Request $request)
       {
           // Build the target file name from the uploaded file's extension
           $fileExtension = $request->file('image')->getClientOriginalExtension();
           $fileFullName = "testFile" . '.' . $fileExtension;

           try {
               // Resolve the S3 client from the AWS service provider
               $s3 = App::make('aws')->createClient('s3');
               $s3->putObject([
                   'Bucket'     => "test-bucket",
                   'Key'        => $fileFullName,
                   'SourceFile' => $request->file('image')->getRealPath(),
               ]);
           } catch (\Exception $exception) {
               throw new \Exception('File could not upload to Stook account.');
           }
       }
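    Retrieving a file back from Stook follows the same pattern (a minimal sketch; the bucket, key, and target path are illustrative):

       $s3 = App::make('aws')->createClient('s3');
       $s3->getObject([
           'Bucket' => 'test-bucket',
           'Key'    => 'testFile.jpg',
           'SaveAs' => storage_path('app/testFile.jpg'), // write the object to local storage
       ]);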


    Troubleshooting / FAQ

    • File not uploading: Check bucket name and credentials.

    • Connection error: Ensure the Stook endpoint is correct.

    • Permission denied: Verify Access Key and Secret Key.


    References

    • AWS SDK for Laravel GitHub

    How to Use Pre-Signed URL in Node.js with Stook

    Generate and use pre-signed URLs in Node.js for accessing objects in Medianova Stook Object Storage.

    You can generate pre-signed URLs with the AWS SDK for Node.js to securely access or share objects stored in Medianova Stook Object Storage. Pre-signed URLs allow temporary access to your data without exposing credentials.


    Prerequisites

    • Node.js installed on your system

    • AWS SDK for Node.js installed (npm install aws-sdk)

    • Medianova Stook credentials:

      • Access Key

      • Secret Key

      • Endpoint


    Example: Generate a Pre-Signed URL

       var AWS = require('aws-sdk');

       // Stook requires path-style addressing
       var config = {
           s3ForcePathStyle: true,
       };

       // Read credentials from the 'medianova' profile in ~/.aws/credentials
       var credentials = new AWS.SharedIniFileCredentials({
           profile: 'medianova'
       });

       AWS.config.credentials = credentials;
       AWS.config.update(config);

       var ep = new AWS.Endpoint('customername.mncdn.com');
       var s3 = new AWS.S3({endpoint: ep});

       var myBucket = 'bucket';
       var myKey = 'mykey';
       var signedUrlExpireSeconds = 60 * 5; // URL stays valid for 5 minutes

       var url = s3.getSignedUrl('getObject', {
           Bucket: myBucket,
           Key: myKey,
           Expires: signedUrlExpireSeconds
       });

       console.log(url);


    How It Works

    1. Configure the AWS SDK with s3ForcePathStyle.

    2. Use your Medianova credentials (via profile or direct configuration).

    3. Define bucket, object key, and expiration time.

    4. Call getSignedUrl('getObject', {...}) to generate a URL.

    5. Use or share the generated URL within its expiration time.
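    The same mechanism also works for uploads (a minimal sketch reusing the s3 client from the example above; the key name is illustrative):

       var uploadUrl = s3.getSignedUrl('putObject', {
           Bucket: myBucket,
           Key: 'upload.txt',
           Expires: signedUrlExpireSeconds
       });
       // An HTTP PUT to uploadUrl within the expiration window writes the object
       console.log(uploadUrl);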


    Troubleshooting / FAQ

    • Expired URL: Adjust Expires value (e.g., 60*10 for 10 minutes).

    • Access denied: Verify bucket policy and Stook credentials.

    • Invalid endpoint: Ensure endpoint matches your Stook account.


    References

    • AWS SDK for Node.js Documentation

    How do I use the AWS SDK for .NET with Stook?

    Integrate Medianova Stook Object Storage into your .NET applications using the AWS SDK for .NET.

    You can connect your .NET applications to Medianova Stook Object Storage using the AWS SDK for .NET. This allows you to create buckets, upload objects, and manage your Stook storage directly from Visual Studio.


    Prerequisites

    • Visual Studio installed

  • AWS Toolkit for Visual Studio installed

  • Medianova Stook credentials:

    • Access Key

    • Secret Key

    • Endpoint

  ⚠️ The AWS Toolkit includes sample code examples to help you get started quickly.


    Installation

    1. Download and install Visual Studio.

    2. Download and install the AWS Toolkit for Visual Studio.


    Example: Create a Bucket in Stook

    Caption: Example .NET code for connecting to Stook and creating a bucket

       using System;
       using System.Threading.Tasks;
       using Amazon.Runtime;
       using Amazon.S3;
       using Amazon.S3.Model;
       using Amazon.S3.Util;

       namespace s3Example
       {
           internal class Program
           {
               private static string bucketName = "medianovatest";
               private static string accessKey = "Enter Access Key";
               private static string secretKey = "Enter Secret Key";

               static async Task Main(string[] args)
               {
                   var s3Config = new AmazonS3Config
                   {
                       RegionEndpoint = Amazon.RegionEndpoint.EUWest1,
                       ServiceURL = "https://xxxxxxxxx.mncdn.com", // the Stook endpoint takes precedence over the region endpoint
                       ForcePathStyle = true
                   };

                   AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);

                   using (var s3Client = new AmazonS3Client(awsCredentials, s3Config))
                   {
                       await CreateBucket(s3Client);
                   }

                   Console.WriteLine("Operation completed.");
               }

               private static async Task CreateBucket(AmazonS3Client s3Client)
               {
                   var createBucketRequest = new PutBucketRequest
                   {
                       BucketName = bucketName,
                       UseClientRegion = true
                   };
                   try
                   {
                       var response = await s3Client.PutBucketAsync(createBucketRequest);
                       Console.WriteLine($"Bucket creation status: {response.HttpStatusCode}");
                   }
                   catch (AmazonS3Exception ex)
                   {
                       Console.WriteLine($"{ex.Message}");
                   }
               }
           }
       }
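    Uploading an object works the same way (a minimal sketch reusing s3Client and bucketName from the example above; the key and content are illustrative):

       private static async Task UploadObject(AmazonS3Client s3Client)
       {
           var putRequest = new PutObjectRequest
           {
               BucketName = bucketName,
               Key = "hello_world.txt",
               ContentBody = "Hello World" // inline content; use FilePath for files on disk
           };

           var response = await s3Client.PutObjectAsync(putRequest);
           Console.WriteLine($"Upload status: {response.HttpStatusCode}");
       }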


    Troubleshooting / FAQ

    • Bucket already exists error: Use a unique bucket name.

    • Connection error: Check Stook endpoint and credentials.

    • Access denied: Verify Access Key and Secret Key.


    References

    • AWS Toolkit for Visual Studio


    Verify Object Integrity Using Checksums in Stook

    Learn how to validate object integrity in Stook using checksums exposed through ETag.

    Checksum validation helps you ensure that data is stored and transferred without corruption. When you upload an object, Stook calculates a checksum and stores it as the object’s ETag. You can:

    • Retrieve the ETag from Stook, and

    • Optionally compute your own checksum locally (for example, using Python) and compare the values for single-part uploads.

    ETag is meaningful for comparing with a local MD5 only for single-part uploads. For multipart uploads, ETag will differ from the MD5 of the full file.

    Requirements

    Make sure you have:

    • A Stook bucket containing the object you want to verify

    • Access Key and Secret Key

    • One or more of the following tools installed:

      • AWS CLI

      • MinIO Client (mc)

      • Python with boto3 (optional, for local checksum calculation)

    Retrieve the checksum (ETag) from Stook

    You can retrieve the ETag value using AWS CLI, MinIO Client, or curl.

    Use AWS CLI

       aws s3api --endpoint-url https://<ENDPOINT_URL> \
         head-object \
         --bucket <BUCKET_NAME> \
         --key <OBJECT_KEY> \
         --profile <AWS_PROFILE>

    The output includes the ETag:

       {
         "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
         ...
       }

    Use MinIO Client (mc)

       mc stat <PROFILE_NAME>/<BUCKET_NAME>/<FILE_PATH> --json

    Example output:

       {
         "etag": "d41d8cd98f00b204e9800998ecf8427e",
         ...
       }

    Use curl over the Stook endpoint

    If you access the object directly via the Stook endpoint, you can inspect the ETag header:

       curl -I https://<ENDPOINT_URL>/<BUCKET_NAME>/<FILE_PATH>

    If the Stook bucket is connected to a CDN Resource, you can also send the request through the CDN URL. However, note that the ETag header may change when served via CDN, depending on caching and response headers.

    Understand ETag and multipart behavior

    ETag behaves differently for single-part and multipart uploads:

    • Single-part uploads

      • ETag typically represents the MD5 checksum of the entire object.

      • In this case, a locally computed MD5 of the file can match the ETag.

    • Multipart uploads

      • ETag has the format:

           <md5-of-each-part-concatenated>-<number_of_parts>

      • This value does not match the MD5 checksum of the entire file.

      • If you see a mismatch between a local MD5 and the ETag, the object may have been uploaded as multipart.

    These checksum comparisons (local MD5 vs ETag) are valid only for single-part uploads. For multipart uploads, a mismatch between ETag and local MD5 is expected.

    Compute a local checksum in Python (optional)

    Python examples are provided only to calculate local checksums on the client side. They do not calculate or change the checksum stored by Stook; they simply:

    1. Download the object from Stook, and

    2. Compute MD5 or SHA-256 locally so you can compare it with the ETag (for single-part uploads) or use it in your own integrity checks.

    Example — Compute an MD5 checksum in Python

       import boto3
       import hashlib

       s3 = boto3.client(
           "s3",
           endpoint_url="https://endpoint_url",
           aws_access_key_id="access_key",
           aws_secret_access_key="secret_key",
       )

       response = s3.get_object(
           Bucket="bucket_name",
           Key="file_path",
       )

       md5 = hashlib.md5()

       # Streaming read → do not load entire file into RAM
       for chunk in response["Body"].iter_chunks(chunk_size=8192):
           if chunk:
               md5.update(chunk)

       print("MD5:", md5.hexdigest())

    Example — Compute a SHA-256 checksum in Python

       import boto3
       import hashlib

       s3 = boto3.client(
           "s3",
           endpoint_url="https://endpoint_url",
           aws_access_key_id="access_key",
           aws_secret_access_key="secret_key",
       )

       response = s3.get_object(
           Bucket="bucket_name",
           Key="file_path",
       )

       sha256 = hashlib.sha256()

       # Streaming read → do not load entire file into RAM
       for chunk in response["Body"].iter_chunks(chunk_size=8192):
           if chunk:
               sha256.update(chunk)

       print("SHA-256:", sha256.hexdigest())

    These Python scripts compute only local checksums. They do not compute or retrieve any internal checksum other than what Stook already exposes via the ETag header.
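    For single-part uploads, the comparison against the ETag can be automated (a sketch continuing the MD5 example above; note that head_object returns the ETag wrapped in quotes, and all names are placeholders):

       # Compare the object's ETag with the streamed MD5 computed above
       etag = s3.head_object(Bucket="bucket_name", Key="file_path")["ETag"].strip('"')
       if md5.hexdigest() == etag:
           print("Checksums match (single-part upload)")
       else:
           print("Mismatch (possibly a multipart upload)")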

    Troubleshoot checksum mismatches

    If you notice a mismatch between a local checksum and the ETag from Stook:

    • The object may have been uploaded as multipart, so the ETag will not match the MD5 of the full file.

    • You may be using a different algorithm (for example, SHA-256 locally vs MD5-based ETag).

    • You may be comparing against an outdated or changed local file.

    Always verify:

    • Upload method (single-part vs multipart)

    • The algorithm used

    • That you are comparing the correct object version

    Summary

    • Stook calculates a checksum when an object is uploaded and exposes it via the ETag header.

    • You can read ETag using AWS CLI, MinIO Client, or curl.

    • ETag corresponds to an MD5-like checksum only for single-part uploads.

    • For multipart uploads, ETag follows a special format and does not match the MD5 of the full file.

    • Python examples compute local MD5 or SHA-256 and are used only for client-side integrity validation, not for calculating Stook's internal checksum.
