Upload and migrate files to Medianova Stook Object Storage using Cyberduck.
Use Cyberduck to connect to Medianova Stook Object Storage and transfer files from your computer or migrate data from AWS S3.
Cyberduck is available for macOS and Windows and provides a simple drag-and-drop interface.
Cyberduck installed on your computer
Medianova Stook credentials:
Server: xxxxx.mncdn.com
Access Key
Secret Key
(Optional) AWS S3 credentials for migration
⚠️ Credentials are private. Do not share them.
Open Cyberduck and click Open Connection.
Click More Options and select Amazon S3 (Deprecated path style requests).
Enter your Stook server, Access Key, and Secret Key, then click Connect.
From the menu bar, select Action → New Folder to create a folder.
Drag and drop files from your computer into this folder.
⚠️ Uploaded files are not directly accessible via HTTP/HTTPS. To publish them, create a CDN zone linked to your storage.
Connect to Stook using the steps above.
Open a new Cyberduck connection for AWS S3.
Enter your AWS credentials and connect.
Open AWS S3 in one browser window and Stook in another, then drag and drop files between the windows.
⚠️ For large-scale migrations, contact Medianova Support.
Cannot connect: Verify credentials and server endpoint.
Slow transfer: Check internet connection; for bulk migration, request support.
Files not accessible: A CDN zone must be created to serve files.
Cyberduck Official Website
Support: [email protected] | +90 212 275 54 56
Step-by-step guide to migrate data to Stook using Rclone.
Rclone installed on your system
Access credentials (Access Key, Secret Key) for both source and Stook buckets
Endpoint URL of your source and target storage
Open your terminal or command prompt.
Run the following command:
Follow the interactive wizard:
Choose n for a new remote.
Enter a name for your remote (e.g., s3-source).
Select your S3-compatible provider.
Enter configuration details (endpoint URL, access key, secret key).
Repeat the process to configure your Stook target bucket (e.g., stook-target).
When asked for S3 type, choose Ceph Object Storage.
To copy all data from the source bucket to the target bucket:
To copy specific files or directories:
Add the --progress flag to monitor transfer status in real time:
After transfer, verify integrity using the rclone check command:
This compares file sizes and hashes (MD5 or SHA1) and reports mismatches.
Problem: Files missing in target bucket.
Solution: Re-run the rclone copy command; it transfers only missing or changed files. Add --progress to watch the transfer.
Problem: Authentication error.
Solution: Verify your access key, secret key, and endpoint URL.
Problem: Slow transfer speed.
Solution: Use the --transfers=N option to increase parallel transfers (for example, --transfers=16).
rclone config

rclone copy s3-source:source-bucket-name stook-target:target-bucket-name

rclone copy s3-source:source-bucket-name/path/to/source-data \
  stook-target:target-bucket-name/path/to/target-location

rclone copy --progress s3-source:source-bucket-name stook-target:target-bucket-name

rclone check s3-source:source-bucket-name stook-target:target-bucket-name







Generate and use pre-signed URLs in PHP for accessing objects in Medianova Stook Object Storage.
You can generate pre-signed URLs with the AWS SDK for PHP to securely access or share objects stored in Medianova Stook Object Storage. Pre-signed URLs allow time-limited access without exposing your credentials.
PHP installed on your system
AWS SDK for PHP included in your project
Medianova Stook credentials (Access Key, Secret Key, Endpoint)
Define your bucket and object key.
Create an S3Client with your Stook endpoint and credentials.
Use the getCommand method to specify the operation (e.g., GetObject).
Create a pre-signed request with createPresignedRequest.
Share or use the generated URL to access the object for a limited time.
Expired URL: Check the validity time (+20 minutes, +1 hour, etc.).
Access denied: Verify bucket policy and Stook credentials.
Invalid endpoint: Ensure your Stook endpoint matches your account.
Integrate Medianova Stook Object Storage into your PHP applications using the AWS SDK.
AWS SDK for PHP
Medianova Stook credentials:
Access Key
Secret Key
Endpoint
Create a PHP file and configure your Stook connection:
Example PHP code for connecting to Stook, creating a bucket, uploading and retrieving objects
Error: “There is a bucket with that name!”
Solution: Use a unique bucket name.
Error: Connection issues
Solution: Ensure endpoint URL is correct and matches Stook.
Error: Permission denied
Solution: Validate Access Key and Secret Key.
Integrate Medianova Stook Object Storage into your Java applications using the AWS SDK for Java.
AWS SDK for Java
Medianova Stook credentials:
Access Key
Secret Key
Endpoint
Example Java code for connecting to Stook, creating a bucket, and uploading a file
Bucket already exists error: Use a unique bucket name.
Connection error: Verify endpoint URL and credentials.
Access denied: Check Access Key and Secret Key.
<?php
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
$bucket = "bucket";
$key = "mykey";
// Create an S3Client configured for the Stook endpoint
$s3 = new Aws\S3\S3Client([
'endpoint' => 'https://customername.mncdn.com',
'profile' => 'medianova',
'version' => 'latest',
'region' => 'us-east-1',
'use_path_style_endpoint' => true
]);
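// Build a GetObject command for the object the pre-signed URL will expose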
$cmd = $s3->getCommand('GetObject', [
'Bucket' => $bucket,
'Key' => $key,
]);
$request = $s3->createPresignedRequest($cmd, '+20 minutes');
$presignedUrl = (string) $request->getUri();
echo $presignedUrl;

<?php
require 'aws/aws-autoloader.php';
$s3 = new Aws\S3\S3Client([
'version' => 'latest',
'region' => 'us-east-1',
'endpoint' => 'Enter server info',
'use_path_style_endpoint' => true,
'credentials' => [
'key' => 'Enter Access Key',
'secret' => 'Enter Secret Key',
],
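// NOTE: 'verify' => false below disables TLS certificate verification; use only for testing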
'http' => [
'verify' => false
]
]);
try {
$s3->createBucket(['Bucket' => 'testbucket']);
}
catch (Exception $e){
echo "There is a bucket with that name!<br>";
}
// Sending a PutObject request and getting the results
$insert = $s3->putObject([
'Bucket' => 'testbucket',
'Key' => 'testkey',
'Body' => 'Hello world'
]);
// Download the contents of the object
$retrieve = $s3->getObject([
'Bucket' => 'testbucket',
'Key' => 'testkey',
'SaveAs' => 'testkey_local'
]);
// Print the contents of the retrieved object
echo $retrieve['Body'];

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.GetBucketLocationRequest;
public class S3Example {
private static final String SERVICE_ENDPOINT = "https://******.mncdn.com";
private static final String REGION = "us-east-1";
private static final String ACCESS_KEY = "******";
private static final String SECRET_KEY = "******";
private static final String BUCKET_NAME = "bucket";
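// Build a single S3 client pointed at the Stook endpoint with path-style access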
private static final AmazonS3 AMAZON_S3_CLIENT =
AmazonS3ClientBuilder.standard()
.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(SERVICE_ENDPOINT, REGION))
.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY)))
.withPathStyleAccessEnabled(true)
.build();
public static String uploadFile(byte[] data) throws IOException {
try (InputStream inputStream = new ByteArrayInputStream(data)) {
String filename = "test.txt";
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(data.length);
PutObjectRequest putObjectRequest = new PutObjectRequest(BUCKET_NAME, filename, inputStream, metadata)
.withCannedAcl(CannedAccessControlList.PublicRead);
AMAZON_S3_CLIENT.putObject(putObjectRequest);
return AMAZON_S3_CLIENT.getUrl(BUCKET_NAME, filename).toString();
}
}
public static void main(String[] args) {
if (!AMAZON_S3_CLIENT.doesBucketExistV2(BUCKET_NAME)) {
AMAZON_S3_CLIENT.createBucket(new CreateBucketRequest(BUCKET_NAME));
String bucketLocation = AMAZON_S3_CLIENT.getBucketLocation(new GetBucketLocationRequest(BUCKET_NAME));
System.out.println("Bucket location: " + bucketLocation);
}
try {
String url = uploadFile("Hello World".getBytes());
System.out.println("Uploaded to: " + url);
} catch (IOException e) {
e.printStackTrace();
}
}
}

Transfer and manage data in Medianova Stook Object Storage using the AWS CLI.
The AWS CLI is a command-line tool for transferring data to Medianova Stook Object Storage, which is fully S3-compatible. You can install the AWS CLI on Windows, macOS, and Linux to configure Stook, create buckets, and manage objects.
AWS CLI installed
Medianova Stook credentials:
Server
Access Key
Secret Key
Python (for macOS/Linux installations with pip)
⚠️ Your account information is private. Do not share it.
Download the AWS CLI installer from the official AWS website.
Run the setup file (choose 32-bit or 64-bit version).
Accept the license agreement, click Next, then Install.
Finish the installation with the Finish button.
Caption: Windows installation wizard for AWS CLI
Install AWS CLI with pip:
Run the following command to configure credentials:
Example credentials:
Server: xxxxx.mncdn.com
Access Key: xxxxxxxxxxxxxxxxxxxx
Secret Key: yyyyyyyyyyyyyyyyyyyy
Command not found: Ensure AWS CLI is installed and added to PATH.
Authentication error: Verify Access Key, Secret Key, and endpoint URL.
Permission denied: Confirm you are using correct credentials.
Integrate Medianova Stook Object Storage into your JavaScript applications using the AWS SDK.
You can use the AWS SDK for JavaScript to connect your applications to Medianova Stook Object Storage. With the SDK, you can set credentials, create buckets, and upload files directly to Stook.
Node.js installed on your system
AWS SDK for JavaScript installed
Medianova Stook credentials:
Access Key
Secret Key
Endpoint
Install the AWS SDK for JavaScript:
Create a sample.js file with the example code shown below.
You can configure credentials in config.json or under ~/.aws/credentials.
config.json:
If credentials are saved under ~/.aws/credentials with profile medianova:
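For reference, a matching profile entry in ~/.aws/credentials would look like this (keys are placeholders):

[medianova]
aws_access_key_id = xxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = yyyyyyyyyyyyyyyyyyyy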
Expected output:
Error: Credentials not found
Solution: Ensure config.json or ~/.aws/credentials contains valid keys.
Error: Bucket already exists
Solution: Use a unique bucket name.
Error: Connection issues
Solution: Verify endpoint URL and network connectivity.
Integrate Medianova Stook Object Storage into your Laravel application using the AWS SDK.
You can connect your Laravel application to Medianova Stook Object Storage by using the AWS SDK. This allows you to store and manage files directly in Stook through Laravel.
Laravel project installed
Composer available on your system
Medianova Stook credentials:
Access Key
Secret Key
Endpoint
Install the AWS SDK for Laravel with Composer:
Open config/app.php and register the package:
Run the following command to publish the AWS config file:
This creates config/aws.php.
Update config/aws.php with your Stook credentials:
Set your Stook credentials in .env:
Example controller method to upload files:
File not uploading: Check bucket name and credentials.
Connection error: Ensure the Stook endpoint is correct.
Permission denied: Verify Access Key and Secret Key.
Generate and use pre-signed URLs in Node.js for accessing objects in Medianova Stook Object Storage.
You can generate pre-signed URLs with the AWS SDK for Node.js to securely access or share objects stored in Medianova Stook Object Storage. Pre-signed URLs allow temporary access to your data without exposing credentials.
Node.js installed on your system
AWS SDK for Node.js installed (npm install aws-sdk)
Medianova Stook credentials:
Access Key
Secret Key
Endpoint
Configure the AWS SDK with s3ForcePathStyle.
Use your Medianova credentials (via profile or direct configuration).
Define bucket, object key, and expiration time.
Call getSignedUrl('getObject', {...}) to generate the URL.
Use or share the generated URL within its expiration time.
Expired URL: Adjust Expires value (e.g., 60*10 for 10 minutes).
Access denied: Verify bucket policy and Stook credentials.
Invalid endpoint: Ensure endpoint matches your Stook account.
Integrate Medianova Stook Object Storage into your .NET applications using the AWS SDK for .NET.
AWS Toolkit for Visual Studio installed
Medianova Stook credentials:
Access Key
Secret Key
Endpoint
⚠️ The AWS Toolkit includes sample code examples to help you get started quickly.
Download and install Visual Studio.
Download and install the AWS Toolkit for Visual Studio.
Caption: Example .NET code for connecting to Stook and creating a bucket
Bucket already exists error: Use a unique bucket name.
Connection error: Check Stook endpoint and credentials.
Access denied: Verify Access Key and Secret Key.
pip install awscli

aws configure

AWS Access Key ID [None]: xxxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: yyyyyyyyyyyyyyyyyyy
Default region name [None]: ENTER
Default output format [None]: ENTER

List buckets:
aws --endpoint-url https://xxxxx.mncdn.com s3 ls

List the contents of a bucket:
aws --endpoint-url https://xxxxx.mncdn.com s3 ls s3://testbucket1

Create a bucket:
aws --endpoint-url https://xxxxx.mncdn.com s3 mb s3://bucket2

Upload a file:
aws --endpoint-url https://xxxxx.mncdn.com s3 cp hello_world.txt s3://mybucket

Delete a file:
aws --endpoint-url https://xxxxx.mncdn.com s3 rm s3://mybucket/hello_world.txt

Remove a bucket:
aws --endpoint-url https://xxxxx.mncdn.com s3 rb s3://mybucket

npm install aws-sdk

var AWS = require('aws-sdk');
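// Force path-style URLs (bucket name in the path), as required by Stook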
var config = { s3ForcePathStyle: true };
var credentials = new AWS.SharedIniFileCredentials({profile: 'medianova'});
AWS.config.credentials = credentials;
AWS.config.update(config);
var ep = new AWS.Endpoint('*******.mncdn.com');
var s3 = new AWS.S3({endpoint: ep});
var bucketName = 'medianovatest';
var keyName = 'hello_world.txt';
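// Create the bucket, then upload a small test object once the call completes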
s3.createBucket({Bucket: bucketName}, function() {
var params = {Bucket: bucketName, Key: keyName, Body: 'Hello World!'};
s3.putObject(params, function(err, data) {
if (err) console.log(err);
else console.log("Successfully uploaded data to " + bucketName + "/" + keyName);
});
});

{
"accessKeyId": "your_medianova_access_key",
"secretAccessKey": "your_medianova_secret_key",
"region": "us-east-1"
}

AWS_PROFILE=medianova node sample.js

Successfully uploaded data to node-sdk-sample-xxxx/hello_world.txt

composer require aws/aws-sdk-php-laravel

'providers' => [
Aws\Laravel\AwsServiceProvider::class,
],
'aliases' => [
'AWS' => Aws\Laravel\AwsFacade::class,
],

php artisan vendor:publish --provider="Aws\Laravel\AwsServiceProvider"

return [
'credentials' => [
'key' => env('YOUR_STOOK_ACCESS_KEY_ID'),
'secret' => env('YOUR_STOOK_SECRET_ACCESS_KEY'),
],
'region' => env('YOUR_STOOK_REGION', 'us-east-1'),
'version' => 'latest',
'ua_append' => [
'L5MOD/' . AwsServiceProvider::VERSION,
],
'endpoint' => env('YOUR_STOOK_ENDPOINT'),
'use_path_style_endpoint' => true,
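// 'verify' => false below disables TLS certificate verification; remove it in production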
'http' => [
'verify' => false
]
];

YOUR_STOOK_ACCESS_KEY_ID=xxxxxxxxxxxxxxxx
YOUR_STOOK_SECRET_ACCESS_KEY=yyyyyyyyyyyyyyyyyyyy
YOUR_STOOK_ENDPOINT=https://xxxxx.mncdn.com

public function uploadStookFile(Request $request)
{
// File information
$fileExtension = $request->file('image')->getClientOriginalExtension();
$fileFullName = "testFile" . '.' . $fileExtension;
try {
$s3 = App::make('aws')->createClient('s3');
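// Upload the file from the request into the Stook bucket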
$s3->putObject([
'Bucket' => "test-bucket",
'Key' => $fileFullName,
'SourceFile' => $request->file('image')->getRealPath(),
]);
} catch (\Exception $exception) {
throw new \Exception('File could not be uploaded to Stook.');
}
}

var AWS = require('aws-sdk');
var config = {
s3ForcePathStyle: true,
};
var credentials = new AWS.SharedIniFileCredentials({
profile: 'medianova'
});
AWS.config.credentials = credentials;
AWS.config.update(config);
var ep = new AWS.Endpoint('customername.mncdn.com');
var s3 = new AWS.S3({endpoint: ep});
var myBucket = 'bucket';
var myKey = 'mykey';
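// Pre-signed URL lifetime in seconds (here: 5 minutes)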
var signedUrlExpireSeconds = 60 * 5;
var url = s3.getSignedUrl('getObject', {
Bucket: myBucket,
Key: myKey,
Expires: signedUrlExpireSeconds
});
console.log(url);

using System;
using System.Threading.Tasks;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
namespace s3Example
{
internal class Program
{
private static string bucketName = "medianovatest";
private static string accessKey = "Enter Access Key";
private static string secretKey = "Enter Secret Key";
static async Task Main(string[] args)
{
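// ServiceURL (set last) points the client at Stook and takes precedence over RegionEndpoint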
var s3Config = new AmazonS3Config
{
RegionEndpoint = Amazon.RegionEndpoint.EUWest1,
ServiceURL = "https://xxxxxxxxx.mncdn.com",
ForcePathStyle = true
};
AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);
using (var s3Client = new AmazonS3Client(awsCredentials, s3Config))
{
await CreateBucket(s3Client);
}
Console.WriteLine("Operation completed.");
}
private static async Task CreateBucket(AmazonS3Client s3Client)
{
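// Attempt to create the bucket; a duplicate name surfaces as AmazonS3Exception below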
var createBucketRequest = new PutBucketRequest
{
BucketName = bucketName,
UseClientRegion = true
};
try
{
var response = await s3Client.PutBucketAsync(createBucketRequest);
Console.WriteLine($"Bucket creation status: {response.HttpStatusCode}");
}
catch (AmazonS3Exception ex)
{
Console.WriteLine($"{ex.Message}");
}
}
}
}

Learn how to validate object integrity in Stook using checksums exposed through ETag.
Checksum validation helps you ensure that data is stored and transferred without corruption. When you upload an object, Stook calculates a checksum and stores it as the object’s ETag. You can:
Retrieve the ETag from Stook, and
Optionally compute your own checksum locally (for example, using Python) and compare the values for single-part uploads.
Make sure you have:
A Stook bucket containing the object you want to verify
Access Key and Secret Key
One or more of the following tools installed:
AWS CLI
MinIO Client (mc)
Python with boto3 (optional, for local checksum calculation)
You can retrieve the ETag value using AWS CLI, MinIO Client, or curl.
The output includes the ETag:
Example output:
If you access the object directly via the Stook endpoint, you can inspect the ETag header:
If the Stook bucket is connected to a CDN Resource, you can also send the request through the CDN URL. However, note that the ETag header may change when served via CDN, depending on caching and response headers.
ETag behaves differently for single-part and multipart uploads:
Single-part uploads
ETag typically represents the MD5 checksum of the entire object.
In this case, a locally computed MD5 of the file can match the ETag.
Multipart uploads
ETag has the format <md5-of-each-part-concatenated>-<number_of_parts>, which does not match the MD5 checksum of the entire file.
Python examples are provided only to calculate local checksums on the client side. They do not calculate or change the checksum stored by Stook; they simply:
Download the object from Stook, and
Compute MD5 or SHA-256 locally so you can compare it with the ETag (for single-part uploads) or use it in your own integrity checks.
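As an illustration (not part of the original examples), the minimal sketch below combines both steps: it reads the ETag with head_object, detects multipart uploads by the "-<parts>" suffix, and compares a locally computed MD5 for single-part objects. The endpoint, bucket, key, and credentials are placeholders, not values from your account.

import boto3
import hashlib

s3 = boto3.client(
    "s3",
    endpoint_url="https://endpoint_url",   # placeholder Stook endpoint
    aws_access_key_id="access_key",        # placeholder
    aws_secret_access_key="secret_key",    # placeholder
)

bucket, key = "bucket_name", "file_path"

# The raw ETag header value is wrapped in double quotes
etag = s3.head_object(Bucket=bucket, Key=key)["ETag"].strip('"')

if "-" in etag:
    # Multipart upload: ETag is <md5-of-concatenated-part-md5s>-<parts>,
    # so it cannot be compared against a plain MD5 of the whole file
    print("Multipart ETag:", etag)
else:
    # Single-part upload: stream the object and compare MD5 values
    md5 = hashlib.md5()
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    for chunk in body.iter_chunks(chunk_size=8192):
        if chunk:
            md5.update(chunk)
    print("Match" if md5.hexdigest() == etag else "Mismatch")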
If you notice a mismatch between a local checksum and the ETag from Stook:
The object may have been uploaded as multipart, so the ETag will not match the MD5 of the full file.
You may be using a different algorithm (for example, SHA-256 locally vs MD5-based ETag).
You may be comparing against an outdated or changed local file.
Always verify:
Upload method (single-part vs multipart)
The algorithm used
That you are comparing the correct object version
Stook calculates a checksum when an object is uploaded and exposes it via the ETag header.
You can read ETag using AWS CLI, MinIO Client, or curl.
ETag corresponds to an MD5-like checksum only for single-part uploads.
For multipart uploads, ETag has the format: <md5-of-each-part-concatenated>-<number_of_parts>. This value does not match the MD5 checksum of the entire file.
If you see a mismatch between local MD5 and ETag, the object may have been uploaded as multipart.
Python examples compute local MD5 or SHA-256 and are used only for client-side integrity validation, not for calculating Stook's internal checksum.
AWS CLI:
aws s3api --endpoint-url https://<ENDPOINT_URL> \
  head-object \
  --bucket <BUCKET_NAME> \
  --key <OBJECT_KEY> \
  --profile <AWS_PROFILE>

{
  "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
  ...
}

MinIO Client:
mc stat <PROFILE_NAME>/<BUCKET_NAME>/<FILE_PATH> --json

{
  "etag": "d41d8cd98f00b204e9800998ecf8427e",
  ...
}

curl:
curl -I https://<ENDPOINT_URL>/<BUCKET_NAME>/<FILE_PATH>

Python (MD5):
import boto3
import hashlib
s3 = boto3.client(
"s3",
endpoint_url="https://endpoint_url",
aws_access_key_id="access_key",
aws_secret_access_key="secret_key",
)
response = s3.get_object(
Bucket="bucket_name",
Key="file_path",
)
md5 = hashlib.md5()
# Streaming read → do not load entire file into RAM
for chunk in response["Body"].iter_chunks(chunk_size=8192):
if chunk:
md5.update(chunk)
print("MD5:", md5.hexdigest())import boto3
import hashlib
s3 = boto3.client(
"s3",
endpoint_url="https://endpoint_url",
aws_access_key_id="access_key",
aws_secret_access_key="secret_key",
)
response = s3.get_object(
Bucket="bucket_name",
Key="file_path",
)
sha256 = hashlib.sha256()
# Streaming read → do not load entire file into RAM
for chunk in response["Body"].iter_chunks(chunk_size=8192):
if chunk:
sha256.update(chunk)
print("SHA-256:", sha256.hexdigest())