Documentation ¶
Overview ¶
Package transx provides transfer executors for data migration. Supported transfer methods:
- Rsync: local/remote filesystem transfers with SSH support
- S3: object storage transfers using presigned URLs
It also provides S3-compatible Object Storage providers:
- Direct: AWS S3, MinIO, and other S3-compatible storage (via the minio-go SDK)
- Spider: via the CB-Spider Object Storage API
- Tumblebug: via the CB-Tumblebug Object Storage API
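A typical migration request, expressed as the JSON form of DataMigrationModel documented below, might look like this (hosts, paths, and credentials are illustrative placeholders):

```json
{
  "source": {
    "storageType": "filesystem",
    "path": "/var/lib/app/data",
    "filesystem": {
      "accessType": "ssh",
      "ssh": { "host": "203.0.113.10", "port": 22, "username": "ubuntu", "useAgent": true }
    }
  },
  "destination": {
    "storageType": "objectstorage",
    "path": "backup-bucket/app-data",
    "objectStorage": {
      "accessType": "minio",
      "minio": {
        "endpoint": "s3.amazonaws.com",
        "accessKeyId": "AKIA-EXAMPLE",
        "secretAccessKey": "EXAMPLE-SECRET",
        "region": "us-east-1",
        "useSSL": true
      }
    }
  },
  "strategy": "auto"
}
```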
Index ¶
- Constants
- Variables
- func Backup(dmm DataMigrationModel) error
- func GetCategory(method string) string
- func GetKeyExpiry() time.Duration
- func InitKeyStore(keyExpiryDuration, cleanupInterval time.Duration)
- func IsObjectStorageMethod(method string) bool
- func IsRsyncMethod(method string) bool
- func MigrateData(dmm DataMigrationModel) error
- func ParseBucketAndKey(path string) (bucket, key string)
- func Restore(dmm DataMigrationModel) error
- func Transfer(dmm DataMigrationModel) error
- func Validate(dmm DataMigrationModel) error
- type AuthConfig
- type BasicAuthConfig
- type DataLocation
- type DataMigrationModel
- type Executor
- type FilesystemAccess
- type FilterOption
- type JWTAuthConfig
- type KeyPair
- type KeyStore
- type MigrationError
- type MinioConfig
- type MinioProvider
- func (p *MinioProvider) DownloadFile(key, localPath string) error
- func (p *MinioProvider) GeneratePresignedURL(action, key string) (string, error)
- func (p *MinioProvider) GetBucket() string
- func (p *MinioProvider) ListObjects(prefix string) ([]ObjectInfo, error)
- func (p *MinioProvider) UploadFile(localPath, key string) error
- type ObjectInfo
- type ObjectStorageAccess
- type OperationError
- type Pipeline
- type PublicKeyBundle
- type RsyncExecutor
- type S3Executor
- type S3MinioConfig
- type S3Provider
- type SSHConfig
- type SpiderConfig
- type SpiderProvider
- type Step
- type TransferMode
- type TumblebugConfig
- type TumblebugProvider
- type UnsupportedTransferError
Constants ¶
const (
	StageBackup   = "backup"
	StageTransfer = "transfer"
	StageRestore  = "restore"
)
Migration stages

const (
	OperationBackup   = "backup"
	OperationRestore  = "restore"
	OperationTransfer = "transfer"
	OperationPreCmd   = "pre-command"
	OperationPostCmd  = "post-command"
)
Operation types

const (
	MethodLocal = "local" // Local filesystem transfer
	MethodSSH   = "ssh"   // Remote transfer via SSH/rsync
	MethodS3    = "s3"    // S3-compatible object storage
)
Transfer method constants

const (
	CategoryRsync         = "rsync"          // rsync-based transfers (local/ssh)
	CategoryObjectStorage = "object-storage" // Object storage transfers (S3, etc.)
)
Transfer category constants

const (
	// StorageTypeFilesystem represents local or remote filesystem storage.
	StorageTypeFilesystem = "filesystem"
	// StorageTypeObjectStorage represents S3-compatible object storage.
	StorageTypeObjectStorage = "objectstorage"
)

const (
	// AccessTypeLocal represents local filesystem access (no network).
	AccessTypeLocal = "local"
	// AccessTypeSSH represents remote filesystem access via SSH/rsync.
	AccessTypeSSH = "ssh"
)
Filesystem access types

const (
	// AccessTypeMinio represents direct S3 SDK access using minio-go.
	AccessTypeMinio = "minio"
	// AccessTypeSpider represents access via CB-Spider Object Storage API.
	AccessTypeSpider = "spider"
	// AccessTypeTumblebug represents access via CB-Tumblebug Object Storage API.
	AccessTypeTumblebug = "tumblebug"
)
Object Storage access types

const (
	// StrategyAuto automatically selects the best transfer method.
	StrategyAuto = "auto"
	// StrategyDirect forces direct transfer (e.g., SSH agent forwarding).
	StrategyDirect = "direct"
	// StrategyRelay forces relay via local machine.
	StrategyRelay = "relay"
)

const (
	PipelineFilesystemTransfer    = "filesystem-transfer"
	PipelineObjectStorageTransfer = "objectstorage-transfer"
	PipelineCrossStorageTransfer  = "cross-storage-transfer"
	StepRsyncTransfer             = "rsync-transfer"
	StepDownloadFromS3            = "download-from-s3"
	StepUploadToS3                = "upload-to-s3"
	StepRsyncFromServer           = "rsync-from-server"
	StepRsyncToServer             = "rsync-to-server"
)

const (
	// AuthTypeBasic represents HTTP Basic Authentication.
	AuthTypeBasic = "basic"
	// AuthTypeJWT represents JWT (JSON Web Token) Authentication.
	AuthTypeJWT = "jwt"
)
Auth types
const (
// DefaultStagingPath is the default local staging directory for relay transfers.
DefaultStagingPath = "/tmp/transx-staging"
)
Variables ¶
var (
	// ErrKeyNotFound is returned when the requested key is not in the store.
	ErrKeyNotFound = fieldsec.ErrKeyNotFound
	// ErrKeyExpired is returned when the key has expired.
	ErrKeyExpired = fieldsec.ErrKeyExpired
	// ErrKeyMismatch is returned when the key ID doesn't match.
	ErrKeyMismatch = fieldsec.ErrKeyMismatch
	// ErrDecryptionFailed is returned when decryption fails.
	ErrDecryptionFailed = fieldsec.ErrDecryptionFailed
	// ErrInvalidPublicKey is returned when public key parsing fails.
	ErrInvalidPublicKey = fieldsec.ErrInvalidPublicKey
)

var (
	NewKeyStore          = fieldsec.NewKeyStore
	ParsePublicKeyBundle = fieldsec.ParsePublicKeyBundle
)
Re-exported functions from fieldsec
Functions ¶
func Backup ¶
func Backup(dmm DataMigrationModel) error
Backup executes the PreCmd defined in the source DataLocation. Deprecated: Use executePreCommand directly or set Source.PreCmd and call MigrateData.
func GetCategory ¶
func GetCategory(method string) string
GetCategory returns the transfer category for the given method.
func GetKeyExpiry ¶
func GetKeyExpiry() time.Duration
GetKeyExpiry returns the configured key expiry duration. Panics if InitKeyStore() was not called.
func InitKeyStore ¶
func InitKeyStore(keyExpiryDuration, cleanupInterval time.Duration)
InitKeyStore initializes the singleton KeyStore and starts the cleanup routine. This should be called once from main() during server startup. Thread-safe: uses sync.Once to ensure single initialization.
Parameters:
- keyExpiryDuration: duration after which generated keys expire (e.g., 30*time.Minute)
- cleanupInterval: interval for background cleanup of expired keys (e.g., 10*time.Minute)
func IsObjectStorageMethod ¶
func IsObjectStorageMethod(method string) bool
IsObjectStorageMethod returns true if the method uses object storage.
func IsRsyncMethod ¶
func IsRsyncMethod(method string) bool
IsRsyncMethod returns true if the method uses rsync for transfer.
func MigrateData ¶
func MigrateData(dmm DataMigrationModel) error
MigrateData manages the complete data migration workflow:
1. If Source.PreCmd is defined, perform pre-processing (e.g., backup)
2. Always perform Transfer
3. If Destination.PostCmd is defined, perform post-processing (e.g., restore)
func ParseBucketAndKey ¶
func ParseBucketAndKey(path string) (bucket, key string)
ParseBucketAndKey parses the path into bucket and key components. Path format: "bucket-name/path/to/object" or "bucket-name/"
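The documented split can be sketched with strings.SplitN; this is an illustration of the stated behavior, not the package's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// parseBucketAndKey splits "bucket-name/path/to/object" into bucket and key;
// a trailing slash or a bare bucket name yields an empty key. Sketch of the
// documented behavior, not the package code.
func parseBucketAndKey(path string) (bucket, key string) {
	parts := strings.SplitN(path, "/", 2)
	bucket = parts[0]
	if len(parts) == 2 {
		key = parts[1]
	}
	return bucket, key
}

func main() {
	b, k := parseBucketAndKey("my-bucket/path/to/object")
	fmt.Println(b, k) // my-bucket path/to/object
}
```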
func Restore ¶
func Restore(dmm DataMigrationModel) error
Restore executes the PostCmd defined in the destination DataLocation. Deprecated: Use executePostCommand directly or set Destination.PostCmd and call MigrateData.
func Transfer ¶
func Transfer(dmm DataMigrationModel) error
Transfer runs the data transfer as defined by the given DataMigrationModel. It automatically selects the appropriate transfer strategy based on source/destination types.
func Validate ¶
func Validate(dmm DataMigrationModel) error
Validate checks if DataMigrationModel satisfies requirements.
Types ¶
type AuthConfig ¶
type AuthConfig struct {
AuthType string `json:"authType" validate:"required"` // "basic", "jwt" ("apikey", "oauth" not tested yet)
Basic *BasicAuthConfig `json:"basic,omitempty"` // For authType="basic"
JWT *JWTAuthConfig `json:"jwt,omitempty"` // For authType="jwt"
}
AuthConfig defines authentication configuration. Use authType to specify the authentication method.
type BasicAuthConfig ¶
type BasicAuthConfig struct {
Username string `json:"username" validate:"required"`
Password string `json:"password" validate:"required"`
}
BasicAuthConfig defines HTTP Basic Authentication credentials.
type DataLocation ¶
type DataLocation struct {
// StorageType: What kind of storage
// "filesystem": Local or remote filesystem
// "objectstorage": S3-compatible object storage
StorageType string `json:"storageType" validate:"required,oneof=filesystem objectstorage"`
// Path to the data
// For Filesystem: File path (e.g., "/data", "/home/user/data")
// For ObjectStorage: Bucket/Key (e.g., "my-bucket/my-key")
Path string `json:"path" validate:"required"`
// Access configuration (one of the following based on StorageType)
Filesystem *FilesystemAccess `json:"filesystem,omitempty"` // For storageType="filesystem"
ObjectStorage *ObjectStorageAccess `json:"objectStorage,omitempty"` // For storageType="objectstorage"
// Filter defines file filtering options
Filter *FilterOption `json:"filter,omitempty"`
// Hooks for pre/post processing
PreCmd string `json:"preCmd,omitempty"` // Command to run before transfer (source only)
PostCmd string `json:"postCmd,omitempty"` // Command to run after transfer (destination only)
}
DataLocation defines any data location with separated storage type and access method.
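A filesystem DataLocation reached over SSH, with filtering and a pre-command, might look like this in JSON (host, path, and command are illustrative):

```json
{
  "storageType": "filesystem",
  "path": "/var/lib/app/data",
  "filesystem": {
    "accessType": "ssh",
    "ssh": {
      "host": "203.0.113.10",
      "username": "ubuntu",
      "useAgent": true
    }
  },
  "filter": {
    "exclude": ["*.log", "temp/**"]
  },
  "preCmd": "systemctl stop app"
}
```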
func (DataLocation) IsFilesystem ¶
func (loc DataLocation) IsFilesystem() bool
IsFilesystem returns true if the location uses filesystem storage.
func (DataLocation) IsLocal ¶
func (loc DataLocation) IsLocal() bool
IsLocal returns true if the location is local filesystem.
func (DataLocation) IsObjectStorage ¶
func (loc DataLocation) IsObjectStorage() bool
IsObjectStorage returns true if the location uses object storage.
func (DataLocation) IsRemote ¶
func (loc DataLocation) IsRemote() bool
IsRemote returns true if the location requires network access.
func (DataLocation) NeedsLocalStaging ¶
func (loc DataLocation) NeedsLocalStaging() bool
NeedsLocalStaging returns true if this location needs local staging for relay.
type DataMigrationModel ¶
type DataMigrationModel struct {
Source DataLocation `json:"source" validate:"required"`
Destination DataLocation `json:"destination" validate:"required"`
// Strategy determines how the transfer is orchestrated.
// "auto": Automatically select best method.
// "direct": Force direct transfer (e.g., SSH agent forwarding).
// "relay": Force relay via local machine.
Strategy string `json:"strategy,omitempty" default:"auto" validate:"omitempty,oneof=auto direct relay"`
// EncryptionKeyID indicates that sensitive fields are encrypted.
// Empty string means plaintext, non-empty means encrypted with the specified key.
// The key is one-time use and will be deleted after decryption.
EncryptionKeyID string `json:"encryptionKeyId,omitempty"`
}
DataMigrationModel defines a single data migration task.
func DecryptModel ¶
func DecryptModel(model DataMigrationModel) (DataMigrationModel, error)
DecryptModel decrypts all sensitive fields in DataMigrationModel using the singleton KeyStore. After successful decryption, the key is automatically deleted (one-time use). Panics if InitKeyStore() was not called.
Parameters:
- model: the DataMigrationModel to decrypt
Returns a new DataMigrationModel with decrypted fields and EncryptionKeyID cleared.
func DecryptModelWith ¶
func DecryptModelWith(model DataMigrationModel, keyPair *KeyPair) (DataMigrationModel, error)
DecryptModelWith decrypts all sensitive fields in DataMigrationModel using the provided key pair. If the model is not encrypted (EncryptionKeyID is empty), returns as-is. Use this for testing or when managing keys externally.
Parameters:
- model: the DataMigrationModel to decrypt
- keyPair: the key pair containing the private key
Returns a new DataMigrationModel with decrypted fields and EncryptionKeyID cleared.
func EncryptModel ¶
func EncryptModel(model DataMigrationModel, publicKey *rsa.PublicKey, keyID string) (DataMigrationModel, error)
EncryptModel encrypts all sensitive fields in DataMigrationModel. Uses hybrid encryption (AES-256-GCM + RSA-OAEP) for fields of any size.
Parameters:
- model: the DataMigrationModel to encrypt
- publicKey: RSA public key for encryption
- keyID: identifier for the key (for server-side lookup)
Returns a new DataMigrationModel with encrypted fields and EncryptionKeyID set.
func (DataMigrationModel) IsEncrypted ¶
func (m DataMigrationModel) IsEncrypted() bool
IsEncrypted returns true if the model has encrypted sensitive fields.
type Executor ¶
type Executor interface {
// Execute performs the transfer from source to destination.
// Returns an error if the transfer fails.
Execute(source, destination DataLocation) error
}
Executor defines the interface for transfer operations.
type FilesystemAccess ¶
type FilesystemAccess struct {
// AccessType: How to access the filesystem
// "local": Local filesystem (no network)
// "ssh": Remote filesystem via SSH
AccessType string `json:"accessType" validate:"required,oneof=local ssh"`
// SSH configuration (required when accessType="ssh")
SSH *SSHConfig `json:"ssh,omitempty"`
}
FilesystemAccess defines how to access filesystem storage.
type FilterOption ¶
type FilterOption struct {
Include []string `json:"include,omitempty"` // Patterns to include (e.g., "*.txt", "data/**")
Exclude []string `json:"exclude,omitempty"` // Patterns to exclude (e.g., "*.log", "temp/**")
}
FilterOption defines file filtering options for transfers.
type JWTAuthConfig ¶
type JWTAuthConfig struct {
Token string `json:"token" validate:"required"`
}
JWTAuthConfig defines JWT authentication configuration.
type KeyStore ¶
type KeyStore = fieldsec.KeyStore
Re-exported type from fieldsec for convenience.
func GetKeyStore ¶
func GetKeyStore() *KeyStore
GetKeyStore returns the singleton KeyStore instance. Panics if InitKeyStore() was not called.
type MigrationError ¶
MigrationError represents an error during the migration process.
func (*MigrationError) Error ¶
func (e *MigrationError) Error() string
func (*MigrationError) Unwrap ¶
func (e *MigrationError) Unwrap() error
type MinioConfig ¶
type MinioConfig struct {
Endpoint string `json:"endpoint" validate:"required"`
AccessKeyId string `json:"accessKeyId" validate:"required"`
SecretAccessKey string `json:"secretAccessKey" validate:"required"`
Region string `json:"region,omitempty" default:"us-east-1"`
UseSSL bool `json:"useSSL,omitempty" default:"true"`
}
MinioConfig defines configuration for S3-compatible storage access using minio-go SDK. Supports: AWS S3, MinIO, Ceph, DigitalOcean Spaces, etc.
MinioConfig is defined here as it's S3-specific. SpiderConfig and TumblebugConfig are defined in model.go as they're shared with the main transx package.
type MinioProvider ¶
type MinioProvider struct {
// contains filtered or unexported fields
}
MinioProvider implements Provider using minio-go SDK. Supports: AWS S3, MinIO, Ceph, DigitalOcean Spaces, and other S3-compatible services.
func NewMinioProvider ¶
func NewMinioProvider(config *MinioConfig, bucket string) (*MinioProvider, error)
NewMinioProvider creates a new MinioProvider from MinioConfig.
func (*MinioProvider) DownloadFile ¶
func (p *MinioProvider) DownloadFile(key, localPath string) error
DownloadFile downloads a file from S3 to local path.
func (*MinioProvider) GeneratePresignedURL ¶
func (p *MinioProvider) GeneratePresignedURL(action, key string) (string, error)
GeneratePresignedURL generates a presigned URL for S3 operations.
func (*MinioProvider) GetBucket ¶
func (p *MinioProvider) GetBucket() string
GetBucket returns the bucket name.
func (*MinioProvider) ListObjects ¶
func (p *MinioProvider) ListObjects(prefix string) ([]ObjectInfo, error)
ListObjects lists objects in the bucket with the given prefix.
func (*MinioProvider) UploadFile ¶
func (p *MinioProvider) UploadFile(localPath, key string) error
UploadFile uploads a local file to S3.
type ObjectInfo ¶
type ObjectInfo struct {
Key string // Object key (path)
Size int64 // Size in bytes
LastModified string // Last modified timestamp
ETag string // Entity tag (hash)
}
ObjectInfo represents metadata about a storage object.
type ObjectStorageAccess ¶
type ObjectStorageAccess struct {
// AccessType: How to access object storage
// "minio": Direct S3 SDK access using minio-go
// "spider": Via CB-Spider Object Storage API
// "tumblebug": Via CB-Tumblebug Object Storage API
AccessType string `json:"accessType" validate:"required,oneof=minio spider tumblebug"`
// Provider-specific configurations (one required based on accessType)
Minio *S3MinioConfig `json:"minio,omitempty"` // For accessType="minio"
Spider *SpiderConfig `json:"spider,omitempty"` // For accessType="spider"
Tumblebug *TumblebugConfig `json:"tumblebug,omitempty"` // For accessType="tumblebug"
}
ObjectStorageAccess defines how to access object storage.
type OperationError ¶
type OperationError struct {
Operation string // "backup", "restore", "transfer"
Method string // transfer method (for transfer operations)
Source string // source path/endpoint
Destination string // destination path/endpoint
Command string // executed command (for backup/restore)
Output string // command output (for backup/restore)
IsRelayMode bool // relay mode flag (for transfer)
Context map[string]string // additional context information
Err error // underlying error
}
OperationError provides detailed context about transx operation failures. This unified error type handles backup, restore, and transfer operations.
func (*OperationError) Error ¶
func (e *OperationError) Error() string
func (*OperationError) GetMethod ¶
func (e *OperationError) GetMethod() string
GetMethod returns the transfer method (applicable to transfer operations).
func (*OperationError) GetOutput ¶
func (e *OperationError) GetOutput() string
GetOutput returns the command output for debugging (applicable to backup/restore operations).
func (*OperationError) IsOperation ¶
func (e *OperationError) IsOperation(operation string) bool
IsOperation checks if the error is for a specific operation type.
func (*OperationError) Unwrap ¶
func (e *OperationError) Unwrap() error
type Pipeline ¶
Pipeline represents a planned transfer with multiple steps.
func Plan ¶
func Plan(model DataMigrationModel) (*Pipeline, error)
Plan analyzes the DataMigrationModel and returns the optimal transfer Pipeline. The routing is based on StorageType combinations (3 cases only).
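The three StorageType combinations can be sketched as a routing switch; the string values match the documented constants, but the function itself is an illustration, not the package's Plan logic:

```go
package main

import "fmt"

// pickPipeline sketches the documented routing: only the source/destination
// StorageType pair decides which pipeline runs.
func pickPipeline(srcType, dstType string) string {
	switch {
	case srcType == "filesystem" && dstType == "filesystem":
		return "filesystem-transfer" // rsync-based
	case srcType == "objectstorage" && dstType == "objectstorage":
		return "objectstorage-transfer" // presigned-URL based
	default:
		return "cross-storage-transfer" // relay via local staging
	}
}

func main() {
	fmt.Println(pickPipeline("filesystem", "objectstorage")) // cross-storage-transfer
}
```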
type PublicKeyBundle ¶
type PublicKeyBundle = fieldsec.PublicKeyBundle
Re-export types from fieldsec for convenience
type RsyncExecutor ¶
type RsyncExecutor struct {
Mode TransferMode // Transfer mode (pull, push, agent-forward)
DeleteExtraneous bool // --delete: remove extraneous files from destination
DryRun bool // --dry-run: perform trial run without changes
Verbose bool // -v: increase verbosity
AdditionalArgs []string // Additional rsync arguments
// contains filtered or unexported fields
}
RsyncExecutor implements Executor using rsync for file transfers. Supports three transfer modes: Pull, Push, and Agent Forwarding.
func NewRsyncExecutor ¶
func NewRsyncExecutor(src, dst DataLocation) (*RsyncExecutor, error)
NewRsyncExecutor creates a new RsyncExecutor with automatically determined transfer mode:
- SSH → SSH: AgentForward
- SSH → Local: Pull
- Local → SSH: Push
- Local → Local: not supported (returns error)
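The mode table above can be sketched as a decision function over source/destination remoteness; this mirrors the documented selection rules but is not the package's implementation:

```go
package main

import "fmt"

// pickMode applies NewRsyncExecutor's documented mode selection:
// SSH→SSH uses agent forwarding, SSH→Local pulls, Local→SSH pushes,
// and Local→Local is rejected.
func pickMode(srcRemote, dstRemote bool) (string, error) {
	switch {
	case srcRemote && dstRemote:
		return "agent-forward", nil
	case srcRemote:
		return "pull", nil
	case dstRemote:
		return "push", nil
	default:
		return "", fmt.Errorf("local-to-local transfer is not supported")
	}
}

func main() {
	mode, _ := pickMode(true, false)
	fmt.Println(mode) // pull
}
```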
func (*RsyncExecutor) Execute ¶
func (e *RsyncExecutor) Execute(source, destination DataLocation) error
Execute performs rsync transfer from source to destination.
type S3Executor ¶
type S3Executor struct {
Provider S3Provider // S3 provider for generating presigned URLs
}
S3Executor implements Executor for S3 object storage transfers. Uses presigned URLs for authentication-free upload/download.
func NewS3Executor ¶
func NewS3Executor(provider S3Provider) *S3Executor
NewS3Executor creates a new S3Executor with the given provider.
func (*S3Executor) Execute ¶
func (e *S3Executor) Execute(source, destination DataLocation) error
Execute performs S3 transfer from source to destination.
type S3MinioConfig ¶
type S3MinioConfig struct {
Endpoint string `json:"endpoint" validate:"required"`
AccessKeyId string `json:"accessKeyId" validate:"required"`
SecretAccessKey string `json:"secretAccessKey" validate:"required"`
Region string `json:"region,omitempty" default:"us-east-1"`
UseSSL bool `json:"useSSL,omitempty" default:"true"`
}
S3MinioConfig defines S3 SDK configuration using minio-go.
type S3Provider ¶
type S3Provider interface {
// GeneratePresignedURL generates a presigned URL for upload or download.
// action: "upload" or "download"
// key: object key (file path within bucket)
GeneratePresignedURL(action, key string) (string, error)
// ListObjects lists objects with the given prefix.
ListObjects(prefix string) ([]ObjectInfo, error)
// GetBucket returns the bucket/container name for this provider.
GetBucket() string
}
S3Provider defines the interface for S3-compatible object storage operations.
func NewS3Provider ¶
func NewS3Provider(loc DataLocation) (S3Provider, error)
NewS3Provider creates an S3 provider from DataLocation.
type SSHConfig ¶
type SSHConfig struct {
// Connection details
Host string `json:"host" validate:"required"`
Port int `json:"port,omitempty" default:"22"`
Username string `json:"username" validate:"required"`
ConnectTimeout int `json:"connectTimeout,omitempty" default:"30"`
// Authentication (priority: PrivateKey > PrivateKeyPath > Agent > none)
// At least one authentication method should be available.
//
// PrivateKey: PEM-encoded private key content (preferred for injected secrets).
// In JSON, use single line with \n for newlines:
// "privateKey": "-----BEGIN RSA PRIVATE KEY-----\nMIIE...\n-----END RSA PRIVATE KEY-----"
PrivateKey string `json:"privateKey,omitempty"`
PrivateKeyPath string `json:"privateKeyPath,omitempty"` // Path to private key file (legacy, prefer PrivateKey)
UseAgent bool `json:"useAgent,omitempty"` // Use SSH agent for authentication (supports agent forwarding)
// Rsync options
Archive bool `json:"archive,omitempty" default:"true"`
Compress bool `json:"compress,omitempty" default:"true"`
Delete bool `json:"delete,omitempty"`
Verbose bool `json:"verbose,omitempty"`
DryRun bool `json:"dryRun,omitempty"`
}
SSHConfig defines SSH connection details and rsync options.
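An SSHConfig with an inline private key, serialized as documented (single line, `\n` for newlines), might look like this; host and key material are placeholders:

```json
{
  "host": "203.0.113.10",
  "port": 22,
  "username": "ubuntu",
  "privateKey": "-----BEGIN RSA PRIVATE KEY-----\nMIIE...\n-----END RSA PRIVATE KEY-----",
  "archive": true,
  "compress": true
}
```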
type SpiderConfig ¶
type SpiderConfig struct {
Endpoint string `json:"endpoint" validate:"required"` // CB-Spider API base URL (e.g., "http://localhost:1024/spider")
ConnectionName string `json:"connectionName" validate:"required"` // CB-Spider connection name (e.g., "aws-connection")
Expires int `json:"expires,omitempty" default:"3600"` // Presigned URL expiration in seconds
Auth *AuthConfig `json:"auth,omitempty"` // Optional authentication configuration
}
SpiderConfig defines CB-Spider Object Storage API configuration. Endpoint should include /spider prefix (e.g., "http://localhost:1024/spider").
type SpiderProvider ¶
type SpiderProvider struct {
// contains filtered or unexported fields
}
SpiderProvider implements Provider for CB-Spider S3 Object Storage API. Based on CB-Spider swagger.yaml [S3 Object Storage Management] endpoints.
func NewSpiderProvider ¶
func NewSpiderProvider(config *SpiderConfig, bucket string) (*SpiderProvider, error)
NewSpiderProvider creates a new SpiderProvider.
func (*SpiderProvider) GeneratePresignedURL ¶
func (p *SpiderProvider) GeneratePresignedURL(action, key string) (string, error)
GeneratePresignedURL generates a presigned URL via CB-Spider S3 API. Uses the CB-Spider special feature endpoints:
- GET /s3/presigned/download/{BucketName}/{ObjectKey} for download
- GET /s3/presigned/upload/{BucketName}/{ObjectKey} for upload
func (*SpiderProvider) GetBucket ¶
func (p *SpiderProvider) GetBucket() string
GetBucket returns the bucket name.
func (*SpiderProvider) ListObjects ¶
func (p *SpiderProvider) ListObjects(prefix string) ([]ObjectInfo, error)
ListObjects lists objects via CB-Spider S3 API. Uses GET /s3/{BucketName}?ConnectionName=xxx to list objects in bucket.
type Step ¶
type Step struct {
Name string
Source DataLocation
Destination DataLocation
Executor Executor
}
Step represents a single transfer step in the pipeline.
type TransferMode ¶
type TransferMode string
TransferMode defines how rsync transfer is executed.
const (
	// TransferModePull pulls data from remote source to local.
	// Direction: remote-source → local
	TransferModePull TransferMode = "pull"
	// TransferModePush pushes data from local to remote destination.
	// Direction: local → remote-destination
	TransferModePush TransferMode = "push"
	// TransferModeAgentForward uses SSH Agent Forwarding to execute rsync
	// on the source server, transferring directly to destination.
	// Direction: remote-source → remote-destination (via source server)
	TransferModeAgentForward TransferMode = "agent-forward"
)
type TumblebugConfig ¶
type TumblebugConfig struct {
Endpoint string `json:"endpoint" validate:"required"` // CB-Tumblebug API base URL (e.g., "http://localhost:1323/tumblebug")
NsId string `json:"nsId" validate:"required"` // Namespace ID for multi-tenancy
OsId string `json:"osId" validate:"required"` // Object Storage ID
Expires int `json:"expires,omitempty" default:"3600"` // Presigned URL expiration in seconds
Auth *AuthConfig `json:"auth,omitempty"` // Optional authentication configuration
}
TumblebugConfig defines CB-Tumblebug Object Storage API configuration. Endpoint should include /tumblebug prefix (e.g., "http://localhost:1323/tumblebug").
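A TumblebugConfig with Basic authentication might look like this; the namespace, storage ID, and credentials are illustrative:

```json
{
  "endpoint": "http://localhost:1323/tumblebug",
  "nsId": "default",
  "osId": "my-object-storage",
  "expires": 3600,
  "auth": {
    "authType": "basic",
    "basic": { "username": "user", "password": "pass" }
  }
}
```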
type TumblebugProvider ¶
type TumblebugProvider struct {
// contains filtered or unexported fields
}
TumblebugProvider implements Provider for CB-Tumblebug Object Storage API. Based on CB-Tumblebug swagger.yaml [Infra Resource] Object Storage Management endpoints.
func NewTumblebugProvider ¶
func NewTumblebugProvider(config *TumblebugConfig) (*TumblebugProvider, error)
NewTumblebugProvider creates a new TumblebugProvider.
func (*TumblebugProvider) GeneratePresignedURL ¶
func (p *TumblebugProvider) GeneratePresignedURL(action, key string) (string, error)
GeneratePresignedURL generates a presigned URL via CB-Tumblebug API. Uses the new endpoint: GET /ns/{nsId}/resources/objectStorage/{osId}/object/{objectKey} Query parameters:
- operation: "upload" or "download"
- expires: expiration time in seconds (default: 3600)
func (*TumblebugProvider) GetBucket ¶
func (p *TumblebugProvider) GetBucket() string
GetBucket returns the osId as the bucket identifier.
func (*TumblebugProvider) ListObjects ¶
func (p *TumblebugProvider) ListObjects(prefix string) ([]ObjectInfo, error)
ListObjects lists objects via CB-Tumblebug API. Uses GET /ns/{nsId}/resources/objectStorage/{osId} to list objects in bucket.
type UnsupportedTransferError ¶
UnsupportedTransferError indicates that no executor is available for the given transfer combination.
func (*UnsupportedTransferError) Error ¶
func (e *UnsupportedTransferError) Error() string
Source Files ¶
Directories ¶
| Path | Synopsis |
|---|---|
| examples | |
| mariadb-migration (command) | |
| object-storage (command) | |
| fieldsec | Package fieldsec provides field-level encryption for Go structs using hybrid encryption. |