This was on Node.js 0.10.24.
When I try to use the copyObject API I get a NoSuchBucket error. If I instead use a getObject followed by a putObject with the same parameters, the calls succeed.
If I use s3cmd from the command line to issue the command it succeeds.
The name of the bucket I was trying to write to was 'fair-pricing'.
Let me know if you need any more information.
@dylanlingelbach I was not able to reproduce this error. Can you provide code that reproduces the issue for you?
@lsegal - sure, here is a minimal repro:
var aws = require('aws-sdk');
var s3 = new aws.S3();
var bucket = 'fair-pricing';

var copyParams = {
  Bucket: bucket,
  Key: 'csv_results_dev/20140415/test.csv',
  CopySource: 'csv_results_dev/20140415/results.csv'
};
s3.client.copyObject(copyParams, function(err, data) {
  if (err) {
    console.log('ERROR copyObject');
    console.log(err);
  } else {
    console.log('SUCCESS copyObject');
  }
});

var getParams = {
  Bucket: bucket,
  Key: 'csv_results_dev/20140415/results.csv'
};
s3.client.getObject(getParams, function(err, data) {
  if (err) {
    console.log('ERROR getObject');
    console.log(err);
  } else {
    console.log('SUCCESS getObject');
  }
  var putParams = {
    Bucket: bucket,
    Key: 'csv_results_dev/20140415/test.csv',
    Body: data.Body
  };
  s3.client.putObject(putParams, function(err, data) {
    if (err) {
      console.log('ERROR putObject');
    } else {
      console.log('SUCCESS putObject');
    }
  });
});
Here is my console output when I put the above code in test.js and run it with Node 0.10.24 on Mac OS X 10.9.2.
~/Code/xxxxx/xxxxxx (master)$ node test.js
ERROR copyObject
{ [NoSuchBucket: The specified bucket does not exist]
message: 'The specified bucket does not exist',
code: 'NoSuchBucket',
time: Wed Apr 16 2014 13:48:00 GMT-0500 (CDT),
statusCode: 404,
retryable: false,
_willRetry: false }
SUCCESS getObject
SUCCESS putObject
I've tried wiping out my node_modules and re-running npm install to ensure I have 2.0.0-rc13. I have my access key and secret key set via environment variables.
I haven't tried to see if the bucket name changes things. I'll try that now and comment when I find out.
According to the documentation on copyObject, the CopySource parameter needs to be prefixed with the source bucket name, followed by a /. It looks like in your example, "csv_results_dev/20140415/results.csv" is the full key name. Have you tried prefixing that param with the source bucket? i.e.:
CopySource: "SOURCEBUCKET/csv_results_dev/20140415/results.csv"
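With that fix applied, the failing params from the repro only need the CopySource changed; a sketch using the bucket and key names from the snippet above:

```javascript
var bucket = 'fair-pricing';

// CopySource must be "<source-bucket>/<source-key>", not just the key.
var copyParams = {
  Bucket: bucket,                            // destination bucket
  Key: 'csv_results_dev/20140415/test.csv',  // destination key
  CopySource: bucket + '/csv_results_dev/20140415/results.csv'
};

console.log(copyParams.CopySource);
// fair-pricing/csv_results_dev/20140415/results.csv
```

Passing these params to `s3.client.copyObject(copyParams, callback)` avoids the NoSuchBucket error, since S3 can now resolve the source bucket.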
Yep, I saw that right as you were commenting. When I tried with a key name that didn't contain a /, I got an error saying I needed to pass the bucket name as well.
I just misread the documentation when I was coding this up.
Thanks and sorry for the trouble!
No problem, glad you could get it working!
This conversation was helpful. Thanks!
I'm actually having this same problem, using a "copyObject" immediately following a "putObject". Please advise.
Yup, the conversation was helpful. Thanks!!!
Is there a way to get back the URL of the copied object in the destination bucket?
(For example, the response.Location we get from 'upload()'.)
Thanks in advance :)
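As far as I can tell, copyObject's response only contains CopyObjectResult (ETag, LastModified) and no Location field like upload() returns, so one option is to build the URL yourself. A sketch, assuming the virtual-hosted-style URL format; the bucket, region, and key below are placeholders:

```javascript
// Build a virtual-hosted-style S3 URL for the copied object.
// (Sketch: the names passed in below are illustrative.)
function objectUrl(bucket, region, key) {
  // Encode the key but keep "/" separators readable.
  return 'https://' + bucket + '.s3.' + region + '.amazonaws.com/' +
    encodeURIComponent(key).replace(/%2F/g, '/');
}

console.log(objectUrl('my-dest-bucket', 'us-east-1', 'copied/file 1.txt'));
// https://my-dest-bucket.s3.us-east-1.amazonaws.com/copied/file%201.txt
```

Note this assumes the destination bucket is publicly readable (or the URL is only used for display); otherwise a presigned getObject URL would be needed.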
Introducing a new parameter called "sourceBucket" and removing the bucket-name prefix from the CopySource path would have been more developer-friendly.
It took me a long time to reach this issue, as I had assumed that both source and destination paths would follow identical conventions, which is generally the case.
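In the meantime, a tiny wrapper can give you that interface; a sketch (the function name and its params object are my own invention, not part of the SDK):

```javascript
// Hypothetical wrapper that takes source and destination with the same
// convention, and builds the "<bucket>/<key>" CopySource internally.
function copyObjectBetween(s3, opts, callback) {
  return s3.copyObject({
    Bucket: opts.DestBucket,
    Key: opts.DestKey,
    CopySource: opts.SourceBucket + '/' + opts.SourceKey
  }, callback);
}

// Demo with a stub client that just records the params it was given:
var captured;
var stub = {
  copyObject: function(params, cb) { captured = params; cb(null, params); }
};
copyObjectBetween(stub, {
  SourceBucket: 'src-bucket', SourceKey: 'a/b.txt',
  DestBucket: 'dst-bucket', DestKey: 'c/d.txt'
}, function() {});
console.log(captured.CopySource); // src-bucket/a/b.txt
```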
const bucket = 'mybucket';
const url = s3.getSignedUrl('copyObject', {
  Bucket: bucket,
  CopySource: bucket + '/myfile.txt',
  Key: 'myfileCopy.txt',
  Expires: signedUrlExpireSeconds
});
On the front end I'm calling $http.put(url); it returns a 200 but nothing in the response.
When I try to get the file later, I get a corrupted file.
Any idea why?
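One possible cause, offered as an assumption rather than a confirmed diagnosis: S3 distinguishes a copy from a plain upload by the x-amz-copy-source header, so if the front-end PUT to the presigned URL doesn't send that header, S3 may treat it as an ordinary putObject with an empty body, which would match a 200 followed by an empty/corrupted object. A sketch of sending the header (the names are placeholders, and the value must match the CopySource the URL was signed with):

```javascript
// Sketch: include the copy-source header on the presigned PUT.
var headers = {
  'x-amz-copy-source': 'mybucket/myfile.txt' // must match the signed CopySource
};

// With Angular's $http:
//   $http.put(url, null, { headers: headers });
// Or with fetch:
//   fetch(url, { method: 'PUT', headers: headers });
console.log(headers['x-amz-copy-source']);
```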
@imdadul did you solve this? I get the same, an empty 200 response and no file copied.
const source = `/${s3Config.bucket}/${key}`;
const destination = '/test.docx';
try {
  const result = await s3Client.copyObject({
    Bucket: s3Config.bucket,
    CopySource: source,
    Key: destination,
  });
  console.log(result.response.data); // null
}
catch (error) {
  console.error(error.message);
}
You have to call .promise() to actually send the request:
const result = await s3Client.copyObject({
  Bucket: s3Config.bucket,
  CopySource: source,
  Key: destination,
}).promise();
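The reason the original logged null is that calling copyObject with neither a callback nor .promise() never sends the request; it just returns an AWS.Request object. A stand-in illustration (a stub, not the real SDK) of why .promise() both sends the request and resolves with the operation output directly:

```javascript
// Stub mimicking the v2 Request#promise behavior (not the real SDK).
function fakeCopyObject(params) {
  var request = {
    sent: false,
    promise: function() {
      request.sent = true; // the real SDK sends the HTTP request here
      // Resolves with the operation output itself (e.g. CopyObjectResult),
      // not a { response: { data: ... } } wrapper.
      return Promise.resolve({ CopyObjectResult: { ETag: '"abc"' } });
    }
  };
  return request;
}

var req = fakeCopyObject({ Bucket: 'b', CopySource: '/b/x.docx', Key: '/y.docx' });
console.log(req.sent); // false: nothing sent yet

req.promise().then(function(result) {
  console.log(req.sent);                     // true
  console.log(result.CopyObjectResult.ETag); // "abc"
});
```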
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.