This may sound strange, but I have been able to reproduce it consistently.
I have been testing my app against 3 different backends and have noticed a difference in file upload success for files over 1MB.
The exception is:
Error Domain=NSCocoaErrorDomain Code=3840 "JSON text did not start with array or object and option to allow fragments not set." UserInfo={NSDebugDescription=JSON text did not start with array or object and option to allow fragments not set.}
The Heroku- and AWS-hosted parse-servers use the exact same configuration (fileKey, mLab DB, S3 bucket, S3 credentials, etc.). Both run the same version of parse-server-example, which I have configured to use parse-server 2.2.7, and both are deployed from the same git repo.
I understand this doesn't seem like it would be an issue with parse-server itself, but I would appreciate any thoughts on why parse-server would respond with a JSON failure. The 1MB threshold makes it seem like there must be an explicit size check occurring somewhere along the way (AWS, S3, parse-server, or the S3 adapter).
Hi - I hit similar limits with a server on AWS. Assuming you have a similar setup, it's likely due to the request-size limit in the default nginx config. You can increase it by following this blog post:
https://kroltech.com/2014/09/14/quick-tip-increase-upload-size-in-aws-elastic-beanstalk-node-js-env/
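In short, the fix is raising nginx's client_max_body_size directive, which defaults to 1m (hence the 1MB ceiling you're seeing). The post walks through applying it on Elastic Beanstalk; the directive it ultimately sets is just:

# nginx: allow request bodies up to 50MB (the value is an example; adjust as needed)
client_max_body_size 50M;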
Hope that helps.
@mattgoldspink - I think that did it, thanks!
@mattgoldspink What if I'm using Docker?
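I assume it's the same nginx directive, just applied inside the container? Something like this, if the proxy is the official nginx image (a hypothetical, untested sketch):

# Dockerfile for the proxy container; the official nginx image includes
# every *.conf file under /etc/nginx/conf.d/ automatically
FROM nginx:alpine
RUN printf 'client_max_body_size 50M;\n' > /etc/nginx/conf.d/upload-size.conf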
I'm using EC2 for my server and s3 File Adapter. I cannot seem to resolve this issue. No matter how I set this up, I'm limited to 1MB file uploads.
Node v6.6.0
NPM v3.10.3
Parse-Server v2.3.8
body-parser v1.17.1
express v4.15.2
This is what my files adapter looks like:
var s3Adapter = new S3Adapter("xxxxx", "xxxxx", "abc", { directAccess: true });
My Parse Server setup is a basic, standard configuration using the S3 files adapter:
var api = new ParseServer({
    databaseURI: process.env.DB_URL,
    verifyUserEmails: process.env.VERIFY_EMAIL,
    publicServerURL: process.env.EMAIL_URL,
    appName: process.env.APP_NAME,
    enableAnonymousUsers: false,
    mountPath: "/",
    emailAdapter: sesServer,
    cloud: process.env.CLOUD_CODE,
    appId: process.env.APP_ID,
    fileKey: process.env.FILE_KEY,
    filesAdapter: s3Adapter,
    masterKey: process.env.MASTER_KEY,
    clientKey: process.env.CLIENT_KEY,
    restAPIKey: process.env.REST_KEY,
    javascriptKey: process.env.JS_KEY,
    dotNetKey: process.env.DOT_NET_KEY,
    serverURL: process.env.CC_SERVER_URL,
    push: pushObj
});
This is my body-parser setup:
app.use(bodyParser.raw({ limit: '50mb' }));
app.use(bodyParser.json({ limit: '50mb', type: 'application/json' }));
app.use(bodyParser.urlencoded({ limit: '50mb', extended: true }));
Then I have the following Express routing setup:
var url = process.env.API_URL || "/";
app.use(function(req, res, next) {
    res.header("Access-Control-Allow-Origin", "*");
    res.header("Access-Control-Allow-Methods", "OPTIONS, POST, GET, PUT, DELETE");
    res.header("Access-Control-Allow-Headers", "X-Parse-REST-API-Key, X-Parse-Javascript-Key, X-Parse-Application-Id, X-Parse-Client-Version, X-Parse-Session-Token, X-Requested-With, X-Parse-Revocable-Session, Content-Type");
    res.header("Access-Control-Allow-Credentials", true);
    if (req.method === 'OPTIONS') {
        return res.sendStatus(200);
    } else {
        console.log("Got Request");
        next();
    }
});
app.use(url, api);
http.createServer(app).listen(app.get('port'), function() {
    console.log('Express server listening on port ' + app.get('port'));
});
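One more note: parse-server appears to have its own upload cap (the maxUploadSize option, which I believe defaults to '20mb'), so it shouldn't be what limits me to 1MB, but for completeness it can be raised explicitly. A hypothetical, trimmed-down sketch, not my actual config:

// Raising parse-server's internal cap on /files uploads
var api = new ParseServer({
    appId: process.env.APP_ID,
    masterKey: process.env.MASTER_KEY,
    serverURL: process.env.CC_SERVER_URL,
    filesAdapter: s3Adapter,
    maxUploadSize: '50mb'
});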
Did you check the nginx/Apache max upload size options?
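For nginx that's the client_max_body_size directive; if you're behind Apache instead, the equivalent would be LimitRequestBody (in bytes):

# Apache: cap request bodies at 50MB (52428800 bytes; example value)
LimitRequestBody 52428800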
Yeah, that worked... I just forgot that we had an additional proxy using Nginx... duh :) Thanks!
Eheh :)