MongoDB is a great NoSQL database choice, but I had a few concerns:
I want to use either Google Datastore or Amazon DynamoDB. Both are good NoSQL database choices.
Google: https://cloud.google.com/datastore/
Amazon: https://aws.amazon.com/dynamodb/details/
I think it would be _nice_ to be able to interface with NoSQL databases other than MongoDB.
Should be fairly easy to add if I'm right. You would just need to implement a DatabaseAdapter similar to https://github.com/ParsePlatform/parse-server/blob/master/ExportAdapter.js for your desired database
See the wiki about ExportAdapter... There is still some work to do in modularizing the database layer, but it's part of the vision for sure.
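To make that concrete, here's a minimal sketch of what such an adapter skeleton might look like (class and method names are my assumptions from this thread, not a confirmed parse-server interface):

```js
// Hypothetical adapter skeleton -- names and signatures are assumptions,
// not the confirmed ExportAdapter/DatabaseAdapter interface.
class DynamoDBAdapter {
  constructor(options) {
    this.options = options; // e.g. AWS region, table name prefix
  }

  connect() {
    // Validate credentials / open a connection; always return a promise.
    return Promise.resolve();
  }

  find(className, query, options) {
    // Translate the REST query into a DynamoDB Query/Scan here.
    return Promise.resolve([]);
  }

  create(className, object) {
    // Persist a new object and resolve with the stored result.
    return Promise.resolve(object);
  }
}

module.exports = DynamoDBAdapter;
```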
+1 I don't want to rely on MongoLab either. It'll be awesome if we have full control over the db.
What about RethinkDB?
I'm thinking about adding support for Azure DocumentDB if there is interest.
@ryancrawcour Great
Is the plan for Parse to use the ~~ExportAdapter~~ DatabaseAdapter class to do the migration to databases other than MongoDB? For example, if I were to implement my own ~~ExportAdapter~~ DatabaseAdapter for DynamoDB, will I be able to have Parse use that adapter to fill up my new DynamoDB table(s)?
If you look through the Parse code you will see that not all the Mongo code is contained within just DatabaseAdapter and ExportAdapter.
For instance, users.js has direct Mongo code.
This seems to be what @gfosco was talking about when he said that complete modularization hasn't been completed yet.
My question is: how much work is required to totally modularize database access?
Not much (famous last words).. Turn transform.js into ExportTransformAdapter.js and create a TransformAdapter (like FilesAdapter) which defaults to it. Make Schema.js and any other module which accesses transform use the adapter class, and allow it to be injected at initialization.
Of course, there's a lot more to it I'm sure. The same methods used in ExportAdapter may not always fit some other system, or other unforeseen differences..
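Roughly, the injection could mirror how FilesAdapter is swapped today. A sketch under that assumption (all names hypothetical):

```js
// Hypothetical sketch of TransformAdapter injection at initialization,
// mirroring the FilesAdapter pattern. Names are illustrative only.
var ExportTransformAdapter = require('./ExportTransformAdapter');

var transformAdapter = ExportTransformAdapter; // Mongo-flavored default

function setTransformAdapter(adapter) {
  transformAdapter = adapter;
}

function getTransformAdapter() {
  return transformAdapter;
}

// Schema.js and friends would call getTransformAdapter() instead of
// requiring transform.js directly, so a user could opt in at startup:
// setTransformAdapter(require('some-dynamodb-transform'));

module.exports = {
  setTransformAdapter: setTransformAdapter,
  getTransformAdapter: getTransformAdapter
};
```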
Yeah honestly this part of the design is a bit wack. Ideally all the Mongo access should be contained within DatabaseAdapter and ExportAdapter but at some point we got to a tradeoff between feature completeness and modular design, and decided to move towards feature completeness. Especially around supporting some of the stranger $ operators. I'm not sure how hard it would be to make more adapters and it may or may not be tractable.
I never used Parse, but I know people who did and it seems like the main of it was that they didn't have to provision and maintain their own servers. In my opinion, the stateless application layer is not so hard to maintain but going from no servers to maintaining a MongoDB replica set or even working with MongoLab is a big jump for people. I think complete support for adapters so we can use DBaaS like Google Cloud Datastore and DynamoDB should be a priority.
I think we need to:
@jashsayani sounds like a plan d(^_^)
@jashsayani I'm keen to contribute to the core and to building an adapter for Azure DocumentDB
@ryancrawcour I don't think Azure DocumentDB is the best choice since Azure is not as main stream as AWS. So if Microsoft does not see Azure bringing significant revenue in the future, they might restructure it or cancel services. AWS on the other hand is a proven success and DynamoDB is not going anywhere.
Google Cloud Datastore is also a gamble like Azure, but it's a shiny new thing and looks promising. Is there a specific reason why you want to use Azure?
@jashsayani trust me, Azure isn't going anywhere :)
@jashsayani I think the fact that @ryancrawcour is in the Azure GitHub organization might have something to do with why he'd like to use Azure lol
@richardkazuomiller Haha. Yes, I saw his profile. He's a Microsoft employee working on Azure. So it makes sense. Good to see Microsoft evangelizing on Github.
I found a really good helper library from Google, which should make it easy to build an adapter for Datastore: https://googlecloudplatform.github.io/gcloud-node/
I am trying to add support for Google Datastore.
Any quick advice on the best way to do it?
I would like to fully migrate to Google Services as well (Cloud and Datastore). Mongolab is too pricey for my liking.
Yes, I work for Microsoft on Azure. In fact, I work on the Azure DocumentDB team.
Point is, the DocumentDB team would love to have a native adapter for Parse for our database and want to help contribute to modularizing db access for Parse. Not only will this help us build an adapter, but will also enhance the product and allow anyone to build adapters. Win win.
So, how do we start?
@jashsayani, are you planning to start with this?
This appears to be a good plan, guys. Too bad I can only watch and learn.
I work in Google Cloud and have already started work on a Parse Cloud Datastore adapter. Also can help with modularizing.
@jmdobry how are you building an adapter without modularization done first?
Right now it's more investigation than working code. Just understanding everything Parse is doing with MongoDB and nailing down the Datastore equivalents. Obviously modularization needs to be completed before an adapter will completely work. I only just started.
@jmdobry then you're in the same place as I am with a DocumentDB adapter.
@gfosco @lacker what is the best way to move the modularization forward? Does somebody just submit a PR and go from there?
@jmdobry +1 Love it. As someone with no backend knowledge, if I'm migrating to App Engine, I'd ideally like to keep the data there as well. What's your timeframe to deliver?
I'm not sure yet, I'm still parsing the code.
@jmdobry Sounds good - my future is in your hands, no pressure
If the database access code were already fully modularized, I would estimate two weeks for a Cloud Datastore adapter and Mongo to Datastore migration code, but as it is, I'm not sure on the timeline.
@jmdobry Would the Datastore adapter be a one time deal or would it need updating itself as the community improves and adds on to the existing parse server code?
The latter.
Starting to look at modularizing the Parse code based on @gfosco's suggestions above. Some other things need to be done too, like users.js, which stores user documents in Mongo; this probably needs to be turned into an adapter pattern too.
Hopefully I'll contribute a PR soon with the first steps toward a modular Parse.
@jmdobry & @jashsayani want to collaborate on modularizing this with us? divide and conquer?
It's cool that you folks are excited about this! One thing I was also wondering about modularization is whether it would make more sense to use some existing node.js ORM type thing as a target. Maybe then multiple databases would work with less effort. I'm not sure if that's the right way to go though. Another thing is that this right now doesn't need any API for schema changes in the DB since with Mongo it's pretty much automatic. That might need to change. Another tricky bit is that some operators are handled outside of the adapter, like a lot of the relational stuff. I'm not quite sure what to do there. Just dumping some of the concerns I had on my mind when I was working on this. I think these things are solvable. Just a plain old SQL adapter would be pretty badass.
Oh one last thing to think about is that there are a bunch of wacky names, like _created_at, and it might make sense to not use the wacky names with new db adapters. Those are really technical debt from when we were first creating Parse and thought ho ho it sure won't matter or ever become publicly visible what we name these fields. Dunno if it's a real problem - it's ugly though.
@lacker thanks for the input. Definitely something we think is worth pursuing. The databases mentioned here are all NoSQL DBs thus far, so like Mongo they should be able to handle schema changes on the fly.
Oh ok folks from Google and Azure are on this thread. Awesome. I should point out that AFAICT Azure is making billions in revenue and seems to be a top Microsoft priority so I would not worry that it is going away! One key thing to note is, if you guys want to grab existing Parse customers then the database migration stuff will only work with MongoDB directly. Supporting those other datastores will make it easier for people to spin up new Parse Server apps on your clouds though so it is quite cool.
We have a DB migration tool that will migrate data from a MongoDB database to an Azure DocumentDB database. We will make sure this works with any native adapter we build.
> whether it would make more sense to use some existing node.js ORM type thing as a target
There's js-data (a personal project of mine, not associated with my work), a database-agnostic ORM for Node.js (and the browser). It abstracts a number of basic query operators and relation/association functionality into a generalized syntax. There are other options too, but an ORM may not be necessary.
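For a flavor of what that generalized syntax looks like (js-data 2.x style, as I recall its docs; verify against the current API):

```js
// Illustration of js-data's database-agnostic query syntax.
// js-data 2.x era API -- double-check against the current docs.
var JSData = require('js-data');
var MongoDBAdapter = require('js-data-mongodb');

var store = new JSData.DS();
store.registerAdapter('mongodb', new MongoDBAdapter('mongodb://localhost:27017'), { default: true });

var Post = store.defineResource('post');

// The same query object works against any registered adapter:
Post.findAll({
  where: { likes: { '>': 10 } },
  orderBy: [['createdAt', 'DESC']],
  limit: 20
}).then(function (posts) {
  console.log('found', posts.length, 'posts');
});
```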
@ryancrawcour I'm still digging through the source to see how Parse-server interacts with Mongo, when I'm done I can give a definitive answer on my contribution.
@jmdobry I'm all for building an adapter for js-data instead of building a Parse specific adapter if this is going to be easier, more flexible, etc. going forward.
We should take some guidance from the Parse guys here if they're ok with reworking all the database access and introducing dependencies on js-data.
It's my intention to write js-data-cloud-datastore anyway, so I agree with the sentiment.
I'll dive more into the code very soon.
This is my initial understanding of how to decouple Parse-server from MongoDB:
databaseAdapter
Defines an interface for talking to a database. Interface is:
Current implementation of the interface is in ExportAdapter.js.
Note: There is a single place in schemas.js where it's hard-coded to database.collection in order to retrieve all of the schemas. But it's easy enough to simply add a getAllSchemas method to the adapter that hides collection from Parse-server. Apart from this, Parse-server only interacts with Mongo through ExportAdapter.
ExportAdapter makes use of the logic found in Schema.js, which handles validation and permissions. ExportAdapter also makes use of transform.js, which transforms REST data into Mongo form. I'm not completely sure whether an adapter for a different database would need to implement everything in Schema.js and transform.js. I personally don't yet know what an entry in the _SCHEMA table looks like, or what its expected format is. For example, the Schema has a setPermissions method, but I don't see it used anywhere in Parse-server.
I would need to run an actual app with Parse-server and observe the format of REST data and the format of data stored in Mongo to better understand the need for transform.js. If I can get a better understanding of the format for REST data then it should reveal how much of transform.js needs to be re-implemented for another database.
filesAdapter
Defines an interface for storing and retrieving files. Interface is:
Current implementation of this interface is in GridStoreAdapter.js. This implementation stores the files in MongoDB using GridStore. Note that the GridStore adapter will only work if the MongoDB ExportAdapter is being used.
Should be pretty easy to write an adapter that stores the files in say, Google Cloud Storage.
There is also a bundled S3Adapter that can be used in place of GridStoreAdapter. See #113.
Each method of the files adapter is called in exactly one place in Parse-server. Parse-server does not attempt to access files without going through the adapter.
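Since each method is called in exactly one place, a Cloud Storage-backed implementation could stay small. A sketch using the gcloud-node library mentioned above (signatures follow the GridStoreAdapter shape loosely; treat them as assumptions):

```js
// Hypothetical files adapter backed by Google Cloud Storage via gcloud-node.
// Signatures follow the GridStoreAdapter shape loosely -- assumptions, not
// the confirmed FilesAdapter interface.
var gcloud = require('gcloud');

function GCSAdapter(projectId, keyFilename, bucketName) {
  var storage = gcloud.storage({ projectId: projectId, keyFilename: keyFilename });
  this.bucket = storage.bucket(bucketName);
}

GCSAdapter.prototype.createFile = function (config, filename, data) {
  var file = this.bucket.file(filename);
  return new Promise(function (resolve, reject) {
    var stream = file.createWriteStream();
    stream.on('error', reject);
    stream.on('finish', resolve);
    stream.end(data); // write the raw buffer and close the stream
  });
};

GCSAdapter.prototype.getFileData = function (config, filename) {
  var file = this.bucket.file(filename);
  return new Promise(function (resolve, reject) {
    file.download(function (err, contents) {
      return err ? reject(err) : resolve(contents);
    });
  });
};

module.exports = GCSAdapter;
```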
More investigation:
schemas.js
The REST API docs describe:
GET /schemas
GET /schemas/:className
PUT /schemas/:className
DELETE /schemas/:className
But as far as I can tell, Parse-server only implements GET /schemas. Hmm.
Schema.js
This file takes care of validation and "rolling updates" to the schema for each className. A user doesn't have to fully describe a schema if they don't want to, because as they save data for a className, the schema for that className will continually update itself by inferring the schema structure from the types of the values of each object being saved for that className.
Schema.js couples itself to MongoDB by using things like .collection, .insert and .findAndModify, but it doesn't have to. If the database adapter exposed the necessary methods, then the "rolling updates" logic could work for any database.
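To illustrate the "rolling updates" idea: at its core it's just mapping JS value types onto Parse field types and merging new fields in. A simplified sketch (the idea behind Schema.js, not its actual implementation):

```js
// Simplified sketch of "rolling" schema inference -- the idea behind
// Schema.js, not its actual logic.
function inferType(value) {
  if (typeof value === 'string') return 'String';
  if (typeof value === 'number') return 'Number';
  if (typeof value === 'boolean') return 'Boolean';
  if (Array.isArray(value)) return 'Array';
  if (value && value.__type === 'Date') return 'Date';
  if (value && value.__type === 'Pointer') return '*' + value.className;
  return 'Object';
}

// Merge the fields of a newly saved object into the stored schema.
// A database adapter would then persist `schema` however it likes.
function mergeIntoSchema(schema, object) {
  Object.keys(object).forEach(function (key) {
    if (!schema[key]) {
      schema[key] = inferType(object[key]);
    }
  });
  return schema;
}
```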
transform.js
It appears a lot of the logic here is similar to what is in js-data-mongodb and the other js-data adapters, meaning, if Parse-server _were_ to use js-data, the logic in transform.js would need to be re-written to map to js-data's query syntax, which would immediately add support for querying against RethinkDB, MySQL, Postgres, Redis, NeDB, LevelUp, and MongoDB of course. I'm not yet sure that the Mongo stuff in transform.js would all map over 100%, particularly the geo stuff.
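As a tiny example of the mapping in question, here is a Mongo-flavored constraint next to a rough js-data equivalent (operator names as I recall js-data's docs; the geo operators have no obvious counterpart):

```js
// A Mongo-style constraint as transform.js produces it today...
var mongoWhere = {
  score: { $gt: 1000, $lte: 5000 },
  tag: { $in: ['a', 'b'] }
};

// ...and a rough js-data equivalent. Operator names per js-data's docs;
// geo operators ($nearSphere etc.) have no obvious counterpart.
var jsDataParams = {
  where: {
    score: { '>': 1000, '<=': 5000 },
    tag: { 'in': ['a', 'b'] }
  }
};
```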
What about users.js, which appears to have some direct calls to MongoDB? I guess that could all be changed to use the database adapter too.
users.js appears to just use the database adapter, so it should be fine.
It does have var mongodb = require('mongodb'); at the top, but it is extraneous and can be deleted.
@jmdobry and others, you have my emotional support. Thank you. How are we planning to manage migration, though? Maybe parse host -> some mongoDB and then create a new migration tool from there to (say) Datastore?
@jmdobry Schemas API is a work in progress. Once it is ready for use we will announce it in the release notes and on the Parse blog.
@jmdobry any updates?
Happy to test
The migration plan on the Parse blog states that Parse will start writing new records to a user-specified database, so data from old clients goes to the user-hosted database. Parse would also have to support writing to DynamoDB / Google Cloud Datastore.
@jashsayani I couldn't find this news anywhere..? The migration guide says that the tool will point the Parse-hosted database to a user-hosted MongoDB database.
@miav that is correct; when migrating your data off parse.com you must use MongoDB. But we are discussing support for potentially doing another migration from your own Mongo to your own non-Mongo, or for creating a new app that's not on Mongo.
Regarding not using Mongo: if you need the data but aren't planning on using Parse Server anymore, I once had to move a Parse database out of Parse to a relational system. It's doable, but not simple; there are arrays and embedded document references that need to be addressed. Challenging but not impossible.
Here are two projects that I used to prove that it was possible:
The first exported everything to JSON files; I mostly used it because I was able to fork another attempt, so it was a starting point.
The second supports importing into any ActiveRecord (relational) DB (in my case PostgreSQL).
Note this was a proof of concept, it's not finished.
If you follow the Parse migration guide (which is the best way to do this thing), then Parse will have to support pointing their existing service to Amazon DynamoDB or Google Cloud Datastore. When existing clients talk to Parse to write new data, it would have to write to this new DB till a new client is published _and_ all the users upgrade to the new client.
Hey @jmdobry any updates on parse server support for Datastore?
I've actually been pretty sick the last two weeks, no progress from my end. After my initial investigation of what would need to be refactored, I've decided to wait a little bit to see what progress the Parse team makes in the near term.
@jmdobry sounds good - Get well soon :)
+1 Can't wait to be able to use this.
Going to close this issue... the database adapter system is in development.
Why not leave it open and close it once the adapter system is complete? That way adapter developers know when to start building their adapters.
I really prefer to close issues when there's no active discussion... but OK, I'll leave it open for now.
Having it open also helps us track the updates related to this issue. Thanks for having it open.
Hey all,
I am starting to build a databaseAdapter for Google Datastore. Does anyone have a boilerplate for a databaseAdapter other than the existing one?
I would also prefer to have it open till we have a PR for the database adapter. Also, an ETA would be helpful.
@saleeh93 you can start with the current Mongo implementation and replicate the whole interface of that object. You'll also need to provide a custom transform.js; this module is responsible for transforming queries and objects from the REST format to the Mongo format and back.
Given the current architecture of the project, it is most likely we won't merge any custom database adapter, as we provide a way to load those at runtime.
However, we'll happily accept pull requests that would help decouple MongoDB.
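For reference, a skeleton of what such a custom transform module might cover (function names are illustrative, not the exact transform.js exports):

```js
// Hypothetical skeleton of a custom transform module for another backend.
// Function names are illustrative, not the exact transform.js exports.
module.exports = {
  // REST query constraints -> the target database's filter syntax,
  // e.g. { score: { $gt: 10 } } -> a Datastore filter list.
  transformWhere: function (className, restWhere) {
    return restWhere;
  },

  // REST object -> a storable native record (rename reserved keys,
  // encode dates/pointers, etc.).
  transformCreate: function (className, restObject) {
    return restObject;
  },

  // Native record -> the REST object handed back to clients.
  untransformObject: function (className, nativeObject) {
    return nativeObject;
  }
};
```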
@flovilmart I already started as you mentioned, so it's working for adding data to Datastore. But then I learned that Google Datastore doesn't support some queries:

> Datastore allows querying on properties. Supported comparison operators are `=`, `<`, `>`, `<=`, and `>=`. "Not equal" and `IN` operators are currently not supported.

When finding any object in the DB, Parse checks `$exists` and `$in` for the ACL, so I am not sure Datastore will work with Parse. Any suggestions to overcome this?
I can't really comment on that. Every data store has its own capabilities and limitations. It seems that GDS is out of the question for now.
OK, thanks for your response.
You could try performing multiple queries, like their appengine libs do:
https://cloud.google.com/appengine/docs/python/datastore/queries#Python_Property_filters
@lkraider It's not available in the Node library.
@saleeh93 thanks for taking on the Datastore adapter effort. I have really been looking forward to it. Is it currently looking like an impossibility due to Datastore query restrictions?
Azure DocumentDB just announced protocol support for existing MongoDB drivers. In preview right now but should help open the path to using DocumentDB with parse-server.
> Microsoft Azure DocumentDB can now be accessed using existing drivers for MongoDB.
@thewizster that's pretty awesome!
@thewizster There is also an Azure Marketplace gallery item to 1-click deploy a parse-server running on Azure App Service and DocumentDB. :smile:
@otymartin In my view it's not possible with the current version of gcloud-node.
Every get request on every class sends an `IN` query, and that isn't supported in the current version.
@saleeh93 that's really disappointing to hear. Do we have to wait for Google Cloud to make a move, or can the community mitigate this problem? Perhaps @jmdobry can answer?
It's true that Google Cloud DataStore does not support an IN operator, however it is possible to simulate an IN query with several '=' queries that are unioned in the application layer. In general, how big is the array used in parse-server's IN queries?
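A sketch of that union approach (gcloud-node usage is approximate to its API at the time):

```js
// Simulating an IN query on Cloud Datastore by unioning several '='
// queries in the application layer. gcloud-node usage is approximate.
var gcloud = require('gcloud');
var dataset = gcloud.datastore.dataset({ projectId: 'my-project' });

function findIn(kind, property, values) {
  var queries = values.map(function (value) {
    return new Promise(function (resolve, reject) {
      var query = dataset.createQuery(kind).filter(property + ' =', value);
      dataset.runQuery(query, function (err, entities) {
        return err ? reject(err) : resolve(entities);
      });
    });
  });
  return Promise.all(queries).then(function (results) {
    // Flatten, then de-duplicate by key in case the unions overlap.
    var seen = {};
    return [].concat.apply([], results).filter(function (entity) {
      var id = JSON.stringify(entity.key);
      if (seen[id]) return false;
      seen[id] = true;
      return true;
    });
  });
}
```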
@jmdobry, I'm not sure about that; it should not be too much of a concern for now. This can be a work in progress, and the DBAdapter could throw errors if a query type is not supported.
Is there any update on making the hosted service on parse.com accept different databases for migration (besides MongoDB)? I understand that the deadline for migration is April 28, 2016. So apps will be marked as inactive after that date.
No, hosted Parse.com will support MongoDB only. After migrating to your own MongoDB (hopefully before April 28th) you can then do whatever you want with the data, including putting it in a different DB. Parse Server support for other DBs is coming, but isn't going to be ready by April 28th.
@drew-gross Wait, if hosted Parse does not migrate to other solutions (besides MongoDB), then this doesn't help much. It's nice for new apps that use Parse Server for new projects (not sure why you would do that though), but for existing users who are migrating, it's kinda pointless to use anything other than MongoDB. Because when the old client hits the Parse endpoint, it will not read/write data to whatever the chosen database is. Is this correct?
That is correct. Eventually, though, all your old clients will be upgraded and hitting your own Parse Server, at which point you can change databases.
@drew-gross sorry if I bother you: I still hear about this April 28th deadline, but it actually looks like nothing more than a suggested date (i.e., if you migrate your production database before the 28th, you'll probably make it to migrate your whole app before January 2017). Can you clarify what will happen on the 29th?
I'm trying to understand if I should really hurry up migrating my ridiculously small (yet production-stage) database.
I also need more info about April 28th
There will be some clarification going out in emails soon. The most time consuming part of migrating off Parse is going to be waiting for old clients to upgrade their app, so keep that in mind as you plan your migration.
@drew-gross I hope it's not going out on 27th, you know :)
Any updates about the progress :question:
I'm in the process of isolating all the MongoDB specific logic into MongoAdapter. This requires breaking a lot of internal dependencies on the schema format. After that, the next step will be to decide on the generic API that different database adapters will be expected to conform to, and change Parse Server to use that API, while changing the existing MongoAdapter to implement that API. Then after that we will implement that same API for other databases (probably MiniMongo, just for practise, and because it will help with our test setup, then with Postgres)
If you want to help out, check out the MongoTransform.js file, and see if you can make it less dependent on the SchemaController.
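As a strawman for discussion, the generic API could end up shaped something like this (purely speculative, not the decided interface):

```js
// Purely speculative strawman for the generic adapter API -- not the
// decided interface. Each backend would own its transform step internally.
class StorageAdapter {
  // Schema operations
  getAllSchemas() { throw new Error('not implemented'); }
  createClass(className, schema) { throw new Error('not implemented'); }

  // CRUD, expressed in Parse's REST format:
  createObject(className, object, schema) { throw new Error('not implemented'); }
  find(className, query, options, schema) { throw new Error('not implemented'); }
  update(className, query, update, schema) { throw new Error('not implemented'); }
  destroy(className, query, schema) { throw new Error('not implemented'); }
}

module.exports = StorageAdapter;
```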
@saleeh93 Do you have an open pull request, or a branch in your fork, that I can have a look at and try? I am also after gcloud Datastore as a backend :)
Is anyone working on a DynamoDB adapter? Is there some way I can stay informed, and maybe help out with development?
@karthiksekarnz Here it is https://github.com/saleeh93/parse-datastore-adapter
@karthiksekarnz Feel free to ask any questions
What's the current state of the work towards modularizing the database adapter so Parse Server can be used with other NoSQL databases? I'm interested in using RethinkDB for a current project, and I'd be willing to write an adapter for it if that's a possibility (or if it will be possible soon)...
@flinn I'm not from the team, but since it has an enhancement and long-term label I don't think it'll be coming soon. Instead you might want to take a look at horizon.io, which uses RethinkDB as its database. As long as you don't rely on Parse-specific stuff, it should have you covered.
I am also interested in an update on this, I like parse-server but it really needs support for other databases otherwise it is too expensive to run.
@flinn @drew-gross has been working on a postgres adapter, once he has sorted the adapter API to get that up and running, it should be pretty simple to add other database adapters in.
I think he is away on holiday atm, so I am not too sure on the ETA for that.
Is the Parse Postgres database adapter ready for testing?
I read through the code; could anyone point me in the right direction to test it out?
@blacha Thanks for the update -- I'll keep checking back for a status update from @drew-gross whenever he makes it back from his holiday.. I am still very interested in this and willing to dive in and work on a RethinkDB adapter for Parse Server.
@diego-vieira Thanks for that info, the Horizon project looks really interesting/promising... I'll take a closer look at it soon. Unfortunately, the project I am looking to set this stuff up for is a web application that's already being used in a production environment and it's tightly coupled on both the front and backend to the existing Parse infrastructure. This may be completely off-base and/or ill-advised (I haven't had a chance to dive into Horizon's docs yet, so please excuse my ignorance!), but would it make any sense to also consider building a horizon-adapter as another optional storage engine (instead of, or perhaps in addition to, a more direct/native rethinkdb-adapter)? It seems like that might offer an interesting plug-and-play solution that'd take care of the sharding/clustering of your storage solution and abstract those details away from the Parse Server... I guess the drawback is the added complexity and/or any additional network latency that might create.
Hi. What's the current status of the database layer "modularization"?
This should be modular enough for you to provide your own adapter. There may still be some quirks to move to the Mongo adapter, and the PG adapter is a work in progress that needs some effort to complete.
Contributions are welcome; I personally don't plan to work on this feature for the time being, focusing on performance and stability instead.
Seems that a lot of the Postgres work was done, but how can I configure Parse Server to use the Postgres adapter for testing and early adoption?
@drew-gross
Has this feature gone cold? I never hear about progress anymore; I'm guessing once the FB guys stopped working on it, nobody else took up the gauntlet?
The Postgres adapter lacks documentation but is covered by the majority of the tests. As for other database adapters, I don't believe it's the responsibility of this project to implement all of them. As always, we welcome contributions!
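For anyone who wants to experiment with it, the adapter is picked from the database URI scheme; a minimal sketch (option names as of recent parse-server releases, so verify against the version you run):

```js
// Minimal sketch: running parse-server against Postgres instead of Mongo.
// The postgres:// scheme selects the Postgres adapter in recent releases;
// verify option names against the parse-server version you use.
var express = require('express');
var ParseServer = require('parse-server').ParseServer;

var app = express();
app.use('/parse', new ParseServer({
  databaseURI: 'postgres://user:pass@localhost:5432/parse',
  appId: 'myAppId',
  masterKey: 'myMasterKey',
  serverURL: 'http://localhost:1337/parse'
}));

app.listen(1337, function () {
  console.log('parse-server running on port 1337');
});
```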
Sometimes I wish there was a bounty system on github to help reward contributors, I simply don't have the time to contribute. I would not mind putting some $ on the line for a google datastore adapter + documentation however. Not a lot but just for the sake of providing incentives. Anyway I digress, I will personally look into using the postgres adapter. If anyone can share their experiences using it, let me know.
At the risk of sounding dumb, what would the benefit of using the Postgres adapter over MongoDB be? I am on Heroku using Mlab and I've wondered if using Heroku's built in Postgres storage would be cheaper or have some other benefit.
@flovilmart
> The Postgres adapter lacks documentation but is covered by the majority of the tests.
Does this mean that it is ready for use in production?
If it's not, I'd like to contribute to help it get there quicker. Any pointers on where I can start?
Now that we can create and manage projects on Github, it might make sense to start one for completing Postgres support. This way people like me can see what's already done and what we can work on to help out.
Anyone interested in doing a fundraiser via bountysource to help promote the creation of a Google Datastore Adapter?
@saleeh93 Why did you stop working on the Google Datastore adapter? Did you come across some sort of limitation?
@noder199 Yeah, there are only limited query operators supported by Google Datastore. I stopped working on it.
Ah, alright. Hmm, was there ever a DynamoDB effort made by anybody? Any limitations involved with that database? Dynamo has been around for a while, so it should be more capable, but I know it sucks at storing certain data types...
@saleeh93
I am gonna take a look at some of your code. Would you mind telling me exactly what limitations you faced when it came to querying?
MongoDB Atlas is a cheaper alternative to mLab, and I am switching to that service as my database needs the extra storage. I won't be needing Datastore, but I'd still welcome any effort for an adapter if Google makes changes to its API.
Closing, as we added support for Postgres a while ago.