What are the current thoughts on adding a shortcut syntax for doing async transactions (where operations happen on a background thread)?
Here is the current recommended way of doing it manually (from the docs):
// Query from any thread
dispatch_async(dispatch_queue_create("background", nil)) {
    let realm = try! Realm()
    let theDog = realm.objects(Dog).filter("age == 1").first
    try! realm.write {
        theDog!.age = 3
    }
}
It is very flexible, but it does open up the opportunity to accidentally forget to get a new Realm, and when working with non-default Realms you have to be careful to pass the right configuration (you also probably want it wrapped in an autoreleasepool, as in some of the later examples).
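For reference, the autoreleasepool-wrapped variant mentioned above would look roughly like this (same Swift 2-era dispatch API and `Dog` example model as the docs snippet):

```swift
// Query from a background thread, releasing the Realm promptly.
dispatch_async(dispatch_queue_create("background", nil)) {
    autoreleasepool {
        // The background Realm instance is released as soon as
        // the autoreleasepool block finishes.
        let realm = try! Realm()
        let theDog = realm.objects(Dog).filter("age == 1").first
        try! realm.write {
            theDog!.age = 3
        }
    }
}
```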
Would it make sense to offer a simpler version that ensured that you get the right Realm (and that it gets released afterwards)? Maybe something like this:
// Query and update on background thread
realm.async { realm in
    let theDog = realm.objects(Dog).filter("age == 1").first
    try! realm.write {
        theDog!.age = 3
    }
}
Or even simpler if you know that you need to write:
// Write via background thread
realm.async_write { realm in
    let theDog = realm.objects(Dog).filter("age == 1").first
    theDog!.age = 3
}
This kind of shortcut syntax would obviously need a bit of thought in regard to how you would then handle errors (can't open realm, commit fails), but offering the option to supply a closure called on error might satisfy that.
To me, the value add is much clearer for async write transactions than for read transactions. One big question is which queue quality of service to pick, since the point of such an API would be to decide this for users so they don't have to. The beauty of the current thread-agnostic API is its flexibility, which this would trade away for a bit less verbosity.
As discussed in private, I'd see a family of async write APIs, which could use a concurrent, low priority queue:
extension Realm {
    func asyncWrite(writeBlock: Realm -> Void) throws
    func asyncWrite<T: Object>(object: T, writeBlock: (Realm, T) -> Void) throws
    func asyncWrite(objects: Object..., writeBlock: (Realm, [Object]) -> Void) throws
    func asyncWrite<T: Object>(objects: RealmCollection<T>, writeBlock: (Realm, RealmCollection<T>) -> Void) throws
}
We could also have a queue parameter with a carefully chosen default value, so that users who don't care get our default, and users who do care can provide their own.
I'll mark this as a backlog priority for now, and we can continue discussing.
func asyncWrite(writeBlock: Realm -> Void) throws
Does it make sense for a method like that to throw? If an error happens, it is likely to happen at a later time and on a different thread. Would it not make more sense for it to take a closure argument for error handling?
The method wouldn't throw, the closure would. My sample is incorrect.
@jpsim The asyncWrite function must provide a completion block so that asynchronous errors (on commit, after the write block is executed) can be handled. Since most users would probably ignore the error in the completion block, we ought to provide a default implementation that aborts. Unfortunately we are unable to indicate that this variant may fail (the way try! annotates the synchronous write), but this is a limitation of Swift.
I propose the following functions for the no-parameter and single-object-parameter cases. I agree that the multi-object-parameter variants are also useful, but I would first like to discuss the simpler cases.
public func asyncWrite(onQueue queue: dispatch_queue_t = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), do block: Realm -> Void) {
    asyncWrite(onQueue: queue, do: block, then: { error in
        if let error = error { try! { throw error }() }
    })
}
public func asyncWrite(onQueue queue: dispatch_queue_t = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), do block: Realm -> Void, then completion: ErrorType? -> Void) {
    fatalError("Unimplemented")
}
public func asyncWrite<T: Object>(onQueue queue: dispatch_queue_t = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), with object: T, do block: (Realm, T) -> Void) {
    asyncWrite(onQueue: queue, with: object, do: block, then: { error in
        if let error = error { try! { throw error }() }
    })
}
public func asyncWrite<T: Object>(onQueue queue: dispatch_queue_t = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), with object: T, do block: (Realm, T) -> Void, then completion: ErrorType? -> Void) {
    fatalError("Unimplemented")
}
Note that the completion block is provided in an overload rather than a default parameter so that users can use trailing-closure syntax in the non-handling case.
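To illustrate the trailing-closure point, usage of the two overloads might look like this (asyncWrite as proposed above; not yet implemented):

```swift
// Non-handling case: trailing-closure syntax stays available because the
// completion handler lives in a separate overload, not a default parameter.
realm.asyncWrite { realm in
    let theDog = realm.objects(Dog).filter("age == 1").first
    theDog!.age = 3
}

// Handling case: the `then:` closure receives any asynchronous commit error.
realm.asyncWrite(do: { realm in
    let theDog = realm.objects(Dog).filter("age == 1").first
    theDog!.age = 3
}, then: { error in
    if let error = error { print("async write failed: \(error)") }
})
```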
It's not straightforward how this API ought to translate to Objective-C. If we wish to provide "default" arguments for queue in the Objective-C API, we have to provide 2x as many functions, and there are already quite a lot. If we do, I think something like this might be reasonable, but I'm not very happy with the number of "overloads".
- (void)asyncTransaction:(void(^)(RLMRealm *))block;
- (void)asyncTransactionOnQueue:(dispatch_queue_t)queue executeBlock:(void(^)(RLMRealm *))block;
- (void)asyncTransactionWithObject:(RLMObject *)object executeBlock:(void(^)(RLMRealm *, RLMObject *))block;
- (void)asyncTransactionOnQueue:(dispatch_queue_t)queue withObject:(RLMObject *)object executeBlock:(void(^)(RLMRealm *, RLMObject *))block;
- (void)asyncTransaction:(void(^)(RLMRealm *))block completion:(void(^)(NSError * _Nullable))completion;
- (void)asyncTransactionOnQueue:(dispatch_queue_t)queue executeBlock:(void(^)(RLMRealm *))block completion:(void(^)(NSError * _Nullable))completion;
- (void)asyncTransactionWithObject:(RLMObject *)object executeBlock:(void(^)(RLMRealm *, RLMObject *))block completion:(void(^)(NSError * _Nullable))completion;
- (void)asyncTransactionOnQueue:(dispatch_queue_t)queue withObject:(RLMObject *)object executeBlock:(void(^)(RLMRealm *, RLMObject *))block completion:(void(^)(NSError * _Nullable))completion;
And now you can probably see why I left off discussing the other variants for now…! I'd appreciate feedback on whether this direction is reasonable.
//cc @austinzheng @tgoyne @TimOliver @mrackwitz
@JadenGeller I think it was suggested somewhere that we would have a default background queue similar to the default file path, where it is defined globally and not tied to the methods themselves. Is this too limiting?
What is the use-case for letting the user specify the queue to use? If they need it to be on a specific queue for the sake of synchronizing access with other things they can just use a dispatch_sync() within the write block. Being able to set the priority seems not all that important.
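For example (assuming the asyncWrite API sketched above and a hypothetical `mySerialQueue` the caller wants to coordinate with):

```swift
realm.asyncWrite { realm in
    // Synchronize with other work on a specific queue from inside the
    // block, instead of dispatching the whole transaction onto it.
    dispatch_sync(mySerialQueue) {
        // read whatever shared state the write depends on
    }
    // ... perform the write ...
}
```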
For the sake of not having an explosion of overloads I'd be inclined to have just asyncWriteWithObjects:block: and asyncWriteWithObjects:block:completion: and have the user just pass nil for the first parameter if they don't want to hand anything over.
We could even skip the completion block entirely and tell people to just call commit() directly within the block if they want to handle errors from it.
If we skip the completion block, then we would likely need to adjust the API to something like handoverObjects:, since the block wouldn't be wrapped in a transaction automatically, right?
No, it would still be automatically wrapped in a transaction. Calling commit() from within a transaction block is allowed.
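Concretely, a user who wants to observe commit errors could commit explicitly inside the block (sketch, assuming the hypothetical asyncWrite above; commitWrite() is RealmSwift's existing throwing commit):

```swift
realm.asyncWrite { realm in
    let theDog = realm.objects(Dog).filter("age == 1").first
    theDog!.age = 3
    do {
        // Commit explicitly so the error surfaces here rather than when
        // the wrapping transaction commits.
        try realm.commitWrite()
    } catch {
        // handle the commit failure on this background thread
    }
}
```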
Ok, well my preference is on having a completion block since it is more obvious how to handle the error.
If we restrict the arguments of the function to be Objects, users cannot pass in collections for handover along with their objects. Should we have a common protocol for all Realm object-like types?
@JadenGeller we should support Realm object types but also Foundation collections that contain Realm objects as well.
@bigfish24 Which should we support? All of NSArray, NSSet, NSOrderedSet, NSDictionary, NSCountedSet, etc., and their mutable counterparts? This seems like a potentially huge undertaking.
I guess since the mutable counterparts are subclasses, there's no need to support them. The user could simply create a mutable copy after handover.
Why not any that support NSFastEnumeration?
@bigfish24 That's a reasonable idea. We have to iterate over their objects in O(n) time anyway, so they can rebuild any hash map if necessary.
Yeah I think that was the original idea when I was working on it to support passing in any Realm object or collection, or then a collection of Realm objects/collections that adheres to NSFastEnumeration
I'm not sure how we can structure this API without having a bunch of gross overloads. The user will have to cast each object that's handed over back to the correct class.
realm.asyncWrite(handingOver: myArray, myFoo, myBar) { realm, objects in
    let myArray = objects[0] as! [Bazz]
    let myFoo = objects[1] as! Foo
    let myBar = objects[2] as! Bar
    // code here
}
If we provide overloads for some finite number of arguments, the user can avoid collections and casting, but it may make the API more confusing.
realm.asyncWrite(handingOver: myArray, myFoo, myBar) { realm, myArray, myFoo, myBar in
    // code here
}
The above, for example, would be a specific overload with signature
func asyncWrite<T, U, V>(handingOver obj1: T, _ obj2: U, _ obj3: V, withHandler callback: (Realm, T, U, V) -> ())
Why not follow @tgoyne's suggestion and just have it accept a single argument which is either an object or a collection of objects, so you only have to cast that back?
That prevents users from passing multiple objects without first going through gross casts. For example, if I want to pass an instance of my Person model and my Business model into an async write, I'd have to upcast both to Object, place them into a collection, and [force] downcast back to their respective classes. This may be an acceptable API (it doesn't seem like there's a single great solution), but it definitely makes you jump through a lot of hoops.
Do we expect users to usually pass only a single type of model, in the most common case, over for an async write?
[RLMRealm deleteObjects:] is an example of a function that can take RLMResults, RLMArray, or any fast enumerable and does the appropriate thing.
For Swift, can you have a function which takes an arbitrary tuple and a function which takes the same tuple? That would let users hand over an arbitrary wad of objects without any casting.
Also, what if I have [Person] and [Business] that I'd like to pass through. I'm going to have to flatten that into [Object], keeping track at what index it goes from Person to Business, and separate them back out on the other side. Maybe it makes sense for us to support nested lists as well, since [[Object]] in this case is a little less gross. Just a tad.
@tgoyne It's possible to define a function that takes _any_ argument, but not specifically a tuple argument. Further, Swift provides us no capabilities for iterating over that tuple, so we wouldn't be able to implement the necessary logic for passing the objects between threads.
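To make the limitation concrete, here is a mock sketch (stand-in Realm type, not the real API): a generic function can accept and forward a tuple as a single opaque value, but Swift gives us no way to visit its elements, which the handover logic would need.

```swift
class Realm {}

// Compiles: T can be instantiated with any tuple type, e.g. (Person, Business),
// and the block receives it back fully typed.
func asyncWrite<T>(handingOver objects: T, block: (Realm, T) -> Void) {
    // We can only forward `objects` wholesale. There is no way to iterate
    // the tuple's elements to hand each one over to the new thread.
    block(Realm(), objects)
}
```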
Well I guess there's always the GYB approach for working around the language not having variadic generics...
Not familiar with that acronym. Are you referring to providing overloads for a reasonable number of arguments (i.e. approximately 5), or is this something else? For the record, the Swift standard library supports equality on tuples by this same method (so you can't check for equality if the tuple has more than 12 arguments): https://github.com/apple/swift-evolution/blob/master/proposals/0015-tuple-comparison-operators.md
GYB is Swift's Python-based way to generate boilerplate code (e.g. all twelve variants of CollectionSlice).
@tgoyne @austinzheng So is the consensus that it'd be ideal to go with the automatically-casting overload approach? @bigfish24 We could also allow any of these arguments to be collections, like in the 2nd example: https://github.com/realm/realm-cocoa/issues/3136#issuecomment-229132034
Could you use a variadic parameter (e.g. foo(x: Object...))? Sorry, I'm still reviewing the thread.
We definitely could, but this would require the user to downcast as in the 1st example: https://github.com/realm/realm-cocoa/issues/3136#issuecomment-229132034
Yeah, I think you're right. There's no way to both accept an arbitrary number of arguments and keep type information at the same time.
@austinzheng @tgoyne Is it fairly easy to integrate GYB into our build system? Is this something we'd even necessarily want to do?
I think Thomas was being facetious, but if he wasn't GYB is just a Python script we can steal from the Swift repo and invoke at build time.
If we do go with the code-gen route we'd probably want to just commit the end result and have regenerating it be a manual step to avoid having to make it work when building from the podspec. I also would just write like a five line shell/ruby/whatever script to generate the methods in a separate file as an extension on Realm. Could even use boost.PP and the C preprocessor! (don't do this)
Regardless of how we did it, it'd be a giant nightmare, but a giant nightmare for us that makes the API a lot more pleasant for the users may be worth it.
Speaking with @JadenGeller about this just now, I'd like to add to the bikeshedding:
We could constrain the handover parameter to SequenceType and preserve the concrete type: pass in [Model1(), Model1()] and the block vends a [Model1]; pass in [Model1(), Model2()] and the block vends an [Object]. Same thing for RealmCollectionType<T>. Here's a small Swift playground that can serve as a proof of concept:
import Foundation
class Object: NSObject {}
class Realm {}
class Model1: Object {}
class Model2: Object {}
extension Realm {
    func async(block: (Realm) -> Void) {
        block(self)
    }
    func async<S: SequenceType where S.Generator.Element: Object>(objects: S, block: (Realm, S) -> Void) {
        block(self, objects)
    }
}
let realm = Realm()
let array = [Model1(), Model1()]
let set: Set = [Model1(), Model1()]
let polymorphic = [Model1(), Model2()]
realm.async(array) { realm, objects in
    // objects: Array<Model1>
}
realm.async(set) { realm, objects in
    // objects: Set<Model1>
}
realm.async(polymorphic) { realm, objects in
    // objects: Array<Object>
}
@jpsim We won't be able to construct an S from the _new_ objects on the other thread.
That is to say, the API would need to be something like:
func async<S: SequenceType where S.Generator.Element: Object>(objects: S, block: (Realm, AnySequence<S.Generator.Element>) -> Void)
I definitely prefer the async read API to the asyncWrite API, so +1 on that!
Elaborating on @jpsim's suggestion, this API design doesn't require us to provide a completion handler. Since the only throwing call during handover is creating a Realm instance on the new thread, and since this will never fail (right?), we don't need to provide an error path.
What do y'all imagine the behavior should be if an Object subclass that isn't _yet_ managed by Realm is handed over? Probably throw an exception, but we could reasonably create a copy of the object instead.
@JadenGeller we should have a completion handler so users can build reactive code from it
@bigfish24 I don't think a completion handler is necessary to build reactive code in this case. It completes when their code completes.
realm.async(handingOver: object) { realm, object in
// do stuff
arbitraryUserCallback()
}
A completion handler is only necessary if we perform some operation after their block completes.
If the block is wrapped in a write transaction, you might want to do something after that.
@jpsim and I were discussing _not_ wrapping the block in a write transaction since it'd mean we'd retain the same try error handling for the write block.
Ah, I didn't catch that. So it's just handover, with the user needing to do a write transaction in the block manually.
It's reasonable that a user might want to do an async read, and this API allows that. Even if that's uncommon, it seems like a good idea to allow users to use the same familiar write function they already know how to use rather than introduce a semantically different alternative.
realm.async(handingOver: object) { realm, object in
try! realm.write {
object.foo = "bar"
}
// if you want to do anything after completion, do it here
}
@jpsim Actually, we _can_ provide a polymorphic implementation if we constrain to RangeReplaceableCollectionType instead of SequenceType since the former requires both init() and append(_:). 👍
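A proof-of-concept of that constraint, in the same mock-type style as the playground above (Swift 2 spelling; note that Set doesn't conform to RangeReplaceableCollectionType, so the Set example above would no longer compile under this constraint):

```swift
extension Realm {
    func async<C: RangeReplaceableCollectionType where C.Generator.Element: Object>(objects: C, block: (Realm, C) -> Void) {
        // Rebuild the same concrete collection type on the other side,
        // using init() and append(_:) from RangeReplaceableCollectionType.
        var rebuilt = C()
        for object in objects {
            rebuilt.append(object) // stand-in for per-object handover
        }
        block(self, rebuilt)
    }
}
```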
Hey guys,
I am late to this party and a newbie to Realm.
I have created a singleton class with the following method to write, but it crashes at times because of incorrect thread access.
Let me know what I am doing wrong here.
func save<T: Object>(_ realmObject: T) {
    let backgroundQueue = DispatchQueue(label: ".realm", qos: .background)
    backgroundQueue.async {
        let realm = try! Realm()
        try! realm.write {
            realm.add(realmObject)
        }
    }
}
Hi @santhoshs5, even though we've released support for thread-safe references, Realm objects are still otherwise thread-confined. To modify your example:
func save<T: Object>(_ realmObject: T) {
    let backgroundQueue = DispatchQueue(label: ".realm", qos: .background)
    let ref = ThreadSafeReference(to: realmObject)
    backgroundQueue.async {
        let realm = try! Realm()
        guard let obj = realm.resolve(ref) else { return } // object was deleted
        try! realm.write {
            realm.add(obj)
        }
    }
}
Please refer to our release post on thread-safe references to learn more: https://realm.io/news/obj-c-swift-2-2-thread-safe-reference-sort-properties-relationships/
I'll be closing this ticket now as we have a mechanism for asynchronously passing thread-safe references in Realm and performing write transactions, and we have a separate ticket open to track making convenience APIs around it in #4477.