Jitsi-meet: Screen sharing on mobile app does not exist

Created on 19 Mar 2020  ·  68 Comments  ·  Source: jitsi/jitsi-meet

Is your feature request related to a problem you are facing?
Teacher wants to share screen with class
Uses an Apple iPad
Starts Jitsi in browser
App comes up, cannot be avoided
App does not allow screen sharing
Materials not visible to students

Describe the solution you'd like
Enable screen sharing just as in browser

Describe alternatives you've considered
Use desktop computer instead of ipad
But is not portable and has other apps / entire workflow would have to be changed

Thanks for your consideration.

feature-request ios mobile

All 68 comments

I've just looked around a bit at what steps are needed for Jitsi Meet to implement screen sharing from iOS:

Screen sharing with Jitsi Meet running in a browser (Safari)
This is currently not possible. Safari for iOS lacks the getDisplayMedia API to capture the screen's content. The corresponding WebKit bug is https://bugs.webkit.org/show_bug.cgi?id=186294

Screen sharing with the Jitsi Meet iOS app
This could work: iOS apps can access the screen content with the ReplayKit framework. For streaming the screen content (not only of the Jitsi Meet app itself but of the whole system) into the Jitsi video call, the ReplayKit 2 broadcast functionality could be used. The Jitsi Meet app for iOS is based on React Native, and the only ReplayKit React Native module I've found supports recording only, not the required broadcast functionality.
→ Jitsi Meet for iOS would have to implement a ReplayKit 2 broadcast WebRTC adapter in the native part of the app, or extend the React Native module to provide it.
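For anyone exploring this route: below is a minimal sketch of the extension side using ReplayKit 2's broadcast upload extension API. The class shape (RPBroadcastSampleHandler and its overrides) is the real framework API; forwardToMainApp is a placeholder for whatever transport ends up moving frames into the main app, which is exactly the open problem discussed later in this thread.

  import ReplayKit

  class SampleHandler: RPBroadcastSampleHandler {

      override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
          // The system-wide broadcast has started; set up the transport to the app here.
      }

      override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                        with sampleBufferType: RPSampleBufferType) {
          switch sampleBufferType {
          case .video:
              // Each call delivers one frame of the whole screen.
              forwardToMainApp(sampleBuffer)
          case .audioApp, .audioMic:
              break // Audio is ignored in this sketch.
          @unknown default:
              break
          }
      }

      override func broadcastFinished() {
          // Tear down the transport.
      }

      private func forwardToMainApp(_ sampleBuffer: CMSampleBuffer) {
          // Placeholder: serialize the frame and hand it to the main app.
      }
  }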

Will Jitsi support ReplayKit in a future update by any chance? Looking forward to it!

There's an article at https://medium.com/better-programming/building-an-ios-screen-recorder-with-react-native-9e8c764c477e which covers processing broadcasts from React Native.

Is it possible to share the screen using Jitsi on an iPad?

It is not possible. Skype and others allow it.

@emcho @saghul Is this feature currently being developed? I want to try and work on it, but if anyone already has any progress, maybe we can continue from there or work on it together

+1 for Screen Sharing

@robbi5 I was able to implement the broadcast in the native layer with the help of this article; I am getting the sample buffer. Do you have any idea how to push this buffer into the Jitsi video feed?

Thanks in advance

When I attempted this in the past the challenge was to pass the samples from the extension process to the main app so we could create a video source with them. Are you at that stage already?

The author of the article (https://medium.com/better-programming/building-an-ios-screen-recorder-with-react-native-9e8c764c477e) shared his source code as well: https://github.com/linuxpi/ScreenRecordingRNDemo

@saghul You're talking about passing the samples from the extension app to the React app, right? I wasn't sure if I was going in the right direction, so I haven't started that yet. If that is the case I can try doing that. Can you let me know what needs to be done after that?

You're talking about passing the samples from the extension app to the React app, right?

Correct.

Then you'd need to create an entity that implements this API: https://github.com/jitsi/webrtc/blob/M75/sdk/objc/base/RTCVideoCapturer.h (see an example which reads files: https://github.com/jitsi/webrtc/blob/M75/sdk/objc/components/capturer/RTCFileVideoCapturer.m) and then add an API to the RN module to use this custom capturer for screen sharing: https://github.com/react-native-webrtc/react-native-webrtc

The capturer can probably live in the RN module, but I don't think we can put the extension there; it would need to be provided by the app and feed the capturer.

That's as far as I got, planning-wise, but I didn't have the time to do the actual implementation.
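To make the plan above concrete, here is a minimal Swift sketch of such a capturer, assuming the WebRTC ObjC SDK is available as the WebRTC module. The class and method names are ours, not part of react-native-webrtc; the one real requirement is that frames reach the delegate via capturer(_:didCapture:), just like RTCFileVideoCapturer does.

  import CoreMedia
  import WebRTC

  // A capturer that lets external code push CMSampleBuffers into WebRTC.
  class ScreenVideoCapturer: RTCVideoCapturer {

      // Call this with each frame coming out of ReplayKit.
      func push(_ sampleBuffer: CMSampleBuffer) {
          guard CMSampleBufferIsValid(sampleBuffer),
                let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
              return
          }
          let rtcBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
          // WebRTC expects the frame timestamp in nanoseconds.
          let seconds = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
          let frame = RTCVideoFrame(buffer: rtcBuffer,
                                    rotation: ._0,
                                    timeStampNs: Int64(seconds * 1_000_000_000))
          // Hand the frame to whatever video source this capturer is attached to.
          delegate?.capturer(self, didCapture: frame)
      }
  }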

@saghul what is the file that is being read here: https://github.com/jitsi/webrtc/blob/M75/sdk/objc/components/capturer/RTCFileVideoCapturer.m

Is the camera video feed being put into a file which is then picked up by the capturer? Is that assumption correct?
If so, is it naive to think that iOS screen sharing would be as simple as getting the buffer from ReplayKit and then calling a similar function that does not have to read from a file and convert it to a buffer, but can directly call the readNextBuffer function implemented in the same file I linked above?

Is the camera video feed being put into a file which is then picked up by the capturer? Is that assumption correct?

No. Another capturer is implemented for that. I mentioned the file capturer as an example of another capturer that doesn't use the camera.

The capturer that takes frames from ReplayKit would need to implement the RTCVideoCapturer API.

Thanks a lot for your quick replies.
Okay, so this is my understanding:

  1. Create a button on the React Native app UI.
  2. On click of this button, stop the camera feed (requires research) and call the implementation of the RTCVideoCapturer API that will be created in the webrtc React Native module.
  3. Add the broadcast upload extension to the app and send the CMSampleBufferRef buffers obtained from the extension to the webrtc React Native module's implementation of the RTCVideoCapturer API.
  4. The RTCVideoCapturer implementation will handle sending the CMSampleBufferRef obtained from ReplayKit to the other users; all that is required from us is to implement a function similar to publishSampleBuffer as seen in the RTCFileVideoCapturer.m file.
  5. I have made the assumption that just by implementing the RTCVideoCapturer API and calling a function similar to publishSampleBuffer, we have already pushed our video frames for the other users in the conference to view.
  6. On clicking the button again, restore the original camera feed if the camera was on.

Let me know if this sounds stupid and I am completely on the wrong track. Please suggest any changes to this flow as well.

Thanks a ton for your quick replies!!

Hey @saghul
I see two webrtc repos.

  1. Branch M75, which I assume is the latest, on https://github.com/jitsi/webrtc/tree/M75, and
  2. https://github.com/react-native-webrtc/react-native-webrtc

Would you be so kind as to elaborate on the differences between the two?
I see that https://github.com/react-native-webrtc/react-native-webrtc has been added to the package.json of https://github.com/jitsi/jitsi-meet.

So if I were to raise a PR for this, which repo do I target? Or is 2 somehow a dependency of 1?

Let me know if this sounds stupid and I am completely on the wrong track. Please suggest any changes to this flow as well.

That sounds correct.

I see two webrtc repos.

1. Branch M75, which I assume is the latest, on https://github.com/jitsi/webrtc/tree/M75, and

2. https://github.com/react-native-webrtc/react-native-webrtc

Would you be so kind as to elaborate on the differences between the two?

  1. Is the actual WebRTC library; you don't need to touch that one.
  2. Is the RN wrapper for 1. This is where the new capturer must live. An API would need to be provided to make sure we can "inject" frames from outside, and only enable the capturer if an injector has been attached / enabled.

In addition, the screen-sharing stream should be exposed with the standard getDisplayMedia API, which would create a stream with a track that uses the screen capturer as its source, if available.
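A hypothetical shape for that injection API, reusing the ScreenVideoCapturer sketch from above. Everything here is our naming, not react-native-webrtc's actual API; the point is only that the module enables its screen capturer when, and only when, the host app has attached a frame source.

  import CoreMedia

  // Hypothetical: the host app implements this to supply screen frames.
  protocol ScreenFrameInjector: AnyObject {
      var onFrame: ((CMSampleBuffer) -> Void)? { get set }
  }

  final class ScreenCaptureController {
      static let shared = ScreenCaptureController()
      private(set) var injector: ScreenFrameInjector?

      // Called by the host app to wire its frame source to the capturer.
      func attach(_ injector: ScreenFrameInjector, to capturer: ScreenVideoCapturer) {
          self.injector = injector
          injector.onFrame = { [weak capturer] sampleBuffer in
              capturer?.push(sampleBuffer)
          }
      }

      // getDisplayMedia would only succeed when this is true.
      var isScreenCaptureAvailable: Bool { injector != nil }
  }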

Hey @saghul
Kinda stuck on this. Have a couple of questions.

  1. Could you give me a starting point on how the React Native app implements the camera capturer, so I can learn from it and implement the screen capturer similarly?
  2. Have you gotten to the part of adding the React Native module as an embedded framework to the broadcast extension? Is this approach even correct?

Would appreciate your guidance. Thank you!!

@saghul Getting the CMSampleBuffer from the extension to the app seems to be a near-impossible task. So now I am trying to add the JitsiMeet SDK and WebRTC framework into the extension and join as a new user, but with only video frames (no audio). Is this a viable approach?

Thanks

@Lakshman1996 I haven't tried, but I don't think it's a viable approach due to the tight memory constraints that extensions have.

https://github.com/twilio/video-quickstart-ios

Twilio is also doing something similar, hence I thought I might try this approach.

@Lakshman1996 @ChrisTomAlx do you have any code snippets that you could share?

@AliKarpuzoglu
I don't have any code snippets as such, but just some concepts.
On iOS, as far as I can tell, there are two ways to achieve this. Once the broadcast extension is added to the app we get the CMSampleBuffers; after that, two things can be done:

  1. Either process the CMSampleBuffer entirely within the extension. I couldn't really get far with this method, and like saghul said, since the extension has memory limits it would probably not work as well as expected.
  2. Pass the CMSampleBuffer to the app and then follow the same flow as the camera feed does. There are a couple of ways I found we could do this:

    • Use https://github.com/mutualmobile/MMWormhole to communicate small bits of information back and forth if required (but it cannot be used to send a CMSampleBuffer, at least as far as I know)

    • Use NSFileCoordinator and coordinate reads and writes to the same file from both the extension and the app (see the sketch after this comment). This could work, but it might lead to frame drops or lag, since a read and a write have to happen at least once every 200-400 ms.

    • Use Core Data (haven't looked too much into this)

Sorry, that's all I can offer right now; I have paused work on this. But I hope these concepts help someone looking into this.
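To illustrate the NSFileCoordinator idea from option 2, here is a sketch in Swift. It assumes both the app and the extension are members of a shared App Group; the group identifier, the class name, and the choice to always overwrite a single file are all made up for the example.

  import Foundation

  final class SharedFrameFile {
      private let url: URL
      private let coordinator = NSFileCoordinator()

      init?() {
          // Both targets must declare this (hypothetical) App Group entitlement.
          guard let container = FileManager.default
              .containerURL(forSecurityApplicationGroupIdentifier: "group.org.example.screenshare")
          else { return nil }
          url = container.appendingPathComponent("frame.bin")
      }

      // Extension side: overwrite the file with the latest encoded frame.
      func write(_ frameData: Data) {
          coordinator.coordinate(writingItemAt: url, options: .forReplacing, error: nil) { url in
              try? frameData.write(to: url, options: .atomic)
          }
      }

      // App side: poll this frequently to pick up the latest frame.
      func read() -> Data? {
          var data: Data?
          coordinator.coordinate(readingItemAt: url, options: [], error: nil) { url in
              data = try? Data(contentsOf: url)
          }
          return data
      }
  }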

Thanks to all for helping

Hi everyone.
@saghul

We (@linuxpi mostly) are pretty far with the screen sharing, but we do require some assistance. Our current solution gets the screen content, and we can bring it to Swift/React Native. We do, however, struggle to push the content into WebRTC... Our current solution manages to share a single frame and then drops.
Is anyone able to help? Here are some snippets.

Fetching the video track where we want to write the frame:

  RTCVideoCapturer *videoCapturer = [[RTCVideoCapturer alloc] init];
  RTCMediaStream *stream = self.localStreams[mediaStreamId];
  RTCVideoTrack *videoTrack = stream.videoTracks[0];

Here we write the frame to the video track:

  @try {
    [[videoTrack source] capturer:videoCapturer didCaptureVideoFrame:videoFrame];
  }
  @catch (NSException *exception) {
    NSLog(@"error while writing frame");
  }

After one frame the stream freezes, even though it is still recording on the iOS side.

We desperately NEED this screencast feature as teachers using iPads. PLEASE implement it before schools reopen in autumn. You'll get as much coffee as you can drink ;)!!

@AliKarpuzoglu Exciting! What thread are you calling those functions in?

Hi @saghul

We are executing the following functions on a user-initiated dispatch queue:

  RTCVideoCapturer *videoCapturer = [[RTCVideoCapturer alloc] init];
  RTCMediaStream *stream = self.localStreams[mediaStreamId];
  RTCVideoTrack *videoTrack = stream.videoTracks[0];

and we are writing the frames on a background dispatch queue:

  dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    while (true) {
      RTCVideoFrame *videoFrame = [SocketShim getNextFrame];
      @try {
        [[videoTrack source] capturer:videoCapturer didCaptureVideoFrame:videoFrame];
      }
      @catch (NSException *exception) {
        NSLog(@"error while writing frame");
      }
    }
  });

Neither of them is being executed on the main thread. Let me know if you need any other information or if I missed something here.

@saghul Screen share is working. The fix for us was to use a timestamp in nanoseconds, not milliseconds. Very weird bug, but multiplying the timestamp by 1000000 worked :D
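For reference, here is the conversion that matters, computed straight from the sample buffer's presentation time (a sketch; RTCVideoFrame's timeStampNs expects nanoseconds, so seconds are scaled by 1_000_000_000):

  import CoreMedia

  // CMSampleBuffer timestamps are CMTime values (seconds as a rational);
  // feeding milliseconds into RTCVideoFrame froze the stream after one frame.
  func timeStampNs(for sampleBuffer: CMSampleBuffer) -> Int64 {
      let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
      return Int64(CMTimeGetSeconds(pts) * 1_000_000_000)
  }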

Now our only issue is keeping the app alive in the background. Any ideas?

So now we have reduced the CPU load by lowering the FPS and it's running for a while. (2 mins, we'll see how far we get)

The app should work in the background with the right Audio/VOIP setting.
https://jitsi.github.io/handbook/docs/dev-guide/dev-guide-ios-sdk
"In order for app to properly work in the background, select the "audio" and "voip" background modes"

Have you tried that?
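For anyone looking for the concrete setting, the handbook's instruction corresponds to these entries in the app target's Info.plist:

  <key>UIBackgroundModes</key>
  <array>
      <string>audio</string>
      <string>voip</string>
  </array>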

We are also working on this - would you mind sharing what you have so we can join forces?

@AliKarpuzoglu @linuxpi what is your [SocketShim getNextFrame] method doing?


@AliKarpuzoglu: can you tell us what you mean by getting the screen content to Swift/React Native? Maybe share the high-level design approach you followed; that would be helpful for us.

@saghul it's now running in the background for a longer time (we ran it for 20 mins, unless you open something like YouTube). There are some small issues left, like the button being a bit ugly, but we would love to share it. Are you interested in checking out the TestFlight before we open a pull request?

@AliKarpuzoglu Sure thing mate! saghul @ jitsi . org Cheers!

Hey @AliKarpuzoglu ,
Would you mind sharing your current state of work? I'm really interested in how you managed to achieve this. I tried to implement it myself, but due to my missing experience with the WebRTC React Native binding, I haven't completed it.

Cheers

@lagmoellertim sure. Send me a mail; the address is on my website.

Also, if anyone else wants to join the TestFlight, send me a mail. We're still working on optimizing it.

Hey Ali,

Can't find your email; however, I have sent you a DM on Twitter.

Please do respond on how we can test.

send an email to testflight (at) alikarpuzoglu (dot) com

https://github.com/HoppFoundation/jitsi-meet/tree/screenshare

Here's the source code. If anyone is able to contribute:

  • Performance could be improved by sending raw bytes in SampleHandler.swift and receiving them in SocketShim.swift
  • Maybe by improving this performance we wouldn't have to scale down the images as much. However, we need to manage the memory usage
  • We sometimes get an orange screen when starting or ending the share... Maybe someone can reproduce the error so we can find the cause
  • (easy) Instead of generating random numbers to drop 4/5 frames, just count and take every 5th frame (see the sketch after this list), or other performance improvements
  • Localization in the Swift part (the screen share button and error message)
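For the frame-dropping bullet, a minimal sketch of the deterministic counter (the class name is ours):

  // Keeps every 5th frame instead of dropping frames at random.
  final class FrameDecimator {
      private var counter = 0
      private let keepEvery: Int

      init(keepEvery: Int = 5) {
          self.keepEvery = keepEvery
      }

      // Returns true once per `keepEvery` calls.
      func shouldKeepFrame() -> Bool {
          counter = (counter + 1) % keepEvery
          return counter == 0
      }
  }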

If you want to try this out but don't know how to build it, join the TestFlight here: https://testflight.apple.com/join/NIyrFWyX

(you'll need to change the server in the settings)

If you want to try this out but don't know how to build it, join the TestFlight here: https://testflight.apple.com/join/NIyrFWyX

Is it ready for Android clients too?


#5244 already has a working implementation. Maybe there is a way to combine them, but we only worked on iOS.

@saghul how would we create a track that doesn't compress the screen the way video would be compressed? Screen sharing on the PC is usually very high quality or not sent at all. I think we might be using the wrong kind of track? Do you think this needs to be added to react-native-webrtc?

I feel like the shared screen looks worse on other iOS devices compared to the desktop.
For example: sharing a laptop screen (landscape) and receiving it on a phone (landscape) gives you a higher-quality image compared to the phone in portrait...


I used Xcode 11.6 to compile the framework from the screenshare branch. But after integrating it into my project, an error was reported: 'AppRegistry is not a registered callable module (calling runApplication)'. It is the same as issue #7361. How can I solve this problem? 🙏 @AliKarpuzoglu @HoppFoundation

Hi @AliKarpuzoglu @HoppFoundation @saghul

I have downloaded the TestFlight build, and starting a broadcast works fine for me.
When I cloned the code, switched to the screenshare branch, and generated a build with my own bundle ID, it only shows me the Start Recording feature.

I am stuck here. Can you please help me?

@parveensachdeva
make sure all the bundle IDs are changed... you can find all of them if you run `grep -rnie 'hopp-foundation' app` in the ios folder. If that doesn't help, try building the extensions separately


Thanks @AliKarpuzoglu for the quick reply, I will check.

We tested with the provided TestFlight; our comments are:

1) It looks like it's not possible to share the screen if the mic was enabled before starting to share the screen (the app crashes).
2) If you try to play some media (like a song), it stops the screen sharing.
3) With this code, Android devices can't hang up the meeting.
4) Landscape/portrait modes are not updated.

Great job!

We tested again and it looks like it works fully the first time screen sharing is enabled. The second time there are some issues related to playing media (starting screen sharing while the mic is enabled, or while playing media like YouTube). It throws the following error:

_Fatal error: Array index is out of range: file_

@AliKarpuzoglu Any ideas/hints so we can solve this on our side and collaborate on this feature?

Sure, send me an email.
The TestFlight is not the most current version. If you want, I'll add you to it.

Thanks everyone for your help and support.
Do we have any ETA or update regarding screen sharing in the iOS app?

No ETA, but we've started looking into the implementation by @AliKarpuzoglu / @HoppFoundation and it goes exactly in the direction I was hoping for.

@saghul how would we create a track that doesn't compress the screen the way video would be compressed? Screen sharing on the PC is usually very high quality or not sent at all. I think we might be using the wrong kind of track? Do you think this needs to be added to react-native-webrtc?

Sorry I missed this, @AliKarpuzoglu. I think the problem is that on Android the VideoSource API is able to mark a track as screen-sharing, but that API seems to be missing from the ObjC counterpart.

https://github.com/jitsi/webrtc/blob/dc40d5cc81e8fe9aa1cd78a38ee8bb9e91ec49a0/sdk/android/api/org/webrtc/PeerConnectionFactory.java#L463

https://github.com/jitsi/webrtc/blob/dc40d5cc81e8fe9aa1cd78a38ee8bb9e91ec49a0/sdk/objc/api/peerconnection/RTCPeerConnectionFactory.mm#L270

I think the engine would apply the screen-share behavior if this flag was set.
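To illustrate the gap: on Android the factory takes an isScreencast flag, while the ObjC factory only exposed a plain videoSource() at the time. If the ObjC SDK grew a matching call (the videoSource(forScreenCast:) spelling below mirrors what later WebRTC builds expose; check your SDK's headers before relying on it), the track setup would look roughly like this:

  import WebRTC

  func makeScreenShareTrack(factory: RTCPeerConnectionFactory) -> RTCVideoTrack {
      // Hypothetical at the time of this thread: a source flagged as screencast,
      // so the encoder applies screen-content (detail-preserving) settings.
      let source = factory.videoSource(forScreenCast: true)
      return factory.videoTrack(with: source, trackId: "screen0")
  }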

That's great news @saghul. Will you guys handle the addition to webrtc?

We have already adjusted the quality settings quite a bit, and I think that with a stable internet connection the quality is already satisfying while not using too much data. One note: I will try to test it on iPad Pros this week, since they have a higher resolution and we have encountered some issues.
Looking forward to the results.

Will you guys handle the addition to webrtc?

You mean the isScreencast flag on ObjC? I think that's attainable, yeah.

We can hopefully begin working on this in a few weeks' time.

@saghul @AliKarpuzoglu Hi guys.

We noticed that the "Screen share" button was implemented in Swift instead of React Native; the React Native side only receives "onStart" and "onEnd" as callback props. Is there a reason why it was developed like this? If we want to change the implementation to draw the button from React Native with the labels "Start sharing" and "Stop sharing", what would be the steps to follow and the difficulties we would encounter?

Thank you very much.
Regards.
-Alan

Adding React Native to the extension used for the screen-sharing part is probably not worth it.

@saghul am I understanding correctly that we only have to change one bit on the webrtc side to specify that it's a screenshare stream? https://github.com/pristineio/webrtc-mirror/blob/7a5bcdffaab90a05bc1146b2b1ea71c004e54d71/webrtc/api/video/video_content_type.cc#L82

Similar to how it's done here?

Just wanted to note my support for an implementation of this.
I am currently primarily working from my iPad and I would love to be able to share my iPad screen with others, like I can in Microsoft Teams.
I sadly have no development experience with iOS, but I am more than willing to help test a potential beta version of this feature!


@AliKarpuzoglu Yep, that should be it.

Hi @saghul, when will the screen share functionality be merged into main?
Regards.
Raj

Quick update: Android support is now integrated and we released a beta with it. A stable release will probably follow this week. iOS is still a work in progress.

Let me know if we can help in any way @saghul

Hi @saghul, is it possible to share a link to the latest update where Android screen sharing was integrated? Thanks in advance.

The latest release in the Play Store and F-Droid contains screen-sharing. It's version 20.5.0.

Thank you @saghul, I was wondering if there is a source code link available. Is it open source?

Thank you! Is there any news about the iOS deployment? Thanks in advance.

Sure thing, this very repository, though the heavy lifting is done in https://github.com/react-native-webrtc/react-native-webrtc/commit/b14787c84670b46cc171126548abd1411a77e8ef

Good morning.

I know that iOS is still in progress, but I have one question. We are trying @AliKarpuzoglu's implementation, but we are facing some issues related to frame updates. As we know, ReplayKit only sends a new frame when the screen has changed. If the user is sharing a presentation and the screen remains static for some seconds, the other users (it is more evident on the other mobile clients) just see the laptop icon while the participant's status appears as "Lost". I know that happens when the JVB determines that a user should be sending a stream but detects no activity for some time. How do you think this can be handled? Thank you!!

Note: this only happens in landscape mode, because in portrait mode there is a red indicator that causes frequent screen updates.
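One possible workaround, sketched under the assumption that the capture path can keep a reference to the last frame: re-publish it on a timer so the bridge keeps receiving media while the screen is static. All names here are ours, and the refresh interval is a guess.

  import Foundation
  import WebRTC

  final class FrameKeepAlive {
      private var lastFrame: RTCVideoFrame?
      private var timer: Timer?

      // Call from the normal capture path whenever a real frame arrives.
      func noteFrame(_ frame: RTCVideoFrame) {
          lastFrame = frame
      }

      func start(capturer: RTCVideoCapturer, interval: TimeInterval = 1.0) {
          timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
              guard let self = self, let last = self.lastFrame else { return }
              // Re-send the last frame with a fresh timestamp so it isn't
              // discarded as stale by the pipeline.
              let nowNs = Int64(Date().timeIntervalSince1970 * 1_000_000_000)
              let refreshed = RTCVideoFrame(buffer: last.buffer,
                                            rotation: last.rotation,
                                            timeStampNs: nowNs)
              capturer.delegate?.capturer(capturer, didCapture: refreshed)
          }
      }

      func stop() {
          timer?.invalidate()
          timer = nil
      }
  }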

Interesting observation, thanks for sharing!
