I apologize for filing this as an "issue"; unfortunately, GitHub doesn't provide many other avenues of communication.
I have considered writing my own FfmpegVideoRenderer (or perhaps using libx264 directly) to software-decode unsupported formats for viewing on the phone. I was mostly curious whether this has been attempted before and dropped for technical reasons, or whether someone had already planned to work on it.
We don't have any plans to provide an Ffmpeg video renderer. I'm fairly sure it's perfectly feasible to implement one, although note that in many cases software decoders will not be as power efficient or as performant as those provided by the platform. Marking as an enhancement in any case.
Appreciate the response. I'll definitely take a crack at this then. The idea was that it would only be used as a fallback.
There was one thing I wasn't 100% sure of, though. Most cases of unsupported video in my use case are due to extreme resolutions (3-8MP cameras). Would it make more sense to scale the image using ffmpeg/x264, or to rely on the rasterizer to downscale the final result?
Yep you would be correct. I always assumed it supported decoding, but I never dug into the libx264 side of it too much until now.
Sounds like I'll have to do a bit of benchmarking to come up with the ideal solution. I will definitely look at the VP9 extension more. Thanks for the advice.
It would be great to have an FFmpeg video renderer.
I ended up not pursuing this on my end because of the licensing issues with FFMPEG.
I've started writing a skeleton of FfmpegVideoRenderer. At the moment it's mainly a lot of placeholders with plumbing to an FFmpeg mpeg2video decoder. However, decoding fails in FFmpeg because of invalid input data.
The "render" method is very similar to the other available renderers: wait for the input format, then drain output buffers, then feed input buffers. What I noticed is that the data returned by "readSource" (inherited from BaseRenderer) only contains the picture data (with start code 00000100), but I also need to feed all the data to the FFmpeg decoder, in particular the sequence segment (with start code 000001b3). I have no idea why I only get the picture segments and not the entire MPEG2 stream. Any hints appreciated.
I've compared the data that is processed by MediaCodecRenderer (my Android device has a built-in MPEG2 decoder), and it appears that its calls to "readSource" also only get the picture segments. Is that a property of the implementation of H262Reader? (My sample clip is an MPEG/TS with MPEG2/MP2.) If so, is there a workaround to make the renderer get the full MPEG2 data stream?
My initial assessment was incorrect. The MPEG2 stream is segmented by H262Reader into the TrackOutput on START_PICTURE boundaries; that's why I see that start code when I inspect the starting bytes of the data received by my renderer. The problem, however, is that the data between the start of the MPEG2 stream and the first START_PICTURE is lost and never reaches the renderer. And this is where the sequence header is located (the stream starts with the sequence, extended sequence, and GOP headers, and then the first START_PICTURE), which is required by the FFmpeg decoder.
You'll probably find what you're looking for in the Format that's provided to the renderer. Specifically in Format.initializationData.
Thanks, I hadn't realized that the initialization data contained the sequence and sequence extension headers. They're pre-parsed by the mpeg2video codec automatically when set as extradata on the context. Now I can get my FfmpegVideoRenderer to work.
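In case it helps anyone else, here is a small sketch of flattening that initialization data into a single blob to hand to the decoder, assuming Format.initializationData is a List<byte[]> as in ExoPlayer 2.x. The nativeSetExtradata call mentioned in the comment is a hypothetical JNI entry point, not something ExoPlayer or its FFmpeg extension provides.

import java.io.ByteArrayOutputStream;
import java.util.List;

final class ExtradataHelper {

  // Concatenates the initialization data chunks (e.g. the MPEG-2 sequence and
  // sequence extension headers) into one array suitable for use as extradata
  // on the FFmpeg codec context.
  static byte[] flatten(List<byte[]> initializationData) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (byte[] chunk : initializationData) {
      out.write(chunk, 0, chunk.length);
    }
    return out.toByteArray();
  }

  // Usage with a hypothetical JNI setter on the decoder wrapper:
  //   nativeSetExtradata(ExtradataHelper.flatten(format.initializationData));
}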
Hi @goffioul, could you share your code for FfmpegVideoRenderer? Thanks. I'd like to implement the very same functionality.
@michalliu This is the current code I have. It's quite raw; it's just an experiment at the moment. It was focused on getting an MPEG2 decoder working, but unfortunately I hit bug #2891, in particular bad pixelation due to the segmentation in H262Reader, and trying to fix it resulted in another bug I don't know how to solve. A few things to note:
At the moment I've stopped working on this because of other priorities and a lack of time, but I hope to be able to resume the work at a later stage. Let me know if you're interested in a joint effort.
ffmpeg-video-renderer.zip
@goffioul Thanks for sharing the code and the instructions. I'm very interested in this idea. My work keeps me busy too. We currently use ExoPlayer in our project. Hopefully I can persuade my boss to support the idea so I can put my time into it.
Hi guys. I'd like to let you know that we have implemented an HEVC software decoder using OpenHEVC; we are ready to submit a merge request soon.
The repository is located at https://github.com/michalliu/exoplayer2-hevc-extension
@michalliu how to use this?
@ranakhizar1556 Same as the VP9 renderer: add it to your renderer list. Note that the order matters; if you just want to debug it, add LibHevcVideoRenderer first.
protected List<Renderer> buildVideoRenderers() {
  List<Renderer> renderers = new ArrayList<>();
  renderers.add(new MediaCodecVideoRenderer(context, MediaCodecSelector.DEFAULT,
      allowedJoiningTimeMs, drmSessionManager, false, handler,
      videoRendererEventListener, droppedFrameNotificationAmount));
  renderers.add(new LibHevcVideoRenderer(true, allowedJoiningTimeMs, handler,
      videoRendererEventListener, droppedFrameNotificationAmount, null, false));
  renderers.add(new LibvpxVideoRenderer(true, allowedJoiningTimeMs, handler,
      videoRendererEventListener, droppedFrameNotificationAmount, null, false, true));
  return renderers;
}
I strongly suggest supporting FfmpegAudioRenderer, because many devices do not work perfectly with MediaCodec.
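For what it's worth, assuming ExoPlayer 2.11 or later built with the ffmpeg audio extension, something like the following should make the extension renderer take precedence over the MediaCodec-based one for formats it supports (a sketch under those assumptions, not verified against every release):

import android.content.Context;
import com.google.android.exoplayer2.DefaultRenderersFactory;
import com.google.android.exoplayer2.SimpleExoPlayer;

final class FfmpegAudioSetup {

  // EXTENSION_RENDERER_MODE_PREFER places extension renderers (such as
  // FfmpegAudioRenderer, if the ffmpeg extension is on the classpath) ahead of
  // the MediaCodec-based renderers for formats they support.
  static SimpleExoPlayer buildPlayer(Context context) {
    DefaultRenderersFactory renderersFactory =
        new DefaultRenderersFactory(context)
            .setExtensionRendererMode(DefaultRenderersFactory.EXTENSION_RENDERER_MODE_PREFER);
    return new SimpleExoPlayer.Builder(context, renderersFactory).build();
  }
}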
There is a prototype pull request here: https://github.com/google/ExoPlayer/pull/7079
As an update on where we are with this: some of the groundwork has been merged (for example, changes to DecoderInputBuffer to support extra padding, which is a requirement for some FFmpeg decoders). We will be working on getting more of this merged over the next couple of weeks.
Are there any updates on this request? I'd like to try something similar now, as ExoPlayer is not working on my target devices when playing 4K video (see my issue #7835). I understand that the HW decoder is far more powerful than a SW one, but the fact is that:
in most cases, we have to use a SW solution with ExoPlayer for POC purposes before we actually integrate our stuff with customers' real HW. In other words, we want our POC player to be totally independent of any customer's HW.
So this requirement really has huge value for many users; at least in my case it is worth a lot.
We haven't made much progress beyond the update above, unfortunately. Doing it properly turned out to be quite a lot of work. Since this issue isn't considered high priority, it's being worked on on a best-effort basis, and we haven't managed to dedicate sufficient time to get something merged, though we have made a small amount of progress since the previous update above.