
Other articles (36)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add yours using the form at the bottom of the page. -
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...) -
Supported formats
28 January 2010
The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Supported input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
Initially we (...)
On other sites (5864)
-
CVOpenGLESTextureCacheCreateTextureFromImage from uint8_t buffer
6 November 2015, by resident_
I'm developing a video player for iPhone. I use the ffmpeg libraries to decode the video frames and OpenGL ES 2.0 to render them to the screen.
But my render method is very slow.
A user told me:
iOS 5 includes a new way to do this fast. The trick is to use AVFoundation and link a Core Video pixel buffer directly to an OpenGL texture.
My problem now is that my video player hands the render method a uint8_t* buffer, which I then use with glTexSubImage2D.
But if I want to use CVOpenGLESTextureCacheCreateTextureFromImage, I need a CVImageBufferRef holding the frame.
The question is: how can I create a CVImageBufferRef from a uint8_t buffer?
This is my render method:
- (void)render:(uint8_t *)buffer
{
    NSLog(@"render");
    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
    glViewport(0, 0, backingWidth, backingHeight);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // OpenGL reads the buffer lazily, at draw time, so the movie player is
    // told the buffer is free only after glDrawArrays.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, mFrameW, mFrameH, GL_RGB,
                    GL_UNSIGNED_SHORT_5_6_5, buffer);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    [moviePlayerDelegate bufferDone];
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
Thanks,
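A CVPixelBufferRef is a CVImageBufferRef, so one possible route (a sketch, not from the original thread) is to wrap the decoded bytes with CVPixelBufferCreateWithBytes and hand the result to the texture cache. This assumes a BGRA frame (the texture-cache path expects a format such as kCVPixelFormatType_32BGRA rather than RGB565) and an existing textureCache created earlier with CVOpenGLESTextureCacheCreate; mFrameW, mFrameH and buffer are the names from the question:

// Wrap the decoded frame without copying; the caller keeps ownership of 'buffer'.
CVPixelBufferRef pixelBuffer = NULL;
CVReturn err = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                            mFrameW, mFrameH,
                                            kCVPixelFormatType_32BGRA,
                                            buffer,
                                            mFrameW * 4,   // bytes per row (BGRA)
                                            NULL, NULL,    // no release callback
                                            NULL,          // no buffer attributes
                                            &pixelBuffer);
if (err == kCVReturnSuccess) {
    CVOpenGLESTextureRef texture = NULL;
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                       textureCache, pixelBuffer,
                                                       NULL, GL_TEXTURE_2D,
                                                       GL_RGBA, mFrameW, mFrameH,
                                                       GL_BGRA, GL_UNSIGNED_BYTE,
                                                       0, &texture);
    if (err == kCVReturnSuccess && texture) {
        glBindTexture(GL_TEXTURE_2D, CVOpenGLESTextureGetName(texture));
        // ... draw, then release ...
        CFRelease(texture);
    }
    CVPixelBufferRelease(pixelBuffer);
}

One caveat: buffers created with CVPixelBufferCreateWithBytes are not IOSurface-backed, and the texture cache can refuse them. If that happens, the usual workaround is to allocate the buffer with CVPixelBufferCreate, passing kCVPixelBufferIOSurfacePropertiesKey in the attributes, and memcpy each frame into its base address.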
-
ffmpeg error "Buffer queue overflow, dropping" when merging two videos with delay
20 September 2016, by Stefan Urbansky
I want to merge two videos (for example the iPhone trailer from https://peach.blender.org/trailer-page/). The videos are placed on a background image with the overlay filter, and the second video starts 3 seconds later.
I also need the audio to be mixed.
Here is my code:
ffmpeg \
-loop 1 -i background.png \
-itsoffset 0 -i trailer_iphone.m4v \
-itsoffset 3 -i trailer_iphone.m4v \
\
-y \
-t 36 \
-filter_complex "
[2:a] adelay=3000 [2delayed];
[1:a][2delayed] amerge=inputs=2 [audio];
[0][1:v] overlay=10:10:enable='between(t,0,33)' [lv1];
[lv1][2:v] overlay=10:300:enable='between(t,0,36)' [video]
" \
\
-threads 0 \
-map "[video]" -map "[audio]" \
-vcodec libx264 -acodec aac \
merged-video.mp4
I get the error message:
[Parsed_overlay_3 @ 0x7fe892502ac0] [framesync @ 0x7fe892502b88] Buffer queue overflow, dropping.
And the merged video has many dropped frames.
I know there are some other postings about this error message, but the suggested solutions don't work for me.
How can I fix the problem?
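The overflow happens because, with -itsoffset, the second video delivers no frames for the first 3 seconds, while frames from the other chain pile up in the second overlay's input queue. A commonly suggested workaround (a sketch, not from the original question, assuming an ffmpeg recent enough to have the tpad filter and a stereo audio track) is to drop the -itsoffset and pad the second video inside the filter graph instead, so every overlay input produces frames from t=0:

ffmpeg \
-loop 1 -i background.png \
-i trailer_iphone.m4v \
-i trailer_iphone.m4v \
-y \
-t 36 \
-filter_complex "
[2:v] tpad=start_duration=3:start_mode=clone [v2];
[2:a] adelay=3000|3000 [a2];
[1:a][a2] amerge=inputs=2 [audio];
[0][1:v] overlay=10:10:enable='between(t,0,33)' [lv1];
[lv1][v2] overlay=10:300:enable='between(t,3,36)' [video]
" \
-threads 0 \
-map "[video]" -map "[audio]" \
-vcodec libx264 -acodec aac \
merged-video.mp4

The enable window of the second overlay moves to between(t,3,36) so the cloned padding frames stay hidden, and adelay gets one value per channel (3000|3000) so both stereo channels are delayed.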
-
Yes or no, will the ffmpeg API do hardware decoding on iOS?
15 January 2019, by Fattie
There seems to be conflicting information on this.
https://trac.ffmpeg.org/wiki/HWAccelIntro
Notice the first diagram: it firmly marks iOS as "Y" for VideoToolbox.
However, in the comments at the bottom it says:
VideoToolbox. VideoToolbox, only supported on macOS. H.264 decoding is available in FFmpeg/libavcodec.
And in the confusing second diagram it says "Standalone" is not done for VideoToolbox.
We have found that using ffmpeg compiled into iOS .... it seems not to use hardware decoding, which is really a pain.
-
With avcodec_get_hw_config() we get AV_PIX_FMT_VIDEOTOOLBOX, AV_HWDEVICE_TYPE_VIDEOTOOLBOX, which is seemingly correct (see the sketch below the list).
-
But usage and frame rates clearly show everything is being done on the CPU. The code is in ff_hevc_hls_residual_coding all the time. (That's FFmpeg's software decoder.)
-
This very long git.videolan.org diff seems to suggest again that it should all be working.
-
Have tried every iPhone etc., of course.
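For reference (not from the original post): getting AV_HWDEVICE_TYPE_VIDEOTOOLBOX back from avcodec_get_hw_config() only says the build supports it; the decoder also needs a hardware device context attached before avcodec_open2(), along the lines of FFmpeg's hw_decode.c example. A minimal sketch:

#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

/* Attach a VideoToolbox device to a decoder context before avcodec_open2().
 * Frames that later come back with format == AV_PIX_FMT_VIDEOTOOLBOX were
 * decoded in hardware; any other format means the software path was used. */
static int enable_videotoolbox(AVCodecContext *dec_ctx)
{
    AVBufferRef *hw_device_ctx = NULL;
    int err = av_hwdevice_ctx_create(&hw_device_ctx,
                                     AV_HWDEVICE_TYPE_VIDEOTOOLBOX,
                                     NULL, NULL, 0);
    if (err < 0)
        return err;               /* no hwaccel available on this device */
    dec_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
    av_buffer_unref(&hw_device_ctx);
    return 0;
}

If decoded frames still arrive in a software pixel format and the profiler keeps landing in ff_hevc_hls_residual_coding, the hardware path is not engaged, whatever the capability tables claim.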