
Other articles (49)
-
Authorisations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page -
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (7083)
-
Extracting frame out of video based on time in seconds
20 April 2024, by Vicky. I'm developing a web-based video editing tool where users can pause a video and draw circles or lines on it using a canvas. When a user pauses the video, I retrieve the current playback time in seconds from the HTML5 video.currentTime property, then send that time value along with the shape details to the server. On the server side, we use FFmpeg to extract the paused frame from the video. The issue I'm encountering is a mismatch between the frame displayed in the browser and the one generated in the backend with FFmpeg.


I've experimented with various approaches.

Extracting the frame based on time, here 3.360 seconds:

ffmpeg -i input.mp4 -ss 00:00:03.360 -frames:v 1 frame.jpg

Converting the time to a frame number with Math.round(video.currentTime * fps):

ffmpeg -i input.mp4 -vf "select=eq(n,101)" -vsync vfr frame.jpg

Selecting the frame that starts within one frame duration before the timestamp (multiplying the two lt() terms acts as a logical AND in FFmpeg filter expressions):

ffmpeg -i input.mp4 -vf "select='lt(t,3.360)*lt(3.360-t,1/31.019)',setpts=N/(31.019*TB)" -vsync 0 frame.jpg

The challenge I'm facing is that the frame I see in the browser at the pause time sometimes doesn't match the one generated in the backend with FFmpeg. How can I solve this? If it's an issue with currentTime, are there other approaches I can try?


-
FFMPEG Incompatible sprop-parameter-sets with libstagefright on Android 4.0 for H264
7 August 2012, by user1582384. I am having an issue with the RTSP SDP generated by FFmpeg on Android.
I have FFmpeg 0.11.1 compiled with the NDK and running on Android. I use ffserver to stream an mp4 file locally to the MediaPlayer. Here is the generated SDP description:
08-07 14:04:04.034 : V/rtsp_server(20161) : o=- 0 0 IN IP4 127.0.0.1
08-07 14:04:04.034 : V/rtsp_server(20161) : s=No Title
08-07 14:04:04.034 : V/rtsp_server(20161) : c=IN IP4 0.0.0.0
08-07 14:04:04.034 : V/rtsp_server(20161) : t=0 0
08-07 14:04:04.034 : V/rtsp_server(20161) : a=tool:libavformat 54.6.100
08-07 14:04:04.034 : V/rtsp_server(20161) : m=video 0 RTP/AVP 96
08-07 14:04:04.034 : V/rtsp_server(20161) : b=AS:312
08-07 14:04:04.034 : V/rtsp_server(20161) : a=x-dimensions:512,384
08-07 14:04:04.034 : V/rtsp_server(20161) : a=rtpmap:96 H264/90000
08-07 14:04:04.034 : V/rtsp_server(20161) : a=fmtp:96 packetization-mode=0;profile-level-id=640015;sprop-parameter-sets=J2QAH62ECSZuIzSQgSTNxGaSECSZuIzSQgSTNxGaSECSZuIzSQgSTNxGaSEFWuvX1+T+vyfXrrVQgq116+vyf1+T69daq0BAGMg=,KO48sA==
08-07 14:04:04.034 : V/rtsp_server(20161) : a=control:streamid=0
08-07 14:04:04.034 : V/rtsp_server(20161) : m=audio 0 RTP/AVP 97
08-07 14:04:04.034 : V/rtsp_server(20161) : b=AS:29
08-07 14:04:04.034 : V/rtsp_server(20161) : a=rtpmap:97 MPEG4-GENERIC/44100/1
08-07 14:04:04.034 : V/rtsp_server(20161) : a=fmtp:97 profile-level-id=1;mode=AAC-hbr;sizelength=13;indexlength=3;indexdeltalength=3;config=1208
08-07 14:04:04.034 : V/rtsp_server(20161) : a=control:streamid=1
Running the same application on a Galaxy S2 works, since that device uses libopencore, but on newer devices it crashes while trying to extract the width and height from the extradata.
So the problem is with the sprop-parameter-sets.
My question is: why does libstagefright parse the sprop-parameter-sets differently, and how could the existing string be converted for compatibility?
-
Convert ffmpeg frame into array of YUV pixels in C
9 June 2016, by loneraver. I'm using the FFmpeg C libraries and trying to convert an AVFrame into a 2D array of pixels with YUV* components for analysis. I figured out how to convert the Y component for each pixel:
uint8_t y_val = pFrame->data[0][pFrame->linesize[0] * y + x];
Since all frames have a Y component, this part is easy. However, most digital video does not use 4:4:4 chroma subsampling, so getting the U and V components is stumping me.
I'm using straight C for this project. No C++. Any ideas?
*Note: Yes, I know it's technically YCbCr and not YUV.
Edit:
I'm rather new to C, so it might not be the prettiest code out there.
When I try:
VisYUVFrame *VisCreateYUVFrame(const AVFrame *pFrame) {
    VisYUVFrame *tmp = (VisYUVFrame*)malloc(sizeof(VisYUVFrame));
    if (tmp == NULL) { return NULL; }
    tmp->height = pFrame->height;
    tmp->width = pFrame->width;
    tmp->data = (PixelYUV***)malloc(sizeof(PixelYUV**) * pFrame->height);
    if (tmp->data == NULL) { return NULL; }
    for (int y = 0; y < pFrame->height; y++) {
        tmp->data[y] = (PixelYUV**)malloc(sizeof(PixelYUV*) * pFrame->width);
        if (tmp->data[y] == NULL) { return NULL; }
        for (int x = 0; x < pFrame->width; x++) {
            tmp->data[y][x] = (PixelYUV*)malloc(sizeof(PixelYUV*));
            if (tmp->data[y][x] == NULL) { return NULL; }
            tmp->data[y][x]->Y = pFrame->data[0][pFrame->linesize[0] * y + x];
            tmp->data[y][x]->U = pFrame->data[1][pFrame->linesize[1] * y + x];
            tmp->data[y][x]->V = pFrame->data[2][pFrame->linesize[2] * y + x];
        }
    }
    return tmp;
}
Luma works, but when I run Valgrind, I get:

Invalid read of size 1
   at 0x100003699: VisCreateYUVFrame (visualization.c:145)
   by 0x100006B5B: render (simpleDecoder2.c:253)
   by 0x100002D24: main (createvisual2.c:93)
 Address 0x10e9f91ef is 0 bytes after a block of size 92,207 alloc'd
   at 0x100013EEA: malloc_zone_memalign (vgpreload_memcheck-amd64-darwin.so)
   by 0x1084B5416: posix_memalign (libsystem_malloc.dylib)
   by 0x10135D317: av_malloc (libavutil.55.17.103.dylib)

Invalid read of size 1
   at 0x1000036BA: VisCreateYUVFrame (visualization.c:147)
   by 0x100006B5B: render (simpleDecoder2.c:253)
   by 0x100002D24: main (createvisual2.c:93)
 Address 0x10e9f91ef is 0 bytes after a block of size 92,207 alloc'd
   at 0x100013EEA: malloc_zone_memalign (vgpreload_memcheck-amd64-darwin.so)
   by 0x1084B5416: posix_memalign (libsystem_malloc.dylib)
   by 0x10135D317: av_malloc (libavutil.55.17.103.dylib)