
Other articles (30)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, provided your MediaSPIP installation is at version 0.2 or later. If in doubt, contact the administrator of your MediaSPIP to find out. -
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flash player Flowplayer is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
Managing the farm
2 March 2010, by
The farm as a whole is managed by "super admins".
Certain settings can be adjusted to accommodate the needs of the different channels.
Initially, it uses the "Gestion de mutualisation" plugin.
On other sites (4234)
-
Android FFMPEG Build with librtmp
20 December 2013, by Dan Turner
I have been pulling my hair out trying to compile FFmpeg for Android with librtmp enabled. I have successfully built FFmpeg using the Guardian Project here, but it doesn't have librtmp support. The only answer I have found to this issue is on a previous Stack Overflow question (HERE), but it doesn't want to work for me.
At the moment, I have the cross-compiled librtmp.so.0 file from the official rtmpdump Android build in an rtmpdump/librtmp folder sitting in the android-ffmpeg folder. An extract from my configure_ffmpeg.sh file reads as follows:

./configure \
$DEBUG_FLAG \
--arch=arm \
--cpu=cortex-a8 \
--target-os=linux \
--enable-runtime-cpudetect \
--prefix=$prefix \
--enable-pic \
--disable-shared \
--enable-static \
--cross-prefix=$NDK_TOOLCHAIN_BASE/bin/$NDK_ABI-linux-androideabi- \
--sysroot="$NDK_SYSROOT" \
--extra-cflags="-I../x264 -mfloat-abi=softfp -mfpu=neon" \
--extra-ldflags="-L../x264" \
--extra-cflags="-I/home/dan/android-ffmpeg/rtmpdump" \
--extra-ldflags="-L/home/dan/android-ffmpeg/rtmpdump -lrtmp"
\
--enable-version3 \
--enable-gpl \
\
--disable-doc \
--enable-yasm \
\
--enable-decoders \
--enable-encoders \
--enable-muxers \
--enable-demuxers \
--enable-parsers \
--enable-protocols \
--enable-filters \
--enable-avresample \
--enable-libfreetype \
\
--disable-indevs \
--enable-indev=lavfi \
--disable-outdevs \
\
--enable-hwaccels \
\
--enable-ffmpeg \
--disable-ffplay \
--disable-ffprobe \
--enable-ffserver \
--enable-network \
\
--enable-libx264 \
--enable-zlib \
--enable-librtmp \

When I try to compile this, it eventually displays an error, and my FFmpeg config.log tells me that it can't find -lrtmp. I'm positive I'm pointing it at the right directory... does anyone have any ideas?
Regards
Dan
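
A hedged note on the error above: the linker resolves -lrtmp against a file literally named librtmp.so or librtmp.a in the -L search path, so a lone librtmp.so.0 will not be picked up. A minimal sketch of one common workaround, assuming the paths given in the question (adjust them to your tree), is to give the library the expected name and point -L at the directory that actually contains it:

# give ld the file name it searches for
cd /home/dan/android-ffmpeg/rtmpdump/librtmp
ln -sf librtmp.so.0 librtmp.so

# then, in configure_ffmpeg.sh, point the linker at that directory:
#   --extra-cflags="-I/home/dan/android-ffmpeg/rtmpdump" \
#   --extra-ldflags="-L/home/dan/android-ffmpeg/rtmpdump/librtmp -lrtmp" \

Building a static librtmp.a with the same NDK toolchain and linking against that is another option for a fully static Android build.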
-
Why are motion vectors extracted from B-frames unchanged?
19 August 2019, by 霍宇琦
I am writing C code to extract motion vectors from B-frames in the MPEG-4 (Part 2) compressed video format, but some of the motion vectors seem wrong.
I use a raw video clip and the extract_mvs.c example from FFmpeg 4.2. For example, if the frame sequence is IPBPPBPP..., I can get the side data for every frame. But when inspecting mv->src_x, mv->src_y, mv->dst_x and mv->dst_y, I find that all the src values equal the dst values for some individual frames. There must be something wrong, yet I changed very little from the official code.
// modified from ffmpeg/doc/examples/extract_mvs.c:
while (getting frames one by one) {
    AVFrameSideData *sd;
    video_frame_count++;
    //printf("%d", video_frame_count);
    if (video_frame_count < 19) {
        if (frame->pict_type == AV_PICTURE_TYPE_I) printf("\nI");
        if (frame->pict_type == AV_PICTURE_TYPE_B) printf("B");
        if (frame->pict_type == AV_PICTURE_TYPE_P) printf("P");
        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
        if (sd) {
            const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
            for (i = 0; i < sd->size / sizeof(*mvs); i++) {
                const AVMotionVector *mv = &mvs[i];
                // print only the first vector of the frame whose src and dst differ
                if (mv->dst_x - mv->src_x != 0 || mv->dst_y - mv->src_y != 0) {
                    printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
                           video_frame_count, mv->source,
                           mv->w, mv->h, mv->src_x, mv->src_y,
                           mv->dst_x, mv->dst_y, mv->flags);
                    break;
                }
            }
        }
        av_frame_unref(frame);
    }
}

Outputs are:
Input #0, avi, from 'origin.avi':
Duration: 00:00:13.63, start: 0.000000, bitrate: 492 kb/s
Stream #0:0: Video: mpeg4 (DX50 / 0x30355844), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 486 kb/s, 30 fps, 30 tbr, 30 tbn, 30k tbc
Metadata:
title : H:\HumanActionDB\MotionClips\hmdb51_30fps_wBrd_10off_divx\brush_??
framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags
[mpeg4 @ 0x55739dc29580] Video uses a non-standard and wasteful way to store B-frames ('packed B-frames'). Consider using the mpeg4_unpack_bframes bitstream filter without encoding but stream copy to fix it.
IP2,-1,16,16, 137, 24, 136, 24,0x0
BP4,-1,16,16, 152, 57, 152, 56,0x0
P5,-1,16,16, 56, 155, 56, 152,0x0
BP7,-1,16,16, 151, 40, 152, 40,0x0
PB9,-1,16,16, 151, 40, 152, 40,0x0
P10,-1,16,16, 152, 39, 152, 40,0x0
P11,-1,16,16, 26, 55, 24, 56,0x0
B12,-1,16,16, 152, 39, 152, 40,0x0
P13,-1,16,16, 152, 39, 152, 40,0x0
P14,-1,16,16, 41, 168, 40, 168,0x0
B15,-1,16,16, 152, 39, 152, 40,0x0
P16,-1,16,16, 153, 39, 152, 40,0x0
PB18,-1,16,16, 168, 55, 168, 56,0x0

You can see that the third frame (B), the sixth and eighth frames (B, P), and the 17th frame (P) can be read and their side data extracted, but for every vector in them
mv->dst_x == mv->src_x && mv->dst_y == mv->src_y
Can someone help me? Thanks.
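
The decoder warning in the log above points at packed B-frames as one plausible reason for frames whose vectors all come back with src equal to dst. A minimal sketch of the remux that warning itself suggests, reusing origin.avi from the question and a hypothetical output name unpacked.avi, run before re-running the extraction:

# remux without re-encoding, applying the bitstream filter the decoder recommends
ffmpeg -i origin.avi -c copy -bsf:v mpeg4_unpack_bframes unpacked.avi

If the unchanged vectors persist after unpacking, they may simply be macroblocks with zero motion, which the filter in the loop above already skips.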
-
ffmpeg HLS multisize with no audio
29 August 2021, by Konstantin
I need to implement HLS video conversion and I've found a shell script that works almost perfectly. Here it is:


VIDEO_IN=test.mov
VIDEO_OUT=master
HLS_TIME=4
FPS=25
GOP_SIZE=100
PRESET_P=veryslow
V_SIZE_1=960x540
V_SIZE_2=416x234
V_SIZE_3=640x360
V_SIZE_4=768x432
V_SIZE_5=1280x720
V_SIZE_6=1920x1080

# HLS
ffmpeg -i $VIDEO_IN -y \
 -preset $PRESET_P -keyint_min $GOP_SIZE -g $GOP_SIZE -sc_threshold 0 -r $FPS -c:v libx264 -pix_fmt yuv420p \
 -map v:0 -s:0 $V_SIZE_1 -b:v:0 2M -maxrate:0 2.14M -bufsize:0 3.5M \
 -map v:0 -s:1 $V_SIZE_2 -b:v:1 145k -maxrate:1 155k -bufsize:1 220k \
 -map v:0 -s:2 $V_SIZE_3 -b:v:2 365k -maxrate:2 390k -bufsize:2 640k \
 -map v:0 -s:3 $V_SIZE_4 -b:v:3 730k -maxrate:3 781k -bufsize:3 1278k \
 -map v:0 -s:4 $V_SIZE_4 -b:v:4 1.1M -maxrate:4 1.17M -bufsize:4 2M \
 -map v:0 -s:5 $V_SIZE_5 -b:v:5 3M -maxrate:5 3.21M -bufsize:5 5.5M \
 -map v:0 -s:6 $V_SIZE_5 -b:v:6 4.5M -maxrate:6 4.8M -bufsize:6 8M \
 -map v:0 -s:7 $V_SIZE_6 -b:v:7 6M -maxrate:7 6.42M -bufsize:7 11M \
 -map v:0 -s:8 $V_SIZE_6 -b:v:8 7.8M -maxrate:8 8.3M -bufsize:8 14M \
 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -c:a aac -b:a 128k -ac 1 -ar 44100 \
 -f hls -hls_time $HLS_TIME -hls_playlist_type vod -hls_flags independent_segments \
 -master_pl_name $VIDEO_OUT.m3u8 \
 -hls_segment_filename HLS/stream_%v/s%06d.ts \
 -strftime_mkdir 1 \
 -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2 v:3,a:3 v:4,a:4 v:5,a:5 v:6,a:6 v:7,a:7 v:8,a:8" HLS/stream_%v.m3u8



It works as expected when test.mov has an audio track. But if it is, for example, a screencast recorded with QuickTime and has no audio track, it fails with this error:


Stream map 'a:0' matches no streams.
To ignore this, add a trailing '?' to the map.



I tried to do what it recommends and added ? to all the audio mappings, like:


-map a:0?



In this case it failed on -var_stream_map because it doesn't allow optional parameters.


I've also found how to add an empty audio track here: "adding silent audio in ffmpeg".


But I had no luck trying to combine it with the script above.
Can anyone help me change the script so that it accepts files both with and without audio?


P.S. I honestly read the official FFmpeg docs, but they didn't help at all.
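
A hedged sketch of one way to make the script tolerant of silent inputs, assuming ffprobe is installed alongside ffmpeg: probe for an audio stream first and, if none is found, mux a silent anullsrc track into a hypothetical intermediate file before running the HLS command, so that every -map a:0 and every a:N entry in -var_stream_map still matches a real stream.

# prepend this to the script, after VIDEO_IN is set
if ! ffprobe -v error -select_streams a -show_entries stream=index \
      -of csv=p=0 "$VIDEO_IN" | grep -q .; then
    # no audio stream: copy the video and add a silent mono AAC track
    # ("${VIDEO_IN%.*}_silent.mov" is a hypothetical intermediate name)
    ffmpeg -y -i "$VIDEO_IN" \
        -f lavfi -i anullsrc=channel_layout=mono:sample_rate=44100 \
        -c:v copy -c:a aac -shortest "${VIDEO_IN%.*}_silent.mov"
    VIDEO_IN="${VIDEO_IN%.*}_silent.mov"
fi

The rest of the script can then stay exactly as posted, since the (possibly rewritten) $VIDEO_IN is guaranteed to carry an audio stream.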