
Media (1)
-
The conservation of net art in the museum: the strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (70)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running on MediaSPIP.
You can of course add yours using the form at the bottom of the page.
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Changing the graphic theme
22 February 2011
The graphic theme does not affect the actual layout of the elements on the page; it only changes their appearance.
The placement can indeed be altered, but this change is purely visual and does not touch the semantic representation of the page.
Changing the graphic theme in use
To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
Then simply go to the configuration area of the (...)
On other sites (12185)
-
ffmpeg in a bash pipe
27 December 2019, by Martin Wang
I have a list of rmvb file paths and want to convert these files to mp4 files, so I hope to use a bash pipeline to handle it. The code is:
Convert() {
    ffmpeg -i "$1" -vcodec mpeg4 -sameq -acodec aac -strict experimental "$1.mp4"
}

Convert_loop() {
    while read line; do
        Convert $line
    done
}

cat list.txt | Convert_loop

However, it only handles the first file and then the pipe exits. So, does ffmpeg affect the bash pipe?
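A likely cause, assuming a standard ffmpeg build: ffmpeg reads from standard input by default, so the first invocation inside the while read loop consumes the rest of list.txt and the loop ends after one file. A minimal sketch of the usual workaround (not the poster's original script):

Convert() {
    # -nostdin stops ffmpeg from reading (and consuming) the file list on stdin;
    # redirecting ffmpeg's input from /dev/null would have the same effect.
    ffmpeg -nostdin -i "$1" -vcodec mpeg4 -acodec aac -strict experimental "$1.mp4"
}

Convert_loop() {
    while IFS= read -r line; do
        Convert "$line"   # quote the path in case it contains spaces
    done
}

Convert_loop < list.txt

Note that -sameq was removed from newer ffmpeg releases, so a rate-control option such as -qscale:v or -b:v may be needed in its place.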
-
wowza multiple files in one playlist
14 February 2014, by kalafun
Is it possible to create a stream in Wowza from multiple files, so that these files are played one after another? As far as I know, I can only stream from a single file placed in the content directory.
1.) I would like to split that one file for my own reasons, to add some security to it etc., then create a playlist from these multiple pieces and publish it for streaming, so it won't take as much time as the second way.
2.) Or do I need to put these multiple files back together and then publish the playlist?
I would also like to consider the time needed to create the playlist, even for a big file. I am using ffmpeg to split the file into smaller pieces with a script (a sketch of such a split is shown below), so it would be automatic: when a user requests a stream, I run the script that splits the file and creates the playlist for the user.
I hope I didn't approach this the wrong way. Help please.
-
ffmpeg libx264 neon optimization breaks execution
27 January 2014, by nmxprime
Hi, I use the libx264 source obtained from x264-snapshot-20140122-2245 and compile it using the script below:
NDK=~/Android/android-ndk-r7c
PLATFORM=$NDK/platforms/android-9/arch-arm/
PREBUILT=$NDK/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86
function build_one
{
./configure --prefix=$PREFIX \
--sysroot=$PLATFORM \
--disable-avs \
--disable-lavf \
--disable-ffms \
--disable-gpac \
--cross-prefix=$PREBUILT/bin/arm-linux-androideabi- \
--host=arm-linux \
--enable-static \
--libdir=$PLATFORM/usr/lib \
--includedir=$PLATFORM/usr/include \
--extra-cflags="-march=armv7-a -mfloat-abi=softfp -mfpu=neon -mvectorize-with-neon-quad" \
--extra-ldflags="-Wl,--fix-cortex-a8" \
--enable-debug

The config log is:
platform: ARM
system: LINUX
cli: yes
libx264: internal
shared: no
static: yes
asm: yes
interlaced: yes
avs: no
lavf: no
ffms: no
mp4: no
gpl: yes
thread: no
opencl: yes
filters: crop select_every
debug: yes
gprof: no
strip: no
PIC: no
bit depth: 8
chroma format: all
You can run 'make' or 'make fprofiled' now.

I hope the above compiles and optimizes for NEON execution.

Doubts:
Why is thread "no", since I did not specify --disable-thread?
What is cli and what is its significance here? Also, what is the significance of opencl; does libx264 actually use OpenCL features?
Then I built ffmpeg 1.2.5 with the following script:
./configure --target-os=linux \
--prefix=$PREFIX \
--enable-cross-compile \
--extra-libs="-lgcc" \
--arch=arm \
--cc=$PREBUILT/bin/arm-linux-androideabi-gcc \
--cross-prefix=$PREBUILT/bin/arm-linux-androideabi- \
--nm=$PREBUILT/bin/arm-linux-androideabi-nm \
--sysroot=$PLATFORM \
--extra-cflags=" -O3 -fpic -DANDROID -DHAVE_SYS_UIO_H=1 -Dipv6mr_interface=ipv6mr_ifindex -fasm -Wno-psabi -fno-short-enums -fno-strict-aliasing -finline-limit=300 $OPTIMIZE_CFLAGS " \
--disable-shared \
--enable-static \
--extra-ldflags="-Wl,-rpath-link=$PLATFORM/usr/lib -L$PLATFORM/usr/lib -nostdlib -lc -lm -ldl -llog -lx264 $EXTRA_LD_FLAG" \
--disable-ffplay \
--disable-everything \
--enable-avformat \
--enable-avcodec \
--enable-libx264 \
--enable-gpl \
--enable-encoder=libx264 \
--enable-encoder=libx264rgb \
--enable-decoder=h264 \
--disable-network \
--disable-avfilter \
--disable-avdevice \
--enable-debug=3 \
$ADDITIONAL_CONFIGURE_FLAG

where
ADDITIONAL_CONFIGURE_FLAG=--enable-debug=3
OPTIMIZE_CFLAGS="-mfloat-abi=softfp -mfpu=neon -marm -march=$CPU -mvectorize-with-neon-quad"

The log shows NEON as supported. When I run the code below (called in a while loop):
ret = avcodec_encode_video2(c, &pkt, picture, &got_output); // avcodec_encode_video(c, finalout, outbuf_size, picture);
fprintf(stderr, "ret = %d, got-out = %d \n", ret, got_output);
if (ret < 0)
    fprintf(stderr, "error encoding frame\n");
if (got_output)
    fprintf(stderr, "encoding frame %3d (size=%5d): (ret=%d)\n", 1, pkt.size, ret);

it runs 2 or 3 times (during which got_output is never true), then I get a SIGSEGV. I tried addr2line and ndk-stack, but without success [although I enabled debug info, ndk-stack cannot find the routine info].

I edited libx264's encoder.c with some fprintf statements. Posting a snippet of the code:

if( h->frames.i_input <= h->frames.i_delay + 1 - h->i_thread_frames )
{
    /* Nothing yet to encode, waiting for filling of buffers */
    pic_out->i_type = X264_TYPE_AUTO;
    fprintf(stderr,"EditLog:: Returns as waiting for filling \n"); //edit
    return 0;
}
}
else
{
    /* signal kills for lookahead thread */
    x264_pthread_mutex_lock( &h->lookahead->ifbuf.mutex );
    h->lookahead->b_exit_thread = 1;
    x264_pthread_cond_broadcast( &h->lookahead->ifbuf.cv_fill );
    x264_pthread_mutex_unlock( &h->lookahead->ifbuf.mutex );
}
fprintf(stderr,"After wait for fill \n");
fprintf(stderr,"h: %p \n",h); //edit
fprintf(stderr,"h->i_frame = %p \n",&h->i_frame); //edit
h->i_frame++;
fprintf(stderr,"after i_frame++");in log, i don't see
after i_frame++
, here occurs (may be) theSIGSEGV
.Please help in solving it. The same works without neon optimization !!
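To get a usable native backtrace, the usual ndk-stack invocation pipes the crashing run's logcat through it. This is only a sketch: $NDK is the variable from the build script above, and the obj/local/armeabi-v7a symbol path is an assumption that should be adjusted to wherever the unstripped debug libraries actually live.

# Symbolize a live crash as it happens:
adb logcat | $NDK/ndk-stack -sym obj/local/armeabi-v7a

# Or symbolize a log captured earlier:
adb logcat -d > crash.txt
$NDK/ndk-stack -sym obj/local/armeabi-v7a -dump crash.txt

Separately, since the plain C path works, one thing worth checking is how picture and its data planes are allocated: NEON-optimized code paths generally assume aligned buffers, which allocators such as av_image_alloc or x264_picture_alloc provide but a plain malloc may not.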