
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (30)
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
-
MediaSPIP Player: potential problems
22 February 2011, by
The player does not work on Internet Explorer
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
If the configuration of that Apache module contains a line that looks like the following, try removing or commenting it out to see whether the player then works correctly: (...) (an illustrative example follows just below)
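As a hedged illustration (the article's own example line is cut off above), the directive usually involved is Apache's AddOutputFilterByType DEFLATE line. On a Debian-style layout one could locate it and disable it roughly as follows; the path and commands are assumptions, not taken from the article:
grep -rn "AddOutputFilterByType DEFLATE" /etc/apache2/
# comment out the matching line (prefix it with '#'), then reload Apache:
sudo apachectl graceful
-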
Libraries and binaries specific to video and audio processing
31 January 2010, by
The following programs and libraries are used by SPIPmotion in one way or another.
Required binaries
FFMpeg: the main encoder; it can transcode almost every type of video and audio file into formats readable on the Internet. See this tutorial for its installation;
Oggz-tools: tools for inspecting ogg files;
Mediainfo: retrieves information from most video and audio formats;
Complementary, optional binaries
flvtool2: (...) (an illustrative use of these tools is sketched just below)
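As a rough sketch of how these binaries fit together (the exact SPIPmotion invocation is not shown in this excerpt, so the flags and file names below are only illustrative assumptions, and the libtheora/libvorbis encoders must be present in the ffmpeg build):
ffmpeg -i source.avi -c:v libtheora -q:v 6 -c:a libvorbis -q:a 4 output.ogv   # transcode to a web-readable Ogg file
oggz-info output.ogv    # inspect the Ogg structure with Oggz-tools
mediainfo output.ogv    # read back container and stream information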
On other sites (3989)
-
avformat/hlsenc: remove the first slash of the relative path line in the master m3u8...
26 March 2020, by Limin Wang
Please test with the following command:
./ffmpeg -y -i input.mkv \
-b:v:0 5250k -c:v h264 -pix_fmt yuv420p -profile:v main -level 4.1 \
-b:a:0 256k \
-c:a mp2 -ar 48000 -ac 2 -map 0:v -map 0:a:0 \
-f hls -var_stream_map "v:0,a:0" \
-master_pl_name master.m3u8 -t 300 -hls_time 10 -hls_init_time 4 -hls_list_size 10 \
-master_pl_publish_rate 10 -hls_flags delete_segments+discont_start+split_by_time \
./tmp/video.m3u8
then cat ./tmp/master.m3u8
before:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=6056600,RESOLUTION=1280x720,CODECS="avc1.4d4829,mp4a.40.33"
/video.m3u8
$ ./ffmpeg -i ./tmp/master.m3u8 -c:v copy -c:a mp2 ./test.mkv
[hls @ 0x7f82f9000000] Skip ('#EXT-X-VERSION:3')
[hls @ 0x7f82f9000000] Opening '/video.m3u8' for reading
[hls @ 0x7f82f9000000] parse_playlist error No such file or directory [/video.m3u8]
./tmp/master.m3u8: No such file or directory
after:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=6056600,RESOLUTION=1280x720,CODECS="avc1.4d4829,mp4a.40.33"
video.m3u8
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
-
error while compiling x264 in linux
14 April 2011, by kartul
Hey all
I'm trying to compile x264 under linux (x64). But it keeps throwing me an error. I've googled around but found nothing. Here's the command-line history:
spin@around:/x264$ git clone git://git.videolan.org/x264.git
Cloning into x264...
remote: Counting objects: 13539, done.
remote: Compressing objects: 100% (4416/4416), done.
remote: Total 13539 (delta 11005), reused 11225 (delta 9082)
Receiving objects: 100% (13539/13539), 3.29 MiB | 2.79 MiB/s, done.
Resolving deltas: 100% (11005/11005), done.
spin@around:/x264/x264$ ./configure
Found no assembler
Minimum version is yasm-0.7.0
If you really want to compile without asm, configure with --disable-asm.
spin@around:/x264/x264$ ./configure --disable-asm
Platform:   X86_64
System:     LINUX
asm:        no
avs:        no
lavf:       no
ffms:       no
gpac:       no
gpl:        yes
thread:     posix
filters:    crop select_every
debug:      no
gprof:      no
PIC:        no
shared:     no
visualize:  no
bit depth:  8
You can run 'make' or 'make fprofiled' now.
spin@around:/x264/x264$ make
gcc -Wshadow -O3 -ffast-math -Wall -I. -std=gnu99 -s -fomit-frame-pointer -fno-tree-vectorize -c -o x264.o x264.c
In file included from common/common.h:864,
                 from x264.c:33:
common/rectangle.h: In function ‘x264_macroblock_cache_rect’:
common/rectangle.h:84: error: ‘v4si’ undeclared (first use in this function)
common/rectangle.h:84: error: (Each undeclared identifier is reported only once
common/rectangle.h:84: error: for each function it appears in.)
common/rectangle.h:84: error: expected ‘;’ before ‘v16’
common/rectangle.h:86: error: ‘__m128’ undeclared (first use in this function)
common/rectangle.h:86: error: expected ‘;’ before ‘v16’
common/rectangle.h:87: error: expected ‘;’ before ‘v16’
common/rectangle.h:89: error: expected ‘;’ before ‘v16’
common/rectangle.h:90: error: expected ‘;’ before ‘v16’
make : *** [x264.o] Error 1
spin@around:/x264/x264$
and here is the file, from line 83 to 91:
#if HAVE_VECTOREXT && defined(__SSE__)
    v4si v16 = {v,v,v,v};

    M128( d+s*0+0 ) = (__m128)v16;
    M128( d+s*1+0 ) = (__m128)v16;
    if( h == 2 ) return;
    M128( d+s*2+0 ) = (__m128)v16;
    M128( d+s*3+0 ) = (__m128)v16;
#else
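For context (this is not part of the original question), this class of error is usually avoided altogether by installing the yasm assembler that ./configure asked for and reconfiguring without --disable-asm. A minimal sketch, assuming a Debian/Ubuntu system and the checkout path from the transcript:
sudo apt-get install yasm   # provides the assembler configure was looking for
cd /x264/x264               # the checkout directory from the transcript above
./configure                 # should now detect yasm instead of requiring --disable-asm
make
-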
How can I programmatically write and read random video watermarks?
13 November 2017, by GreenTriangle
I spent a few minutes trying to think of a clearer way to word my title, but I couldn’t manage it, sorry.
I want to essentially canary-trap video files: I am (hypothetically, this is not real but a personal exercise) offering them up to 5,000 different people, and if one gets leaked, I want to know who leaked it. Metadata is too easily removed, so what I’d like to do is add a random and subtle watermark to each file, and store information about that in a database.
For example: on Joe Smith’s copy, a 10x10 pixel 80% transparent red square in the upper left corner for 5 frames. On Diane Brown’s copy, a full-width 5-pixel 90% transparent black bar on the bottom edge for 15 frames. Then, if I find a leaked copy, I could check it against the database.
I know this still isn’t foolproof: cropping would break co-ordinates, hue/brightness transforms would break colour reading, cutting time would break timestamps. But if I did want to do this anyway, what would be a good strategy for it?
My idea was to generate PNG overlays randomly, split the video into parts with mkvtoolnix/ffmpeg, re-encode the middle part with ffmpeg + overlay filter, and then rejoin them. But is this silly when there’s a "proper" way to do it? And what would I be doing to read the watermarks, which I can’t even really conceive of?
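One concrete way to realise the overlay idea without splitting and rejoining the file is ffmpeg's overlay filter with a time-limited enable expression. A minimal sketch, in which the file names, position, and time window are made-up placeholders that would be recorded per recipient in the database:
ffmpeg -i input.mkv -i mark_joe_smith.png \
  -filter_complex "[0:v][1:v]overlay=x=0:y=0:enable='between(t,12.0,12.2)'" \
  -c:a copy watermarked_joe_smith.mkv
Reading a mark back is essentially the reverse lookup: given the coordinates and time window stored for each recipient, extract those frames from the suspect copy and compare the region against the expected overlay.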