
Media (91)

Other articles (45)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is automatically put in place by MediaSPIP init so that the new feature is immediately operational. There is therefore no need to go through a configuration step for this.

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (section).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration area that you can specify the (...)

On other sites (8091)

  • Web-based video editor

    13 April 2021, by Danny

    We currently have a web-based editor that allows users to build animated web apps. The apps are made up of shapes, text, images, and videos. Except for videos, all other elements can also be animated around the screen. The result of building an animated app is basically a big blob of JSON.

    The playback code for the web app is web-based as well. It takes the JSON blob and constructs the HTML, which ends up playing back in some sort of browser environment. The problem is that most of the time this playback occurs on lower-end hardware like televisions and set-top boxes.

    These performance issues would go away if there were some way to convert a digital sign to video. Then the STB/smart TV simply plays a video, which is much more performant than playing back animations in a web view.

    Given a blob of JSON describing each layer, how to draw each type of object, its animation points, etc., how could I take that and convert it to video on the server?

    My first attempt at this was using PhantomJS to load the playback page in a headless browser, take a series of screenshots, and then use ffmpeg to merge those screenshots into a video. That worked great as long as there was no video. But it does not work with video, since there is no HTML5 video tag support in PhantomJS, and even if there were, I would lose any audio.
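
    A rough sketch of that merge step, with an assumed frame pattern and frame rate (not the exact values from this setup), would be:

    # merge numbered PNG screenshots into an H.264 video
    # frame_%05d.png and 30 fps are placeholders for illustration only
    ffmpeg -y -framerate 30 -i frame_%05d.png \
           -c:v libx264 -pix_fmt yuv420p out.mp4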

    The other way I was thinking of doing it would be to again load the playback page in PhantomJS, but turn off the video layers and leave them transparent, then take screenshots as a series of PNGs with transparency. I would then combine these with the video layers.
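
    A sketch of that compositing step, using ffmpeg's overlay filter, might look like this (file names, frame rate and the single video layer are assumptions):

    # composite a transparent PNG sequence over a rendered video layer;
    # the optional "0:a?" mapping keeps that layer's audio if it has any
    ffmpeg -y -i layer_video.mp4 -framerate 30 -i overlay_%05d.png \
           -filter_complex "[0:v][1:v]overlay=shortest=1[out]" \
           -map "[out]" -map 0:a? -c:v libx264 -pix_fmt yuv420p -c:a copy combined.mp4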

    None of this feels very elegant though. I know there are web-based video editors out there that basically do what I'm trying to accomplish, so how do they do it?

  • slicing and seeking extremely small sections of video in ffmpeg

    20 February 2021, by Zarc Rowden

    I am writing a program that maps MIDI data to timestamps in a video. The end result is a kind of automatic generation of audio visuals for beat making or techno heads. The program takes in MIDI, slices a video into chunks based on the MIDI events and mappings, and finally joins the slices into a video with 1:1 timing of monophonic MIDI notes to sections of the video.
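
    (The join at the end is a straight concatenation of the slice files; one way that step could be done, sketched here with ffmpeg's concat demuxer and illustrative file names, is shown below.)

    # write a list of slice files, then concatenate them without re-encoding
    # slice_000.mp4 ... are placeholder names, not the program's real output
    printf "file '%s'\n" slice_000.mp4 slice_001.mp4 slice_002.mp4 > slices.txt
    ffmpeg -y -f concat -safe 0 -i slices.txt -c copy joined.mp4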

    When it is successful, the result is very cool, and watching the video jump around and lock onto the MIDI notes is very interesting.

    However, I am afraid that the ffmpeg commands I use are not giving exact results.

    The data I feed to the program looks like this:

    EVENTS: left is the MIDI note number, right is the time from the start of the recording at which the note occurs.

      [{"note"=>"start", "timestamp"=>0.0},
   {"note"=>48, "timestamp"=>0.5700000037904829},
   {"note"=>51, "timestamp"=>383.7100000018836},
   {"note"=>45, "timestamp"=>884.3500000002678},
   {"note"=>48, "timestamp"=>999.0449999968405},
   {"note"=>51, "timestamp"=>1383.544999996957},
   {"note"=>45, "timestamp"=>1884.2599999989034},
   {"note"=>48, "timestamp"=>1998.890000002575},
   {"note"=>51, "timestamp"=>2383.4199999982957},
   {"note"=>45, "timestamp"=>2884.1000000029453},
   {"note"=>48, "timestamp"=>2998.7200000032317},
   {"note"=>51, "timestamp"=>3383.2800000018324},
   {"note"=>45, "timestamp"=>3883.894999999029},
   {"note"=>48, "timestamp"=>3998.6250000001746},
   {"note"=>51, "timestamp"=>4384.550000002491},
   {"note"=>45, "timestamp"=>4883.780000003753},
   {"note"=>48, "timestamp"=>4998.404999998456},
   {"note"=>51, "timestamp"=>5384.39500000095},
   {"note"=>45, "timestamp"=>5883.565000003728},
   {"note"=>48, "timestamp"=>5998.464999996941},
   {"note"=>51, "timestamp"=>6384.254999997211},
   {"note"=>45, "timestamp"=>6883.4550000028685},
   {"note"=>48, "timestamp"=>6998.585000001185},
   {"note"=>51, "timestamp"=>7384.055000002263},
   {"note"=>45, "timestamp"=>7883.249999998952},],

    MAPPINGS: left side is the MIDI note, right side is the timestamp in seconds.

    {
 48=>234.3489,
 45=>124.334489,
 51=>2789.34,
}

    The events are a sequential array of MIDI notes and times taken from recordings or a standard MIDI file. The numbers are in milliseconds, but I convert them before feeding the arguments to ffmpeg.

    The mappings are just in seconds and tell the program what to show when certain MIDI notes are encountered as we loop through the events and begin slicing the video.

    The command I send to ffmpeg is constructed like this:

    "ffmpeg -an -y -ss #{begin_at} -i #{project_tempfile_url} -t #{slice_duration} -c:v libx264 #{temp_url}"

    When I concatenate these slices, they only look exact when my notes are very consistent, like a kick drum playing 4/4 rhythms. Anything too fast or varied produces unpleasant results.

    Is there a specific set of commands that will tell ffmpeg to cut down to the frame? I think keyframes are not an ideal answer, but I'm not sure. I could also adjust by making sure that I only ever map the notes to keyframes, and I could settle for that, but it would be great if I could just cut almost anywhere between start and end, ANYWHERE, like

      rand(0...video.length)
  # and then have
  332.3253613134

    But I may just be dreaming :P
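
    Concretely, the kind of command I would like to be able to run for an arbitrary cut point is sketched below (the numbers are made up); placing -ss after -i makes ffmpeg decode from the start and cut at the requested timestamp rather than snapping to a keyframe, at the cost of speed:

    # output-side seek: slower, but not limited to keyframe boundaries
    # 332.3253613134 / 0.5 are made-up stand-ins for begin_at / slice_duration
    ffmpeg -y -i input.mp4 -ss 332.3253613134 -t 0.5 -an \
           -c:v libx264 -preset veryfast slice_precise.mp4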

    Do you think that I would be better off writing a custom C program to cut frames like this? I understand that frame rates could be an issue, that there may actually not be any data at 7.34667898999 seconds and that it might be at 7.356788722342 instead, and that ffmpeg probably seeks to the nearest frame from whatever timestamp you input, but I feel like there must be a way to get good results despite these limitations.
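
    (If it helps, I assume the real frame timestamps could at least be listed with something like the ffprobe call below, so cut points can be snapped to actual frames; the field is pkt_pts_time in 4.x builds and pts_time in newer ones.)

    # print the presentation timestamp of every video frame, one per line
    ffprobe -v error -select_streams v:0 \
            -show_entries frame=pkt_pts_time -of csv=p=0 input.mp4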

    Thank you so much in advance to those who take the time to read this and understand the issue.

  • FFMPEG split video to segments with audio

    28 September 2020, by Kiril Mytsykov

    I'm trying to split a .mov file into .ts segments with an m3u8 playlist.
All is OK except the audio: the audio doesn't work, and it seems that ffmpeg ignores the audio stream.

    This is my command:

    ffmpeg -i stepteen2.mov \
       -c:a aac \
       -c:v libx264 \
       -an \
       -map 0 \
       -muxdelay 0 \
       -muxpreload 0 \
       -output_ts_offset 0 \
       -segment_time 2 \
       -segment_wrap 1000 \
       -segment_list_size 0 \
       -segment_list temp.m3u8 \
       -segment_list_flags +cache \
       -segment_list_type m3u8 \
       -f segment \
       temp-%03d.ts

    After running this command I get 5 .ts segments and an m3u8 playlist with the list of segment paths.
If I open the properties of the segment files, there is no information in the Audio and Video tabs (although the video works).

    This is the output:

    ffmpeg version n4.1.4 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
  configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-avisynth --enable-cuda --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
  libpostproc    55.  3.100 / 55.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'stepteen2.mov':
  Metadata:
    creation_time   : 1998-11-12T18:28:20.000000Z
  Duration: 00:00:28.60, start: 0.000000, bitrate: 111 kb/s
    Stream #0:0(eng): Video: svq1 (SVQ1 / 0x31515653), yuv410p, 160x120, 90 kb/s, 7.52 fps, 7.50 tbr, 600 tbn, 600 tbc (default)
    Metadata:
      creation_time   : 1998-11-12T18:28:20.000000Z
      handler_name    : Apple Video Media Handler
      encoder         : Sorenson Video
    Stream #0:1(eng): Audio: qdmc (QDMC / 0x434D4451), 44100 Hz, mono, s16, 20 kb/s (default)
    Metadata:
      creation_time   : 1998-11-12T18:28:20.000000Z
      handler_name    : Apple Sound Media Handler
Stream mapping:
  Stream #0:0 -> #0:0 (svq1 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x55a5a80b8dc0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x55a5a80b8dc0] profile High, level 1.0
[libx264 @ 0x55a5a80b8dc0] 264 - core 152 r2854 e9a5903 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=7 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[segment @ 0x55a5a7fd2e00] Opening 'temp-000.ts' for writing
Output #0, segment, to 'temp-%03d.ts':
  Metadata:
    encoder         : Lavf58.20.100
    Stream #0:0(eng): Video: h264 (libx264), yuv420p, 160x120, q=-1--1, 7.50 fps, 90k tbn, 7.50 tbc (default)
    Metadata:
      creation_time   : 1998-11-12T18:28:20.000000Z
      handler_name    : Apple Video Media Handler
      encoder         : Lavc58.35.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
[segment @ 0x55a5a7fd2e00] Opening 'temp.m3u8.tmp' for writing
[segment @ 0x55a5a7fd2e00] Opening 'temp-001.ts' for writing
[segment @ 0x55a5a7fd2e00] Opening 'temp.m3u8.tmp' for writing
[segment @ 0x55a5a7fd2e00] Opening 'temp-002.ts' for writing
[segment @ 0x55a5a7fd2e00] Opening 'temp.m3u8.tmp' for writing
[segment @ 0x55a5a7fd2e00] Opening 'temp-003.ts' for writing
[segment @ 0x55a5a7fd2e00] Opening 'temp.m3u8.tmp' for writing
[segment @ 0x55a5a7fd2e00] Opening 'temp-004.ts' for writing
[segment @ 0x55a5a7fd2e00] Opening 'temp.m3u8.tmp' for writing
[segment @ 0x55a5a7fd2e00] Opening 'temp-005.ts' for writing
[segment @ 0x55a5a7fd2e00] Opening 'temp.m3u8.tmp' for writing
frame=  215 fps=0.0 q=-1.0 Lsize=N/A time=00:00:28.26 bitrate=N/A speed= 162x    
video:217kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[libx264 @ 0x55a5a80b8dc0] frame I:8     Avg QP:14.15  size:  2625
[libx264 @ 0x55a5a80b8dc0] frame P:93    Avg QP:21.28  size:  1295
[libx264 @ 0x55a5a80b8dc0] frame B:114   Avg QP:24.62  size:   703
[libx264 @ 0x55a5a80b8dc0] consecutive B-frames: 27.0%  2.8% 12.6% 57.7%
[libx264 @ 0x55a5a80b8dc0] mb I  I16..4: 28.3% 23.0% 48.8%
[libx264 @ 0x55a5a80b8dc0] mb P  I16..4:  1.5%  6.2%  7.0%  P16..4: 21.3% 17.8% 15.1%  0.0%  0.0%    skip:31.1%
[libx264 @ 0x55a5a80b8dc0] mb B  I16..4:  1.2%  1.8%  1.5%  B16..8: 28.1% 16.0% 10.0%  direct:11.0%  skip:30.4%  L0:43.4% L1:31.9% BI:24.8%
[libx264 @ 0x55a5a80b8dc0] 8x8 transform intra:35.9% inter:39.5%
[libx264 @ 0x55a5a80b8dc0] coded y,uvDC,uvAC intra: 73.6% 71.0% 48.2% inter: 34.9% 32.7% 4.3%
[libx264 @ 0x55a5a80b8dc0] i16 v,h,dc,p: 61% 12% 26%  2%
[libx264 @ 0x55a5a80b8dc0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 14% 23%  5%  4%  5%  4%  7%  6%
[libx264 @ 0x55a5a80b8dc0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 16% 15%  6%  5%  8%  6%  7%  7%
[libx264 @ 0x55a5a80b8dc0] i8c dc,h,v,p: 53% 17% 21%  9%
[libx264 @ 0x55a5a80b8dc0] Weighted P-Frames: Y:14.0% UV:9.7%
[libx264 @ 0x55a5a80b8dc0] ref P L0: 65.2% 16.8% 12.2%  5.4%  0.4%
[libx264 @ 0x55a5a80b8dc0] ref B L0: 92.3%  6.2%  1.4%
[libx264 @ 0x55a5a80b8dc0] ref B L1: 97.3%  2.7%
[libx264 @ 0x55a5a80b8dc0] kb/s:61.84

    I see that Output #0, segment, to 'temp-%03d.ts' contains only the video stream:

    Stream #0:0(eng): Video: h264 (libx264), yuv420p, 160x120, q=-1--1, 7.50 fps, 90k tbn, 7.50 tbc (default)
...
video:217kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
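
    Note that the command above also passes -an, which in ffmpeg disables audio for the output; that matches the stream mapping listing only the video stream. A sketch of the same command with -an removed (assuming that is the only thing blocking the audio) would be:

    # same as the original command, only with -an dropped so audio is kept
    ffmpeg -i stepteen2.mov \
       -c:a aac \
       -c:v libx264 \
       -map 0 \
       -muxdelay 0 \
       -muxpreload 0 \
       -output_ts_offset 0 \
       -segment_time 2 \
       -segment_wrap 1000 \
       -segment_list_size 0 \
       -segment_list temp.m3u8 \
       -segment_list_flags +cache \
       -segment_list_type m3u8 \
       -f segment \
       temp-%03d.ts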