Other articles (36)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
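
    A minimal command-line sketch of these two extra actions (stream-information retrieval and thumbnail extraction), assuming a hypothetical source file named source.mp4; SPIPMotion's actual implementation may differ:

       # Retrieve the technical information of the file's audio and video streams
       ffprobe -v error -show_format -show_streams source.mp4

       # Generate a thumbnail by extracting a single frame (here at the 5-second mark)
       ffmpeg -y -ss 5 -i source.mp4 -frames:v 1 -qscale:v 2 thumbnail.jpg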

On other sites (6218)

  • Merge multiple mp4 (h264 + aac) videos into one frame without re-encoding

    1 December 2017, by shubhadeep banerjee

    I have four mp4 files of the same length (1 minute each), say 1.mp4, 2.mp4, 3.mp4 and 4.mp4. I want to merge these four videos into a single frame and create a mosaic of the four videos: 1.mp4 at the top-left, 2.mp4 at the top-right, 3.mp4 at the bottom-left and 4.mp4 at the bottom-right.

    I have tried to do this using ffmpeg's filter_complex:

    ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -i 4.mp4 -filter_complex "
    nullsrc=size=640x480 [base];
    [0:v] setpts=PTS-STARTPTS, scale=320x240 [upperleft];
    [1:v] setpts=PTS-STARTPTS, scale=320x240 [upperright];
    [2:v] setpts=PTS-STARTPTS, scale=320x240 [lowerleft];
    [3:v] setpts=PTS-STARTPTS, scale=320x240 [lowerright];
    [base][upperleft] overlay=shortest=1 [tmp1];
    [tmp1][upperright] overlay=shortest=1:x=320 [tmp2];
    [tmp2][lowerleft] overlay=shortest=1:y=240 [tmp3];
    [tmp3][lowerright] overlay=shortest=1:x=320:y=240
    " output.mp4

    But the problem is that ffmpeg re-encodes the video. The input videos are already encoded as h.264 + aac, and the output video will have the same encoding. I am running this on a very low-power embedded Linux board which has only a single-core CPU. The board takes almost 6x the file length to produce the output (i.e. 6 minutes in my case) and also consumes 100% of the CPU.

    Is there any other way to do this without re-encoding? I run the whole thing on Debian, so I can use any package you suggest.
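
    One possible direction, sketched below under the assumption of the same 320x240 tile size: spatial composition of four streams into one picture cannot be done with stream copy, so some re-encoding is unavoidable, but an hstack/vstack filtergraph avoids the nullsrc-plus-overlay chain and, combined with a fast x264 preset, may be cheaper on a single-core CPU:

       ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -i 4.mp4 -filter_complex "
       [0:v] setpts=PTS-STARTPTS, scale=320:240 [tl];
       [1:v] setpts=PTS-STARTPTS, scale=320:240 [tr];
       [2:v] setpts=PTS-STARTPTS, scale=320:240 [bl];
       [3:v] setpts=PTS-STARTPTS, scale=320:240 [br];
       [tl][tr] hstack [top];
       [bl][br] hstack [bottom];
       [top][bottom] vstack [mosaic]
       " -map "[mosaic]" -map 0:a -c:v libx264 -preset ultrafast -c:a copy output.mp4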

  • Repair mpeg files using ffmpeg

    27 February 2014, by rsdrsd

    I have a bunch of mpeg files which are somehow invalid or incorrect. I can play the files in different media players, but when I upload them they should automagically be converted. It takes a very long time to create screenshots and it creates about 10000 screenshots instead of the expected 50. The command is part of an automatic conversion app. With mp4 and other files it works great, but with mpeg it doesn't work as expected. The creation of screenshots eats up all memory and processor power.

    For creating screenshots I have tried the following:

       ffmpeg -y -i /input/file.mpeg -f image2 -aspect 16:9 -bt 20M -vsync passthrough -vf select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)' /output/file-%05d.jpg

    This just creates 2 screenshots while I expect 50 or so. The following command:

       ffmpeg -y -i /input/file.mpeg -f image2 -vf fps=fps=1/10 -aspect 16:9 -vsync passthrough -bt 20M /output/file-%05d.jpg

    gave me errors about buffers:

       ffmpeg version N-39361-g1524b0f Copyright (c) 2000-2014 the FFmpeg developers
         built on Feb 26 2014 23:46:40 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-4)
         configuration: --prefix=/home/example/ffmpeg_build --extra-cflags=-I/home/example/ffmpeg_build/include --extra-ldflags=-L/home/example/ffmpeg_build/lib --bindir=/home/example/bin --extra-libs=-ldl --enable-gpl --enable-nonfree --enable-libfdk_aac --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libfreetype --enable-libspeex --enable-libtheora
         libavutil      52. 66.100 / 52. 66.100
         libavcodec     55. 52.102 / 55. 52.102
         libavformat    55. 33.100 / 55. 33.100
         libavdevice    55. 10.100 / 55. 10.100
         libavfilter     4.  2.100 /  4.  2.100
         libswscale      2.  5.101 /  2.  5.101
         libswresample   0. 18.100 /  0. 18.100
         libpostproc    52.  3.100 / 52.  3.100
       [mp3 @ 0x200d7c0] Header missing
       [mpegts @ 0x2008a60] DTS discontinuity in stream 0: packet 6 with DTS 34185, packet 7 with DTS 8589926735
       [mpegts @ 0x2008a60] Invalid timestamps stream=0, pts=7157, dts=8589932741, size=150851
       Input #0, mpegts, from '/home/example/app/uploads/21.mpeg':
         Duration: 00:03:14.75, start: 0.213000, bitrate: 26112 kb/s
         Program 1
           Stream #0:0[0x3e9]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv), 1440x1080 [SAR 4:3 DAR 16:9], max. 25000 kb/s, 29.97 fps, 60 tbr, 90k tbn, 59.94 tbc
           Stream #0:1[0x3ea]: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 384 kb/s
       [swscaler @ 0x1ff9860] deprecated pixel format used, make sure you did set range correctly
       Output #0, image2, to '/home/example/app/uploads/21-%05d.jpg':
         Metadata:
           encoder         : Lavf55.33.100
           Stream #0:0: Video: mjpeg, yuvj420p, 1440x1080 [SAR 4:3 DAR 16:9], q=2-31, 200 kb/s, 90k tbn, 0.10 tbc
       Stream mapping:
         Stream #0:0 -> #0:0 (mpeg2video -> mjpeg)
       Press [q] to stop, [?] for help
       [mpegts @ 0x2008a60] Invalid timestamps stream=0, pts=7157, dts=8589932741, size=150851
       [output stream 0:0 @ 0x1ff2ba0] 100 buffers queued in output stream 0:0, something may be wrong.
       [output stream 0:0 @ 0x1ff2ba0] 1000 buffers queued in output stream 0:0, something may be wrong.

    and it creates about 10000 screenshots while I expect 50.

    Now I have read somewhere how to repair some broken files. For this I have the following command:

       ffmpeg -y -i input.mpeg -codec:v copy -codec:a copy output.mpeg

    This indeed creates a somewhat smaller file, but if I run the same command on that output again, I would expect it to create the same file. However, the following command

       ffmpeg -y -i output.mpeg -codec:v copy -codec:a copy output2.mpeg

    returns a file which is much smaller and runs for only a few seconds, while the original was about 3 minutes long.

    If I run the "repair" command on an mpeg that is not broken, the first run also results in a much smaller file. With ffprobe I checked what changed, but the only difference is that the container changes from MPEG-TS to MPEG-PS.

    If I run the command on an mp4 file, it results in exactly the same file, as expected. Does someone have a clue about what is going wrong? It has been boggling me for about two days now and I really have no clue. Or does someone have a good suggestion on how to extract a screenshot every 10 seconds without creating too many screenshots and eating up all memory and processor power?
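
    One direction worth experimenting with (a sketch only, using a hypothetical temporary file /tmp/fixed.ts, and not guaranteed to fix these particular streams): first remux with stream copy while asking ffmpeg to regenerate the broken timestamps, then extract one frame every 10 seconds from the remuxed file:

       # Remux with stream copy, regenerating missing/broken PTS values
       ffmpeg -y -fflags +genpts -i /input/file.mpeg -c:v copy -c:a copy /tmp/fixed.ts

       # Grab one frame every 10 seconds from the remuxed file
       ffmpeg -y -i /tmp/fixed.ts -vf fps=1/10 -qscale:v 2 /output/file-%05d.jpg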

  • lavu/tx: invert permutation lookups

    27 February 2021, by Lynne
    lavu/tx: invert permutation lookups

    out[lut[i]] = in[i] lookups were 4.04 times(!) slower than
    out[i] = in[lut[i]] lookups for an out-of-place FFT of length 4096.

    The permutes remain unchanged for anything but out-of-place monolithic
    FFT, as those benefit quite a lot from the current order (it means
    there's only 1 lookup necessary to add to an offset, rather than
    a full gather).

    The code was based around non-power-of-two FFTs, so this wasn't
    benchmarked early on.

    • [DH] libavutil/tx.c
    • [DH] libavutil/tx_priv.h
    • [DH] libavutil/tx_template.c