Advanced search

Media (1)

Keyword: - Tags - / publicité

Other articles (109)

  • Enhancing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and templates ("squelettes"). The templates define where information is placed on the page, defining a specific use of the platform, while the themes define the overall graphic design.
    Anyone can contribute a new graphic theme or template and make it available to the community.

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (see its documentation for more information).
    It is also possible to add fields to authors by installing the "champs extras 2" and "Interface pour champs extras" plugins.

  • Submitting improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more features useful to MediaSPIP, let us know and its integration into the official distribution will be considered.
    You can use the development mailing list to announce it or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, you can also use SPIP's SPIP-zone mailing list to (...)

On other sites (13146)

  • Problem accessing audio track from mp4 file

    20 December 2019, by Thomas Spycher

    For a TV project we want to transcode the source video files to other formats. The source files are fragmented MP4 files; the destination format could be MP4 or any other format.
    The source files contain multiple audio tracks with different codecs (aac, eac3, aac-otherlanguage).

    Something about these files is odd. I can play them without problems in QuickTime or VLC, but importing them into Premiere, for example, yields the video without any sound.

    Converting them on AWS MediaConvert results in weird issues as well, depending on the settings:

    • No audio frames decoded on [selector-(Audio Selector 1)-track-1-drc] (when selecting the aac audio track, track 1)
    • Decoder: [Dolby decoder error: failed to configure to stream in first 100 frames] (when selecting the eac3 audio track, track 2)

    I’m able to convert the files with HandBrake, and the result is an MP4 file with one audio track that works everywhere. I’m trying to figure out what is odd about these files so that I can make them work with AWS MediaConvert.

    Here is the ffprobe output of one of the files:

    ffprobe version 4.1 Copyright (c) 2007-2018 the FFmpeg developers
     built with Apple LLVM version 10.0.0 (clang-1000.10.44.4)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gpl --enable-libmp3lame --enable-libopus --enable-libsnappy --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfreetype --enable-opencl --enable-videotoolbox
     libavutil      56. 22.100 / 56. 22.100
     libavcodec     58. 35.100 / 58. 35.100
     libavformat    58. 20.100 / 58. 20.100
     libavdevice    58.  5.100 / 58.  5.100
     libavfilter     7. 40.101 /  7. 40.101
     libavresample   4.  0.  0 /  4.  0.  0
     libswscale      5.  3.100 /  5.  3.100
     libswresample   3.  3.100 /  3.  3.100
     libpostproc    55.  3.100 / 55.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '123893.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 0
       compatible_brands: mp42isomiso2iso5dashavc1dby1mp41
     Duration: 00:01:24.02, start: 0.000000, bitrate: 5600 kb/s
       Stream #0:0(deu): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 4869 kb/s, 50 fps, 50 tbr, 1k tbn, 100 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(deu): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 127 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
       Stream #0:2(deu): Audio: eac3 (ec-3 / 0x332D6365), 48000 Hz, 5.1(side), fltp, 256 kb/s
       Metadata:
         handler_name    : SoundHandler
       Side data:
         audio service type: main

    An example file can be downloaded here: https://wilmaa-rnd.s3-eu-west-1.amazonaws.com/124041.mp4
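
    A common first isolation step (my suggestion, not something from the post) is to remux the file with explicit stream mapping so MediaConvert only sees one audio track. The sketch below just builds the ffmpeg argument list for the stream layout shown in the ffprobe output above (video on 0:0, the AAC track on 0:1); the output filename is hypothetical:

```python
def remux_single_track_cmd(src, dst, video="0:0", audio="0:1"):
    """Build an ffmpeg argv that copies the video stream plus one chosen
    audio track without re-encoding (stream indices as shown by ffprobe)."""
    return [
        "ffmpeg", "-i", src,
        "-map", video,             # h264 (avc1) video stream
        "-map", audio,             # aac (LC) audio track only
        "-c", "copy",              # remux only, no transcoding
        "-movflags", "+faststart",
        dst,
    ]
```

    Running the resulting command and re-testing the output in MediaConvert helps tell a container/track-selection problem apart from a codec-level one.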

  • Automatically detect box/coordinates of burned-in subtitles in a video source

    8 March 2021, by AndroidX

    In reality I'd like to detect the coordinates of the "biggest" (both in height and width) burned-in subtitle in a given video source. But in order to do that I first need to detect the box coordinates of every distinct subtitle in the sample video and compare them to find the biggest one. I didn't know where to start with this, so the closest thing I found (sort of) was ffmpeg's bbox video filter, which according to the documentation computes "the bounding box for the non-black pixels in the input frame luminance plane" based on a given luminance value:

    ffmpeg -i input.mkv -vf bbox=min_val=130 -f null -

    This gives me a line with coordinates for each input frame in the video, e.g.:

    [Parsed_bbox_0 @ 0ab734c0] n:123 pts:62976 pts_time:4.1 x1:173 x2:1106 y1:74 y2:694 w:934 h:621 crop=934:621:173:74 drawbox=173:74:934:621

    The idea was to make a script and loop through the filter's output, detect the "biggest" box by comparing them all, and output its coordinates and frame number as representative of the longest subtitle.

    The bbox filter, though, can't properly detect the subtitle box even in a relatively dark video with white hardsubs. By trial and error, and only for the particular video sample I used to run my tests, the "best" result for detecting the box of any subtitle was a min_val of 130 (supposedly the meaningful values of min_val are in the range 0-255, although the docs don't say). Using the drawbox filter with ffplay to test the coordinates reported for a particular frame, I can see that it only correctly detects the bottom/left/right boundaries of the subtitle, presumably because the outline of the globe in the image below is equally bright:

    [screenshot: box drawn with drawbox at min_val=130]

    Raising min_val to 230 slightly breaks the previously correct boundaries at the bottom/left/right sides:

    [screenshot: box drawn at min_val=230]

    And raising it to 240 gives me a weird result:

    [screenshot: box drawn at min_val=240]

    However, even if I were able to achieve a perfect outcome with the bbox filter, this technique wouldn't be bulletproof, for obvious reasons (min_val has to be chosen arbitrarily, the burned-in subtitles can be a different colour, the image behind the subtitles can be equally or even more bright depending on the video source, etc.).

    So, if possible, I would like to know:


    1. Is there a filter or another technique I can use with ffmpeg to do what I want?
    2. Is there perhaps another CLI tool or programming library to achieve this?
    3. Any hint that could help (perhaps I'm looking at the problem the wrong way)?

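    The compare-all-boxes loop described in the question is easy to script. Here is a minimal Python sketch (my own, with the line format taken from the bbox output shown above) that parses the filter's per-frame lines and keeps the largest box by area:

```python
import re

# Matches the per-frame lines printed by ffmpeg's bbox filter, e.g.:
# [Parsed_bbox_0 @ 0ab734c0] n:123 ... x1:173 x2:1106 y1:74 y2:694 w:934 h:621 ...
BBOX_LINE = re.compile(
    r"\[Parsed_bbox_\d+ @ [0-9a-fx]+\]\s+n:(?P<n>\d+).*?"
    r"x1:(?P<x1>-?\d+)\s+x2:(?P<x2>-?\d+)\s+"
    r"y1:(?P<y1>-?\d+)\s+y2:(?P<y2>-?\d+)\s+"
    r"w:(?P<w>\d+)\s+h:(?P<h>\d+)"
)

def biggest_box(lines):
    """Return (frame_n, w, h, x1, y1) for the largest box by area, or None."""
    best = None
    for line in lines:
        m = BBOX_LINE.search(line)
        if m is None:
            continue  # skip anything that is not a bbox report line
        w, h = int(m.group("w")), int(m.group("h"))
        if best is None or w * h > best[1] * best[2]:
            best = (int(m.group("n")), w, h,
                    int(m.group("x1")), int(m.group("y1")))
    return best
```

    Feeding it the stderr of the ffmpeg command above gives the frame number and coordinates of the biggest detected box; the reliability caveats about min_val of course still apply.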

  • PHP & FFMPEG running on AWS worker finishes 2/3 of operations properly then fails

    1 November 2016, by jreikes

    I have a PHP application (using the Laravel 5.3 framework) that performs several operations on video after upload (transcoding, thumbnail generation, etc.). Everything works great locally. But it works a little differently in AWS and that seems to be causing problems.

    In AWS, uploads go to S3, then the EC2 workers pull those files into a local temp folder, perform about 8 operations with FFMPEG via shell_exec() (storing the results in the temp folder), then transfer the finished files back to S3. The first 6 operations (which are related to transcoding) finish properly. The last 2 operations (which create thumbnails) usually fail (about 1 in 10 test runs, the whole thing inexplicably works).

    I have a WorkingCopy class to give me relative and fully qualified paths, as needed, and to automatically delete temp files after completion. I also have a ColdStorage class to handle the S3 data. All 8 FFMPEG operations are structured the same way using these classes.

    In trying to troubleshoot the problem, I tried running the FFMPEG thumbnail generation via SSH and found that it was failing with this error:

    [image2 @ 0x2e43320] Could not open file : path/filename.png
    av_interleaved_write_frame(): Input/output error

    https://trac.ffmpeg.org/wiki/Errors explains that this happens when the destination folder doesn’t exist. But while testing via SSH, I was getting this error even when the destination folder existed. If I chmod 777 the destination folder, the operation completes successfully via SSH.

    Seems simple enough, right? Thing is, my WorkingCopy class creates the temp folder with mkdir($folderName, 0777, true), so the folders should already be 777. They don’t actually appear to be truly 777 when I check them via SSH, but still: why do the first 6 operations work? Just to be sure this wasn’t the issue, I added a chmod to my script just before thumbnail creation and it still failed.

    Here’s one more weird thing... If I comment out those last two FFMPEG operations, then transcoding (the preceding operation) fails. Based on that, I thought maybe the file system needs a moment to finish writing the FFMPEG output before proceeding, so I added sleep(5) before the end of the script. It still doesn’t work.

    I can’t share the full code publicly, but here’s the general format of the FFMPEG calls:

    $pathForLargeThumbnails = 'large_thumbnails';

    $thumbnailLargeWorkingCopy = new WorkingCopy($pathForLargeThumbnails);
    $thumbnailLarge = new ColdStorage($pathForLargeThumbnails);

    $shellErrors .= shell_exec(
         "/usr/local/bin/ffmpeg/ffmpeg"
       . " -loglevel error"
       . " -i " . "\"" . $originalVideoStream->fullPath . "\""
       . " -y"
       . " -vf thumbnail -frames:v 1"
       . " \"" . $thumbnailLargeWorkingCopy->fullPath . "\""
    );

    $thumbnailLarge->put($thumbnailLargeWorkingCopy->get());

    Anyone know why this succeeds 6 times and then fails twice at the end ?