Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (55)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs handled by the local ffmpeg installation (a quick filtered check is sketched below):
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264 : H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10, m4v : raw MPEG-4 video format, flv : Flash Video (FLV) / Sorenson Spark / Sorenson H.263, Theora, wmv :
    Possible output video formats
    To begin with, we (...)
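
    A quick way to check whether a specific codec or container is available in a given build is to filter the output of the commands above. A minimal sketch (the codec and format names are only examples):

    # List the codecs and formats known to the local ffmpeg build and
    # filter for the entries of interest (names below are examples).
    ffmpeg -codecs 2>/dev/null | grep -i 'h264\|aac'
    ffmpeg -formats 2>/dev/null | grep -i 'flv\|mp4'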

  • Farm management

    2 March 2010

    The farm as a whole is managed by "super admins".
    Some settings can be adjusted to regulate the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin

On other sites (7787)

  • ffmpeg concat demuxer dropping audio after first clip

    6 September 2020, by marcman

    I'm trying to concatenate a collection of videos that should all be the same type and format. It works as expected when the original sources are the same type, but when they are different types the demuxer drops audio. I understand that the concat demuxer requires all inputs to have the same codecs, but I believe I am already doing that.

    This is my workflow (pseudocode with a Python-like for loop):

    for i, video in enumerate(all_videos):
        # Call command for transcoding and filtering
        # I allow this command to be called on mp4, mov, and avi files
        # The point of this filter is:
        # (1) to superimpose a timestamp on the bottom right of the video
        # (2) to scale and pad the videos to a common output resolution (the specific numbers below are just copied from a video I ran, but they are filled in automatically for each given video by the rest of my script)
        # (3) To transcode all videos to the same common format
        ffmpeg \
            -y \
            -loglevel quiet \
            -stats \
            -i video_<i>.{mp4, mov, avi} \
            -vcodec libx264 \
            -acodec aac \
            -vf "scale=607:1080, pad=width=1920:height=1080:x=656:y=0:color=black, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-08-27 16:42:26')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1263-text_w-2*max_glyph_w" \
            tmpdir/video_<i>.mp4


    # create file_list.txt, e.g.
    #
    # file '/abspath/to/tmpdir/video_1.mp4'
    # file '/abspath/to/tmpdir/video_2.mp4'
    # file '/abspath/to/tmpdir/video_3.mp4'
    # ...


    ffmpeg \
        -f concat \
        -safe 0 \
        -i file_list.txt \
        -c copy \
        all_videos.mp4
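
    One possible source of the mismatch is the audio parameters rather than the codec itself: the concat demuxer copies packets as-is, so the sample rate and channel layout of the intermediate files also need to line up, not just the codec names. A minimal sketch of the intermediate transcode with those pinned to one common value (the 48000 Hz / 2-channel values are assumptions, not taken from the original sources):

    # Same transcode step as above, but with the audio sample rate and
    # channel count forced to a single value on every intermediate file.
    # 48000 Hz / stereo are placeholder values; use whatever the first
    # source actually has.
    ffmpeg \
        -y \
        -i video_<i>.{mp4, mov, avi} \
        -vcodec libx264 \
        -acodec aac \
        -ar 48000 \
        -ac 2 \
        tmpdir/video_<i>.mp4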


    In my test case, my inputs are 3 videos in this order:

    1. a camcorder video output in H.264+aac in an mp4
    2. an iphone video in mov format
    3. an iphone video in mp4 format

    When I review each of the intermediate mp4-transcoded videos in tmpdir, they all play back audio and video just fine and the filtering works as expected. However, when I review the final concatenated output, only the first clip (the camcorder video) has sound. When all the videos are from the camcorder there is no audio issue: they all have sound.


    When I output ffmpeg warnings and errors, the only thing that shows up is an expected warning about the timestamps:


    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909504, current: 5430298; changing to 5909505. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909505, current: 5431322; changing to 5909506. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909506, current: 5432346; changing to 5909507. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909507, current: 5433370; changing to 5909508. This may result in incorrect timestamps in the output file.
    ...
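
    Since those warnings point at the audio stream (0:1), a quick sanity check is to compare the audio parameters of the intermediate files before concatenating. A minimal sketch with ffprobe (the file names are placeholders for the intermediates in tmpdir):

    # Print the audio codec, sample rate, channel count and time base
    # for each intermediate file, to confirm they really do match.
    for f in tmpdir/video_*.mp4; do
        echo "== $f"
        ffprobe -v error -select_streams a:0 \
            -show_entries stream=codec_name,sample_rate,channels,time_base \
            -of default=noprint_wrappers=1 "$f"
    done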


    What might I be doing wrong here? I'm testing playback in both the Ubuntu 20.04 "Videos" application and VLC Player, and both show the same problem. I'd prefer to use the demuxer if possible for speed, as re-encoding during concatenation is quite expensive.


    NOTE: This is a different issue from the one laid out here, in which some of the videos had no audio. In my case, all videos have both video and audio.
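
    If the demuxer keeps dropping the audio even with matching parameters, the usual (slower) fallback is the concat filter, which re-encodes but resamples the audio streams to a common format on its own. A sketch, reusing the intermediate file names from the workflow above (the videos already share a resolution after the pad step, which the filter requires):

    # Concat filter fallback: re-encodes everything, so it is slower
    # than the demuxer, but it tolerates differing audio parameters.
    ffmpeg \
        -i tmpdir/video_1.mp4 \
        -i tmpdir/video_2.mp4 \
        -i tmpdir/video_3.mp4 \
        -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]" \
        -map "[v]" -map "[a]" \
        -vcodec libx264 -acodec aac \
        all_videos.mp4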


  • Displaying YUV420 data using an OpenGL ES shader is too slow

    28 November 2012, by user1278982

    I have a child thread A that decodes video using ffmpeg on an iPhone 3GS, and another thread B that displays the YUV data. In thread B, I use glTexSubImage2D to upload the Y, U and V textures and then convert the YUV data to RGB in a shader, but the frame rate in the decode thread is only 15 fps. Why?

    Update:
    The frame size is 720 * 576.
    I also found something interesting: if I don't start the thread that displays the YUV data, the frame rate calculated in the decode thread is 22 fps, otherwise 15 fps. So I think my display method must be inefficient. The code is below.

    I have a callback in the decode thread:

    typedef struct _DVDVideoPicture
    {
      char *plane[4];     // pointers to the Y, U and V planes
      int iLineSize[4];   // line size (stride) of each plane in bytes
    } DVDVideoPicture;

    // Called from the decode thread for every decoded frame.
    void YUVCallBack(void *pYUVData, void *pContext)
    {
      VideoView *view = (VideoView *)pContext;
      [view.glView copyYUVData:(DVDVideoPicture *)pYUVData];
      [view calculateFrameRate];
    }

    The copyYUVData method extracts the Y, U and V planes separately. The following is the displaying thread method.

  • "Moov Atom not Found " while converting .m4a into .mp3 audio file ?

    22 July 2021, by Comrade Shiva

    When I was converting an iPhone audio recording (.m4a file) into .mp3 format, I got the error "Moov atom not found" on the screen. I looked the error up on the internet and found some ffmpeg commands and recover_mp4 commands, but the same error keeps appearing. My audio file may be damaged, so is there any solution to recover it? recover_mp4.exe shows some commands and claims to recover the file, but on my laptop it does not work, whether ffmpeg or recover_mp4 is used.
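
    For context, "moov atom not found" usually means the file's index was never written (for example when a recording was cut off), so ffmpeg cannot read the file at all. A hedged sketch of one common recovery path, assuming an intact reference recording from the same phone is available and the third-party untrunc tool is installed (file names are placeholders; the exact name of the repaired output depends on the untrunc version):

    # 1. Rebuild the damaged recording's structure using an intact file
    #    recorded on the same device as a reference.
    untrunc reference_ok.m4a broken_recording.m4a

    # 2. Convert the repaired file to mp3 with ffmpeg.
    ffmpeg -i repaired_recording.m4a -codec:a libmp3lame -q:a 2 output.mp3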
