
Other articles (9)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including:
    - critique of existing features and functions
    - articles contributed by developers, administrators, content producers and editors
    - screenshots to illustrate the above
    - translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid reference to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Installation in farm mode

    4 February 2011

    Farm mode lets you host several MediaSPIP sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge, since SPIP's usual private area is no longer used.
    To begin with, you must have installed the same files as the installation (...)

On other websites (3910)

  • ffmpeg concat demuxer dropping audio after first clip

    6 September 2020, by marcman

    I'm trying to concatenate a video collection that should all be the same type and format. It seems to work as expected when the original sources are the same type, but when they're different types the demuxer drops audio. I understand that the demuxer requires all inputs to have the same codecs, but I believe I am doing that already.

    


    This is my workflow (pseudocode with a Python-like for loop):

    


    for i, video in enumerate(all_videos):
        # Call command for transcoding and filtering
        # I allow this command to be called on mp4, mov, and avi files
        # The point of this filter is:
        # (1) to superimpose a timestamp on the bottom right of the video
        # (2) to scale and pad the videos to a common output resolution
        #     (the specific numbers below are just copied from a video I ran,
        #     but they are filled in automatically for each given video by the
        #     rest of my script)
        # (3) to transcode all videos to the same common format
        ffmpeg \
            -y \
            -loglevel quiet \
            -stats \
            -i video_<i>.{mp4, mov, avi} \
            -vcodec libx264 \
            -acodec aac \
            -vf "scale=607:1080, pad=width=1920:height=1080:x=656:y=0:color=black, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-08-27 16:42:26')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1263-text_w-2*max_glyph_w" \
            tmpdir/video_<i>.mp4

    # create file_list.txt, e.g.
    #
    # file '/abspath/to/tmpdir/video_1.mp4'
    # file '/abspath/to/tmpdir/video_2.mp4'
    # file '/abspath/to/tmpdir/video_3.mp4'
    # ...

    ffmpeg \
        -f concat \
        -safe 0 \
        -i file_list.txt \
        -c copy \
        all_videos.mp4


    In my test case, my inputs are 3 videos in this order:

    1. a camcorder video output in H.264+AAC in an mp4
    2. an iPhone video in mov format
    3. an iPhone video in mp4 format

    When I review each of the intermediate mp4-transcoded videos in tmpdir, they all play back audio and video just fine and the filtering works as expected. However, when I review the final concatenated output, only the first clip (the camcorder video) has sound. When all the videos are from the camcorder, there is no audio issue; they all have sound.


    When I output ffmpeg warnings and errors, the only thing that shows up is an expected warning about the timestamp:


    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909504, current: 5430298; changing to 5909505. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909505, current: 5431322; changing to 5909506. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909506, current: 5432346; changing to 5909507. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909507, current: 5433370; changing to 5909508. This may result in incorrect timestamps in the output file.
    ...


    What might I be doing wrong here? I'm testing in both the Ubuntu 20.04 "Videos" application and VLC Player, and both demonstrate the same problem. I'd prefer to use the demuxer if possible for speed, as re-encoding during concatenation is quite expensive.


    NOTE: This is a different issue than the one laid out here, in which some of the videos had no audio. In my case, all videos have both video and audio.
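    A possible lead (an assumption, not confirmed in the post): the concat demuxer needs every input's audio parameters to match exactly, not just the codec, and iPhone recordings often use a different sample rate (44.1 kHz) than camcorders (48 kHz). One sketch worth trying is pinning the sample rate and channel layout in the intermediate transcode; the `-ar`/`-ac` values below are illustrative, not taken from the post:

    ```shell
    # Hypothetical tweak to the intermediate transcode step: force identical
    # audio parameters in every tmpdir/video_<i>.mp4 so the concat demuxer
    # sees uniform streams. 48000 Hz / 2 channels are assumed values; use
    # whatever rate and layout you standardize on.
    ffmpeg \
        -y \
        -i video_<i>.{mp4, mov, avi} \
        -vcodec libx264 \
        -acodec aac \
        -ar 48000 \
        -ac 2 \
        tmpdir/video_<i>.mp4
    ```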


  • Displaying YUV420 data using an OpenGL ES shader is too slow

    28 November 2012, by user1278982

    I have a child thread, A, that decodes video using ffmpeg on an iPhone 3GS, and another thread, B, that displays the YUV data. In thread B, I use glTexSubImage2D to upload the Y, U and V textures and then convert the YUV data to RGB in a shader, but the frame rate in the decode thread is only 15 fps. Why?
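    For context, the per-pixel work such a shader does is just the BT.601 YUV-to-RGB transform. A minimal Python sketch of that arithmetic (illustrative only; the post's actual shader source is not shown, and these exact coefficients are an assumption):

    ```python
    def yuv_to_rgb(y, u, v):
        """Convert one normalized YUV pixel (BT.601, all values in [0, 1])
        to RGB -- the same arithmetic a fragment shader does per pixel."""
        r = y + 1.402 * (v - 0.5)
        g = y - 0.344136 * (u - 0.5) - 0.714136 * (v - 0.5)
        b = y + 1.772 * (u - 0.5)
        # Clamp to the displayable range, as the GPU would.
        clamp = lambda x: max(0.0, min(1.0, x))
        return clamp(r), clamp(g), clamp(b)
    ```

    Doing this on the GPU is the right call; the question is whether the texture uploads feeding it are the bottleneck.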

    Update:
    The frame size is 720 × 576.
    I also found something interesting: if I don't start the thread displaying the YUV data, the frame rate calculated in the decode thread is 22 fps, otherwise 15 fps. So I think my displaying method must be inefficient. The code is below.

    I have a callback in the decode thread:

    typedef struct _DVDVideoPicture
    {
      char *plane[4];     // Y, U, V (and alpha) plane pointers
      int iLineSize[4];   // stride of each plane in bytes
    } DVDVideoPicture;

    void YUVCallBack(void *pYUVData, void *pContext)
    {
      VideoView *view = (VideoView *)pContext;
      // Cast the incoming frame and hand it to the GL view for upload.
      [view.glView copyYUVData:(DVDVideoPicture *)pYUVData];
      [view calculateFrameRate];
    }

    The copyYUVData method extracts the Y, U and V planes separately. The following is the displaying-thread method.
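    The displaying-thread method itself is cut off, but the plane-copy step that a helper like copyYUVData has to perform can be sketched as follows (Python for illustration; the real code is Objective-C and must honor each plane's iLineSize padding before uploading with glTexSubImage2D):

    ```python
    def copy_plane(plane: bytes, linesize: int, width: int, height: int) -> bytes:
        """Repack one padded plane (linesize >= width) into a tight buffer.

        Decoders pad each row out to `linesize` bytes; a texture upload
        usually wants tightly packed rows of exactly `width` bytes.
        """
        out = bytearray()
        for row in range(height):
            start = row * linesize
            out += plane[start:start + width]
        return bytes(out)
    ```

    For YUV420p the chroma planes are copied the same way at half the width and height. A per-frame CPU copy like this is itself costly on 3GS-era hardware, which may be part of the slowdown described above.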

  • "Moov Atom not Found " while converting .m4a into .mp3 audio file?

    22 July 2021, by Comrade Shiva

    When I was converting an iPhone audio recording (a .m4a file) into .mp3 format, I got the error "moov atom not found" on the screen. I searched for this error on the internet and found some ffmpeg and recover_mp4 commands, but the same error kept appearing. It may be that my audio file is damaged, so is there any way to recover it? recover_mp4.exe shows some commands and claims to recover the file, but on my laptop it does not work, whether ffmpeg or recover_mp4 is used.
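    The error means ffmpeg could not find the moov box (the movie header) in the M4A container, which typically happens when a recording was interrupted before the file was finalized. As a quick diagnostic (a sketch, not part of the original post), you can walk the file's top-level boxes and check whether a moov box exists at all:

    ```python
    import struct

    def top_level_boxes(data: bytes):
        """Yield (type, size) for each top-level ISO-BMFF (MP4/M4A) box."""
        offset = 0
        while offset + 8 <= len(data):
            size, box_type = struct.unpack_from(">I4s", data, offset)
            if size == 1:    # 64-bit extended size follows the 8-byte header
                size = struct.unpack_from(">Q", data, offset + 8)[0]
            elif size == 0:  # box extends to the end of the file
                size = len(data) - offset
            if size < 8:     # malformed or truncated box: stop walking
                break
            yield box_type.decode("latin-1"), size
            offset += size

    def has_moov(data: bytes) -> bool:
        """True if the buffer contains a top-level moov box."""
        return any(t == "moov" for t, _ in top_level_boxes(data))
    ```

    If has_moov returns False, the header really is missing, and re-running ffmpeg on the damaged file will keep failing; only a tool that rebuilds the moov box from a reference recording made on the same device (which is what recover_mp4 attempts) can help.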
