
Other articles (99)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Customizing by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

On other sites (9689)

  • Displaying YUV420 data using Opengles shader is too slow

    28 November 2012, by user1278982

    I have a child thread A that decodes video with ffmpeg on an iPhone 3GS, and another thread B that displays the YUV data. In thread B, I use glTexSubImage2D to upload the Y, U, and V textures, then convert the YUV data to RGB in a shader, but the frame rate in the decode thread is only 15 fps. Why?

    Update:
    The frame size is 720 × 576.
    I also found something interesting: if I don't start the thread that displays the YUV data, the frame rate calculated in the decode thread is 22 fps; otherwise it is 15 fps. So I think my display method must be inefficient. The code is below.

    I have a callback in the decode thread:

    typedef struct _DVDVideoPicture
    {
      char *plane[4];    /* Y, U, V (and optional alpha) plane pointers */
      int iLineSize[4];  /* stride of each plane in bytes */
    } DVDVideoPicture;

    /* Called on the decode thread for every decoded frame. */
    void YUVCallBack(void *pYUVData, void *pContext)
    {
      VideoView *view = (VideoView *)pContext;
      [view.glView copyYUVData:(DVDVideoPicture *)pYUVData];
      [view calculateFrameRate];
    }

    The copyYUVData method extracts the Y, U, and V planes separately.
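    A minimal sketch of what such an upload-and-shader path can look like under OpenGL ES 2.0 (this is not the poster's code; uploadPlane, the texture names, and the full-range BT.601 constants are illustrative):

    #include <OpenGLES/ES2/gl.h>

    /* Full-range BT.601 YUV -> RGB, done on the GPU. Each plane is bound as
       a separate GL_LUMINANCE texture, so .r holds the sample value. */
    static const char *kFragShader =
        "precision mediump float;                            \n"
        "varying vec2 vTexCoord;                             \n"
        "uniform sampler2D texY, texU, texV;                 \n"
        "void main() {                                       \n"
        "    float y = texture2D(texY, vTexCoord).r;         \n"
        "    float u = texture2D(texU, vTexCoord).r - 0.5;   \n"
        "    float v = texture2D(texV, vTexCoord).r - 0.5;   \n"
        "    gl_FragColor = vec4(y + 1.402 * v,              \n"
        "                        y - 0.344 * u - 0.714 * v,  \n"
        "                        y + 1.772 * u,              \n"
        "                        1.0);                       \n"
        "}                                                   \n";

    /* Re-uploads one plane into an already-allocated texture.
       glTexSubImage2D replaces only the pixel data, so the driver does not
       reallocate storage every frame. ES 2.0 has no GL_UNPACK_ROW_LENGTH,
       so this assumes each plane is tightly packed
       (iLineSize[i] == plane width). */
    static void uploadPlane(GLuint tex, GLenum unit, int w, int h,
                            const char *data)
    {
        glActiveTexture(unit);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
    }

    /* For 4:2:0 data the U and V planes are half size in both dimensions. */
    static void uploadYUV420(const DVDVideoPicture *pic, int w, int h,
                             GLuint texY, GLuint texU, GLuint texV)
    {
        uploadPlane(texY, GL_TEXTURE0, w,     h,     pic->plane[0]);
        uploadPlane(texU, GL_TEXTURE1, w / 2, h / 2, pic->plane[1]);
        uploadPlane(texV, GL_TEXTURE2, w / 2, h / 2, pic->plane[2]);
    }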

  • ffmpeg concat demuxer dropping audio after first clip

    6 September 2020, by marcman

    I'm trying to concatenate a collection of videos that should all end up the same type and format. It works as expected when the original sources are the same type, but when they are different types the demuxer drops audio. I understand that the demuxer requires all inputs to have the same codecs, and I believe I am already ensuring that.
    This is my workflow (pseudocode with a Python-like for loop):

    for i, video in enumerate(all_videos):
        # Call command for transcoding and filtering.
        # I allow this command to be called on mp4, mov, and avi files.
        # The point of this filter is:
        # (1) to superimpose a timestamp on the bottom right of the video
        # (2) to scale and pad the videos to a common output resolution
        #     (the specific numbers below are copied from one video I ran;
        #     they are filled in automatically for each given video by the
        #     rest of my script)
        # (3) to transcode all videos to the same common format
        ffmpeg \
            -y \
            -loglevel quiet \
            -stats \
            -i video_<i>.{mp4, mov, avi} \
            -vcodec libx264 \
            -acodec aac \
            -vf "scale=607:1080, pad=width=1920:height=1080:x=656:y=0:color=black, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-08-27 16:42:26')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1263-text_w-2*max_glyph_w" \
            tmpdir/video_<i>.mp4

    # create file_list.txt, e.g.
    #
    # file '/abspath/to/tmpdir/video_1.mp4'
    # file '/abspath/to/tmpdir/video_2.mp4'
    # file '/abspath/to/tmpdir/video_3.mp4'
    # ...

    ffmpeg \
        -f concat \
        -safe 0 \
        -i file_list.txt \
        -c copy \
        all_videos.mp4
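    One quick way to confirm the intermediates really do share codec parameters (this check is not part of the original post) is to compare their streams with ffprobe:

    # Print codec, sample rate, and channel count for every stream of each
    # intermediate clip; the concat demuxer expects these to match exactly.
    for f in tmpdir/video_*.mp4; do
        echo "$f"
        ffprobe -v error \
            -show_entries stream=codec_type,codec_name,sample_rate,channels \
            -of compact "$f"
    done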

    In my test case, my inputs are 3 videos, in this order:

    1. a camcorder video output in H.264 + AAC in an mp4
    2. an iPhone video in mov format
    3. an iPhone video in mp4 format

    When I review each of the intermediate mp4-transcoded videos in tmpdir, they all play back audio and video just fine, and the filtering works as expected. However, when I review the final concatenated output, only the first clip (the camcorder video) has sound. When all the videos are from the camcorder, there is no audio issue: they all have sound.


    When I output ffmpeg warnings and errors, the only thing that shows up is an expected warning about the timestamp:


    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909504, current: 5430298; changing to 5909505. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909505, current: 5431322; changing to 5909506. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909506, current: 5432346; changing to 5909507. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909507, current: 5433370; changing to 5909508. This may result in incorrect timestamps in the output file.
    ...


    What might I be doing wrong here? I'm testing in both the Ubuntu 20.04 "Videos" application and VLC, and both show the same problem. I'd prefer to use the demuxer if possible, for speed, as re-encoding during concatenation is quite expensive.


    NOTE: This is a different issue from the one laid out here, in which some of the videos had no audio. In my case, all videos have both video and audio.
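    A hedged guess rather than a confirmed fix: pin the audio parameters in the intermediate transcode so that sample rate and channel count match across every clip. iPhone recordings often differ from camcorder footage here, and a `-c copy` concat cannot reconcile that:

    # Same transcode step as above, with the audio forced to one common
    # layout (48 kHz stereo). The specific values are illustrative.
    ffmpeg \
        -y \
        -i video_<i>.{mp4, mov, avi} \
        -vcodec libx264 \
        -acodec aac \
        -ar 48000 \
        -ac 2 \
        tmpdir/video_<i>.mp4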


  • Using ffmpeg to build a streaming server to stream static media files (broadcast behaviour)

    15 February 2018, by MiDaa

    I've read some online articles and SO questions; most of them are about streaming MY video to a SERVER like YouTube or Twitch.

    This is a project of interest; here is what it should do:

    • Work on a Linux server
    • Serve media files (preferably multiple formats, like mp4 and mkv) to clients, perhaps over the RTP protocol
    • The server can set a specific time to start or end the streaming
    • The server can pause and resume the streaming (?)
    • Multiple clients can connect and play the stream at the same time (sounds like a basic feature)

    After some research, I found that ffmpeg is a great open-source candidate for such a project, but as a newbie in this area I'm having a tough time understanding how this whole thing works.

    As this (ffmpeg doc) states, it looks like just a one-liner command, but I don't find anything there that fits the features listed above.

    Can ffmpeg be used to achieve these? If not, I'd appreciate any suggestion on where I should be looking.

    EDIT:

    • Target devices: iPad, iPhone, and Android phones should be able to watch the stream using a web browser (assume a modern browser)
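    For browser playback on those devices, one plausible starting point (a sketch only, assuming HLS is acceptable; input.mp4 and the output path are placeholders) is ffmpeg's HLS muxer:

    # -re reads the input at its native frame rate, emulating a live feed.
    # Serve the playlist and segments from any static web server; Safari on
    # iOS plays HLS natively, and other browsers can use hls.js.
    ffmpeg \
        -re \
        -i input.mp4 \
        -c:v libx264 \
        -c:a aac \
        -f hls \
        -hls_time 4 \
        -hls_playlist_type event \
        /var/www/stream/playlist.m3u8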