Other articles (14)

  • Authorizations overridden by plugins

    27 April 2010, by Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Enhancing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and templates ("squelettes"). The squelettes define where information is placed on the page, defining a specific use of the platform, while the themes define the overall graphic design.
    Anyone can propose a new graphic theme or squelette and make it available to the community.

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (3530)

  • FFMPEG mosaic/side-by-side-compositing from simultaneous DirectShow input devices

    9 June 2013, by timlukins

    This is what I'm trying to do:

    ffmpeg.exe -y \
    -f dshow -i video="Microsoft LifeCam Cinema" \
    -f dshow -i video="Microsoft LifeCam VX-2000" \
    -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" \
    -map "[fileout]" -vcodec libx264 -f flv out.flv

    Basically, I have 2 webcams and I would like to combine them into a single video file in which the frames are 2x1 in size, with the frame from one camera on the left and the other on the right.

    In other words, what might be termed "mosaic-ing" or "side-by-side compositing". This is not concatenation - i.e. one file after the other (so not using the concat filter).

    I've gleaned that this use of -filter_complex to pad and then position the frames appears to be the prescribed way. Indeed, when I test this with files, like so:

    ffmpeg.exe -y -i test1.flv -i test2.flv -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" -map "[fileout]" -vcodec libx264 -f flv testout.flv

    It works fine!
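
    (As an aside, newer FFmpeg builds can also express this side-by-side layout with the dedicated hstack filter, assuming both inputs share the same height. A simpler equivalent of the pad/overlay graph above would be:

    ffmpeg.exe -y -i test1.flv -i test2.flv -filter_complex "[0:v][1:v]hstack[fileout]" -map "[fileout]" -vcodec libx264 -f flv testout.flv

    The pad/overlay form remains the portable choice on older builds.)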

    With the "live" version however, both cameras seem to start (their lights come on) but the capture stalls.

    (Suspiciously like there is some DirectShow deadlock on the separate input device threads...)

    And so, I wonder is there some way to overcome this and force the two input streams' data to merge?

    I have also tried the extended format of the dshow input option, like so:

    -f dshow -i video="Microsoft LifeCam Cinema":video="Microsoft LifeCam VX-2000"

    But only one camera is then selected (I suspect this option is really only to enable separate video and audio streams to be combined).
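
    (Indeed, the documented purpose of that colon-separated form appears to be pairing one video device with one audio device in a single input, not opening two video devices at once. The audio device name below is just a placeholder:

    -f dshow -i video="Microsoft LifeCam Cinema":audio="Microphone (Realtek High Definition Audio)"

    )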

    I've also tried explicitly setting each input device to have the exact same frame size and rate with -f dshow -video_size 640x480 -framerate 30. No joy though. It still stalls once the camera is listed.
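
    (To be clear, those per-device options have to be placed before the -i they apply to, so the command took this form:

    ffmpeg.exe -y \
    -f dshow -video_size 640x480 -framerate 30 -i video="Microsoft LifeCam Cinema" \
    -f dshow -video_size 640x480 -framerate 30 -i video="Microsoft LifeCam VX-2000" \
    -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" \
    -map "[fileout]" -vcodec libx264 -f flv out.flv)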

    Here is the tail end of the output (with -v debug on):

    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option y (overwrite output files) with argument 1.
    Applying option v (set libav* logging level) with argument debug.
    Applying option filter_complex (create a complex filtergraph) with argument [0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout].
    Successfully parsed a group of options.
    Parsing a group of options: input file video=Microsoft LifeCam Cinema.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: video=Microsoft LifeCam Cinema.
    [dshow @ 00000000016e79a0] All info found
    [dshow @ 00000000016e79a0] Estimating duration from bitrate, this may be inaccurate
    Input #0, dshow, from 'video=Microsoft LifeCam Cinema':
     Duration: N/A, start: 1130406.072000, bitrate: N/A
       Stream #0:0, 1, 1/10000000: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 333333/10000000, 30 tbr, 10000k tbn, 30 tbc
    Successfully opened the file.
    Parsing a group of options: input file video=Microsoft LifeCam VX-2000.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: video=Microsoft LifeCam VX-2000.
    [dshow @ 00000000016e79a0] real-time buffer 101% full! frame dropped!

    EDIT: Further details from trying to fix this within the code...

    I've always understood from past Windows DirectShow work that multiple calls to CoInitialize() on the same thread are bad. See here. Perhaps I've misunderstood how FFMPEG is multi-threaded (i.e. whether each input device is on its own thread), but I thought to just try regulating the call with a guard variable (a static int com_init = 0; - this should probably be mutex-ed...).

    e.g. in libavdevice/dshow.c, in the dshow_read_header method:

    if (com_init == 0)
        CoInitialize(0);
    com_init++;

    And similarly for dshow_read_close:

    com_init--;
    if (com_init == 0)
        CoUninitialize();

    Sadly, this doesn't work. The first camera starts but the second doesn't, and the error is:

    [dshow @ 0000000000301760] Could not set video options
    video=Microsoft LifeCam VX-2000: Input/output error

    (Worth a shot. Looks like each input device is indeed on the same thread...)
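
    (For completeness: if the guard approach were revisited, the counter update itself could be made thread-safe with Win32 interlocked operations instead of a mutex. A sketch only - com_init_guarded() and com_uninit_guarded() are hypothetical helpers, not functions in FFmpeg's dshow.c:

    #include <windows.h>
    #include <objbase.h>

    /* Process-wide reference count for COM initialization. */
    static volatile LONG com_init = 0;

    static void com_init_guarded(void)
    {
        /* Only the thread that moves the count from 0 to 1 calls CoInitialize().
           Still a sketch: a complete version would also have to make other
           threads wait until CoInitialize() has actually finished. */
        if (InterlockedIncrement(&com_init) == 1)
            CoInitialize(NULL);
    }

    static void com_uninit_guarded(void)
    {
        /* Only the thread that brings the count back to 0 uninitializes COM. */
        if (InterlockedDecrement(&com_init) == 0)
            CoUninitialize();
    }

    Note, though, that CoInitialize() initializes COM per calling thread, which is another reason a process-wide guard may not behave as hoped.)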

  • ultimate video player not working with one video (Uncaught in promise)

    29 October 2018, by completeidiothereDoh

    So I have a small website, which I have yet to launch, using PHP/MySQL. One section has a video player, and I am using the Ultimate Video Player script because it easily allowed me to change some code to insert ads, etc. So far I've only added about 20 videos, mostly downloaded from archive.org. The uploads are processed using FFMPEG, specifically this command:
    exec('ffmpeg -i '.$uploadfile.' -f mp4 '.$new_flv);

    I also do some other things with ffmpeg, such as creating an animated .gif and a video thumbnail. So far everything has worked. Prior to using the current video script I was using video.js, and this issue didn't happen. Once I switched over to the new script I noticed this one video won't play. It acts like it wants to play (the spinner spins for a few seconds) but ultimately goes back to the pre-play video thumbnail rather than playing the video.

    I am experiencing this in Google Chrome, though the same thing happens in Edge. The Chrome console message reads as follows: Uncaught (in promise) DOMException: The play() request was interrupted by a call to pause().

    The video worked prior to switching to this video player. The file also works when opened in Windows. The file works when opened directly in Chrome. Plus, all the other videos work except this one. I contacted the author and was told that it sounds like an encoding issue and that he isn't very good at that sort of thing.

    I won't post the paid script's js source file here, but I am wondering whether anything I've listed suggests he is correct (although then why did it work in video.js and not UVP?) and my rudimentary ffmpeg mp4 conversion command needs some work, or else perhaps some of the research I've done on this Google Chrome console message can fix the issue. I don't believe I can make the corrections myself, but I'm curious whether anyone suspects this is really an encoding issue.
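
    If it really is an encoding issue, I wonder whether being explicit about the codec, pixel format and index placement would help - just a guess on my part, something along these lines rather than the bare -f mp4:

    ffmpeg -i input.ext -c:v libx264 -pix_fmt yuv420p -c:a aac -movflags +faststart output.mp4

    (-pix_fmt yuv420p avoids pixel formats some browsers refuse to decode, and -movflags +faststart moves the MP4 index to the front of the file so playback can begin before the whole file has downloaded.)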

    I've only managed to add about 20 videos, so far 1 in 20, or 5%, of the videos aren't working. I am worried that if I ignore a video that isn't working now, then once there are thousands of videos maybe 100 won't work, and so on. Furthermore, although it's probably mostly SERPs (since I haven't technically launched yet), the video in question has the most views, haha. So if it is attracting more organic traffic than the other videos, I would obviously like to have it working again.

    I may need to move back to video.js, but the reason I switched was that I was having a difficult time (not being an excellent coder) attempting to implement an ads system. Oh well, thanks if anyone has any suggestions or ideas.

  • How to make an MPEG-DASH MPD which starts playback in the middle of the first segment?

    18 September 2018, by ravin.wang

    Here are the steps to reproduce:

    1. Normalize an H.264 video stream

      ffmpeg -i 2.h264 -c:v libx264 -intra -r 25 -vf scale=640:360,setdar=16/9 2@25fps@intra@640x360.h264

      (*) After that, I got an H.264 stream in which all pictures are H.264 IDR frames, the fps is 25, the resolution is 640x360, and the aspect ratio is 16:9.

    2. Generate an MP4 file

      MP4Box -add 2@25fps@intra@640x360.h264:timescale=1000 -fps 25 2@25fps@intra@640x360.mp4

    3. Make fragmented DASH MP4 content, including the init MP4, the .m4s files and one .mpd file

      MP4Box -dash 5000 -frag 5000 -dash-scale 1000 -frag-rap -segment-name 'seg_second$Number$' -segment-timeline -profile live 2@25fps@intra@640x360.mp4

    4. Copy and publish all these files to a folder under an HTTPD server
    5. I want playback to start at 4s into the first segment, without displaying any frames before 4s, so I changed the .mpd file, modifying the fields "SegmentTemplate@presentationTimeOffset" and "SegmentTimeline:S@d/t", as follows:

      <?xml version="1.0"?>
      <MPD xmlns="urn:mpeg:dash:schema:mpd:2011" minBufferTime="PT1.500S" type="static" mediaPresentationDuration="PT0H0M26.000S" maxSegmentDuration="PT0H0M5.000S" profiles="urn:mpeg:dash:profile:isoff-live:2011">
       <Period duration="PT0H0M26.000S">
        <AdaptationSet segmentAlignment="true" maxWidth="640" maxHeight="360" maxFrameRate="25" par="16:9" lang="und">
         <SegmentTemplate presentationTimeOffset="4000" media="seg_second$Number$.m4s" timescale="1000" startNumber="1" initialization="seg_secondinit.mp4">
          <SegmentTimeline>
           <S d="1000" t="4000"/>
           <S d="5000" r="4"/>
          </SegmentTimeline>
         </SegmentTemplate>
         <Representation mimeType="video/mp4" codecs="avc3.64101E" width="640" height="360" frameRate="25" sar="1:1" startWithSAP="1" bandwidth="2261831"/>
        </AdaptationSet>
       </Period>
      </MPD>


    6. Play the MPD URL from the VLC player or the Edge browser: it always starts at the first frame of the first segment, and the frames between 0s and 4s are also displayed unexpectedly.

    What's wrong with my steps? Or are there any other options for it?