
Other articles (47)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behavior: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (10407)

  • ffmpeg exited with code 1: Output with label 'videooutput' does not exist in any defined filter graph, or was already used elsewhere

    12 April 2022, by ThePrime

    I'm using the https://www.npmjs.com/package/live-stream-radio module, but sometimes I get this error when using it. How could I solve it? My code:

    


    const NodeMediaServer = require('node-media-server');

const config = {
  rtmp: {
    port: 1935,
    chunk_size: 60000,
    gop_cache: true,
    ping: 30,
    ping_timeout: 60
  },
  http: {
    port: 8080,
    allow_origin: '*',
    mediaroot: 'F:/VMediaServer/mediaroot'
  },
  trans: {
    ffmpeg: 'C:/ffmpeg/bin/ffmpeg.exe',
    tasks: [
      {
        app: 'live',
        ac: 'aac',
        vc: 'libx264',
        hls: true,
        hlsFlags: '[hls_time=2:hls_list_size=3:hls_flags=delete_segments]',
        dash: true,
        dashFlags: '[f=dash:window_size=3:extra_window_size=5]'
      }
    ]
  }
};

const nms = new NodeMediaServer(config);
nms.run();
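
    That particular message is produced by ffmpeg itself when an output label requested with -map (here [videooutput]) is never created by any -filter_complex graph, or has already been consumed elsewhere. A minimal stand-alone illustration with hypothetical file names (not the module's actual command):

    # Fails with the error above: -map asks for [videooutput], but the filter
    # graph only defines a pad labelled [scaled].
    ffmpeg -i input.mp4 \
           -filter_complex "[0:v]scale=1280:720[scaled]" \
           -map "[videooutput]" -c:v libx264 broken.mp4

    # Works: the label produced by the filter graph matches the label being
    # mapped, and it is consumed exactly once.
    ffmpeg -i input.mp4 \
           -filter_complex "[0:v]scale=1280:720[videooutput]" \
           -map "[videooutput]" -c:v libx264 fixed.mp4

    The node-media-server config above does not define any filter graph, so the mismatch most likely comes from the ffmpeg command that live-stream-radio assembles internally; comparing that generated command line against the pattern above is a reasonable first step.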


    


  • How to handle errors when downloading video (using videomanifest) and make the process more robust

    5 May 2023, by Wakan Tanka

    I'm using ffmpeg to download videos from a streaming site in the following way:

    


    ffmpeg.exe -i videomanifest -codec copy video.mp4 


    


    This works, but it has some drawbacks. The problem is that sometimes my internet connection fails (or maybe there is an error on the server side, etc.), and I'm forced to restart the whole download again. I would like to make this process more robust. It would be great if ffmpeg could continuously split the video into several smaller files during the download, and if the download fails, continue (or maybe skip) from the broken part. After downloading all the parts I will concat them manually; this is no problem. Please treat the above text just as an example: any other solution that makes the downloading process more robust is welcome. Is this possible with ffmpeg?
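
    One possible way to limit the damage from a dropped connection (a sketch using ffmpeg's segment muxer, not something taken from the original post) is to have ffmpeg write numbered parts as it downloads, so a failure only loses the part that was in progress:

    ffmpeg.exe -i videomanifest -codec copy -f segment -segment_time 60 -reset_timestamps 1 part_%03d.mp4

    After a failure the download can be restarted, optionally with -ss placed before -i to seek past the material already saved (assuming the manifest/server supports seeking), and the surviving parts concatenated afterwards as planned.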

    


    Here is an example of the videomanifest (this is not a fully functional file; important URL parts were deleted for privacy reasons, so please treat it as an example for demonstration purposes):

    


    <MPD type="static" maxSegmentDuration="PT5.000S" profiles="urn:mpeg:dash:profile:isoff-live:2011" mediaPresentationDuration="PT0H0M4961.472S" minBufferTime="PT4.500S" xmlns="urn:mpeg:DASH:schema:MPD:2011">
      <BaseURL>https://........................</BaseURL>
      <Period>
        <AdaptationSet contentType="audio" mimeType="audio/mp4" segmentAlignment="true" bitstreamSwitching="true" codecs="mp4a.40.2">
          <ContentProtection schemeIdUri="urn:mpeg:dash:sea:2012">
          </ContentProtection>
          <Label>EnhancedAudio</Label>
          <SegmentTemplate timescale="44100" initialization="videotranscode?provider=Spo&amp;docId=https%3a%2...................">
            <SegmentTimeline>
              <S d="221184" r="988"/>
              <S d="49939"/>
            </SegmentTimeline>
          </SegmentTemplate>
          <Representation audioSamplingRate="44100" bandwidth="128000" codecs="mp4a.40.2"/>
          <Representation audioSamplingRate="44100" bandwidth="192000" codecs="mp4a.40.2"/>
        </AdaptationSet>
        <AdaptationSet contentType="audio" mimeType="audio/mp4" segmentAlignment="true" bitstreamSwitching="true" codecs="mp4a.40.2">
          <ContentProtection schemeIdUri="urn:mpeg:dash:sea:2012">
          </ContentProtection>
          <Label>OriginalAudio</Label>
          <SegmentTemplate timescale="44100" initialization="videotranscode?provider=Spo&amp;docId=https%3a%2f%2fstu">
            <SegmentTimeline>
              <S d="221184" r="988"/>
              <S d="49939"/>
            </SegmentTimeline>
          </SegmentTemplate>
          <Representation audioSamplingRate="44100" bandwidth="128000" codecs="mp4a.40.2"/>
          <Representation audioSamplingRate="44100" bandwidth="192000" codecs="mp4a.40.2"/>
        </AdaptationSet>
        <AdaptationSet contentType="video" mimeType="video/mp4" segmentAlignment="true" bitstreamSwitching="true" maxFrameRate="82690000/5512747" maxWidth="1920" maxHeight="1080" codecs="avc1.4d0028">
          <ContentProtection schemeIdUri="urn:mpeg:dash:sea:2012">
          </ContentProtection>
          <SegmentTemplate timescale="15000" initialization="videotranscode?provider=Spo&amp;docId=https............">
            <SegmentTimeline>
              <S d="74999" r="991"/>
              <S d="22080"/>
            </SegmentTimeline>
          </SegmentTemplate>
          <Representation width="426" height="240" frameRate="82690000/5512747" bandwidth="144000" codecs="avc1.4d401e"/>
          <Representation width="852" height="480" frameRate="82690000/5512747" bandwidth="480000" codecs="avc1.4d001f"/>
          <Representation width="1280" height="720" frameRate="82690000/5512747" bandwidth="1081600" codecs="avc1.4d001f"/>
          <Representation width="1920" height="1080" frameRate="82690000/5512747" bandwidth="2428800" codecs="avc1.4d0028"/>
        </AdaptationSet>
      </Period>
    </MPD>


  • Why are there multiple timescales specified in an MP4/AVC container file?

    24 June 2021, by Lexx32117

    I am currently parsing an MPEG-DASH stream initialization segment (generated by FFMPEG) and I noticed that the timescale is specified in multiple different places in my file:

    • In the movie header box (mvhd): 1000
    • In the media header box of my video track (mdhd): 15360
    • In the AVC configuration box (avcC), more precisely in the VUI section of the sequence parameter set NAL unit: 60

    Why is it specified in so many different places? Why do they have different values? Is there a hierarchy in these values? For example, does 60 override 15360, and 15360 override 1000?


    Here's the command I used to generate the file I am looking at:


    ffmpeg -f v4l2 -pixel_format yuyv422 -vcodec rawvideo -framerate 30 -video_size 640x360 -i /dev/video0 \
        -f dash -remove_at_exit false -use_template true -use_timeline true -streaming true -window_size 5 -extra_window_size 5 -seg_duration 5 -vcodec libx264 -b:v 1M -maxrate 1M -bufsize 2M -pix_fmt yuv420p -r 30 -s 640x360 -aspect 16:9 /var/www/html/media/live.mpd

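
    The three values are clocks for different layers of the file rather than overrides of one another. A rough worked example, assuming the 30 fps stream produced by the command above (the exact interpretation is an assumption, not taken from the original post):

    # mvhd timescale 1000  -> movie-level clock: durations in the movie header
    #                         are counted in 1/1000 s (milliseconds).
    # mdhd timescale 15360 -> track-level clock: 15360 = 512 * 30, so each video
    #                         sample lasts 512 ticks, i.e. 30 frames per second.
    # VUI time_scale 60    -> decoder timing hint in the SPS: with
    #                         num_units_in_tick = 1, frame rate =
    #                         time_scale / (2 * num_units_in_tick) = 60 / 2 = 30 fps.
    #
    # The track-level clock is what ffprobe reports as the stream time base
    # (the segment name below is hypothetical; adjust to the dash muxer's output):
    ffprobe -v error -select_streams v:0 \
            -show_entries stream=time_base \
            /var/www/html/media/init-stream0.m4s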