Advanced search

Media (0)

Keyword: - Tags -/optimisation

No media matching your criteria is available on this site.

Other articles (74)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Supported formats

    28 January 2010, by

    The following commands provide information about the formats and codecs handled by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Supported input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)
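
    As a quick, hedged illustration of the commands quoted in this excerpt (the grep filter below is only an example of narrowing the output, not part of the original article):

    # List every codec and container format this ffmpeg build supports
    ffmpeg -codecs
    ffmpeg -formats

    # Example: check whether the h264 and theora codecs mentioned above are present
    ffmpeg -codecs | grep -Ei 'h264|theora'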

On other sites (14871)

  • FFplay: How to manually select video quality while playing an MPD stream?

    19 August 2023, by Brian_wu

    I used ffplay to play an MPD stream, and it worked.

    


    Here is the MPD file:

    


    <?xml version="1.0" encoding="utf-8"?>
    <MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="static" mediaPresentationDuration="PT1M8.7S" maxSegmentDuration="PT5.0S" minBufferTime="PT12.5S">
        <ProgramInformation>
        </ProgramInformation>
        <ServiceDescription>
        </ServiceDescription>
        <Period start="PT0.0S">
            <AdaptationSet contentType="video" startWithSAP="1" segmentAlignment="true" bitstreamSwitching="true" frameRate="24000/1001" maxWidth="1280" maxHeight="720" par="16:9">
                <Representation mimeType="video/mp4" codecs="avc1.4d401f" bandwidth="10237" width="480" height="270" sar="1:1">
                    <SegmentTemplate timescale="24000" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="1">
                        <SegmentTimeline>
                            <S t="0" d="150150" r="9" />
                            <S d="149149" />
                        </SegmentTimeline>
                    </SegmentTemplate>
                </Representation>
                <Representation mimeType="video/mp4" codecs="avc1.4d401f" bandwidth="60882" width="1280" height="720" sar="1:1">
                    <SegmentTemplate timescale="24000" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="1">
                        <SegmentTimeline>
                            <S t="0" d="150150" r="9" />
                            <S d="149149" />
                        </SegmentTimeline>
                    </SegmentTemplate>
                </Representation>
            </AdaptationSet>
            <AdaptationSet contentType="audio" startWithSAP="1" segmentAlignment="true" bitstreamSwitching="true" lang="und">
                <Representation mimeType="audio/mp4" codecs="mp4a.40.2" bandwidth="128000" audioSamplingRate="32000">
                    <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2" />
                    <SegmentTemplate timescale="32000" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="1">
                        <SegmentTimeline>
                            <S t="0" d="159744" />
                            <S d="160768" r="11" />
                            <S d="111915" />
                        </SegmentTimeline>
                    </SegmentTemplate>
                </Representation>
                <Representation mimeType="audio/mp4" codecs="mp4a.40.2" bandwidth="128000" audioSamplingRate="44100">
                    <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2" />
                    <SegmentTemplate timescale="44100" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="1">
                        <SegmentTimeline>
                            <S t="0" d="220160" />
                            <S d="221184" r="11" />
                            <S d="158713" />
                        </SegmentTimeline>
                    </SegmentTemplate>
                </Representation>
            </AdaptationSet>
        </Period>
    </MPD>


    This DASH stream contains two video representations (480p and 720p) and two audio tracks. Playback always starts with the low-quality video (480p). I want to switch the video to the high-quality 720p representation while it is playing. What should I do?
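
    A hedged aside, not from the original question: as far as I know ffplay does not switch DASH representations adaptively; it exposes each representation as a separate stream, so plain stream selection is one workaround (the URL below is a placeholder):

    # Start on the second video stream, i.e. the 720p representation in the MPD above
    ffplay -vst v:1 http://example.com/stream.mpd

    # While playing, ffplay's 'v' key cycles through the available video streams
    # and 'a' cycles through the audio streams.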


  • Bash: Create a copy of music files in a different format and folder

    13 May 2021, by Brett Sjoholm

    I'm trying to create a bash script to automate finding my FLAC files and creating an ALAC copy of them in a separate folder, just so I have my little iTunes folder. I want to automate this because there are so many of them.


    So I find my FLAC folders within my Eminem folder...


    :~$ find /mnt/music/Eminem -type d -name *FLAC
    /mnt/music/Eminem/2009 Relapse FLAC
    /mnt/music/Eminem/1996 Infinite FLAC
    /mnt/music/Eminem/1999 The Slim Shady LP FLAC
    /mnt/music/Eminem/2000 Marshall Mathers LP FLAC


    Now instead of going into each folder and converting manually using something like


        ffmpeg -i track.flac -acodec alac track.m4a ...


    How do I, within a bash script, take these multiple folders and create an ALAC copy of their contents in /mnt/music/iTunes using FFmpeg?


    The new folder will be...


    /mnt/music/iTunes/Eminem/2009 Relapse ALAC/track.m4a


    All FLAC folders have FLAC at the end, in the same folder structure.


    /mnt/music/Artist/Year Album FLAC


    I understand most of the locating, copying, and converting as manual terminal commands, but when it comes to putting it into a bash script, I don't understand how to take the output of one command and use it in another (the list of folders, for example), or how to automate doing all the steps for each of them.


    Kind of long-winded, but any help would be much appreciated, even some videos you'd recommend for learning.
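
    A minimal, untested sketch of one way to script this, assuming the /mnt/music/Artist/Year Album FLAC layout and the /mnt/music/iTunes destination described above (the -nostdin, -n and -vn flags are additions to keep the sketch simple and safe, not part of the original commands):

    #!/usr/bin/env bash
    # Untested sketch: mirror every "Year Album FLAC" folder as a
    # "Year Album ALAC" folder under /mnt/music/iTunes/Artist/.
    src_root="/mnt/music"
    dst_root="/mnt/music/iTunes"

    # Album folders are assumed to sit exactly at Artist/Album depth.
    find "$src_root" -mindepth 2 -maxdepth 2 -type d -name '*FLAC' -print0 |
    while IFS= read -r -d '' album_dir; do
        artist=$(basename "$(dirname "$album_dir")")   # e.g. Eminem
        album=$(basename "$album_dir")                 # e.g. 2009 Relapse FLAC
        out_dir="$dst_root/$artist/${album% FLAC} ALAC"
        mkdir -p "$out_dir"

        # Convert each .flac track to ALAC in an .m4a container.
        for track in "$album_dir"/*.flac; do
            [ -e "$track" ] || continue
            name=$(basename "${track%.flac}")
            # -nostdin keeps ffmpeg from eating the find output,
            # -n skips existing files, -vn drops embedded cover art.
            ffmpeg -nostdin -n -i "$track" -vn -c:a alac "$out_dir/$name.m4a"
        done
    done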


  • FFmpeg: Specifying Output Stream Type When Combining Multiple Filters

    7 May 2021, by Leonard Bedner

    I currently have 3 separate ffmpeg commands that do the following:


    1. Overlay a watermark on a video: ffmpeg -i samplegreen.webm -i foregrounds/myimage.png -r 30 -filter_complex "overlay=(W-w)/2:H-h" -af "adelay=700" output.mp4

    2. Overlay the results of 1) onto a beach video: ffmpeg -i backgrounds/beachsunsetmp4.mp4 -i output.mp4 -filter_complex "[1:v]chromakey=0x005d0b:0.1485:0.03[ckout];[0:v][ckout]overlay[o]" -map [o] -map 1:a -shortest somefolder/sample_video.mp4

    3. Merge the audio of the results of 2) with another audio file: ffmpeg -i somefolder/sample_video.mp4 -i backgrounds/beachsunsetmp4.mp3 -filter_complex '[0:a][1:a]amerge=inputs=2[a]' -map 0:v -map '[a]' -c:v copy -ac 2 -shortest anotherfolder/sample_video.mp4


    Now, this all works as intended; however, I was looking into combining them all into a single command with all the filters, like so:


    ffmpeg -i samplegreen.webm -i foregrounds/myimage.png -r 30 -i backgrounds/beachsunsetmp4.mp4 -i backgrounds/beachsunsetmp4.mp3 -filter_complex \
    "[0]overlay=(W-w)/2:H-h[output_1]; \
     [output_1]chromakey=0x005d0b:0.1485:0.03[ckout]; \
     [2:v][ckout]overlay[output_2]; \
     [output_2][3:a] amerge=inputs=2 [output_3]" \
    -af "adelay=700" -map [output_3] shortest final.mp4


    It fails with the following error (Media type mismatch between the 'Parsed_overlay_2' filter output pad 0 (video) and the 'Parsed_amerge_3' filter input pad 0 (audio)):


    ffmpeg version 4.3.2 Copyright (c) 2000-2021 the FFmpeg developers
      built with Apple clang version 11.0.0 (clang-1100.0.33.17)
      configuration: --prefix=/usr/local/Cellar/ffmpeg/4.3.2_1 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox
      libavutil      56. 51.100 / 56. 51.100
      libavcodec     58. 91.100 / 58. 91.100
      libavformat    58. 45.100 / 58. 45.100
      libavdevice    58. 10.100 / 58. 10.100
      libavfilter     7. 85.100 /  7. 85.100
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  7.100 /  5.  7.100
      libswresample   3.  7.100 /  3.  7.100
      libpostproc    55.  7.100 / 55.  7.100
    Input #0, matroska,webm, from 'samplegreen.webm':
      Metadata:
        encoder         : Chrome
      Duration: N/A, start: 0.000000, bitrate: N/A
        Stream #0:0(eng): Video: vp8, yuv420p(progressive), 1280x720, SAR 1:1 DAR 16:9, 1k tbr, 1k tbn, 1k tbc (default)
        Metadata:
          alpha_mode      : 1
        Stream #0:1(eng): Audio: opus, 48000 Hz, mono, fltp (default)
    Input #1, png_pipe, from 'foregrounds/myimage.png':
      Duration: N/A, bitrate: N/A
        Stream #1:0: Video: png, rgba(pc), 350x86, 25 tbr, 25 tbn, 25 tbc
    Input #2, mov,mp4,m4a,3gp,3g2,mj2, from 'backgrounds/beachsunsetmp4.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41
        creation_time   : 2021-02-16T18:24:40.000000Z
      Duration: 00:00:32.53, start: 0.000000, bitrate: 3032 kb/s
        Stream #2:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1280x720, 3027 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
        Metadata:
          creation_time   : 2021-02-16T18:24:40.000000Z
          handler_name    : ?Mainconcept Video Media Handler
          encoder         : AVC Coding
    [mp3 @ 0x7f86cf809000] Estimating duration from bitrate, this may be inaccurate
    Input #3, mp3, from 'backgrounds/beachsunsetmp4.mp3':
      Metadata:
        date            : 2021-02-18 06:49
        id3v2_priv.XMP  : <?xpacket begin="\xef\xbb\xbf" id="W5M0MpCehiHzreSzNTczkc9d"?>\x0a\x0a \x0a  s
        Stream #3:0: Audio: mp3, 48000 Hz, stereo, fltp, 128 kb/s
    [Parsed_overlay_2 @ 0x7f86cd4039c0] Media type mismatch between the 'Parsed_overlay_2' filter output pad 0 (video) and the 'Parsed_amerge_3' filter input pad 0 (audio)
    [AVFilterGraph @ 0x7f86cd402a40] Cannot create the link overlay:0 -> amerge:0
    Error initializing complex filters.
    Invalid argument


    As far as I can tell, the issue is that the amerge filter wants 2 audio streams. Normally, I could take the input stream argument (which is a video) and make it use the audio by doing something like [0:a][1:a]amerge=inputs=2[results]. However, since my input stream is the output of a preceding filter, that doesn't seem to work (i.e. [output_2:a]). It bombs out with:


    [matroska,webm @ 0x7fecca000000] Invalid stream specifier: output_2:a.
        Last message repeated 1 times
    Stream specifier 'output_2:a' in filtergraph description [0]overlay=(W-w)/2:H-h[output_1];      [output_1]chromakey=0x005d0b:0.1485:0.03[ckout];      [2:v][ckout]overlay[output_2];      [output_2:a][3:a] amerge=inputs=2 [output_3] matches no streams.


    So, all of that said... Is there a way to specify that I'd like to use the audio stream from the output of a preceding filter? Or any other way to combine all of these filters into a single command?


    Thanks.


    Any help would be greatly appreciated!
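
    A hedged sketch of one possible fix, not from the original thread: the output pad of a video filter carries no audio, so the audio for amerge has to come straight from the input files; adelay can move into the filtergraph and the merged audio is then mapped explicitly (labels such as [wm], [vout] and [aout] are purely illustrative, and the command is untested):

    ffmpeg -i samplegreen.webm -i foregrounds/myimage.png \
           -i backgrounds/beachsunsetmp4.mp4 -i backgrounds/beachsunsetmp4.mp3 \
           -filter_complex \
           "[0:v][1:v]overlay=(W-w)/2:H-h[wm]; \
            [wm]chromakey=0x005d0b:0.1485:0.03[ckout]; \
            [2:v][ckout]overlay[vout]; \
            [0:a]adelay=700[adel]; \
            [adel][3:a]amerge=inputs=2[aout]" \
           -map "[vout]" -map "[aout]" -ac 2 -r 30 -shortest final.mp4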
