Advanced search

Media (91)

Other articles (87)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two following images for a comparison.
    All it takes is to enable the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites for publishing documents of all types.
    It creates "media" items; that is, a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article;

On other sites (14147)

  • What is wrong with this ffmpeg command? I am trying to trim a video & apply an audio filter (audio disable) to it

    18 September 2022, by Khaled Mortaja

    I am working on a command that trims a video; after trimming, it applies an audio filter to some seconds of the trimmed video.

    -ss 0:00:02.000000 -i "/data/user/0/com.example.cameraapp/cache/REC4972494270640553218.mp4" -t 0:00:14.000000 -avoid_negative_ts make_zero  -af volume=enable='between(t,5,10)':volume=0 "/data/user/0/com.example.cameraapp/app_flutter/Trimmer/REC4972494270640553218_trimmed:Sep18,2022-22:24:01.mp4"

    The -af option is the audio filter.
    Does anyone have an idea what is wrong here?
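
    For reference, a minimal shell-level sketch of the usual shape of such a command (file names here are placeholders, not the asker's paths; this is the standard form, not a confirmed fix for the question above). The whole -af expression is quoted so that a shell, or a wrapper library splitting an argument string, does not break it apart:

    ffmpeg -ss 0:00:02.000000 -i input.mp4 -t 0:00:14.000000 -avoid_negative_ts make_zero -af "volume=enable='between(t,5,10)':volume=0" output_trimmed.mp4

    This mutes the audio between t=5 s and t=10 s of the trimmed output. When a wrapper passes the arguments as one string, the unquoted filter expression and the colons in the output file name are worth checking first.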

  • How to send ffmpeg AVPacket through WebRTC (using libdatachannel)

    14 November 2022, by mike

    I'm encoding a video frame with the ffmpeg libraries, generating an AVPacket with compressed data.

    Thanks to some recent advice here on S/O, I am trying to send that frame over a network using the WebRTC library libdatachannel, specifically by adapting the example here:

    https://github.com/paullouisageneau/libdatachannel/tree/master/examples/streamer

    I am seeing problems inside h264rtppacketizer.cpp (part of the library, not the example), which are almost certainly due to how I'm providing the sample data.
    (I don't think this is anything to do with libdatachannel specifically; it will be an issue with what I'm sending.)

    The example code reads each encoded frame from a file and populates a sample by setting the contents of the sample to the contents of the file:

    sample = *reinterpret_cast<std::vector<std::byte> *>(&fileContents);

    sample is just a std::vector<std::byte>;

    I have naively copied the contents of the AVPacket->data pointer into the sample vector:

    sample.resize(pkt->size);
    memcpy(sample.data(), pkt->data, pkt->size * sizeof(std::byte));

    but the packetizer is falling over when trying to get length values out of that data.
    Specifically, in the following code, the first iteration gets a length of 1, but the second, looking up index 5, gives 1119887324. This is way too big for my data, which is only 3526 bytes (the whole frame is a single colour, so it is likely to be small once encoded):

    while (index < message->size()) {
        assert(index + 4 < message->size());
        auto lengthPtr = (uint32_t *)(message->data() + index);
        uint32_t length = ntohl(*lengthPtr);
        auto naluStartIndex = index + 4;
        auto naluEndIndex = naluStartIndex + length;
        assert(naluEndIndex <= message->size());

        auto begin = message->begin() + naluStartIndex;
        auto end = message->begin() + naluEndIndex;
        nalus->push_back(std::make_shared<NalUnit>(begin, end));
        index = naluEndIndex;
    }

    Here is a dump of

    uint32_t length = ntohl(*lengthPtr);

    for the first few elements of the message (*lengthPtr in parentheses):

    [2022-03-29 15:12:01.182] [info] index 0: 1  (16777216)
    [2022-03-29 15:12:01.183] [info] index 1: 359  (1728118784)
    [2022-03-29 15:12:01.184] [info] index 2: 91970  (1114046720)
    [2022-03-29 15:12:01.186] [info] index 3: 23544512  (3225577217)
    [2022-03-29 15:12:01.186] [info] index 4: 1732427807  (532693607)
    [2022-03-29 15:12:01.187] [info] index 5: 1119887324  (3693068354)
    [2022-03-29 15:12:01.188] [info] index 6: 3223313413  (98312128)
    [2022-03-29 15:12:01.188] [info] index 7: 534512896  (384031)
    [2022-03-29 15:12:01.188] [info] index 8: 3691315291  (1526728156)
    [2022-03-29 15:12:01.189] [info] index 9: 83909537  (2707095557)
    [2022-03-29 15:12:01.189] [info] index 10: 6004992  (10574592)
    [2022-03-29 15:12:01.190] [info] index 11: 1537277952  (41307)
    [2022-03-29 15:12:01.190] [info] index 12: 2701131779  (50331809)
    [2022-03-29 15:12:01.192] [info] index 13: 768  (196608)

    (I know I should post a complete sample, I am working on it)
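
    Those values are what you would expect if 4-byte big-endian length prefixes (AVCC framing, which Separator::Length assumes) are being read from a buffer that actually contains Annex B start codes (00 00 00 01 ...), the framing most ffmpeg encoders emit in AVPacket->data. A minimal diagnostic sketch (the helper name is hypothetical, not part of either library) to guess the framing of a packet:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical helper: guess whether an encoded H.264 packet uses
    // Annex B start codes or AVCC 4-byte big-endian length prefixes.
    static void inspectFraming(const uint8_t *data, int size) {
        // Annex B: NAL units are separated by 00 00 01 or 00 00 00 01.
        if (size >= 4 && data[0] == 0 && data[1] == 0 &&
            (data[2] == 1 || (data[2] == 0 && data[3] == 1))) {
            std::printf("Annex B start code found\n");
            return;
        }
        // AVCC: the leading 32-bit value should be a plausible NAL unit length.
        if (size >= 4) {
            uint32_t len = (uint32_t(data[0]) << 24) | (uint32_t(data[1]) << 16) |
                           (uint32_t(data[2]) << 8) | uint32_t(data[3]);
            std::printf("Leading 32-bit value %u %s a plausible AVCC length\n",
                        len, len <= uint32_t(size) - 4 ? "is" : "is NOT");
        }
    }

    The fact that a raw dump of pkt->data is readable by ffprobe as .h264 (see the bullet below) also points at Annex B, since that raw format is start-code based.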

    • I am fairly sure I am just missing something basic, e.g. am I supposed to do something with the AVPacket side_data? Does AVPacket include or lack some header info?

    • If I just fwrite the pkt->data for a single frame to disk, I can read the codec information with ffprobe:

    Input #0, h264, from 'encodedOut.h264':
      Duration: N/A, bitrate: N/A
      Stream #0:0: Video: h264 (Constrained Baseline), yuv420p(progressive), 1280x720, 30 tbr, 1200k tbn

    Update: This issue is solved by changing the H264RtpPacketizer separator setting from H264RtpPacketizer::Separator::Length to H264RtpPacketizer::Separator::LongStartSequence. Many thanks to the author of libdatachannel, paullouisageneau (see answer below).
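
    In code, that change amounts to constructing the packetizer with the start-code separator. A minimal sketch, assuming the streamer example's rtpConfig variable (the exact constructor arguments can vary between libdatachannel versions):

    auto packetizer = std::make_shared<rtc::H264RtpPacketizer>(
        rtc::H264RtpPacketizer::Separator::LongStartSequence, // was Separator::Length
        rtpConfig);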
    &#xA;

    I have issues related to settings for the libx264 encoder, but I can happily encode with h264_nvenc and h264_mf.

  • How to correctly show video with transparency in Qt with OpenCV + FFmpeg

    11 April 2022, by TheEnigmist

    I'm trying to show a video with transparency in a Qt6 application using OpenCV + FFMPEG.
    These are the tool versions:

    • Win 11
    • Qt 6.3.0
    • OpenCV 4.5.5 (built with CMake)
    • FFMPEG 2022-04-03-git-1291568c98-full_build-www.gyan.dev

    I've used a base .mov video with transparency as a test (link provided below).
    First of all, I converted the .mov video to a .webm (VP9) video, and the conversion output shows that the alpha channel remains:

    ffmpeg -i '.\Retro Bars.mov' -c:v libvpx-vp9 -crf 30 -b:v 0 output.webm

    Input #0, mov,mp4,m4a,3gp,3g2,mj2,
        ...
        Stream #0:0[0x1](eng): Video: qtrle (rle  / 0x20656C72), argb(progressive),
        ...

    Output #0, webm,
       ...
       Stream #0:0(eng): Video: vp9, yuva420p(tv, progressive),
       ...

    But when I show the info of the output file with ffmpeg, it appears to have lost the alpha channel:

    ffmpeg -i .\output.webm

    Input #0, matroska,webm,
        ...
        Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv, progressive),
        ...
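
    A hedged note on this probe: WebM stores VP9 alpha as a separate side stream ("alpha mode"), and ffmpeg's built-in vp9 decoder ignores it, so the yuv420p report above does not by itself prove the alpha was lost. Forcing the libvpx decoder on the input makes the alpha visible when probing:

    ffmpeg -c:v libvpx-vp9 -i .\output.webm

    With the libvpx-vp9 decoder selected, the stream should be reported as yuva420p.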

    If I open output.webm with OBS, it is shown correctly without a background, as shown in the picture (obs_load).

    If I try to open it with OpenCV + FFMPEG, it shows a black background under the bars, as shown in the picture (Qt_out).

    This is how I load the video in Qt:

    cv::VideoCapture capture;
    capture.open(filename, cv::CAP_FFMPEG);
    capture.set(cv::CAP_PROP_CONVERT_RGB, false); // try forcing load alpha channel
    ... // in a thread
    while (capture.read(frame)) {
        qDebug() << "c" << frame.channels() << "t" << frame.type() << "d" << frame.depth(); // output: c 3 t 16 d 0
        cv::cvtColor(frame, frame, cv::COLOR_BGR2RGBA); // useless since no alpha channel is detected
        img = QImage(frame.data, frame.cols, frame.rows, QImage::Format_RGBA8888);
        emit processedImage(img); // to show image in a QLabel with QPixmap::fromImage(img)
    }

    I think the problem is that OpenCV doesn't detect the alpha channel when loading the video, since I can load it correctly in other players (OBS, HTML5, etc.).

    What am I doing wrong in this whole process of showing this video with transparency in Qt?
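
    One avenue worth trying (an assumption on my part, not a confirmed fix): point OpenCV's FFmpeg backend at the libvpx-vp9 decoder, which does decode VP9 alpha, via the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable before opening the capture. A minimal sketch in the Qt context:

    #include <QtGlobal>
    #include <opencv2/videoio.hpp>

    // Assumption: this OpenCV build's FFmpeg backend honours
    // OPENCV_FFMPEG_CAPTURE_OPTIONS ("key;value" pairs separated by '|').
    qputenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "vcodec;libvpx-vp9");

    cv::VideoCapture capture(filename, cv::CAP_FFMPEG);
    capture.set(cv::CAP_PROP_CONVERT_RGB, false); // ask the backend not to force 3-channel BGR

    Whether the frames then arrive with four channels still depends on the OpenCV version; checking frame.channels() after this change shows whether the alpha made it through.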

    EDIT: Added a Dropbox link with the test video + ffmpeg outputs: sample items
