
Media (39)

Keyword: - Tags -/audio

Other articles (90)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows, among other things: implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • User profiles

    12 April 2011

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

On other sites (10963)

  • avcodec/me_cmp: remove ff_me_cmp_init_static()

    3 February 2018, by Muhammad Faiz
    avcodec/me_cmp: remove ff_me_cmp_init_static()
    

    Precalculate and constify ff_square_tab.

    Reviewed-by: James Almer <jamrial@gmail.com>
    Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>

    • [DH] libavcodec/me_cmp.c
    • [DH] libavcodec/me_cmp.h
    • [DH] libavcodec/mpegvideo_enc.c
    • [DH] libavcodec/mpegvideoencdsp.c
    • [DH] libavcodec/snowenc.c
    • [DH] libavcodec/utils.c
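    For context, here is a hedged C++ sketch of the technique the commit describes (precalculating a lookup table so it can be const and needs no init function). This is illustrative only, not FFmpeg's actual code, and the real ff_square_tab layout may differ:

    // Illustrative sketch: build a squared-difference table at compile time
    // so it can be const and no runtime init function is needed.
    #include <array>
    #include <cstddef>
    #include <cstdint>

    // Hypothetical stand-in for ff_square_tab: squares of signed offsets,
    // indexed with a bias so square_tab[BIAS + d] == d * d.
    constexpr int BIAS = 256;                  // bias chosen for illustration
    constexpr std::size_t TAB_SIZE = 2 * BIAS;

    constexpr std::array<std::uint32_t, TAB_SIZE> make_square_tab()
    {
        std::array<std::uint32_t, TAB_SIZE> tab{};
        for (std::size_t i = 0; i < TAB_SIZE; ++i) {
            const int d = static_cast<int>(i) - BIAS;
            tab[i] = static_cast<std::uint32_t>(d * d);
        }
        return tab;
    }

    // Computed at compile time (C++17) and const, so nothing has to call an
    // init function such as the removed ff_me_cmp_init_static().
    constexpr auto square_tab = make_square_tab();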
  • Prevent FFmpeg from closing when a named pipe has been read completely

    21 October 2019, by Werther Berselli

    I'm building a C++ application for a university project that takes a video file (e.g. Matroska) and, using FFmpeg (by embedding commands inside std::system() calls), applies these steps:

    • Extract chunks of frames (e.g. 10 seconds per chunk) and chunks of .aac audio

    • Apply some filters to frames

    • Encode each video chunk and audio chunk with x264 and send them to a listening client over RTSP. This step is achieved using two named pipes, one for video and one for audio.

    • Receive the stream in another process and play it using ffplay (on localhost or a LAN).

    I need to divide my stream into chunks because I eventually have to satisfy real-time constraints, so I can't apply the filters to my entire input video file first and only then start streaming to the client.
    My primary problem here is that once FFmpeg has emptied the two pipes, it closes; but further chunks of video and audio still have to be piped and streamed. I need FFmpeg to keep listening on the pipes, waiting for new data.

    In bash I achieved this with the following commands.

    Start listening for an RTSP stream in one prompt:

    ffplay -rtsp_flags listen rtsp://127.0.0.1@127.0.0.1:8090

    Create a video named pipe and an audio named pipe:

    mkfifo video_pipe
    mkfifo audio_pipe

    Use this command to prevent FFmpeg from closing when the video pipe is emptied:

    exec 7<>video_pipe

    (it is sufficient to apply this to the video pipe; the audio pipe then causes no problems either)
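    For reference, the same effect can be obtained from inside a program instead of the shell: on Linux, opening a FIFO with O_RDWR succeeds immediately and keeps an end of the pipe referenced, so a reader such as ffmpeg does not see EOF while that descriptor stays open. A minimal POSIX sketch (illustrative only, reusing the pipe name from above):

    // Illustrative only: the in-process equivalent of `exec 7<>video_pipe`.
    #include <fcntl.h>     // open, O_RDWR
    #include <sys/stat.h>  // mkfifo
    #include <unistd.h>    // close
    #include <cerrno>
    #include <cstdio>      // perror

    int main()
    {
        // Create the FIFO if it does not already exist (same name as above).
        if (mkfifo("video_pipe", 0666) == -1 && errno != EEXIST) {
            perror("mkfifo");
            return 1;
        }

        // On Linux, O_RDWR on a FIFO does not block and keeps both ends alive,
        // so ffmpeg reading from it will not hit EOF while this fd is open.
        const int keep_open_fd = open("video_pipe", O_RDWR);
        if (keep_open_fd == -1) {
            perror("open");
            return 1;
        }

        // ... start ffmpeg and feed the pipe from other processes ...

        close(keep_open_fd);  // lets ffmpeg see EOF once everything is written
        return 0;
    }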

    Start the FFmpeg command:

    ffmpeg -probesize 2147483647 -re -s 1280x720 -pix_fmt rgb24 -i pipe:0 -vsync 0 -i audio_pipe -r 25 -vcodec libx264 -crf 23 -preset ultrafast -f rtsp -rtsp_transport tcp rtsp://127.0.0.1@127.0.0.1:8090 < video_pipe

    Then feed the pipes from another prompt:

    cat audiochunk.aac > audio_pipe & cat frame*.bmp > video_pipe
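    Inside a program, feeding a chunk into one of these pipes is just a file copy. A small illustrative C++ equivalent of `cat audiochunk.aac > audio_pipe` (not the original code) might look like this:

    // Illustrative only: copy one chunk file into a named pipe.
    #include <fstream>

    void feed_pipe(const char* chunk_path, const char* pipe_path)
    {
        std::ifstream in(chunk_path, std::ios::binary);
        // Opening a FIFO for writing blocks until a reader (ffmpeg) has opened it.
        std::ofstream out(pipe_path, std::ios::binary);
        out << in.rdbuf();  // stream the whole chunk into the pipe
    }

    // e.g. feed_pipe("audiochunk.aac", "audio_pipe");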

    These commands work well using three prompts. I then tried them in C++, embedding the commands in std::system() calls (on different threads); everything works, but once the ffmpeg command empties the video pipe (finishing the first chunk), it closes.
    The exec command seems to be ineffective here.
    How can I prevent FFmpeg from closing?

    I've spent two days struggling with this problem and looking at every solution I could find online. I hope I was clear despite a great headache; thanks in advance for your suggestions.

    UPDATE:
    My C++ code is something like this; I put it all in a single function on a single thread, but that doesn't change its behaviour.
    I'm on Ubuntu 18.04.2.

    void CameraThread::ffmpegJob()
    {
       std::string strvideo_length, command, timing;
       long video_length, begin_chunk, end_chunk;
       int begin_h, begin_m, begin_s, end_h, end_m, end_s;

       command = "ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 " + Configurations::file_name;
       strvideo_length = execCmd(command.c_str());
       strvideo_length.pop_back(); // remove \n character
       video_length = strToPositiveDigit(strvideo_length);

       if(video_length == -1)
       {
           std::cout << "Invalid input file";
           return;
       }

       std::system("bash -c \"rm mst-temp/mst_video_pipe\"");
       std::system("bash -c \"rm mst-temp/mst_audio_pipe\"");
       std::system("bash -c \"mkfifo mst-temp/mst_video_pipe\"");
       std::system("bash -c \"mkfifo mst-temp/mst_audio_pipe\"");
       // Keep video pipe opened
       std::system("bash -c \"exec 7<>mst-temp/mst_video_pipe\"");

       std::string rtsp_url = "rtsp://" + Configurations::my_own_used_ip + "@" + Configurations::client_ip +
               ":" + std::to_string(Configurations::port + 1);

       command = "ffmpeg -probesize 2147483647 -re -s 1280x720 -pix_fmt rgb24 -i pipe:0 "
                 "-i mst-temp/mst_audio_pipe -r 25 -vcodec libx264 -crf 23 -preset ultrafast -f rtsp "
                 "-rtsp_transport tcp " + rtsp_url + " < mst-temp/mst_video_pipe &"; // Using & so the call does not block
       std::system(command.c_str());

       begin_chunk = -1 * VIDEO_CHUNK;
       end_chunk = 0;

       // Extract the complete audio track
       command = "bash -c \"ffmpeg -i " + Configurations::file_name + " -vn mst-temp/audio/complete.aac -y\"";
       std::system(command.c_str());

       while(true)
       {
           // Define the actual video chunk (in seconds) to use, if EOF is reached, exit
           begin_chunk += (end_chunk - begin_chunk);
           if(begin_chunk == video_length)
               break;
           if(end_chunk + VIDEO_CHUNK <= video_length)
               end_chunk += VIDEO_CHUNK;
           else
               end_chunk += (video_length - end_chunk);

           // Set begin and end H, M, S variables
           begin_h = static_cast<int>(begin_chunk / 3600);
           begin_chunk -= (begin_h * 3600);
           begin_m = static_cast<int>(begin_chunk / 60);
           begin_chunk -= (begin_m * 60);
           begin_s = static_cast<int>(begin_chunk);
           end_h = static_cast<int>(end_chunk / 3600);
           end_chunk -= (end_h * 3600);
           end_m = static_cast<int>(end_chunk / 60);
           end_chunk -= (end_m * 60);
           end_s = static_cast<int>(end_chunk);

           // Extract bmp frames and audio from video chunk
           // Extract frames
           timing = " -ss " + std::to_string(begin_h) + ":" + std::to_string(begin_m) +
                   ":" + std::to_string(begin_s) + " -to " + std::to_string(end_h) +
                   ":" + std::to_string(end_m) + ":" + std::to_string(end_s);
           command = "bash -c \"ffmpeg -i " + Configurations::file_name + timing +
                   " -compression_algo raw -pix_fmt rgb24 mst-temp/frames/output%03d.bmp\"";
           std::system(command.c_str());
           // Extract audio
           command = "bash -c \"ffmpeg -i mst-temp/audio/complete.aac" + timing +
                   " -vn mst-temp/audio/audiochunk.aac -y\"";
           std::system(command.c_str());

           // Apply elaborations on audio and frames.........................

           // Write modified audio and frames to streaming pipes
           command = "bash -c \"cat mst-temp/audio/audiochunk.aac > mst-temp/mst_audio_pipe & "
                     "cat mst-temp/frames/output*.bmp > mst-temp/mst_video_pipe\"";
           std::system(command.c_str());
       }
    }
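    As a side note on the backgrounded std::system() call above: an illustrative alternative (not the author's code) is POSIX popen(), which starts a command without waiting and returns a handle that can be waited on later with pclose():

    // Illustrative only: start a long-running command without blocking,
    // keeping a handle so the program can wait for it at the end.
    #include <cstdio>
    #include <string>

    int run_stream()
    {
        // Placeholder for the same ffmpeg command string built above,
        // without the trailing '&' (popen itself does not wait).
        const std::string command = "ffmpeg ... < mst-temp/mst_video_pipe";

        FILE* ffmpeg = popen(command.c_str(), "r");  // returns immediately
        if (!ffmpeg)
            return 1;

        // ... feed mst_audio_pipe and mst_video_pipe chunk by chunk ...

        return pclose(ffmpeg);  // waits for ffmpeg to exit
    }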
  • C# Desktop recorder h.264 cross-"computer"

    14 December 2013, by rodi

    I was looking for a method with the "least requirements" for maximum compatibility.

    I have seen that there are mainly two ways: through DirectX or through .NET (only > Win7), but both require Win7 or DirectX 9 to be installed.

    Could ffmpeg be useful? I have tried following this wiki, but it gives me:

    > ffmpeg -f dshow -i video="UScreenCapture":audio="Microphone" output.flv

    [dshow @ 00000000003ea700] Could not find video device.
    video=UScreenCapture: Input/output error

    Any advice?