Advanced search

Media (1)

Keyword: - Tags -/Rennes

Other articles (109)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (11339)

  • Executing shell script on Google Cloud Functions

    9 July 2020, by João Abrantes

    I am trying to encode .mp4 videos into HLS using FFmpeg.

    I am using subprocess to call FFmpeg:

    import os
    import subprocess

    from google.cloud import storage

    client = storage.Client()  # module-level client (created outside this snippet in the original)


    def transcoder(data, context):
        """Background Cloud Function to be triggered by Cloud Storage.

        This generic function logs relevant data when a file is changed.

        Args:
            data (dict): The Cloud Functions event payload.
            context (google.cloud.functions.Context): Metadata of triggering event.
        Returns:
            None; the output is written to Stackdriver Logging
        """
        try:
            input_filename = data['name'].split('/')[-1]  # videos have no extension
            input_path = f'/tmp/{input_filename}'
            print(f'filename {input_filename}')
            print(f'input_path {input_path}')
            print(f"bucket {data['bucket']}")
            print(f"name {data['name']}")

            outdir_path = f'/tmp/output/{input_filename}'
            os.makedirs(outdir_path, exist_ok=True)

            # Download the source video from Cloud Storage into /tmp
            bucket = client.get_bucket(data['bucket'])
            blob = bucket.get_blob(data['name'])
            blob.download_to_filename(input_path)

            # Two-rendition HLS ladder (360x640 and 720x1280); audio is copied.
            # The backslash-newlines are line continuations inside the f-string,
            # so cmd ends up as one long single-line string.
            cmd = f'''ffmpeg -y -i {input_path} \
                  -preset ultrafast -g 60 -sc_threshold 0 \
                  -map 0:0 -map 0:1 -map 0:0 -map 0:1 \
                  -s:v:0 360x640 -c:v:0 libx264 -b:v:0 365k \
                  -s:v:1 720x1280 -c:v:1 libx264 -b:v:1 3000k \
                  -c:a copy \
                  -var_stream_map "v:0,a:0 v:1,a:1" \
                  -master_pl_name master.m3u8 \
                  -f hls -hls_time 6 -hls_list_size 0 \
                  -hls_segment_filename "{outdir_path}/%v_fileSequence%d.ts" \
                  -hls_playlist_type vod \
                   {outdir_path}/%v_prog_index.m3u8'''

            process = subprocess.Popen(cmd)  # cmd is passed as a single string here
            stdout, stderr = process.communicate()
            upload_local_directory_to_gcs(outdir_path, upload_bucket, input_filename)  # helper defined elsewhere
        except Exception as e:
            print(e)

    The problem is that I get an error:

    [Errno 2] No such file or directory: 'ffmpeg -y -i /tmp/video -preset ultrafast -g 60 -sc_threshold 0 -map 0:0 -map 0:1 -map 0:0 -map 0:1 -s:v:0 360x640 -c:v:0 libx264 -b:v:0 365k -s:v:1 720x1280 -c:v:1 libx264 -b:v:1 3000k -c:a copy -var_stream_map "v:0,a:0 v:1,a:1" -master_pl_name master.m3u8 -f hls -hls_time 6 -hls_list_size 0 -hls_segment_filename "/tmp/output/video/%v_fileSequence%d.ts" -hls_playlist_type vod /tmp/output/video/%v_prog_index.m3u8': 'ffmpeg -y -i /tmp/video -preset ultrafast -g 60 -sc_threshold 0 -map 0:0 -map 0:1 -map 0:0 -map 0:1 -s:v:0 360x640 -c:v:0 libx264 -b:v:0 365k -s:v:1 720x1280 -c:v:1 libx264 -b:v:1 3000k -c:a copy -var_stream_map "v:0,a:0 v:1,a:1" -master_pl_name master.m3u8 -f hls -hls_time 6 -hls_list_size 0 -hls_segment_filename "/tmp/output/video/%v_fileSequence%d.ts" -hls_playlist_type vod /tmp/output/video/%v_prog_index.m3u8'

    But I know that the input files and the output files do exist, because I debugged that using print(os.listdir(path)), so now I am wondering whether the FFmpeg I call with subprocess has access to the /tmp folder?

    I know that there is a Python FFmpeg library I could use, but I don't know how to run my FFmpeg command using that library. Can you help?

    P.S. I can run this locally with success.
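
    A note on the error above: subprocess.Popen was given the whole command line as one string, so Python looks for an executable literally named "ffmpeg -y -i ..." and raises [Errno 2]. Here is a minimal sketch of the usual fix, splitting the string into an argument list with shlex; it assumes an ffmpeg binary is actually available on the function's PATH:

    import shlex
    import subprocess

    # shlex.split() honours the quotes around -var_stream_map "v:0,a:0 v:1,a:1",
    # so each option reaches ffmpeg as its own argv entry.
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stderr)  # ffmpeg writes its log to stderr

    Passing the unmodified string with subprocess.Popen(cmd, shell=True) would also work, at the cost of going through a shell.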

  • ffmpeg: Print negative times like "-00:05:01.22" instead of "00:-5:-1.-22"

    10 February 2015, by Michael Niedermayer

    Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] ffmpeg.c
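
    The idea of the fix, sketched here in Python purely as an illustration (the actual change is C code in ffmpeg.c): apply the sign once to the whole duration and format its absolute value, rather than letting every field carry its own sign:

    def format_time(seconds):
        """Render a possibly negative duration as [-]HH:MM:SS.cc."""
        sign = '-' if seconds < 0 else ''
        s = abs(seconds)  # format the magnitude; the sign goes in front once
        hours, rem = divmod(int(s), 3600)
        minutes, secs = divmod(rem, 60)
        cents = int(round((s - int(s)) * 100))
        return f'{sign}{hours:02d}:{minutes:02d}:{secs:02d}.{cents:02d}'

    assert format_time(-301.22) == '-00:05:01.22'  # not "00:-5:-1.-22"
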
  • Using the ffmpeg library to write to an mp4: ffprobe shows there are 100 frames and 100 packets, but av_interleaved_write_frame is only called 50 times

    2 May 2023, by ollydbg23

    Here is my code to generate an mp4 file using the ffmpeg and opencv libraries. The opencv part only generates 100 images (frames), and the ffmpeg part compresses those images into an mp4 file.


    Here is the working code:


    #include <iostream>
    #include <vector>
    #include <cstring>
    #include <fstream>
    #include <sstream>
    #include <stdexcept>
    #include <opencv2/opencv.hpp>
    extern "C" {
    #include <libavutil/imgutils.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    }

    #include <cstdlib> // to generate time stamps

    using namespace std;
    using namespace cv;

    int main()
    {
        // Set up input frames as BGR byte arrays
        vector<Mat> frames;

        int width = 640;
        int height = 480;
        int num_frames = 100;
        Scalar black(0, 0, 0);
        Scalar white(255, 255, 255);
        int font = FONT_HERSHEY_SIMPLEX;
        double font_scale = 1.0;
        int thickness = 2;

        for (int i = 0; i < num_frames; i++) {
            Mat frame = Mat::zeros(height, width, CV_8UC3);
            putText(frame, std::to_string(i), Point(width / 2 - 50, height / 2), font, font_scale, white, thickness);
            frames.push_back(frame);
        }

        // generate a series of time stamps which are used to set the PTS value
        // suppose they are in ms units; the time interval is between 30ms and 59ms
        vector<int> timestamps;

        for (int i = 0; i < num_frames; i++) {
            int timestamp;
            if (i == 0)
                timestamp = 0;
            else
            {
                int random = 30 + (rand() % 30);
                timestamp = timestamps[i-1] + random; // previous timestamp plus the random interval
            }

            timestamps.push_back(timestamp);
        }

        // Populate frames with BGR byte arrays

        // Initialize FFmpeg
        //av_register_all();

        // Set up output file
        AVFormatContext* outFormatCtx = nullptr;
        //AVCodec* outCodec = nullptr;
        AVCodecContext* outCodecCtx = nullptr;
        //AVStream* outStream = nullptr;
        //AVPacket outPacket;

        const char* outFile = "output.mp4";
        int outWidth = frames[0].cols;
        int outHeight = frames[0].rows;
        int fps = 25;

        // Open the output file context
        avformat_alloc_output_context2(&outFormatCtx, nullptr, nullptr, outFile);
        if (!outFormatCtx) {
            cerr << "Error: Could not allocate output format context" << endl;
            return -1;
        }

        // Open the output file
        if (avio_open(&outFormatCtx->pb, outFile, AVIO_FLAG_WRITE) < 0) {
            cerr << "Error opening output file" << std::endl;
            return -1;
        }

        // Set up output codec
        const AVCodec* outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!outCodec) {
            cerr << "Error: Could not find H.264 codec" << endl;
            return -1;
        }

        outCodecCtx = avcodec_alloc_context3(outCodec);
        if (!outCodecCtx) {
            cerr << "Error: Could not allocate output codec context" << endl;
            return -1;
        }
        outCodecCtx->codec_id = AV_CODEC_ID_H264;
        outCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
        outCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
        outCodecCtx->width = outWidth;
        outCodecCtx->height = outHeight;
        //outCodecCtx->time_base = { 1, fps*1000 };   // 25000
        outCodecCtx->time_base = { 1, fps };  // 25
        outCodecCtx->framerate = { fps, 1 }; // 25
        outCodecCtx->bit_rate = 4000000;

        //https://github.com/leandromoreira/ffmpeg-libav-tutorial
        //We set the flag AV_CODEC_FLAG_GLOBAL_HEADER which tells the encoder that it can use the global headers.
        if (outFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
        {
            outCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        }

        // Open output codec
        if (avcodec_open2(outCodecCtx, outCodec, nullptr) < 0) {
            cerr << "Error: Could not open output codec" << endl;
            return -1;
        }

        // Create output stream
        AVStream* outStream = avformat_new_stream(outFormatCtx, outCodec);
        if (!outStream) {
            cerr << "Error: Could not allocate output stream" << endl;
            return -1;
        }

        // Configure output stream parameters (e.g., time base, codec parameters, etc.)
        // ...

        // Connect output stream to format context
        outStream->codecpar->codec_id = outCodecCtx->codec_id;
        outStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        outStream->codecpar->width = outCodecCtx->width;
        outStream->codecpar->height = outCodecCtx->height;
        outStream->codecpar->format = outCodecCtx->pix_fmt;
        outStream->time_base = outCodecCtx->time_base;

        int ret = avcodec_parameters_from_context(outStream->codecpar, outCodecCtx);
        if (ret < 0) {
            cerr << "Error: Could not copy codec parameters to output stream" << endl;
            return -1;
        }

        outStream->avg_frame_rate = outCodecCtx->framerate;
        //outStream->id = outFormatCtx->nb_streams++;  // <--- We shouldn't modify outStream->id

        ret = avformat_write_header(outFormatCtx, nullptr);
        if (ret < 0) {
            cerr << "Error: Could not write output header" << endl;
            return -1;
        }

        // Convert frames to YUV format and write to output file
        int frame_count = -1;
        for (const auto& frame : frames) {
            frame_count++;
            AVFrame* yuvFrame = av_frame_alloc();
            if (!yuvFrame) {
                cerr << "Error: Could not allocate YUV frame" << endl;
                return -1;
            }
            av_image_alloc(yuvFrame->data, yuvFrame->linesize, outWidth, outHeight, AV_PIX_FMT_YUV420P, 32);

            yuvFrame->width = outWidth;
            yuvFrame->height = outHeight;
            yuvFrame->format = AV_PIX_FMT_YUV420P;

            // Convert BGR frame to YUV format
            Mat yuvMat;
            cvtColor(frame, yuvMat, COLOR_BGR2YUV_I420);
            memcpy(yuvFrame->data[0], yuvMat.data, outWidth * outHeight);
            memcpy(yuvFrame->data[1], yuvMat.data + outWidth * outHeight, outWidth * outHeight / 4);
            memcpy(yuvFrame->data[2], yuvMat.data + outWidth * outHeight * 5 / 4, outWidth * outHeight / 4);

            // Set up output packet
            //av_init_packet(&outPacket); //error C4996: 'av_init_packet': was declared deprecated
            AVPacket* outPacket = av_packet_alloc(); // av_packet_alloc() already zero-initializes the packet
            //outPacket->data = nullptr;
            //outPacket->size = 0;

            // set the frame pts; do I have to set the packet pts?

            // yuvFrame->pts = av_rescale_q(timestamps[frame_count]*25, outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp
            yuvFrame->pts = av_rescale_q(frame_count*frame_count, outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp

            // Encode frame and write to output file
            int ret = avcodec_send_frame(outCodecCtx, yuvFrame);
            if (ret < 0) {
                cerr << "Error: Could not send frame to output codec" << endl;
                return -1;
            }
            while (ret >= 0)
            {
                ret = avcodec_receive_packet(outCodecCtx, outPacket);

                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                {
                    int abc = 0;
                    abc++;
                    break;
                }
                else if (ret < 0)
                {
                    cerr << "Error: Could not receive packet from output codec" << endl;
                    return -1;
                }

                //av_packet_rescale_ts(outPacket, outCodecCtx->time_base, outStream->time_base);

                outPacket->stream_index = outStream->index;

                outPacket->duration = av_rescale_q(1, outCodecCtx->time_base, outStream->time_base); // Set packet duration

                ret = av_interleaved_write_frame(outFormatCtx, outPacket);

                static int call_write = 0;

                call_write++;
                printf("av_interleaved_write_frame %d\n", call_write);

                av_packet_unref(outPacket);
                if (ret < 0) {
                    cerr << "Error: Could not write packet to output file" << endl;
                    return -1;
                }
            }

            av_frame_free(&yuvFrame);
        }

        // Flush the encoder
        ret = avcodec_send_frame(outCodecCtx, nullptr);
        if (ret < 0) {
            std::cerr << "Error flushing encoder" << std::endl;
            return -1;
        }

        while (ret >= 0) {
            AVPacket* pkt = av_packet_alloc();
            if (!pkt) {
                std::cerr << "Error allocating packet" << std::endl;
                return -1;
            }
            ret = avcodec_receive_packet(outCodecCtx, pkt);

            // Write the packet to the output file
            if (ret == 0)
            {
                pkt->stream_index = outStream->index;
                pkt->duration = av_rescale_q(1, outCodecCtx->time_base, outStream->time_base); // <---- Set packet duration
                ret = av_interleaved_write_frame(outFormatCtx, pkt);
                av_packet_unref(pkt);
                if (ret < 0) {
                    std::cerr << "Error writing packet to output file" << std::endl;
                    return -1;
                }
            }
            av_packet_free(&pkt);
        }

        // Write output trailer
        av_write_trailer(outFormatCtx);

        // Clean up
        avcodec_close(outCodecCtx);
        avcodec_free_context(&outCodecCtx);
        avformat_free_context(outFormatCtx);

        return 0;
    }


    Note that I have used the ffprobe tool (one of the tools shipped with ffmpeg) to inspect the generated mp4 file.


    I see that the mp4 file has 100 frames and 100 packets, but in my code I have these lines:


                static int call_write = 0;

                call_write++;
                printf("av_interleaved_write_frame %d\n", call_write);


    I see that the av_interleaved_write_frame function is only called 50 times, not the expected 100 times. Can anyone explain this?


    Thanks.


    BTW, from the ffmpeg documentation (see here: "For video, it should typically contain one compressed frame"), I understand that a packet normally holds one video frame, so ffprobe's result looks correct.


    Here is the command I used to inspect the mp4 file:


    ffprobe -show_frames output.mp4 >> frames.txt
    ffprobe -show_packets output.mp4 >> packets.txt
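
    As a quicker cross-check than reading those text dumps, ffprobe can also count frames and packets directly. Here is a small sketch (assuming ffprobe is on PATH) using its -count_frames/-count_packets options:

    import json
    import subprocess

    # -count_frames / -count_packets make ffprobe read the whole file and
    # report nb_read_frames / nb_read_packets for each stream.
    out = subprocess.run(
        ['ffprobe', '-v', 'error', '-count_frames', '-count_packets',
         '-show_entries', 'stream=index,nb_read_frames,nb_read_packets',
         '-of', 'json', 'output.mp4'],
        capture_output=True, text=True, check=True).stdout

    for stream in json.loads(out)['streams']:
        print(stream['index'], stream['nb_read_frames'], stream['nb_read_packets'])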


    My testing code is derived from an answer to another question here: avformat_write_header() function call crashed when I try to save several RGB data to a output.mp4 file
