Advanced search

Media (0)

Word: - Tags -/flash

No media matching your criteria is available on this site.

Other articles (74)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What are the new features?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; we no longer install ffmpeg-php, which is no longer maintained, in (...)

  • Customise by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that lead to the problem; a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (13246)

  • FFMPEG Inverse blur

    28 July 2021, by Matthew Wilson

    We are generating an FFMPEG complex filter that blurs all but a specific section of a video. This section can be moved and resized by a user.

    


    The resulting filter chain looks like this:

    


            [0:v]
            split=2
        [inverseBlurInputFilter0-1][inverseBlurInputFilter0-2];
        [inverseBlurInputFilter0-1]
            boxblur=40:
            enable='between(t,0.0,0.056268)'
        [blurFilter0];
        [inverseBlurInputFilter0-2]
            crop=500.0:500.0:642.5:372.5
        [cropFilter0];
        [blurFilter0][cropFilter0]
            overlay=642.5:372.5
        [overlayInverseBlurFilter0];
    
        [overlayInverseBlurFilter0]
            split=2
        [inverseBlurInputFilter1-1][inverseBlurInputFilter1-2];
        [inverseBlurInputFilter1-1]
            boxblur=40:
            enable='between(t,0.07586,0.089914)'
        [blurFilter1];
        [inverseBlurInputFilter1-2]
            crop=500.0:500.0:640.0:417.5
        [cropFilter1];
        [blurFilter1][cropFilter1]
            overlay=640.0:417.5
        [overlayInverseBlurFilter1];
    
        [overlayInverseBlurFilter1]
            split=2
        [inverseBlurInputFilter2-1][inverseBlurInputFilter2-2];
        [inverseBlurInputFilter2-1]
            boxblur=40:
            enable='between(t,0.099871,0.118166)'
        [blurFilter2];
        [inverseBlurInputFilter2-2]
            crop=500.0:500.0:635.0:460.0
        [cropFilter2];
        [blurFilter2][cropFilter2]
            overlay=635.0:460.0
        [overlayInverseBlurFilter2];
        
        ...


    


    This "works" most of the time for short videos. But for long videos we are seeing poor performance and incorrect blurring.

    


    I have a hunch that it relates to the multiple split=2 filters, and I feel there should be a more efficient way of building up a chain for this task.
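
    One possible way to avoid the repeated split and boxblur stages, sketched here on the assumption that the crop window keeps its 500x500 size and only its position changes from segment to segment, would be to split and blur the input only once and let a single crop and a single overlay move over time. Both filters re-evaluate their x/y expressions for every frame, so the per-segment positions can be folded into nested if(between(t,...)) expressions (only the first two segments are shown, and line breaks are added for readability, as in the graph above):

            [0:v]
                split=2
            [orig][toblur];
            [toblur]
                boxblur=40
            [blurred];
            [orig]
                crop=w=500:h=500:
                x='if(between(t,0.0,0.056268),642.5,if(between(t,0.07586,0.089914),640.0,635.0))':
                y='if(between(t,0.0,0.056268),372.5,if(between(t,0.07586,0.089914),417.5,460.0))'
            [win];
            [blurred][win]
                overlay=
                x='if(between(t,0.0,0.056268),642.5,if(between(t,0.07586,0.089914),640.0,635.0))':
                y='if(between(t,0.0,0.056268),372.5,if(between(t,0.07586,0.089914),417.5,460.0))'
            [out]

    If the background must stay unblurred outside the listed time intervals, the enable='between(t,...)' expression can remain on boxblur, with the intervals combined as a sum of between() terms.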

    


  • FFmpeg: A general error in an external library occurred when using FFmpeg 6.1's avcodec_send_frame

    4 January 2024, by MMingY

    The same code pushes an RTMP stream successfully in my Win11 environment, but in the Android environment it fails with an error; the failing call is avcodec_send_frame in FFmpeg 6.1. Note that I compiled the FFmpeg library for Android myself, while for Win11 I downloaded the official package. I provide the Android and Win11 code below.

    


    Android:

    


    static void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt,

                   AVFormatContext *outFormatCtx) {
    int ret;

    /* send the frame to the encoder */
    if (frame)
        LOGE2("Send frame %ld\n", frame->pts);

    ret = avcodec_send_frame(enc_ctx, frame);
    if (ret < 0) {
        char errbuf[AV_ERROR_MAX_STRING_SIZE];
        av_strerror(ret, errbuf, AV_ERROR_MAX_STRING_SIZE);
        LOGE2("Error sending a frame for encoding ,%s\n", errbuf);
//        exit(1);
        return;
    }

    while (ret >= 0) {
        ret = avcodec_receive_packet(enc_ctx, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return;
        else if (ret < 0) {
            fprintf(stderr, "Error during encoding\n");
            exit(1);
        }

        printf("Write packet (size=%5d)\n", pkt->pts);
        /*    ret = av_interleaved_write_frame(outFormatCtx, pkt);
            if (ret < 0) {
                LOGE2("write frame err=%s", av_err2str(ret));
                break;
            }*/
//        printf("Write packet %3"PRId64" (size=%5d)\n", pkt->pts, pkt->size);
        av_write_frame(outFormatCtx, pkt); // Write the packet to the RTMP stream
        av_packet_unref(pkt);
    }
}

PUSHER_FUNC(int, testPush, jstring yuvPath, jstring outputPath) {
    const char *yvu_path = env->GetStringUTFChars(yuvPath, JNI_FALSE);
    const char *output_path = env->GetStringUTFChars(outputPath, JNI_FALSE);
    const char *rtmp_url = output_path;
    const AVCodec *codec;
    AVCodecContext *codecContext = NULL;
    AVFormatContext *outFormatCtx;
    int ret = 0;
    AVStream *outStream;
    AVFrame *frame;
    AVPacket *pkt;
    int i, x, y;
    avformat_network_init();

    codec = avcodec_find_encoder(AV_CODEC_ID_H264);
//    codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
//    codec = avcodec_find_encoder(AV_CODEC_ID_H265);
    if (!codec) {
        LOGE2("JNI Error finding H.264 encoder");
        return -1;
    }
    codecContext = avcodec_alloc_context3(codec);
    if (!codecContext) {
        fprintf(stderr, "Could not allocate video codec context\n");
        return -1;
    }

    /* Allocate the output context */
    outFormatCtx = avformat_alloc_context();
    if (!outFormatCtx) {
        fprintf(stderr, "Could not allocate output context\n");
        return -1;
    }

    /* Open the RTMP output */
    const AVOutputFormat *ofmt = av_guess_format("flv", NULL, NULL);
//    const AVOutputFormat *ofmt = av_guess_format("mpegts", NULL, NULL);
//    const AVOutputFormat *ofmt = av_guess_format("mp4", NULL, NULL);
    if (!ofmt) {
        fprintf(stderr, "Could not find output format\n");
        return -1;
    }
    outFormatCtx->oformat = ofmt;
    outFormatCtx->url = av_strdup(rtmp_url);
    /* Add a video stream */
    outStream = avformat_new_stream(outFormatCtx, codec);
    if (!outStream) {
        fprintf(stderr, "Could not allocate stream\n");
        return -1;
    }
    outStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    outStream->codecpar->codec_id = codec->id;
    outStream->codecpar->width = 352;
    outStream->codecpar->height = 288;

    /* Set the output URL */
    av_dict_set(&outFormatCtx->metadata, "url", rtmp_url, 0);

    pkt = av_packet_alloc();
    if (!pkt)
        return -1;

    /* ... (rest of the setup code) ... */
/* put sample parameters */
    codecContext->bit_rate = 400000;
    /* resolution must be a multiple of two */
    codecContext->width = 352;
    codecContext->height = 288;
    /* frames per second */
    codecContext->time_base = (AVRational) {1, 25};
    codecContext->framerate = (AVRational) {25, 1};

    /* emit one intra frame every ten frames
     * check frame pict_type before passing frame
     * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
     * then gop_size is ignored and the output of encoder
     * will always be I frame irrespective to gop_size
     */
    codecContext->gop_size = 10;
    codecContext->max_b_frames = 1;
    codecContext->pix_fmt = AV_PIX_FMT_YUV420P;

    if (codec->id == AV_CODEC_ID_H264)
        av_opt_set(codecContext->priv_data, "preset", "slow", 0);

    /* open it */
    ret = avcodec_open2(codecContext, codec, NULL);
    if (ret < 0) {
        LOGE2("JNI Error opening codec eer%s", av_err2str(ret));
        return ret;
    }

    avcodec_parameters_to_context(codecContext, outStream->codecpar);

    if (avio_open(&outFormatCtx->pb, rtmp_url, AVIO_FLAG_WRITE)) {
        fprintf(stderr, "Could not open output\n");
        return ret;
    }
    /* Write the header */
    if (avformat_write_header(outFormatCtx, NULL) != 0) {
        fprintf(stderr, "Error occurred when opening output\n");
        return ret;
    }

    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        return -1;
    }
    frame->format = codecContext->pix_fmt;
    frame->format = AV_PIX_FMT_YUV420P;
    frame->format = 0;
    frame->width = codecContext->width;
    frame->height = codecContext->height;

    ret = av_frame_get_buffer(frame, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate the video frame data ,%s\n", av_err2str(ret));
        return ret;
    }

    /*  FILE *yuv_file = fopen(yvu_path, "rb");
      if (yuv_file == NULL) {
          LOGE2("cannot open h264 file");
          return -1;
      }*/

    /* encode 1 second of video */
    for (i = 0; i < 25000; i++) {
//    for (i = 0; i < 25; i++) {
//        fflush(stdout);

        /* make sure the frame data is writable */
        ret = av_frame_make_writable(frame);
        if (ret < 0)
            exit(1);

        /* prepare a dummy image */
        /* Y */
        for (y = 0; y < codecContext->height; y++) {
            for (x = 0; x < codecContext->width; x++) {
                frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
            }
        }

        /* Cb and Cr */
        for (y = 0; y < codecContext->height / 2; y++) {
            for (x = 0; x < codecContext->width / 2; x++) {
                frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
                frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
            }
        }

        frame->pts = i;

        /* encode the image */
        encode(codecContext, frame, pkt, outFormatCtx);
    }

//    fclose(yuv_file);

    /* flush the encoder */
    encode(codecContext, NULL, pkt, outFormatCtx);

    /* Write the trailer */
    av_write_trailer(outFormatCtx);

    /* Close the output */
    avformat_free_context(outFormatCtx);

    avcodec_free_context(&codecContext);
    av_frame_free(&frame);
    av_packet_free(&pkt);
}


    


    Win11:

    


#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libavutil/time.h>

static void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt,
                   AVFormatContext *outFormatCtx) {
    int ret;

    /* send the frame to the encoder */
    if (frame)
        printf("Send frame %3"PRId64"\n", frame->pts);

    ret = avcodec_send_frame(enc_ctx, frame);
    if (ret < 0) {
        char errbuf[AV_ERROR_MAX_STRING_SIZE];
        av_strerror(ret, errbuf, AV_ERROR_MAX_STRING_SIZE);
        fprintf(stderr, "Error sending a frame for encoding ,%s\n", errbuf);
        exit(1);
    }

    while (ret >= 0) {
        ret = avcodec_receive_packet(enc_ctx, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return;
        else if (ret < 0) {
            fprintf(stderr, "Error during encoding\n");
            exit(1);
        }

        printf("Write packet %3"PRId64" (size=%5d)\n", pkt->pts, pkt->size);
        av_write_frame(outFormatCtx, pkt); // Write the packet to the RTMP stream
        av_packet_unref(pkt);
    }
}

int main(int argc, char **argv) {
    av_log_set_level(AV_LOG_DEBUG);
    const char *rtmp_url, *codec_name;
    const AVCodec *codec;
    AVCodecContext *codecContext = NULL;
    int i, ret, x, y;
    AVFormatContext *outFormatCtx;
    AVStream *st;
    AVFrame *frame;
    AVPacket *pkt;
    uint8_t endcode[] = {0, 0, 1, 0xb7};

    if (argc <= 3) {
        fprintf(stderr, "Usage: %s <rtmp url> <codec>\n", argv[0]);
        exit(0);
    }
    rtmp_url = argv[1];
    codec_name = argv[2];
    avformat_network_init();
    /* find the mpeg1video encoder */
//    codec = avcodec_find_encoder_by_name(codec_name);
//    codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
//    codec = avcodec_find_encoder(AV_CODEC_ID_VP9);
//    codec = avcodec_find_encoder(AV_CODEC_ID_MPEG2VIDEO);
//    codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    codec = avcodec_find_encoder(AV_CODEC_ID_H264);
//    codec = avcodec_find_encoder(AV_CODEC_ID_AV1);
//    codec = avcodec_find_encoder(AV_CODEC_ID_H265);
    if (!codec) {
        fprintf(stderr, "Codec '%s' not found\n", codec_name);
        exit(1);
    }
    codecContext = avcodec_alloc_context3(codec);
    if (!codecContext) {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(1);
    }

    /* Allocate the output context */
    outFormatCtx = avformat_alloc_context();
    if (!outFormatCtx) {
        fprintf(stderr, "Could not allocate output context\n");
        exit(1);
    }

    /* Open the RTMP output */
    const AVOutputFormat *ofmt = av_guess_format("flv", NULL, NULL);
//    const AVOutputFormat *ofmt = av_guess_format("MKV", NULL, NULL);
//    const AVOutputFormat *ofmt = av_guess_format("rtmp", NULL, NULL);
//    const AVOutputFormat *ofmt = av_guess_format("mpegts", NULL, NULL);
//    const AVOutputFormat *ofmt = av_guess_format("mp4", NULL, NULL);
    if (!ofmt) {
        fprintf(stderr, "Could not find output format\n");
        exit(1);
    }
    outFormatCtx->oformat = ofmt;
    outFormatCtx->url = av_strdup(rtmp_url);
    /* Add a video stream */
    st = avformat_new_stream(outFormatCtx, codec);
    if (!st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codecpar->codec_id = codec->id;
    st->codecpar->width = 352;
    st->codecpar->height = 288;
//    st->codecpar = c;
//    st->codecpar->format = AV_PIX_FMT_YUV420P;
    // Set video stream parameters
//    st->codecpar->framerate = (AVRational){25, 1};

    /* Set the output URL */
    av_dict_set(&outFormatCtx->metadata, "url", rtmp_url, 0);


    pkt = av_packet_alloc();
    if (!pkt)
        exit(1);

    /* ... (rest of the setup code) ... */
/* put sample parameters */
    codecContext->bit_rate = 400000;
    /* resolution must be a multiple of two */
    codecContext->width = 352;
    codecContext->height = 288;
    /* frames per second */
    codecContext->time_base = (AVRational) {1, 25};
    codecContext->framerate = (AVRational) {25, 1};

    /* emit one intra frame every ten frames
     * check frame pict_type before passing frame
     * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
     * then gop_size is ignored and the output of encoder
     * will always be I frame irrespective to gop_size
     */
    codecContext->gop_size = 10;
    codecContext->max_b_frames = 1;
    codecContext->pix_fmt = AV_PIX_FMT_YUV420P;

    if (codec->id == AV_CODEC_ID_H264)
        av_opt_set(codecContext->priv_data, "preset", "slow", 0);

    /* open it */
    ret = avcodec_open2(codecContext, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open codec: %s\n", av_err2str(ret));
        exit(1);
    }

    avcodec_parameters_to_context(codecContext, st->codecpar);

    if (avio_open(&outFormatCtx->pb, rtmp_url, AVIO_FLAG_WRITE)) {
        fprintf(stderr, "Could not open output\n");
        exit(1);
    }
    /* Write the header */
    if (avformat_write_header(outFormatCtx, NULL) != 0) {
        fprintf(stderr, "Error occurred when opening output\n");
        exit(1);
    }

    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }
//    frame->format = c->pix_fmt;
//    frame->format = AV_PIX_FMT_YUV420P;
    frame->format = 0;
    frame->width = codecContext->width;
    frame->height = codecContext->height;

    ret = av_frame_get_buffer(frame, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate the video frame data ,%s\n", av_err2str(ret));
        exit(1);
    }

    /* encode 1 second of video */
    for (i = 0; i < 2500; i++) {
        /* ... (rest of the encoding loop) ... */
        fflush(stdout);

        /* make sure the frame data is writable */
        ret = av_frame_make_writable(frame);
        if (ret < 0)
            exit(1);

        /* prepare a dummy image */
        /* Y */
        for (y = 0; y < codecContext->height; y++) {
            for (x = 0; x < codecContext->width; x++) {
                frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
            }
        }

        /* Cb and Cr */
        for (y = 0; y < codecContext->height / 2; y++) {
            for (x = 0; x < codecContext->width / 2; x++) {
                frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
                frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
            }
        }

        frame->pts = i;

        /* encode the image */
        encode(codecContext, frame, pkt, outFormatCtx);
    }

    /* flush the encoder */
    encode(codecContext, NULL, pkt, outFormatCtx);

    /* Write the trailer */
    av_write_trailer(outFormatCtx);

    /* Close the output */
    avformat_free_context(outFormatCtx);

    avcodec_free_context(&codecContext);
    av_frame_free(&frame);
    av_packet_free(&pkt);

    return 0;
}


    I suspect it's an issue with the FFmpeg library I compiled, so I looked for a build script for FFmpeg on GitHub, but the package it produced still has the same problem. I don't know what to do now.
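
    One way to narrow this down before rebuilding again, offered here only as a diagnostic sketch: route FFmpeg's own log output to logcat and print which H.264 encoder the Android build actually registered, since an "error in an external library" (AVERROR_EXTERNAL) from avcodec_send_frame is usually accompanied by a more specific message from the underlying encoder (libx264, h264_mediacodec, ...). The helper name and log tag below are arbitrary:

    #include <stdarg.h>
    #include <android/log.h>
    #include <libavutil/log.h>
    #include <libavcodec/avcodec.h>

    /* Forward FFmpeg's internal log messages to logcat so the message that
     * accompanies AVERROR_EXTERNAL from avcodec_send_frame() becomes visible. */
    static void ffmpeg_log_to_logcat(void *ptr, int level, const char *fmt, va_list vl) {
        if (level > av_log_get_level())
            return;
        __android_log_vprint(ANDROID_LOG_DEBUG, "FFmpegNative", fmt, vl);
    }

    static void enable_ffmpeg_diagnostics(const AVCodec *codec) {
        av_log_set_level(AV_LOG_DEBUG);
        av_log_set_callback(ffmpeg_log_to_logcat);

        /* avcodec_find_encoder(AV_CODEC_ID_H264) returns whichever H.264 encoder
         * this particular build registered (libx264, h264_mediacodec, ...);
         * knowing which one was picked narrows down where the error comes from. */
        __android_log_print(ANDROID_LOG_INFO, "FFmpegNative",
                            "Selected encoder: %s (%s)", codec->name, codec->long_name);
    }

    Calling enable_ffmpeg_diagnostics(codec) right after avcodec_find_encoder() in the JNI code above should make the selected encoder and its detailed complaint show up in logcat.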


  • Shades of black

    14 February 2011, by Mans — Random ramblings

    Some time ago now, I was looking for a new laptop. Having compared the technical specifications of a number of models, I turned my attention to the most important aspect : the colour. Everybody knows black is the best colour, but which particular shade of black ? There are, apparently, quite a few to choose from.

    While some may settle for the plain Black, others will demand something more distinguished. The musician, for instance, might find Piano Black more attractive, while Ebony Black has, perhaps, an organic touch. For a more “hi-tech” feeling, there is Carbon Black, and if that is insufficient, Ultimate Carbon should hopefully do the trick. The French-sounding Intense Noir might, I speculate, be designed to evoke quasi-artistic images, whereas Platinum Black to me rings mostly of expensive and hardly at all of black. The last entry on my list is Liquorice Black, for which interpretation I refer to those capable of ingesting this vile substance.

    To this day I remain completely clueless regarding any actual variation in physical appearance, as for my purchase I selected black, plain and simple, and spent the difference on a RAM upgrade.