Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (18)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    During installation, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register for the project users' mailing (...)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of its kind. Its members (...)

On other sites (4085)

  • Encoding frames to video with ffmpeg

    5 September 2017, by Mher Didaryan

    I am trying to encode a video in Unreal Engine 4 with C++. I have access to the separate frames. Below is the code which reads the viewport's displayed pixels and stores them in a buffer.

    //Safely get render target resource.
    FRenderTarget* RenderTarget = TextureRenderTarget->GameThread_GetRenderTargetResource();
    FIntPoint Size = RenderTarget->GetSizeXY();
    auto ImageBytes = Size.X * Size.Y * static_cast<int32>(sizeof(FColor));
    TArray<uint8> RawData;
    RawData.AddUninitialized(ImageBytes);

    //Get image raw data.
    if (!RenderTarget->ReadPixelsPtr((FColor*)RawData.GetData()))
    {
       RawData.Empty();
       UE_LOG(ExportRenderTargetBPFLibrary, Error, TEXT("ExportRenderTargetAsImage: Failed to get raw data."));
       return false;
    }

    Buffer::getInstance().add(RawData);

    Unreal Engine has IImageWrapperModule, with which you can get an image from a frame, but nothing for video encoding. What I want is to encode frames on a real-time basis for a live streaming service.

    I found this post Encoding a screenshot into a video using FFMPEG which is kind of what I want, but I have problems adapting this solution for my case. The code is outdated (for example avcodec_encode_video changed to avcodec_encode_video2 with different parameters).
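    For reference, avcodec_encode_video2 was itself later replaced: since FFmpeg 3.1 the encode loop is built on the avcodec_send_frame / avcodec_receive_packet pair. A hedged sketch of that loop (error handling trimmed; the context `c` is assumed already configured and opened, as in the code below):

```cpp
#include <cstdio>
extern "C" {
#include <libavcodec/avcodec.h>
}

// Sketch: encode one frame with the post-3.1 API that replaced
// avcodec_encode_video/avcodec_encode_video2. Pass frame == NULL to flush.
static int encode_frame(AVCodecContext* c, AVFrame* frame, FILE* out) {
    int ret = avcodec_send_frame(c, frame);
    if (ret < 0)
        return ret;
    AVPacket* pkt = av_packet_alloc();
    // One sent frame can yield zero or more packets; drain them all.
    while ((ret = avcodec_receive_packet(c, pkt)) == 0) {
        fwrite(pkt->data, 1, pkt->size, out);  // raw elementary stream
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    // EAGAIN (needs more input) and EOF (fully flushed) are not errors here.
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}
```

    Note that packet-level bookkeeping (pts, output buffer sizing) moves into the AVPacket the encoder fills, so the manual outbuf/outbuf_size handling from the old API disappears.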

    Below is the encoder code.

    void Compressor::DoWork()
    {
    AVCodec* codec;
    AVCodecContext* c = NULL;
    //uint8_t* outbuf;
    //int /*i, out_size,*/ outbuf_size;

    UE_LOG(LogTemp, Warning, TEXT("encoding"));

    codec = avcodec_find_encoder(AV_CODEC_ID_MPEG1VIDEO);            // find the encoder (note: AV_CODEC_ID_MPEG1VIDEO is MPEG-1, not H.264)
    if (!codec) {
       UE_LOG(LogTemp, Warning, TEXT("codec not found"));
       exit(1);
    }
    else UE_LOG(LogTemp, Warning, TEXT("codec found"));

    c = avcodec_alloc_context3(codec);
    c->bit_rate = 400000;
    c->width = 1280;                                        // resolution must be a multiple of two: (1280x720), (1920x1080), (720x480)
    c->height = 720;
    c->time_base.num = 1;                                   // framerate numerator
    c->time_base.den = 25;                                  // framerate denominator
    c->gop_size = 10;                                       // emit one intra frame every ten frames
    c->max_b_frames = 1;                                    // maximum number of b-frames between non b-frames
    c->keyint_min = 1;                                      // minimum GOP size
    c->i_quant_factor = (float)0.71;                        // qscale factor between P and I frames
    //c->b_frame_strategy = 20;                               ///// find out exactly what this does
    c->qcompress = (float)0.6;                              ///// find out exactly what this does
    c->qmin = 20;                                           // minimum quantizer
    c->qmax = 51;                                           // maximum quantizer
    c->max_qdiff = 4;                                       // maximum quantizer difference between frames
    c->refs = 4;                                            // number of reference frames
    c->trellis = 1;                                         // trellis RD Quantization
    c->pix_fmt = AV_PIX_FMT_YUV420P;                           // universal pixel format for video encoding
    c->codec_id = AV_CODEC_ID_MPEG1VIDEO;
    c->codec_type = AVMEDIA_TYPE_VIDEO;

    if (avcodec_open2(c, codec, NULL) < 0) {
       UE_LOG(LogTemp, Warning, TEXT("could not open codec"));         // opening the codec
       //exit(1);
    }
    else UE_LOG(LogTemp, Warning, TEXT("codec opened"));

    FString FinalFilename = FString("C:/Screen/sample.mpg");
    auto &PlatformFile = FPlatformFileManager::Get().GetPlatformFile();
    auto FileHandle = PlatformFile.OpenWrite(*FinalFilename, true);

    if (FileHandle)
    {
       delete FileHandle; // remove when ready
       UE_LOG(LogTemp, Warning, TEXT("file opened"));
       while (true)
       {
           UE_LOG(LogTemp, Warning, TEXT("removing from buffer"));

           int nbytes = avpicture_get_size(AV_PIX_FMT_YUV420P, c->width, c->height);                                      // allocating outbuffer
           uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes * sizeof(uint8_t));

           AVFrame* inpic = av_frame_alloc();
           AVFrame* outpic = av_frame_alloc();

           outpic->pts = (int64_t)((float)1 * (1000.0 / ((float)(c->time_base.den))) * 90);                              // setting frame pts
           avpicture_fill((AVPicture*)inpic, (uint8_t*)Buffer::getInstance().remove().GetData(),
               AV_PIX_FMT_PAL8, c->width, c->height); // fill image with input screenshot
           avpicture_fill((AVPicture*)outpic, outbuffer, AV_PIX_FMT_YUV420P, c->width, c->height);                        // clear output picture for buffer copy
           av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1);

           /*
           inpic->data[0] += inpic->linesize[0]*(screenHeight-1);                                                      
           // flipping frame
           inpic->linesize[0] = -inpic->linesize[0];                                                                  
           // flipping frame

           struct SwsContext* fooContext = sws_getContext(screenWidth, screenHeight, PIX_FMT_RGB32, c->width, c->height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
           sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height, outpic->data, outpic->linesize);          // converting frame size and format

           out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);                                            
           // save in file

           */

       }
       delete FileHandle;
    }
    else
    {
       UE_LOG(LogTemp, Warning, TEXT("Can't open file"));
    }
    }

    Can someone explain the frame-flipping part (why is it done?) and how to use the avcodec_encode_video2 function instead of avcodec_encode_video?

  • Writing image to RTP with ffmpeg

    22 September 2017, by Gaulois94

    I am trying to send real-time images over the network efficiently. For this, I thought the RTP protocol used in video streaming could be a good way to achieve it.

    Here is what I tried:

    extern "C"
    {
       #include <libavcodec/avcodec.h>
       #include <libavformat/avformat.h>
       #include <libswscale/swscale.h>

       #include <libavutil/opt.h>
       #include <libavutil/channel_layout.h>
       #include <libavutil/common.h>
       #include <libavutil/imgutils.h>
       #include <libavutil/mathematics.h>
       #include <libavutil/samplefmt.h>
    }
    #include <iostream>
    #include <cstdio>   // assumed: needed for printf/fopen below (original include was stripped)
    #include <unistd.h> // assumed: needed for usleep below (original include was stripped)

    //Mainly based on https://stackoverflow.com/questions/40825300/ffmpeg-create-rtp-stream
    int main()
    {
       //Init ffmpeg
       avcodec_register_all();
       av_register_all();
       avformat_network_init();

       //Init the codec used to encode our given image
       AVCodecID codecID = AV_CODEC_ID_MPEG4;
       AVCodec* codec;
       AVCodecContext* codecCtx;

       codec    = avcodec_find_encoder(codecID);
       codecCtx = avcodec_alloc_context3(codec);

       //codecCtx->bit_rate      = 400000;
       codecCtx->width         = 352;
       codecCtx->height        = 288;

       codecCtx->time_base.num = 1;
       codecCtx->time_base.den = 25;
       codecCtx->gop_size      = 25;
       codecCtx->max_b_frames  = 1;
       codecCtx->pix_fmt       = AV_PIX_FMT_YUV420P;
       codecCtx->codec_type    = AVMEDIA_TYPE_VIDEO;

       if (codecID == AV_CODEC_ID_H264)
       {
           av_opt_set(codecCtx->priv_data, "preset", "ultrafast", 0);
           av_opt_set(codecCtx->priv_data, "tune", "zerolatency", 0);
       }

       avcodec_open2(codecCtx, codec, NULL);

       //Init the Frame containing our raw data
       AVFrame* frame;

       frame         = av_frame_alloc();
       frame->format = codecCtx->pix_fmt;
       frame->width  = codecCtx->width;
       frame->height = codecCtx->height;
       av_image_alloc(frame->data, frame->linesize, frame->width, frame->height, codecCtx->pix_fmt, 32);

       //Init the format context
       AVFormatContext* fmtCtx  = avformat_alloc_context();
       AVOutputFormat*  format  = av_guess_format("rtp", NULL, NULL);
       avformat_alloc_output_context2(&fmtCtx, format, format->name, "rtp://127.0.0.1:49990");

       avio_open(&fmtCtx->pb, fmtCtx->filename, AVIO_FLAG_WRITE);

       //Configure the AVStream for the output format context
       struct AVStream* stream      = avformat_new_stream(fmtCtx, codec);

       avcodec_parameters_from_context(stream->codecpar, codecCtx);
       stream->time_base.num        = 1;
       stream->time_base.den        = 25;

       /* Write the header */
       avformat_write_header(fmtCtx, NULL);

       /* Write a file for VLC */
       char buf[200000];
       AVFormatContext *ac[] = { fmtCtx };
       av_sdp_create(ac, 1, buf, 20000);
       printf("sdp:\n%s\n", buf);
       FILE* fsdp = fopen("test.sdp", "w");
       fprintf(fsdp, "%s", buf);
       fclose(fsdp);

       AVPacket pkt;
       int j = 0;
       for(int i = 0; i < 10000; i++)
       {
           fflush(stdout);
           av_init_packet(&pkt);
           pkt.data = NULL;    // packet data will be allocated by the encoder
           pkt.size = 0;

           int R, G, B;
           R = G = B = i % 255;

           int Y =  0.257 * R + 0.504 * G + 0.098 * B +  16;
           int U = -0.148 * R - 0.291 * G + 0.439 * B + 128;
           int V =  0.439 * R - 0.368 * G - 0.071 * B + 128;

           /* prepare a dummy image */
           /* Y */
           for (int y = 0; y < codecCtx->height; y++)
               for (int x = 0; x < codecCtx->width; x++)
                   frame->data[0][y * codecCtx->width + x] = Y;

           for (int y = 0; y < codecCtx->height/2; y++)
               for (int x=0; x < codecCtx->width / 2; x++)
               {
                   frame->data[1][y * frame->linesize[1] + x] = U;
                   frame->data[2][y * frame->linesize[2] + x] = V;
               }

           /* Which frame is it ? */
           frame->pts = i;

           /* Send the frame to the codec */
           avcodec_send_frame(codecCtx, frame);

           /* Retrieve the encoded packet from the codec */
           switch(avcodec_receive_packet(codecCtx, &pkt))
           {
               case AVERROR_EOF:
                   printf("Stream EOF\n");
                   break;

               case AVERROR(EAGAIN):
                   printf("Stream EAGAIN\n");
                   break;

               default:
                   printf("Write frame %3d (size=%5d)\n", j++, pkt.size);

                   /* Write the data on the packet to the output format  */
                   av_interleaved_write_frame(fmtCtx, &pkt);

                   /* Reset the packet */
                   av_packet_unref(&pkt);
                   break;
           }

           usleep(1e6/25);
       }

       // end
       avcodec_send_frame(codecCtx, NULL);

       //Free everything (contexts need their dedicated free functions, not av_free)
       avcodec_free_context(&codecCtx);
       avformat_free_context(fmtCtx);

       return 0;
    }

    With VLC I can see one image, but not a video (I have to reload it to see another image in grayscale).

    Does anyone know why VLC doesn't play my video properly? Thank you!

  • C++ FFmpeg create mp4 file

    1 February 2021, by DDovzhenko

    I'm trying to create an mp4 video file with FFmpeg and C++, but the result is a broken file (the Windows player shows "Can't play ... 0xc00d36c4"). If I create a .h264 file instead, it can be played with 'ffplay' and successfully converted to mp4 on the command line.

    My code:

    int main() {
        char *filename = "tmp.mp4";
        AVOutputFormat *fmt;
        AVFormatContext *fctx;
        AVCodecContext *cctx;
        AVStream *st;

        av_register_all();
        avcodec_register_all();

        //auto detect the output format from the name
        fmt = av_guess_format(NULL, filename, NULL);
        if (!fmt) {
            cout << "Error av_guess_format()" << endl; system("pause"); exit(1);
        }

        if (avformat_alloc_output_context2(&fctx, fmt, NULL, filename) < 0) {
            cout << "Error avformat_alloc_output_context2()" << endl; system("pause"); exit(1);
        }

        //stream creation + parameters
        st = avformat_new_stream(fctx, 0);
        if (!st) {
            cout << "Error avformat_new_stream()" << endl; system("pause"); exit(1);
        }

        st->codecpar->codec_id = fmt->video_codec;
        st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        st->codecpar->width = 352;
        st->codecpar->height = 288;
        st->time_base.num = 1;
        st->time_base.den = 25;

        AVCodec *pCodec = avcodec_find_encoder(st->codecpar->codec_id);
        if (!pCodec) {
            cout << "Error avcodec_find_encoder()" << endl; system("pause"); exit(1);
        }

        cctx = avcodec_alloc_context3(pCodec);
        if (!cctx) {
            cout << "Error avcodec_alloc_context3()" << endl; system("pause"); exit(1);
        }

        avcodec_parameters_to_context(cctx, st->codecpar);
        cctx->bit_rate = 400000;
        cctx->width = 352;
        cctx->height = 288;
        cctx->time_base.num = 1;
        cctx->time_base.den = 25;
        cctx->gop_size = 12;
        cctx->pix_fmt = AV_PIX_FMT_YUV420P;
        if (st->codecpar->codec_id == AV_CODEC_ID_H264) {
            av_opt_set(cctx->priv_data, "preset", "ultrafast", 0);
        }
        if (fctx->oformat->flags & AVFMT_GLOBALHEADER) {
            cctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        }
        avcodec_parameters_from_context(st->codecpar, cctx);

        av_dump_format(fctx, 0, filename, 1);

        //OPEN FILE + WRITE HEADER
        if (avcodec_open2(cctx, pCodec, NULL) < 0) {
            cout << "Error avcodec_open2()" << endl; system("pause"); exit(1);
        }
        if (!(fmt->flags & AVFMT_NOFILE)) {
            if (avio_open(&fctx->pb, filename, AVIO_FLAG_WRITE) < 0) {
                cout << "Error avio_open()" << endl; system("pause"); exit(1);
            }
        }
        if (avformat_write_header(fctx, NULL) < 0) {
            cout << "Error avformat_write_header()" << endl; system("pause"); exit(1);
        }

        //CREATE DUMMY VIDEO
        AVFrame *frame = av_frame_alloc();
        frame->format = cctx->pix_fmt;
        frame->width = cctx->width;
        frame->height = cctx->height;
        av_image_alloc(frame->data, frame->linesize, cctx->width, cctx->height, cctx->pix_fmt, 32);

        AVPacket pkt;
        double video_pts = 0;
        for (int i = 0; i < 50; i++) {
            video_pts = (double)cctx->time_base.num / cctx->time_base.den * 90 * i;

            for (int y = 0; y < cctx->height; y++) {
                for (int x = 0; x < cctx->width; x++) {
                    frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
                    if (y < cctx->height / 2 && x < cctx->width / 2) {
                        /* Cb and Cr */
                        frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
                        frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
                    }
                }
            }

            av_init_packet(&pkt);
            pkt.flags |= AV_PKT_FLAG_KEY;
            pkt.pts = frame->pts = video_pts;
            pkt.data = NULL;
            pkt.size = 0;
            pkt.stream_index = st->index;

            if (avcodec_send_frame(cctx, frame) < 0) {
                cout << "Error avcodec_send_frame()" << endl; system("pause"); exit(1);
            }
            if (avcodec_receive_packet(cctx, &pkt) == 0) {
                //cout << "Write frame " << to_string((int) pkt.pts) << endl;
                av_interleaved_write_frame(fctx, &pkt);
                av_packet_unref(&pkt);
            }
        }

        //DELAYED FRAMES
        for (;;) {
            avcodec_send_frame(cctx, NULL);
            if (avcodec_receive_packet(cctx, &pkt) == 0) {
                //cout << "-Write frame " << to_string((int)pkt.pts) << endl;
                av_interleaved_write_frame(fctx, &pkt);
                av_packet_unref(&pkt);
            }
            else {
                break;
            }
        }

        //FINISH
        av_write_trailer(fctx);
        if (!(fmt->flags & AVFMT_NOFILE)) {
            if (avio_close(fctx->pb) < 0) {
                cout << "Error avio_close()" << endl; system("pause"); exit(1);
            }
        }
        av_frame_free(&frame);
        avcodec_free_context(&cctx);
        avformat_free_context(fctx);

        system("pause");
        return 0;
    }

    Program output:

    Output #0, mp4, to 'tmp.mp4':
        Stream #0:0: Video: h264, yuv420p, 352x288, q=2-31, 400 kb/s, 25 tbn
    [libx264 @ 0000021c4a995ba0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 0000021c4a995ba0] profile Constrained Baseline, level 2.0
    [libx264 @ 0000021c4a995ba0] 264 - core 152 r2851 ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=12 keyint_min=1 scenecut=0 intra_refresh=0 rc=abr mbtree=0 bitrate=400 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
    [libx264 @ 0000021c4a995ba0] frame I:5     Avg QP: 7.03  size:  9318
    [libx264 @ 0000021c4a995ba0] frame P:45    Avg QP: 4.53  size:  4258
    [libx264 @ 0000021c4a995ba0] mb I  I16..4: 100.0%  0.0%  0.0%
    [libx264 @ 0000021c4a995ba0] mb P  I16..4:  0.0%  0.0%  0.0%  P16..4: 100.0%  0.0%  0.0%  0.0%  0.0%    skip: 0.0%
    [libx264 @ 0000021c4a995ba0] final ratefactor: 9.11
    [libx264 @ 0000021c4a995ba0] coded y,uvDC,uvAC intra: 18.9% 21.8% 14.5% inter: 7.8% 100.0% 15.5%
    [libx264 @ 0000021c4a995ba0] i16 v,h,dc,p:  4%  5%  5% 86%
    [libx264 @ 0000021c4a995ba0] i8c dc,h,v,p:  2%  9%  6% 82%
    [libx264 @ 0000021c4a995ba0] kb/s:264.68

    If I try to play the mp4 file with 'ffplay', it prints:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 00000000026bf900] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 352x288, 138953 kb/s): unspecified pixel format
    [h264 @ 00000000006c6ae0] non-existing PPS 0 referenced
    [h264 @ 00000000006c6ae0] decode_slice_header error
    [h264 @ 00000000006c6ae0] no frame!


    I've spent a lot of time without finding the issue. What could be causing it?


    Thanks for your help!

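    A plausible cause of exactly this ffplay output, offered as a sketch rather than a confirmed fix: with AV_CODEC_FLAG_GLOBAL_HEADER set, the encoder only fills extradata (the SPS/PPS) during avcodec_open2, so calling avcodec_parameters_from_context before opening the codec, as the code above does, leaves codecpar->extradata empty and the mp4 muxer writes an avc1 entry with no PPS. The ordering that avoids this:

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

// Sketch of the header-related ordering for an mp4 (global-header) mux.
// Everything else (dummy frames, trailer) stays as in the code above.
static int setup_stream(AVFormatContext* fctx, AVCodecContext* cctx,
                        const AVCodec* codec, AVStream* st) {
    if (fctx->oformat->flags & AVFMT_GLOBALHEADER)
        cctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    // 1. Open the encoder first: this is what populates cctx->extradata
    //    with the SPS/PPS that the mp4 muxer needs.
    int ret = avcodec_open2(cctx, codec, nullptr);
    if (ret < 0)
        return ret;

    // 2. Only now copy the parameters, so extradata reaches st->codecpar
    //    before avformat_write_header() runs.
    return avcodec_parameters_from_context(st->codecpar, cctx);
}
```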