Advanced search

Media (0)

Keyword: - Tags -/presse-papier

No media matching your criteria is available on the site.

Other articles (64)

  • Request to create a channel

    12 March 2010, by

    Depending on how the platform is configured, the user may have two different ways to request the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields, which first of all give the administrators information about (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for the hosts to have fairly regular backups available in order to guard against any problem that might arise.
    This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

On other sites (8244)

  • C++, FFmpeg, save AVPacket information into the VideoState structure in ffplay.c

    28 May 2015, by Yoohoo

    I am currently working on a project that tests video streaming. In the project, the video stream is encoded with H.264 before sending and decoded after receiving, using FFmpeg codecs and functions.

    I can encode the video stream with

    init_video_encode(AV_CODEC_ID_H264);

    where

    static void init_video_encode(AVCodecID codec_id){
        codec = avcodec_find_encoder(codec_id);
        if (!codec) {
            fprintf(stderr, "Codec not found\n");
            exit(1);
        }

        c = avcodec_alloc_context3(codec);
        if (!c) {
            fprintf(stderr, "Could not allocate video codec context\n");
            exit(1);
        }

        /* put sample parameters */
        c->bit_rate = 400000;
        /* resolution must be a multiple of two */
        c->width = 640;
        c->height = 480;
        /* frames per second */
        c->time_base = (AVRational){1, 25};
        c->gop_size = 10; /* emit one intra frame every ten frames */
        c->max_b_frames = max_f;
        c->pix_fmt = AV_PIX_FMT_YUV420P;

        if (codec_id == AV_CODEC_ID_H264)
            av_opt_set(c->priv_data, "preset", "slow", 0);

        /* open it */
        if (avcodec_open2(c, codec, NULL) < 0) {
            fprintf(stderr, "Could not open codec\n");
            exit(1);
        }

        frame = avcodec_alloc_frame();
        if (!frame) {
            fprintf(stderr, "Could not allocate video frame\n");
            exit(1);
        }
        frame->format = c->pix_fmt;
        frame->width  = c->width;
        frame->height = c->height;

        /* the image can be allocated by any means and av_image_alloc() is
         * just the most convenient way if av_malloc() is to be used */
        ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height,
                             c->pix_fmt, 32);
        if (ret < 0) {
            fprintf(stderr, "Could not allocate raw picture buffer\n");
            exit(1);
        }
        av_init_packet(&pkt);
    //}  /* init_video_encode apparently ends here; the per-frame encode code follows */

        pkt.data = NULL;    // packet data will be allocated by the encoder
        pkt.size = 0;
        if (count == 0)
            read_yuv420(frame->data[0]);
        count++;
        if (count == SUBSITY)
            count = 0;

        frame->pts = i++;

        /* encode the image */
        ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding frame\n");
            return -1;
        }
        if (got_output) {
            memcpy(inbufout + totalSize, pkt.data, pkt.size);
            totalSize += pkt.size;
    The video encoder works very well: if I write the encoded packets into a .h264 file, it can be played. But on the decoder side, I cannot decode the received packets with:

    codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }

    c = avcodec_alloc_context3(codec);
    if (!c) {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(1);
    }

    if (codec->capabilities & CODEC_CAP_TRUNCATED)
        c->flags |= CODEC_FLAG_TRUNCATED;

    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }

    frame = avcodec_alloc_frame();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }

    len = avcodec_decode_video2(c, frame, &got_frame, pkt);
    if (len < 0) {
        fprintf(stderr, "Error while decoding frame %d\n", *frame_count);
        return len;
    }

    The failure comes from the lack of a parser: a raw H.264 elementary stream must be split into complete NAL-unit packets before the decoder can consume it. I have tried to build a parser but failed...
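
    For reference, a minimal sketch of what such a parser loop might look like, using the same 2015-era API as the snippets above (data_size is a placeholder for the received byte count, not a name from the original code):

    AVCodecParserContext *parser = av_parser_init(AV_CODEC_ID_H264);
    uint8_t *data = inbuf;          /* received H.264 bytes */
    while (data_size > 0) {
        uint8_t *out_data = NULL;
        int out_size = 0;
        /* consume input; out_data/out_size are set once a whole packet is assembled */
        int n = av_parser_parse2(parser, c, &out_data, &out_size,
                                 data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        data      += n;
        data_size -= n;
        if (out_size > 0) {
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.data = out_data;
            pkt.size = out_size;
            len = avcodec_decode_video2(c, frame, &got_frame, &pkt);
            if (len < 0)
                break;
            /* if (got_frame) ... hand the frame to the display path ... */
        }
    }
    av_parser_close(parser);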

    Therefore I am wondering whether I can use ffplay.c as a header file in my receiver program, so that it serves as both decoder and player.

    I have taken a look at ffplay.c: it actually reads the file into a VideoState structure and processes that. The reading part starts at line 3188 of ffplay.c:

    VideoState *is;

    is = av_mallocz(sizeof(VideoState));
    if (!is)
        return NULL;
    av_strlcpy(is->filename, filename, sizeof(is->filename));
    is->iformat = iformat;
    is->ytop    = 0;
    is->xleft   = 0;

    /* start video display */
    if (frame_queue_init(&is->pictq, &is->videoq, VIDEO_PICTURE_QUEUE_SIZE, 1) < 0)
        goto fail;
    if (frame_queue_init(&is->subpq, &is->subtitleq, SUBPICTURE_QUEUE_SIZE, 0) < 0)
        goto fail;
    if (frame_queue_init(&is->sampq, &is->audioq, SAMPLE_QUEUE_SIZE, 1) < 0)
        goto fail;

    packet_queue_init(&is->videoq);
    packet_queue_init(&is->audioq);
    packet_queue_init(&is->subtitleq);

    is->continue_read_thread = SDL_CreateCond();

    init_clock(&is->vidclk, &is->videoq.serial);
    init_clock(&is->audclk, &is->audioq.serial);
    init_clock(&is->extclk, &is->extclk.serial);
    is->audio_clock_serial = -1;
    is->av_sync_type = av_sync_type;
    is->read_tid     = SDL_CreateThread(read_thread, is);
    if (!is->read_tid) {
fail:
        stream_close(is);
        return NULL;
    }

    Now, instead of reading a file, I want to modify the ffplay.c code so that it consumes the received packets instead. I can store a received packet in an AVPacket with:

    static AVPacket avpkt;
    avpkt.data = inbuf;
    avpkt.size = recv_size;  /* the size must be set too; recv_size is a placeholder for the received byte count */

    My question is: how do I put the AVPacket information into the VideoState structure?
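
    A minimal sketch of one possible approach, assuming the code lives inside a modified ffplay.c (so its static helpers such as packet_queue_put are in scope) and that inbuf/recv_size are placeholders for the network receive buffer: push each received packet onto the queue that read_thread normally fills:

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = inbuf;                     /* received H.264 bytes (placeholder)  */
    pkt.size = recv_size;                 /* valid byte count (placeholder)      */
    pkt.stream_index = is->video_stream;  /* must match the open video stream    */
    packet_queue_put(&is->videoq, &pkt);  /* ffplay.c's own queue helper         */

    The decoder thread would then pull packets from is->videoq exactly as it does for a file.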

  • ffmpeg C API - creating queue of frames

    25 May 2016, by lupod

    Using the C API of ffmpeg, I have created a C++ application that reads frames from a file and writes them to a new file. Everything works fine as long as I write the frames to the output immediately. In other words, the following structure of the program produces the correct result (I give only pseudocode for now; if needed I can also post some real snippets, but the classes I have created for handling the ffmpeg functionality are quite large):

    AVFrame* frame = av_frame_alloc();
    int got_frame;

    // readFrame returns 0 if the file has ended; got_frame = 1 if
    // a complete frame has been extracted
    while(readFrame(inputfile,frame, &got_frame)) {
     if (got_frame) {
       // I actually do some processing here
       writeFrame(outputfile,frame);
     }
    }
    av_frame_free(&frame);

    The next step was to parallelise the application; as a consequence, frames are no longer written immediately after they are read (I do not want to go into the details of the parallelisation). In this case problems arise: there is some flickering in the output, as if some frames were repeated randomly. However, the number of frames and the duration of the output video remain correct.

    What I am trying to do now is to completely separate reading from writing in the serial implementation, in order to understand what is going on. I am creating a queue of pointers to frames:

    std::queue<AVFrame*> queue;
    int ret = 1, got_frame;
    while (ret) {
        AVFrame* frame = av_frame_alloc();
        ret = readFrame(inputfile, frame, &got_frame);
        if (got_frame)
            queue.push(frame);
    }

    To write frames to the output file I do:

    while (!queue.empty()) {
        AVFrame* frame = queue.front();
        queue.pop();
        writeFrame(outputFile, frame);
        av_frame_free(&frame);
    }

    The result in this case is an output video with the correct duration and number of frames, but consisting only of a repetition of the last three (I think) frames of the video.

    My guess is that something goes wrong because in the first case I always reuse the same memory location for reading frames, while in the second case I allocate many different frames.
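
    If that guess is right, one hedged fix (my own sketch, not from the original post) is to give each queued frame its own reference to, or copy of, the pixel data, e.g. with av_frame_clone(), so that a later decode cannot overwrite it:

    std::queue<AVFrame*> queue;
    int ret = 1, got_frame;
    while (ret) {
        AVFrame* frame = av_frame_alloc();
        ret = readFrame(inputfile, frame, &got_frame);
        if (got_frame)
            queue.push(av_frame_clone(frame));  // new ref (or deep copy) of the data
        av_frame_free(&frame);                  // drops only this ref; the clone keeps its own
    }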

    Any suggestions on what the problem could be?

  • Capturing a PCM audio data stream into a file, and playing the stream via ffmpeg, how?

    11 April 2015, by icarus74

    I would like to do the following four things (separately), and need a bit of help understanding how to approach this.

    1. Dump audio data (from a serial-over-USB port), encoded as PCM, 16-bit, 8 kHz, little-endian, into a file (a plain binary data dump, not in any container format). Can this approach be used:

      $ cat /dev/ttyUSB0 > somefile.dat

    Can I press ^C to stop the file writing while the dump is in progress, as per the above command?

    2. Stream audio data (of the same kind as described above) directly into ffmpeg for it to play out? Like this:

      $ cat /dev/ttyUSB0 | ffmpeg

    or do I have to specify the device port as a "-source"? If so, I couldn’t figure out the format.

    Note that I’ve tried this,

    $ cat /dev/urandom | aplay

    which works as expected by playing out white noise..., but trying the following doesn’t help:

    $ cat /dev/ttyUSB1 | aplay -f S16_LE

    Even so, when opening /dev/ttyUSB1 with picocom at 115200 bps, 8-bit, no parity, I do see gibberish, indicating the presence of audio data, exactly when I expect it.

    3. Use the audio data dumped into the file as a source in ffmpeg? If so, how? So far I have the impression that ffmpeg only reads files in standard containers. (A sketch addressing items 1-3 follows after this list.)

    4. Use pre-recorded audio captured in any format (perhaps .mp3 or .wav) and stream it with ffmpeg into the /dev/ttyUSB0 device. Should I specify it as a "-sink" parameter, pipe into it, or redirect into it? Also, is it possible, in two terminal windows, to use ffmpeg to capture and transmit audio data from/into the same device /dev/ttyUSB0 simultaneously?
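
    As flagged in item 3, a hedged sketch (not verified against this device; the 8 kHz mono s16le parameters are taken from the question): ffmpeg and ffplay can read headerless PCM when the format is stated explicitly:

      # play the raw dump, or a live pipe, as signed 16-bit little-endian, 8 kHz mono
      $ ffplay -f s16le -ar 8000 -ac 1 -i somefile.dat
      $ cat /dev/ttyUSB0 | ffplay -f s16le -ar 8000 -ac 1 -i -

      # wrap the raw dump in a standard container
      $ ffmpeg -f s16le -ar 8000 -ac 1 -i somefile.dat somefile.wav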

    My knowledge of digital audio recording/processing formats and codecs is somewhat limited, so I am not sure whether what I am trying to do qualifies as working with ’raw’ audio.

    If ffmpeg is unable to do what I am hoping to achieve, could gstreamer be the solution?

    PS> If anyone thinks the answer could be improved, please feel free to suggest specific points. I would be happy to add any requested detail, provided I have the information.