
Other articles (110)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    After it is activated, a preconfiguration is automatically put in place by MediaSPIP init, making the new feature operational right away. It is therefore not necessary to go through a configuration step for this.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (13945)

  • C++, FFmpeg, save AVPacket information into VideoState structure in ffplay.c

    28 May 2015, by Yoohoo

    I am currently working on a project that tests video streaming. In this project, the video stream is encoded with H.264 before sending and decoded after receiving, using FFmpeg codecs and functions.

    I can encode the video stream with

    init_video_encode(AV_CODEC_ID_H264);

    where

    static void init_video_encode(AVCodecID codec_id){
        codec = avcodec_find_encoder(codec_id);
        if (!codec) {
            fprintf(stderr, "Codec not found\n");
            exit(1);
        }

        c = avcodec_alloc_context3(codec);
        if (!c) {
            fprintf(stderr, "Could not allocate video codec context\n");
            exit(1);
        }

        /* put sample parameters */
        c->bit_rate = 400000;
        /* resolution must be a multiple of two */
        c->width = 640;
        c->height = 480;
        /* frames per second */
        c->time_base = (AVRational){1, 25};
        c->gop_size = 10; /* emit one intra frame every ten frames */
        c->max_b_frames = max_f;
        c->pix_fmt = AV_PIX_FMT_YUV420P;

        if (codec_id == AV_CODEC_ID_H264)
            av_opt_set(c->priv_data, "preset", "slow", 0);

        /* open it */
        if (avcodec_open2(c, codec, NULL) < 0) {
            fprintf(stderr, "Could not open codec\n");
            exit(1);
        }

        frame = avcodec_alloc_frame();
        if (!frame) {
            fprintf(stderr, "Could not allocate video frame\n");
            exit(1);
        }
        frame->format = c->pix_fmt;
        frame->width  = c->width;
        frame->height = c->height;

        /* the image can be allocated by any means and av_image_alloc() is
         * just the most convenient way if av_malloc() is to be used */
        ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height,
                             c->pix_fmt, 32);
        if (ret < 0) {
            fprintf(stderr, "Could not allocate raw picture buffer\n");
            exit(1);
        }

        av_init_packet(&pkt);
    }

    /* per-frame encoding, from the sending loop */
    pkt.data = NULL;    // packet data will be allocated by the encoder
    pkt.size = 0;

    if (count == 0)
        read_yuv420(frame->data[0]);
    count++;
    if (count == SUBSITY)
        count = 0;

    frame->pts = i++;

    /* encode the image */
    ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
    if (ret < 0) {
        fprintf(stderr, "Error encoding frame\n");
        return -1;
    }
    if (got_output) {
        /* append the encoded packet to the send buffer */
        memcpy(inbufout + totalSize, pkt.data, pkt.size);
        totalSize += pkt.size;
    }

    The video encoder works very well: if I write the encoded packets into a .h264 file, it can be played. But on the decoder side, I cannot decode the received packets with:

    codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }

    c = avcodec_alloc_context3(codec);
    if (!c) {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(1);
    }

    if (codec->capabilities & CODEC_CAP_TRUNCATED)
        c->flags |= CODEC_FLAG_TRUNCATED;

    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }

    frame = avcodec_alloc_frame();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }

    len = avcodec_decode_video2(c, frame, &got_frame, pkt);
    if (len < 0) {
        fprintf(stderr, "Error while decoding frame %d\n", *frame_count);
        return len;
    }

    The reason for the failure is the lack of a parser; I have tried to build a parser but failed.
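
    For reference, the standard way to split a raw H.264 byte stream into decodable packets is libavcodec's parser API. A minimal sketch, assuming inbuf/inbuf_len hold the received bytes (hypothetical names) and c is the opened decoder context from above:

    AVCodecParserContext *parser = av_parser_init(AV_CODEC_ID_H264);
    AVPacket parsed_pkt;
    uint8_t *data = inbuf;
    int remaining = inbuf_len;

    av_init_packet(&parsed_pkt);
    while (remaining > 0) {
        uint8_t *out_data = NULL;
        int out_size = 0;
        /* consume input bytes; out_size > 0 when a full packet is ready */
        int used = av_parser_parse2(parser, c, &out_data, &out_size,
                                    data, remaining,
                                    AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        data += used;
        remaining -= used;
        if (out_size > 0) {
            parsed_pkt.data = out_data;
            parsed_pkt.size = out_size;
            len = avcodec_decode_video2(c, frame, &got_frame, &parsed_pkt);
        }
    }
    av_parser_close(parser);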

    Therefore I am wondering about using ffplay.c as a header file in my receiver program, so that I can use it as the decoder and player.

    I have taken a look at ffplay.c; it actually fetches the file into a VideoState structure and processes it. The fetching part starts at line 3188 of ffplay.c:

    VideoState *is;

    is = av_mallocz(sizeof(VideoState));
    if (!is)
        return NULL;
    av_strlcpy(is->filename, filename, sizeof(is->filename));
    is->iformat = iformat;
    is->ytop    = 0;
    is->xleft   = 0;

    /* start video display */
    if (frame_queue_init(&is->pictq, &is->videoq, VIDEO_PICTURE_QUEUE_SIZE, 1) < 0)
        goto fail;
    if (frame_queue_init(&is->subpq, &is->subtitleq, SUBPICTURE_QUEUE_SIZE, 0) < 0)
        goto fail;
    if (frame_queue_init(&is->sampq, &is->audioq, SAMPLE_QUEUE_SIZE, 1) < 0)
        goto fail;

    packet_queue_init(&is->videoq);
    packet_queue_init(&is->audioq);
    packet_queue_init(&is->subtitleq);

    is->continue_read_thread = SDL_CreateCond();

    init_clock(&is->vidclk, &is->videoq.serial);
    init_clock(&is->audclk, &is->audioq.serial);
    init_clock(&is->extclk, &is->extclk.serial);
    is->audio_clock_serial = -1;
    is->av_sync_type = av_sync_type;
    is->read_tid     = SDL_CreateThread(read_thread, is);
    if (!is->read_tid) {
    fail:
        stream_close(is);
        return NULL;
    }

    Now, instead of fetching a file, I want to modify the ffplay.c code so that it fetches the received packets. I can save a received packet into an AVPacket with

    static AVPacket avpkt;
    avpkt.data = inbuf;

    My question is: how do I put the AVPacket information into the VideoState structure?
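
    One plausible route, as a sketch only: since ffplay.c's packet queue helpers are static to that file, the receiving code would have to live inside ffplay.c (or the helpers be exposed). Assuming `is` was created by stream_open(), and inbuf/received_len are filled by a hypothetical network receiver:

    /* Sketch: hand a received packet to ffplay's video packet queue.
     * packet_queue_put() is a static helper inside ffplay.c. */
    AVPacket avpkt;
    av_init_packet(&avpkt);
    avpkt.data = inbuf;                    /* bytes from the receiver */
    avpkt.size = received_len;             /* hypothetical length variable */
    avpkt.stream_index = is->video_stream; /* must match the video stream */
    packet_queue_put(&is->videoq, &avpkt);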

  • ffmpeg C API - creating queue of frames

    25 May 2016, by lupod

    I have created, using the C API of ffmpeg, a C++ application that reads frames from a file and writes them to a new file. Everything works fine as long as I write the frames to the output immediately. In other words, the following structure of the program outputs the correct result (I give only pseudocode for now; if needed I can also post some real snippets, but the classes I have created for handling the ffmpeg functionality are quite large):

    AVFrame* frame = av_frame_alloc();
    int got_frame;

     // readFrame returns 0 when the file has ended; got_frame is set to 1
     // when a complete frame has been extracted
    while(readFrame(inputfile,frame, &got_frame)) {
     if (got_frame) {
       // I actually do some processing here
       writeFrame(outputfile,frame);
     }
    }
    av_frame_free(&frame);

    The next step was to parallelize the application; as a consequence, frames are no longer written immediately after they are read (I do not want to go into the details of the parallelization). In this case problems arise: there is some flickering in the output, as if some frames were repeated randomly. However, the number of frames and the duration of the output video remain correct.

    What I am trying to do now is to completely separate the reading from the writing in the serial implementation, in order to understand what is going on. I am creating a queue of pointers to frames:

     std::queue<AVFrame*> queue;
    int ret = 1, got_frame;
    while (ret) {
     AVFrame* frame = av_frame_alloc();
     ret = readFrame(inputfile,frame,&got_frame);
     if (got_frame)
       queue.push(frame);
    }

    To write the frames to the output file I do:

    while (!queue.empty()) {
     frame = queue.front();
     queue.pop();
     writeFrame(outputFile,frame);
     av_frame_free(&frame);
    }

    The result in this case is an output video with the correct duration and number of frames that is only a repetition of the last 3 (I think) frames of the video.

    My guess is that something goes wrong because in the first case I always reuse the same memory location for reading frames, while in the second case I allocate many different frames.
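
    That guess matches how the old decode API behaves: with avcodec_decode_video2() and the decoder's refcounted_frames field left at 0, the output frame points into codec-owned buffers that later calls overwrite. A sketch of the reading loop with an explicit copy, assuming readFrame() wraps that API:

     // Clone each decoded frame before queueing it, so the queued frames
     // own their data instead of pointing into codec-owned buffers.
     AVFrame* frame = av_frame_alloc();
     int ret = 1, got_frame;
     while (ret) {
       ret = readFrame(inputfile, frame, &got_frame);
       if (got_frame) {
         AVFrame* owned = av_frame_clone(frame); // refs or deep-copies the data
         if (owned)
           queue.push(owned);
         av_frame_unref(frame); // release before the next decode reuses it
       }
     }
     av_frame_free(&frame);

    Alternatively, setting refcounted_frames to 1 on the decoder context before avcodec_open2() makes each output frame carry its own reference, so newly allocated frames can be queued directly.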

    Any suggestions on what the problem could be?

  • Capturing a PCM audio data stream into a file, and playing the stream via ffmpeg: how?

    11 April 2015, by icarus74

    I would like to do the following four things (separately), and need a bit of help understanding how to approach this.

    1. Dump audio data (from a serial-over-USB port), encoded as PCM, 16-bit, 8 kHz, little-endian, into a file (a plain binary data dump, not in any container format). Can this approach be used:

      $ cat /dev/ttyUSB0 > somefile.dat

    Can I press ^C to stop the file writing while the dump is in progress, as per the above command?

    2. Stream audio data (of the same kind as described above) directly into ffmpeg for it to play out? Like this:

      $ cat /dev/ttyUSB0 | ffmpeg

    Or do I have to specify the device port as a "-source"? If so, I couldn't figure out the format.

    Note that I've tried this,

    $ cat /dev/urandom | aplay

    which works as expected, playing out white noise..., but trying the following doesn't help:

    $ cat /dev/ttyUSB1 | aplay -f S16_LE

    That said, when opening /dev/ttyUSB1 with picocom @ 115200 bps, 8-bit, no parity, I do see gibberish, indicating the presence of audio data, exactly when I expect it.

    3. Use the audio data dumped into the file as a source in ffmpeg? If so, how? So far I have the impression that ffmpeg can only read files in standard container formats. (See the sketch after this list.)

    4. Use pre-recorded audio captured in any format (perhaps .mp3 or .wav) to be streamed by ffmpeg into the /dev/ttyUSB0 device. Should I be using this as a "-sink" parameter, pipe into it, or redirect into it? Also, is it possible to use ffmpeg in two terminal windows to capture and transmit audio data from/into the same device /dev/ttyUSB0 simultaneously?
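
    On point 3: ffmpeg can read headerless PCM if the format is stated explicitly on the command line, since it cannot be probed from a raw dump. A sketch, assuming the dump described above is mono (adjust -ac to the actual channel count):

      $ ffmpeg -f s16le -ar 8000 -ac 1 -i somefile.dat somefile.wav
      $ ffplay -f s16le -ar 8000 -ac 1 somefile.dat

    The same format flags also cover point 2 when piping, reading from stdin via "-", e.g. cat /dev/ttyUSB0 | ffplay -f s16le -ar 8000 -ac 1 -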

    My knowledge of digital audio recording/processing formats and codecs is somewhat limited, so I am not sure whether what I am trying to do qualifies as working with 'raw' audio.

    If ffmpeg is unable to do what I am hoping to achieve, could gstreamer be the solution?

    PS> If anyone thinks that the answer could be improved, please feel free to suggest specific points. I would be happy to add any detail requested, provided I have the information.