Advanced search

Media (1)

Keyword: - Tags -/biographie

Other articles (111)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file available here contains only the sources of MediaSPIP in the standalone version.
    To get a working installation, you must manually install all of the software dependencies on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document in SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...) A sketch of this probing step follows the article list below.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file available here contains only the sources of MediaSPIP in the standalone version.
    As with the previous version, you must manually install all of the software dependencies on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
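
The stream-probing step described in "From upload to the final video" above can be sketched with libavformat. This is a minimal illustration only, not SPIPMotion's actual implementation, and the function name probe_media is hypothetical:

     #include <stdio.h>
     #include <libavformat/avformat.h>

     /* Open a media file and report the technical information of its
      * audio/video streams (codec, resolution, frame rate, bit rate). */
     int probe_media(const char *path)
     {
         AVFormatContext *fmt = NULL;

         av_register_all();   /* required by the ffmpeg versions of that era */
         if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
             return -1;
         if (avformat_find_stream_info(fmt, NULL) < 0) {
             avformat_close_input(&fmt);
             return -1;
         }
         av_dump_format(fmt, 0, path, 0);   /* print per-stream details */
         avformat_close_input(&fmt);
         return 0;
     }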

On other sites (8809)

  • getting frame using ffmpeg in android

    5 August 2014, by user2098010

    I want to get frames from an .mp4 file (recorded with the phone camera) using ffmpeg on Android.
    I have already compiled it and run ndk-build.
    My questions are:

    1. To get frames from the .mp4, what should I do?
      Encoding? Decoding?

    2. The encoded mp4 video file could not be played back in MediaPlayer.

    Please help.


     #include <stdio.h>
     #include <stdlib.h>
     #include <stdint.h>
     #include <android/log.h>
     #include <libavcodec/avcodec.h>
     #include <libavformat/avformat.h>
     #include <libswscale/swscale.h>
     #include <libavutil/opt.h>
     #include <libavutil/channel_layout.h>
     #include <libavutil/common.h>
     #include <libavutil/imgutils.h>
     #include <libavutil/mathematics.h>
     #include <libavutil/samplefmt.h>
     #include "player.h"

    void video_encode_example(const char *filename, int codec_id)
    {
       AVCodec *codec;
       AVCodecContext *c= NULL;
       int i, ret, x, y, got_output;
       FILE *f;
       AVFrame *frame;
       AVPacket pkt;
       uint8_t endcode[] = { 0, 0, 1, 0xb7 };

           __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "%s",filename);

       /* find the H.263 video encoder */
       codec = avcodec_find_encoder(CODEC_ID_H263);
       if (!codec) {
           __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "Codec not found");
           exit(1);
       }

       c = avcodec_alloc_context3(codec);
       if (!c) {
           __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "Could not allocate video codec context");
           exit(1);
       }

       /* put sample parameters */
       c->bit_rate = 400000;
       /* resolution must be a multiple of two */
       c->width = 352;
       c->height = 288;
       /* frames per second */
       c->time_base= (AVRational){1,25};
       c->gop_size = 10; /* emit one intra frame every ten frames */
       c->max_b_frames = 1;
       c->pix_fmt = AV_PIX_FMT_YUV420P;


      // if(codec == AV_CODEC_ID_H264)
         // av_opt_set(c->priv_data, "preset", "slow", 0);

       /* open it */
       if (avcodec_open2(c, codec, NULL) < 0) {
           __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "Could not open codec");
           exit(1);
       }

       f = fopen(filename, "wb");
       if (f == NULL) {
           __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "Could not open");
           exit(1);
       }

       frame = avcodec_alloc_frame();
       if (!frame) {
           __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "Could not allocate video frame");
           exit(1);
       }
       frame->format = c->pix_fmt;
       frame->width  = c->width;
       frame->height = c->height;

       /* the image can be allocated by any means and av_image_alloc() is
        * just the most convenient way if av_malloc() is to be used */
       ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height,
                            c->pix_fmt, 32);
       if (ret < 0) {
           __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "Could not allocate raw picture buffer");
           exit(1);
       }

       /* encode 1 second of video */
       for (i = 0; i < 250; i++) {
           av_init_packet(&pkt);
           pkt.data = NULL;    // packet data will be allocated by the encoder
           pkt.size = 0;

           fflush(stdout);
           /* prepare a dummy image */
           /* Y */
           for (y = 0; y < c->height; y++) {
               for (x = 0; x < c->width; x++) {
                   frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
               }
           }

           /* Cb and Cr */
           for (y = 0; y < c->height/2; y++) {
               for (x = 0; x < c->width/2; x++) {
                   frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
                   frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
               }
           }

           frame->pts = i;

           /* encode the image */
           ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
           if (ret < 0) {
               __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "Error encoding frame");
               exit(1);
           }

           if (got_output) {
               __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "encode the image Write frame pktsize  %d", pkt.size);
               fwrite(pkt.data, 1, pkt.size, f);
               av_free_packet(&pkt);
           }
       }

       /* get the delayed frames */
       for (got_output = 1; got_output; i++) {
           fflush(stdout);

           ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
           if (ret < 0) {
               __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "Error encoding frame");
               fprintf(stderr, "Error encoding frame\n");
               exit(1);
           }

           if (got_output) {
               __android_log_print(ANDROID_LOG_DEBUG, "BASEBALL", "get the delayed frames Write frame pktsize  %d", pkt.size);
               fwrite(pkt.data, 1, pkt.size, f);
               av_free_packet(&pkt);
           }
       }

       /* add sequence end code to have a real mpeg file */
       fwrite(endcode, 1, sizeof(endcode), f);
       fclose(f);

       avcodec_close(c);
       av_free(c);
       av_freep(&frame->data[0]);
       avcodec_free_frame(&frame);
    }
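
    Since the goal is to read frames out of an existing .mp4, decoding is the relevant direction; the code above only encodes a synthetic clip. Below is a minimal sketch using the same generation of the ffmpeg API as the question's code. It is an illustration under that assumption, not tested on the poster's build, and decode_frames is a hypothetical name:

     #include <libavformat/avformat.h>
     #include <libavcodec/avcodec.h>

     int decode_frames(const char *path)
     {
         AVFormatContext *fmt = NULL;
         AVCodecContext *dec;
         AVCodec *codec;
         AVFrame *frame;
         AVPacket pkt;
         unsigned i;
         int video_idx = -1, got_frame;

         av_register_all();
         if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
             return -1;
         if (avformat_find_stream_info(fmt, NULL) < 0)
             return -1;

         /* locate the video stream */
         for (i = 0; i < fmt->nb_streams; i++)
             if (fmt->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
                 video_idx = i;
         if (video_idx < 0)
             return -1;

         dec = fmt->streams[video_idx]->codec;
         codec = avcodec_find_decoder(dec->codec_id);
         if (!codec || avcodec_open2(dec, codec, NULL) < 0)
             return -1;

         frame = avcodec_alloc_frame();   /* av_frame_alloc() in later ffmpeg */
         while (av_read_frame(fmt, &pkt) >= 0) {
             if (pkt.stream_index == video_idx) {
                 avcodec_decode_video2(dec, frame, &got_frame, &pkt);
                 if (got_frame) {
                     /* frame->data / frame->linesize now hold one decoded
                      * picture (typically YUV420P): save or convert it here */
                 }
             }
             av_free_packet(&pkt);
         }
         avcodec_free_frame(&frame);
         avcodec_close(dec);
         avformat_close_input(&fmt);
         return 0;
     }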
  • Manual encoding into MPEG-TS

    4 July 2014, by Lane

    SO...

    I am trying to take an H264 Annex B byte stream video and encode it into MPEG-TS in pure Java. My goal is to create a minimal, valid, single-program MPEG-TS stream that does not include any timing information (PCR, PTS, DTS).

    I am currently at the point where my generated file can be passed to ffmpeg (ffmpeg -i myVideo.ts) and ffmpeg reports...

    [NULL @ 0x7f8103022600] start time is not set in estimate_timings_from_pts

    Input #0, mpegts, from 'video.ts':
    Duration: N/A, bitrate: N/A
    Program 1
     Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc

    ...it seems this warning about the start time is not a big deal... but ffmpeg is unable to determine how long the video is. If I create another MPEG-TS file from my video file (ffmpeg -i myVideo.ts -vcodec copy validVideo.ts) and run ffmpeg -i validVideo.ts, I get...

    Input #0, mpegts, from 'video2.ts':
    Duration: 00:00:11.61, start: 1.400000, bitrate: 3325 kb/s
    Program 1
     Metadata:
       service_name    : Service01
       service_provider: FFmpeg
     Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc

    ...so you can see that the timing information and bitrate are there, and so is the metadata.

    My H264 video consists of only I and P frames (with the SPS and PPS preceding the I Frame, of course), and the way that I am creating my MPEG-TS stream is...

    1. Write a single PAT at the beginning of the file
    2. Write a single PMT at the beginning of the file
    3. Create TS and PES packets from SPS, PPS and I Frame (AUD NALs too, if this is required?)
    4. Create TS and PES packets from P Frame (again, AUD NALs too, if required)
    5. For the last payload of either an I Frame or P Frame, add filler bytes to an adaptation field to make sure it fits into a full TS packet (see the C sketch after this list)
    6. Repeat 3-5 for the entire file
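
    To make steps 3-5 concrete, here is a minimal C sketch of the TS-level packetization (188-byte packets, 0x47 sync byte, stuffing through the adaptation field), with the field layout as in ISO/IEC 13818-1. It is an illustration only, not the poster's Java code, and write_ts_packet is a hypothetical name:

     #include <stdint.h>
     #include <string.h>

     #define TS_SIZE 188

     /* Wrap one chunk of PES data into a single 188-byte TS packet.
      * pid is the 13-bit stream PID, cc the 4-bit continuity counter,
      * and start marks the first packet of a PES. Payloads shorter than
      * 184 bytes are padded with 0xff stuffing in an adaptation field,
      * as in step 5 above. */
     static void write_ts_packet(uint8_t out[TS_SIZE],
                                 const uint8_t *payload, int len,
                                 int pid, int cc, int start)
     {
         int header = 4;

         out[0] = 0x47;                              /* sync byte */
         out[1] = (start ? 0x40 : 0x00) | ((pid >> 8) & 0x1f);
         out[2] = pid & 0xff;

         if (len >= TS_SIZE - 4) {                   /* payload only */
             out[3] = 0x10 | (cc & 0x0f);
             len = TS_SIZE - 4;
         } else {                                    /* adaptation field + payload */
             int af_len = TS_SIZE - 4 - 1 - len;     /* bytes after the length byte */
             out[3] = 0x30 | (cc & 0x0f);
             out[4] = af_len;
             if (af_len > 0) {
                 out[5] = 0x00;                      /* no flags: no PCR, as in the PMT above */
                 memset(out + 6, 0xff, af_len - 1);  /* stuffing */
             }
             header = 5 + af_len;
         }
         memcpy(out + header, payload, len);
     }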

    ...my PAT looks like this...

    4740 0010 0000 b00d 0001 c100 0000 01f0
    002a b104 b2ff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff

    ...and my PMT looks like this...

    4750 0010
    0002 b012 0001 c100 00ff fff0 001b e100
    f000 c15b 41e0 ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff

    ...notice that after the c100 00, the "ffff" (a PCR_PID of 0x1fff) says that we are not using a PCR. Also notice that I have updated my CRC to reflect this change to the PMT. My first I Frame packet looks like...

    4741 0010 0000 01e0
    0000 8000 0000 0000 0109 f000 0000 0127
    4d40 288d 8d60 2802 dd80 b501 0101 4000
    00fa 4000 3a98 3a18 00b7 2000 3380 2ef2
    e343 0016 e400 0670 05de 5c16 345d c000
    0000 0128 ee3c 8000 0000 0165 8880 0020
    0000 4fe5 63b5 4e90 b11c 9f8f f891 10f3
    13b1 666b 9fc6 03e9 e321 36bf 1788 347b
    eb23 fc89 5772 6e2e 1714 96df ed16 9b30
    252d ceb7 07e9 a0c7 c6e7 9515 be87 2df1
    81f3 b9d2 ba5f 243e 2d5c cba2 8ca5 b798
    6bec 8c43 0b5d bbda bc5b 6e7c e15c 84e8
    2f13 be84

    ...you'll notice that after the 01e0 0000, the 8000 00 is the PES header extension where I specify no PTS/DTS and a remaining header length of zero. My first P Frame packet looks like...

    4741 001d
    0000 01e0 0000 8000 0000 0000 0109 f000
    0000 0141 9a00 0200 0593 ff45 a7ae 1acd
    f2d7 f9ec 557f cdb6 ba38 60d6 a626 5edb
    4bb9 9783 89e2 d7e1 102e 4625 2fbf ce16
    f952 d8c9 f027 e55a 6b2a 81c3 48d4 6a45
    050a f355 fbec db01 6562 6405 04aa e011
    50ec 0b45 45e5 0df7 2fed a3f8 ac13 2e69
    6739 6d81 f13d 2455 e6ca 1c6b dc96 65d5
    3bad f250 7dab 42e4 7ba9 f564 ee61 29fb
    1b2c 974c 6924 1a1f 99ef 063c b99a c507
    8c22 b0f8 b14c 3e4d 01d0 6120 4e19 8725
    2fda 6550 f907 3f87

    ...and whenever an I Frame or P Frame is ending, I have a TS packet with an adaptation field like...

    4701 003c b000 ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff

    ...where the first 0xb0 bytes are the adaptation field stuffing bytes and the remaining ones are the final bytes of the I or P Frame. So, as you can tell, I can pass my file to ffmpeg and create a valid movie in any format. However, I need the file I create to be in the proper format itself, and I cannot quite figure out what the last piece I am missing is. Any ideas?
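
    For reference, the PES header the question describes (stream_id 0xe0, unbounded length, no PTS/DTS) corresponds byte for byte to the "00 00 01 e0 0000 8000 00" runs visible in the I and P Frame dumps above:

     #include <stdint.h>

     /* The 9-byte PES header used above: no optional fields at all. */
     static const uint8_t pes_header[9] = {
         0x00, 0x00, 0x01,   /* packet_start_code_prefix */
         0xe0,               /* stream_id: video stream 0 */
         0x00, 0x00,         /* PES_packet_length = 0 (unbounded; video only) */
         0x80,               /* '10' marker bits, no scrambling, no flags */
         0x00,               /* PTS_DTS_flags = 0: no PTS, no DTS */
         0x00                /* PES_header_data_length = 0 */
     };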

  • Convert opencv mat frame to ffmpeg AVFrame

    1 April 2015, by Yoohoo

    I am currently working on a C++ real-time video transmission project. I am using OpenCV to capture the webcam, and I want to convert the OpenCV Mat to an ffmpeg AVFrame to do the encoding and write it into a buffer. At the decoder side, I read packets from the buffer, decode them with ffmpeg, then convert the ffmpeg AVFrame back to an OpenCV Mat and play it.

    Now I have finished the OpenCV capturing, and I can encode a v4l2 source with ffmpeg, but what I want to do is replace the v4l2 source with the OpenCV Mat. But I get an error in the following code (I only show the conversion part):

       Mat opencvin;                    //frame from webcam
       cap.read(opencvin);

       Mat* opencvframe;
       opencvframe = &opencvin;
       AVFrame ffmpegout;
       Size frameSize = opencvframe->size();
       AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
       AVFormatContext* outContainer = avformat_alloc_context();
       AVStream *outStream = avformat_new_stream(outContainer, encoder);
       avcodec_get_context_defaults3(outStream->codec, encoder);

       outStream->codec->pix_fmt = AV_PIX_FMT_YUV420P;
       outStream->codec->width = opencvframe->cols;
       outStream->codec->height = opencvframe->rows;
       avpicture_fill((AVPicture*)&ffmpegout, opencvframe->data, PIX_FMT_BGR24, outStream->codec->width, outStream->codec->height);
       ffmpegout.width = frameSize.width;
       ffmpegout.height = frameSize.height;

    This is code I borrowed from the Internet; it seems the frame has already been encoded during the conversion, before I use

    static AVCodecContext *c= NULL;

    c = avcodec_alloc_context3(codec);
    if (!c) {
       fprintf(stderr, "Could not allocate video codec context\n");
       exit(1);
    }
    c->bit_rate = 400000;
    /* resolution must be a multiple of two */
    c->width = 640;
    c->height = 480;
    /* frames per second */
    c->time_base= (AVRational){1,25};
    c->gop_size = 10; /* emit one intra frame every ten frames */
    c->max_b_frames=15;
    c->pix_fmt = AV_PIX_FMT_YUV420P;

     ret = avcodec_encode_video2(c, &pkt, &ffmpegout, &got_output);

    to encode the frame. And I get a core dump if I go on to encode the converted frame.

    I want to encode the frame after the conversion so that I can keep the data in pkt. How can I get a pure converted ffmpeg frame from the OpenCV frame?

    Or, if the encoding has already been done during the conversion, how can I get the output into pkt?
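
    A common way to get a "pure" converted frame is to keep conversion and encoding separate: wrap the Mat's BGR24 bytes, allocate a YUV420P AVFrame, and convert with libswscale before calling avcodec_encode_video2(). The sketch below assumes the same era of the ffmpeg API as the code above; bgr24_to_yuv420p is a hypothetical name, and error paths leak allocations for brevity:

     #include <libavcodec/avcodec.h>
     #include <libswscale/swscale.h>
     #include <libavutil/imgutils.h>

     /* Convert one BGR24 image (e.g. cv::Mat::data with cv::Mat::step as
      * the stride) into a freshly allocated YUV420P AVFrame. */
     AVFrame *bgr24_to_yuv420p(const uint8_t *bgr, int stride, int w, int h)
     {
         const uint8_t *src_data[4] = { bgr, NULL, NULL, NULL };
         int src_linesize[4] = { stride, 0, 0, 0 };
         struct SwsContext *sws;
         AVFrame *yuv = avcodec_alloc_frame();   /* av_frame_alloc() today */

         if (!yuv)
             return NULL;
         yuv->format = AV_PIX_FMT_YUV420P;
         yuv->width  = w;
         yuv->height = h;
         if (av_image_alloc(yuv->data, yuv->linesize, w, h,
                            AV_PIX_FMT_YUV420P, 32) < 0)
             return NULL;

         sws = sws_getContext(w, h, AV_PIX_FMT_BGR24,
                              w, h, AV_PIX_FMT_YUV420P,
                              SWS_BILINEAR, NULL, NULL, NULL);
         if (!sws)
             return NULL;
         sws_scale(sws, (const uint8_t * const *)src_data, src_linesize,
                   0, h, yuv->data, yuv->linesize);
         sws_freeContext(sws);
         return yuv;   /* safe to pass to avcodec_encode_video2() */
     }

    Note that avpicture_fill() only points an AVPicture at the existing BGR bytes; it performs no pixel format conversion, which is why encoding that data as if it were YUV420P crashes.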