
Other articles (26)

  • Accepted formats

    28 January 2010

    The following commands provide information on the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, no software is ever perfect.
    If you think you have found a bug, report it using our ticket system, and help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; a link to the site/page in question.
    If you think you have fixed the bug, open a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (4915)

  • why ffmpeg frame to opengles texture is black

    5 June 2012, by joe

    I'm trying to convert a frame from a video decoded with ffmpeg into an OpenGL ES texture in JNI, but I just get a black texture. I have checked the OpenGL side with glGetError(), and no error is reported.
    Here is my code :

    void *pixels;
    int err;
    int i;
    int frameFinished = 0;
    AVPacket packet;
    static struct SwsContext *img_convert_ctx;
    static struct SwsContext *scale_context = NULL;
    int64_t seek_target;

    int target_width = 320;
    int target_height = 240;
    GLenum error = GL_NO_ERROR;
    sws_freeContext(img_convert_ctx);  

    i = 0;
    while((i==0) && (av_read_frame(pFormatCtx, &packet)>=0)) {
       if(packet.stream_index==videoStream) {
           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

           if(frameFinished) {
               LOGI("packet pts %llu", packet.pts);
               img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                      pCodecCtx->pix_fmt,
                      target_width, target_height, PIX_FMT_RGB24, SWS_BICUBIC,
                      NULL, NULL, NULL);
               if(img_convert_ctx == NULL) {
                   LOGE("could not initialize conversion context\n");
                   return;
               }
               sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
               LOGI("sws_scale");

               videoTextures = new Texture*[1];
               videoTextures[0]->mWidth = 256; //(unsigned)pCodecCtx->width;
               videoTextures[0]->mHeight = 256; //(unsigned)pCodecCtx->height;
               videoTextures[0]->mData = pFrameRGB->data[0];

               glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

               glGenTextures(1, &(videoTextures[0]->mTextureID));
               glBindTexture(GL_TEXTURE_2D, videoTextures[0]->mTextureID);
               glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
               glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

               if(0 == got_texture)
               {
                   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, videoTextures[0]->mWidth, videoTextures[0]->mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);

                   glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, videoTextures[0]->mWidth, videoTextures[0]->mHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);
               }else
               {
                   glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, videoTextures[0]->mWidth, videoTextures[0]->mHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);

               }

               i = 1;
               error = glGetError();
               if( error != GL_NO_ERROR ) {
                   LOGE("couldn't create texture!!");
                      switch (error) {
                       case GL_INVALID_ENUM:
                       LOGE("GL Error: Enum argument is out of range");
                       break;
                       case GL_INVALID_VALUE:
                           LOGE("GL Error: Numeric value is out of range");
                       break;
                       case GL_INVALID_OPERATION:
                           LOGE("GL Error: Operation illegal in current state");
                       break;
                       case GL_OUT_OF_MEMORY:
                           LOGE("GL Error: Not enough memory to execute command");
                       break;
                       default:
                           break;
                      }
               }
           }
       }
       av_free_packet(&packet);
    }

    I succeeded in converting pFrameRGB to a Java bitmap, but I want to convert it to a texture in the C code.

  • why is streaming to an flv server with ffmpeg only showing black/blank video for clients ?

    6 September 2012, by rogerdpack

    When I stream to an FLV server, such as Flash Media Server, using ffmpeg, like

    ffmpeg -i input -vcodec libx264 rtmp://hostname/streamname

    it turns out black. Why is that?

  • getting black and white image after encoding

    16 May 2012, by user1310596

    I am trying to encode an image using the ffmpeg library from Objective-C. I am using the following code, but the resulting image is black and white. Does it have something to do with the pixel format (PIX_FMT)? Please help me get a colored image.

    av_register_all();
    avcodec_init();
    avcodec_register_all();
    avformat_alloc_context();

    AVCodec *codec;
    AVCodecContext *ctx= NULL;
    int out_size, size, outbuf_size;
    AVFrame *picture;
    uint8_t *outbuf;
    unsigned char *flvdata = malloc(sizeof(unsigned char) * 30);


    outbuf_size = 200000;
    outbuf = malloc(outbuf_size);


    printf("Video encoding\n");

    codec = avcodec_find_encoder(CODEC_ID_FLV1);
    if (!codec) {
           fprintf(stderr, "codec not found\n");
           exit(1);
    }

    ctx= avcodec_alloc_context();
    picture= avcodec_alloc_frame();


    ctx->width = 320;
    ctx->height = 240;
    ctx -> sample_rate = 11025;
    ctx -> time_base.den = 1000;
    ctx -> time_base.num = 23976;
    ctx -> codec_id = CODEC_ID_FLV1;
    ctx -> codec_type = CODEC_TYPE_VIDEO;
    ctx->pix_fmt = PIX_FMT_YUV420P;

    if (avcodec_open(ctx, codec) < 0) {
           fprintf(stderr, "could not open codec\n");
           exit(1);
    }

    outbuf_size = 100000;
    outbuf = malloc(outbuf_size);
    size = ctx->width * ctx->height;

    AVFrame* outpic = avcodec_alloc_frame();
    int nbytes = avpicture_get_size(PIX_FMT_YUV420P, ctx->width, ctx->height);

    uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

    fflush(stdout);

    int numBytes = avpicture_get_size(PIX_FMT_YUV420P, ctx->width, ctx->height);

    UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"0.jpg"]];
    CGImageRef newCgImage = [image CGImage];

    CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
    long dataLength = CFDataGetLength(bitmapData);

    uint8_t *buffer = (uint8_t *)av_malloc(dataLength);
    buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);

    for(int i = 0; i < dataLength; i++)
    {
           if((i + 1) % 16 == 1 && i != 1)
                   printf("\n");
           printf("%X\t", buffer[i]); // getting something different than the actual hex value of the image
    }



    outpic -> pts = 0;        

    avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGB8, ctx->width, ctx->height);

    avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, ctx->width, ctx->height);

    struct SwsContext* fooContext = sws_getContext(ctx->width, ctx->height,
                                                          PIX_FMT_RGB8,
                                                          ctx->width, ctx->height,
                                                          PIX_FMT_YUV420P,
                                                          SWS_FAST_BILINEAR, NULL, NULL, NULL);

    sws_scale(fooContext, picture->data, picture->linesize, 0, ctx->height, outpic->data, outpic->linesize);

    printf("abcdefghijklmnop");
    out_size = avcodec_encode_video(ctx, outbuf, outbuf_size, outpic);
    printf("\n\n out_size %d   outbuf_size %d",out_size,outbuf_size);

    Thanks in advance