
Other articles (37)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents a few of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
      • images: png, gif, jpg, bmp and more
      • audio: MP3, Ogg, Wav and more
      • video: AVI, MP4, OGV, mpg, mov, wmv and more
      • text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (6089)

  • iPhone camera video capture with AVCaptureSession and FFmpeg: converting CMSampleBufferRef to H.264 is the issue. Please advise

    4 January 2012, by isaiah

    My goal is H.264/AAC MPEG2-TS streaming from an iPhone device to a server.

    Currently my source builds successfully with FFmpeg + libx264. I know about the GNU license. I want a demo program.

    I want to know:

    1. Is the CMSampleBufferRef to AVPicture conversion succeeding?

    avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);

    pFrame's linesize and data are not null, but its pts is -9233123123, and outpic's too.
    Because of this I guess I am getting the 'non-strictly-monotonic PTS' message.

    2. This log repeats:

    encoding frame (size= 0)
    encoding frame = ""

    avcodec_encode_video returning 0 counts as success, but it always returns 0.

    I don't know what to do...

    2011-06-01 15:15:14.199 AVCam[1993:7303] pFrame = avcodec_alloc_frame();
    2011-06-01 15:15:14.207 AVCam[1993:7303] avpicture_fill = 1228800
    Video encoding
    2011-06-01 15:15:14.215 AVCam[1993:7303] codec = 5841844
    [libx264 @ 0x1441e00] using cpu capabilities: ARMv6 NEON
    [libx264 @ 0x1441e00] profile Constrained Baseline, level 2.0
    [libx264 @ 0x1441e00] non-strictly-monotonic PTS
    encoding frame (size=    0)
    encoding frame
    [libx264 @ 0x1441e00] final ratefactor: 26.74

    3. I guess the 'non-strictly-monotonic PTS' message is the cause of all the problems.
    What is this 'non-strictly-monotonic PTS'? (See the note after the source listing below.)

    This is the source:

    - (void)      captureOutput:(AVCaptureOutput *)captureOutput
           didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
                  fromConnection:(AVCaptureConnection *)connection
    {

       if( !CMSampleBufferDataIsReady(sampleBuffer) )
       {
           NSLog( @"sample buffer is not ready. Skipping sample" );
           return;
       }


       if( [isRecordingNow isEqualToString:@"YES"] )
       {
           lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
           if( videoWriter.status != AVAssetWriterStatusWriting  )
           {
               [videoWriter startWriting];
               [videoWriter startSessionAtSourceTime:lastSampleTime];
           }

           if( captureOutput == videooutput )
           {
               [self newVideoSample:sampleBuffer];

               CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
               CVPixelBufferLockBaseAddress(pixelBuffer, 0);

               // access the data
               int width = CVPixelBufferGetWidth(pixelBuffer);
               int height = CVPixelBufferGetHeight(pixelBuffer);
               unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

               AVFrame *pFrame;
               pFrame = avcodec_alloc_frame();
               pFrame->quality = 0;

               NSLog(@"pFrame = avcodec_alloc_frame(); ");

    //          int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

    //          int bytesSize = height * bytesPerRow ;  

    //          unsigned char *pixel = (unsigned char*)malloc(bytesSize);

    //          unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

    //          memcpy (pixel, rowBase, bytesSize);


               int avpicture_fillNum = avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);//PIX_FMT_RGB32//PIX_FMT_RGB8
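               // NOTE: avpicture_fill() only points the AVPicture's data/linesize
               // fields at the raw buffer; it never touches pts, so pFrame->pts
               // keeps whatever uninitialized value it had.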
               //NSLog(@"rawPixelBase = %i , rawPixelBase -s = %s",rawPixelBase, rawPixelBase);
               NSLog(@"avpicture_fill = %i",avpicture_fillNum);
               //NSLog(@"width = %i,height = %i",width, height);



               // Do something with the raw pixels here

               CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

               //avcodec_init();
               //avdevice_register_all();
               av_register_all();
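               // NOTE: av_register_all() and the codec/context setup below run on
               // every captured frame; they only need to happen once at startup.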





               AVCodec *codec;
               AVCodecContext *c= NULL;
               int  out_size, size, outbuf_size;
               //FILE *f;
               uint8_t *outbuf;

               printf("Video encoding\n");

               /* find the mpeg video encoder */
               codec =avcodec_find_encoder(CODEC_ID_H264);//avcodec_find_encoder_by_name("libx264"); //avcodec_find_encoder(CODEC_ID_H264);//CODEC_ID_H264);
               NSLog(@"codec = %i",codec);
               if (!codec) {
                   fprintf(stderr, "codec not found\n");
                   exit(1);
               }

               c= avcodec_alloc_context();

               /* put sample parameters */
               c->bit_rate = 400000;
               c->bit_rate_tolerance = 10;
               c->me_method = 2;
               /* resolution must be a multiple of two */
               c->width = 352;//width;//352;
               c->height = 288;//height;//288;
               /* frames per second */
               c->time_base= (AVRational){1,25};
               c->gop_size = 10;//25; /* emit one intra frame every ten frames */
               //c->max_b_frames=1;
               c->pix_fmt = PIX_FMT_YUV420P;

               c ->me_range = 16;
               c ->max_qdiff = 4;
               c ->qmin = 10;
               c ->qmax = 51;
               c ->qcompress = 0.6f;

               /* open it */
               if (avcodec_open(c, codec) < 0) {
                   fprintf(stderr, "could not open codec\n");
                   exit(1);
               }


               /* alloc image and output buffer */
               outbuf_size = 100000;
               outbuf = malloc(outbuf_size);
               size = c->width * c->height;

               AVFrame* outpic = avcodec_alloc_frame();
               int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

               //create buffer for the output image
               uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

    #pragma mark -  

               fflush(stdout);

    //          int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
    //          uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
    //          
    //          //UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"10%d", i]];
    //          CGImageRef newCgImage = [self imageFromSampleBuffer:sampleBuffer];//[image CGImage];
    //          
    //          CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
    //          CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
    //          buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);  
    //          
    //          avpicture_fill((AVPicture*)pFrame, buffer, PIX_FMT_RGB8, c->width, c->height);
               avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

               struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                                                              PIX_FMT_RGB8,
                                                              c->width, c->height,
                                                              PIX_FMT_YUV420P,
                                                              SWS_FAST_BILINEAR, NULL, NULL, NULL);
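               // NOTE: this context converts from PIX_FMT_RGB8, but pFrame was
               // filled above with PIX_FMT_RGB32; the mismatched source format
               // corrupts the conversion.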

               //perform the conversion
               sws_scale(fooContext, pFrame->data, pFrame->linesize, 0, c->height, outpic->data, outpic->linesize);
               // Here is where I try to convert to YUV

               /* encode the image */
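               // NOTE: outpic->pts is never assigned, so every frame reaches
               // libx264 with the same unset pts; that is what triggers the
               // 'non-strictly-monotonic PTS' warning.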

               out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
               printf("encoding frame (size=%5d)\n", out_size);
               printf("encoding frame %s\n", outbuf);


               //fwrite(outbuf, 1, out_size, f);

               //              free(buffer);
               //              buffer = NULL;      



               /* add sequence end code to have a real mpeg file */
    //          outbuf[0] = 0x00;
    //          outbuf[1] = 0x00;
    //          outbuf[2] = 0x01;
    //          outbuf[3] = 0xb7;
               //fwrite(outbuf, 1, 4, f);
               //fclose(f);
               free(outbuf);

               avcodec_close(c);
               av_free(c);
               av_free(pFrame);
               printf("\n");
            }
        }
    }
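
    On point 3: libx264 warns 'non-strictly-monotonic PTS' when successive frames do not carry strictly increasing presentation timestamps, and in the code above outpic->pts is never set before avcodec_encode_video(). A minimal sketch of one common fix, assuming frames arrive at the c->time_base rate of 1/25 configured above (frameCount is a hypothetical counter, not part of the original code):

               /* Hedged sketch: give each frame a strictly increasing pts
                  before encoding. frameCount is a hypothetical per-session
                  counter, incremented once per captured frame. */
               static int64_t frameCount = 0;
               outpic->pts = frameCount++;   /* in units of c->time_base, i.e. 1/25 s */
               out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);

    Note also that avcodec_encode_video() returns the number of bytes written to outbuf, so early calls returning 0 are normal while the encoder buffers frames internally; a return of 0 does not by itself mean the encode failed.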
  • Streaming camera-captured video to another location

    28 February 2012, by Fei Su

    I am using a Kinect camera to capture images and compress them into a video saved to disk (Python OpenCV). How can I stream and display it at another location? The video is made in real time, and I also need to display and stream it in real time.

  • Resize video using FFmpeg C API

    28 February 2012, by cr_az

    I am trying to resize a video file on an Android device using FFmpeg. Remark: I can't use FFmpeg as a binary; in my case it has to be used as a shared library (only the FFmpeg C API is accessible).

    I did not find any documentation regarding video resizing, but it looks like the algorithm is the following:

    10 OPEN video_stream FROM video.mp4
    20 READ packet FROM video_stream INTO frame
    30 IF frame NOT COMPLETE GOTO 20
    40 RESIZE frame
    50 WRITE frame TO converted_video.mp4
    60 GOTO 20

    Should I use the sws_scale function to resize each frame? Is there any other (easier?) way to resize a video file using the FFmpeg C API?
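
    Yes; sws_scale is the standard FFmpeg C-API way to resize decoded frames, and there is no single "resize the whole file" call: the open → decode → rescale → encode → write loop sketched in the pseudocode above is essentially what the ffmpeg binary itself does. A minimal sketch of the per-frame rescale step, assuming an already-opened decoder context dec, a decoded frame frame, and target dimensions dst_w and dst_h (all variable names are illustrative):

    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    /* One-time setup: a rescaling context from the decoder's size and
       pixel format to the target size, converting to YUV420P for the
       encoder. */
    struct SwsContext *sws = sws_getContext(dec->width, dec->height, dec->pix_fmt,
                                            dst_w, dst_h, PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);

    /* A destination frame backed by its own buffer, sized for the
       target dimensions. */
    AVFrame *scaled = avcodec_alloc_frame();
    uint8_t *buf = av_malloc(avpicture_get_size(PIX_FMT_YUV420P, dst_w, dst_h));
    avpicture_fill((AVPicture *)scaled, buf, PIX_FMT_YUV420P, dst_w, dst_h);

    /* Per decoded frame: rescale into 'scaled', then hand it to the
       encoder in place of the original frame. */
    sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
              0, dec->height, scaled->data, scaled->linesize);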