
Media (91)

Other articles (73)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • Improvements to the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields. See the two following images for a comparison.
    To do this, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-select lists (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
    To contribute, register with the project users’ mailing (...)

On other sites (11503)

  • Streaming camera-captured video to another location

    28 February 2012, by Fei Su

    I am using a Kinect camera to capture images and compress them into a video, saving the video to disk (Python OpenCV). How can I stream and display it at another location? The video is produced in real time, and I also need to stream and display it in real time.

  • iPhone camera video captured with AVCaptureSession: converting CMSampleBufferRef to H.264 with ffmpeg is the issue. Please advise.

    4 January 2012, by isaiah

    My goal is H.264/AAC, MPEG2-TS streaming to a server from an iPhone device.

    Currently my source compiles successfully with FFmpeg + libx264. I am aware of the GNU license. I want a demo program.

    I want to know the following:

    1. Is the conversion from CMSampleBufferRef to AVPicture data successful?

    avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);
    pFrame's linesize and data are not null, but its pts is -9233123123; the same goes for outpic.
    Because of this, I guess this is what causes the 'non-strictly-monotonic PTS' message.

    2. This log repeats:

    encoding frame (size= 0)
    encoding frame = ""
    'avcodec_encode_video' returning 0 is a success, but it always returns 0.

    I don't know what to do...

    2011-06-01 15:15:14.199 AVCam[1993:7303] pFrame = avcodec_alloc_frame();
    2011-06-01 15:15:14.207 AVCam[1993:7303] avpicture_fill = 1228800
    Video encoding
    2011-06-01 15:15:14.215 AVCam[1993:7303] codec = 5841844
    [libx264 @ 0x1441e00] using cpu capabilities: ARMv6 NEON
    [libx264 @ 0x1441e00] profile Constrained Baseline, level 2.0
    [libx264 @ 0x1441e00] non-strictly-monotonic PTS
    encoding frame (size=    0)
    encoding frame
    [libx264 @ 0x1441e00] final ratefactor: 26.74

    3. I have to guess that the 'non-strictly-monotonic PTS' message is the cause of all the problems.
    What does 'non-strictly-monotonic PTS' mean? (A short explanation and sketch follow below.)
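
    For reference: "non-strictly-monotonic PTS" is libx264 complaining that the frames it receives do not carry strictly increasing presentation timestamps. avcodec_alloc_frame() initialises pts to AV_NOPTS_VALUE (a huge negative sentinel, which is likely the value quoted in question 1), and avpicture_fill() only fills the data and linesize fields, so the encoder never sees a usable timestamp. The zero sizes in question 2 are also expected for the first few calls, because the encoder buffers frames before emitting output. Below is a minimal sketch, not the code from the question, assuming the same old avcodec_encode_video() API and the c, outbuf, outbuf_size, out_size and outpic variables set up as in the source further down; frame_count is a new counter introduced only for illustration.

    static int64_t frame_count = 0;

    /* give each submitted frame a strictly increasing pts, in c->time_base units */
    outpic->pts = frame_count++;
    out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
    if (out_size > 0) {
        /* write or stream out_size bytes from outbuf */
    }

    /* when capture stops, drain the frames the encoder is still buffering */
    while ((out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL)) > 0) {
        /* write or stream the delayed frames */
    }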

    This is the source:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
           didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
                  fromConnection:(AVCaptureConnection *)connection
    {

       if( !CMSampleBufferDataIsReady(sampleBuffer) )
       {
           NSLog( @"sample buffer is not ready. Skipping sample" );
           return;
       }


       if( [isRecordingNow isEqualToString:@"YES"] )
       {
           lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
           if( videoWriter.status != AVAssetWriterStatusWriting  )
           {
               [videoWriter startWriting];
               [videoWriter startSessionAtSourceTime:lastSampleTime];
           }

           if( captureOutput == videooutput )
           {
               [self newVideoSample:sampleBuffer];

               CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
               CVPixelBufferLockBaseAddress(pixelBuffer, 0);

               // access the data
               int width = CVPixelBufferGetWidth(pixelBuffer);
               int height = CVPixelBufferGetHeight(pixelBuffer);
               unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

               AVFrame *pFrame;
               pFrame = avcodec_alloc_frame();
               pFrame->quality = 0;
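               // Note: avcodec_alloc_frame() initialises pts to AV_NOPTS_VALUE (a large
               // negative sentinel), and avpicture_fill() below only fills the data and
               // linesize fields, so pts is never given a real timestamp here.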

               NSLog(@"pFrame = avcodec_alloc_frame(); ");

    //          int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

    //          int bytesSize = height * bytesPerRow ;  

    //          unsigned char *pixel = (unsigned char*)malloc(bytesSize);

    //          unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

    //          memcpy (pixel, rowBase, bytesSize);


               int avpicture_fillNum = avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);//PIX_FMT_RGB32//PIX_FMT_RGB8
               //NSLog(@"rawPixelBase = %i , rawPixelBase -s = %s",rawPixelBase, rawPixelBase);
               NSLog(@"avpicture_fill = %i",avpicture_fillNum);
               //NSLog(@"width = %i,height = %i",width, height);



               // Do something with the raw pixels here

               CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

               //avcodec_init();
               //avdevice_register_all();
               av_register_all();





               AVCodec *codec;
               AVCodecContext *c= NULL;
               int  out_size, size, outbuf_size;
               //FILE *f;
               uint8_t *outbuf;

               printf("Video encoding\n");

               /* find the mpeg video encoder */
               codec =avcodec_find_encoder(CODEC_ID_H264);//avcodec_find_encoder_by_name("libx264"); //avcodec_find_encoder(CODEC_ID_H264);//CODEC_ID_H264);
               NSLog(@"codec = %i",codec);
               if (!codec) {
                   fprintf(stderr, "codec not found\n");
                   exit(1);
               }

               c= avcodec_alloc_context();

               /* put sample parameters */
               c->bit_rate = 400000;
               c->bit_rate_tolerance = 10;
               c->me_method = 2;
               /* resolution must be a multiple of two */
               c->width = 352;//width;//352;
               c->height = 288;//height;//288;
               /* frames per second */
               c->time_base= (AVRational){1,25};
               c->gop_size = 10;//25; /* emit one intra frame every ten frames */
               //c->max_b_frames=1;
               c->pix_fmt = PIX_FMT_YUV420P;

               c ->me_range = 16;
               c ->max_qdiff = 4;
               c ->qmin = 10;
               c ->qmax = 51;
               c ->qcompress = 0.6f;

               /* open it */
               if (avcodec_open(c, codec) < 0) {
                   fprintf(stderr, "could not open codec\n");
                   exit(1);
               }


               /* alloc image and output buffer */
               outbuf_size = 100000;
               outbuf = malloc(outbuf_size);
               size = c->width * c->height;

               AVFrame* outpic = avcodec_alloc_frame();
               int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

               //create buffer for the output image
               uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

    #pragma mark -  

               fflush(stdout);

    //          int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
    //          uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
    //          
    //          //UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"10%d", i]];
    //          CGImageRef newCgImage = [self imageFromSampleBuffer:sampleBuffer];//[image CGImage];
    //          
    //          CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
    //          CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
    //          buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);  
    //          
    //          avpicture_fill((AVPicture*)pFrame, buffer, PIX_FMT_RGB8, c->width, c->height);
               avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

               struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                                                              PIX_FMT_RGB8,
                                                              c->width, c->height,
                                                              PIX_FMT_YUV420P,
                                                              SWS_FAST_BILINEAR, NULL, NULL, NULL);
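               // Note: pFrame was filled above as PIX_FMT_RGB32 at the capture resolution,
               // but this conversion context expects a PIX_FMT_RGB8 source at
               // c->width x c->height (352x288); the mismatch is another likely problem.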

               //perform the conversion
               sws_scale(fooContext, pFrame->data, pFrame->linesize, 0, c->height, outpic->data, outpic->linesize);
               // Here is where I try to convert to YUV

               /* encode the image */

               out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
               printf("encoding frame (size=%5d)\n", out_size);
               printf("encoding frame %s\n", outbuf);


               //fwrite(outbuf, 1, out_size, f);

               //              free(buffer);
               //              buffer = NULL;      



               /* add sequence end code to have a real mpeg file */
    //          outbuf[0] = 0x00;
    //          outbuf[1] = 0x00;
    //          outbuf[2] = 0x01;
    //          outbuf[3] = 0xb7;
               //fwrite(outbuf, 1, 4, f);
               //fclose(f);
               free(outbuf);

               avcodec_close(c);
               av_free(c);
               av_free(pFrame);
               printf("\n");
  • Create DVD-Video menus with ffmpeg

    28 February 2012, by Bat Masterson

    Is it possible to create a DVD-Video menu with ffmpeg only?

    I may be misunderstanding, but I don't think that ffmpeg alone is capable of creating a DVD menu.

    I've found some examples using ffmpeg together with various other tools, but nothing using ffmpeg alone. The other tools are all Linux-only and I'm stuck on Windows.

    If someone could clear that up for me, that would be great. Also, if you know of a library or a command-line utility for creating DVD menus, that would help too.

    Thanks in advance