Advanced search

Media (2)

Other articles (51)

  • Personalize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three personalization elements: adding a logo; adding a banner; adding a background image.

  • The SPIPmotion processing queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table contains the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)

  • Writing a news item

    21 June 2013, by

    Present changes to your MédiaSPIP site, or news about your projects, using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news-item creation form.
    News-item creation form For a document of type "news item", the default fields are: Publication date (customize the publication date) (...)

On other sites (5942)

  • php ming flash slideshow to mp4/avi

    19 August 2013, by Stefan

    After hours of searching and experimenting I finally put together a script that generates a good-looking Flash .swf file with a nice transition between the images.
    It works great if you access the .swf file directly in a browser; depending on the number of images, the Flash animation takes anywhere between 10 and 60 seconds.
    But when uploaded to YouTube, the resulting movie flashes by in one second.
    Because .swf isn't really an accepted file format for YouTube, we decided to convert the Flash file to MP4 or AVI using ffmpeg.
    Unfortunately that didn't work: it had the same effect as the YouTube movie.
    We had an old version of ffmpeg, updated it to a recent version, and tried the conversion again, with the same result.
    The main thing I see is that ffmpeg can't determine the .swf file's duration and bitrate; both are reported as 'N/A', even though we do set them in the PHP script.

    Now I have to admit I haven't really tested with the new version yet, because the command-line options are a little different, but I'll work on that after I post this.
    With the previous version we tried setting the frame rate of the source .swf file, but that didn't work either.

    Does anyone here have an idea? It would be greatly appreciated.

    PHP Ming script:

     $fps = 30;
     foreach ($objects as $objectId => $images) {
         // START FLASH MOVIE
         $m = new SWFMovie();
         $m->setDimension($width, $height);
         $m->setBackground(0, 0, 0);
         $m->setRate($fps);
         // count($images) * (2 breaks * ($fps*$breakTime) + 22 frames of fadeOut) = 202 per image
         $m->setFrames(count($images) * 202);

         $behind = null; // no background image yet on the first iteration
         $i = 0;
         foreach ($images as $image) {

             // REMOVE THE BACKGROUND IMAGE
             if ($behind) {
                 $m->remove($behind);
             }
             // # REMOVE

             // LOAD NEW IMAGE
             $img = new SWFBitmap(fopen($image, "rb"));
             $pic = $m->add($img);
             $pic->setDepth(3);
             // # LOAD

             // BREAK TIME
             for ($j = 1; $j <= ($fps * $breakTime); $j++) {
                 $m->nextFrame();
             }
             $m->remove($pic);
             // # BREAK

             // LOAD THE NEXT IMAGE AS BACKGROUND; IF LAST IMAGE, LOAD THE FIRST
             $nextBackgroundImage = isset($images[$i + 1]) ? $images[$i + 1] : $images[0];
             $img = new SWFBitmap(fopen($nextBackgroundImage, "rb"));
             $behind = $m->add($img);
             $behind->setDepth(2);
             // # LOAD

             // AND FADE OUT AGAIN
             $img = fadeOut($image, $width, $height);
             $pic = $m->add($img);
             $pic->setDepth(3);
             // # FADE OUT

             // BREAK TIME
             for ($j = 1; $j <= ($fps * $breakTime); $j++) {
                 $m->nextFrame();
             }
             $m->remove($pic);
             // # BREAK
             $i++;
         }
         $m->save('./flash/' . $nvmId . '_' . $objectId . '.swf');
         unset($m);
     }

    FFmpeg version:

    root@server:~# ffmpeg -version
    FFmpeg version SVN-r26402, Copyright (c) 2000-2011 the FFmpeg developers
     built on Aug 15 2013 20:43:21 with gcc 4.4.5
     configuration: --enable-libmp3lame --enable-libtheora --enable-libx264
     --enable-libgsm --enable-postproc --enable-libxvid --enable-libfaac --enable-pthreads
     --enable-libvorbis --enable-gpl --enable-x11grab --enable-nonfree
     libavutil     50.36. 0 / 50.36. 0
     libavcore      0.16. 1 /  0.16. 1
     libavcodec    52.108. 0 / 52.108. 0
     libavformat   52.93. 0 / 52.93. 0
     libavdevice   52. 2. 3 / 52. 2. 3
     libavfilter    1.74. 0 /  1.74. 0
     libswscale     0.12. 0 /  0.12. 0
     libpostproc   51. 2. 0 / 51. 2. 0
    FFmpeg SVN-r26402
    libavutil     50.36. 0 / 50.36. 0
    libavcore      0.16. 1 /  0.16. 1
    libavcodec    52.108. 0 / 52.108. 0
    libavformat   52.93. 0 / 52.93. 0
    libavdevice   52. 2. 3 / 52. 2. 3
    libavfilter    1.74. 0 /  1.74. 0
    libswscale     0.12. 0 /  0.12. 0
    libpostproc   51. 2. 0 / 51. 2. 0

    FFmpeg command:

    root@server:~# ffmpeg -r 30  -i '/pathTo/public_html/flash/73003_8962011.swf' -vcodec libx264 /pathTo/public_html/flash/out.mp4

    [swf @ 0x16c2510] Estimating duration from bitrate, this may be inaccurate
    Input #0, swf, from '/pathTo/public_html/flash/73003_8962011.swf':
     Duration: N/A, bitrate: N/A
       Stream #0.0: Video: mjpeg, yuvj420p, 360x480, 30 fps, 30 tbr, 30 tbn, 30 tbc
    [buffer @ 0x16d5850] w:360 h:480 pixfmt:yuvj420p
    [libx264 @ 0x16d4d80] broken ffmpeg default settings detected
    [libx264 @ 0x16d4d80] use an encoding preset (e.g. -vpre medium)
    [libx264 @ 0x16d4d80] preset usage: -vpre <speed> -vpre <profile>
    [libx264 @ 0x16d4d80] speed presets are listed in x264 --help
    [libx264 @ 0x16d4d80] profile is optional; x264 defaults to high
    Output #0, mp4, to '/pathTo/public_html/out.mp4':
       Stream #0.0: Video: libx264, yuvj420p, 360x480, q=2-31, 200 kb/s, 90k tbn, 30 tbc
    Stream mapping:
     Stream #0.0 -> #0.0
    Error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height
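The libx264 lines in the log above name the immediate failure: this FFmpeg build refuses to open the encoder unless a preset is given explicitly. A minimal sketch of the corrected invocation, reusing the paths from the question (on this SVN-r26402 build the flag is `-vpre`, as the log itself suggests; newer builds spell it `-preset`):

```shell
# Same conversion as above, plus the preset the libx264 log asks for.
# -vpre medium is the build's own hint; newer ffmpeg uses -preset medium.
ffmpeg -r 30 \
  -i '/pathTo/public_html/flash/73003_8962011.swf' \
  -vcodec libx264 -vpre medium \
  '/pathTo/public_html/flash/out.mp4'
```

Note that this only clears the encoder-open error; the "Duration: N/A" warning is a separate issue coming from the SWF demuxer, which is why `-r 30` is placed before `-i` to force the input frame rate.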
  • iOS allocation grow using x264 encoding

    19 July 2013, by cssmhyl

    I get the video YUV data in a capture callback and wrap each plane in an NSData object. I then put the NSData objects into an NSArray and push that array onto a queue (NSMutableArray). This is the code:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{

       if ([Application sharedInstance].isRecording) {
           if (captureOutput == self.captureManager.videOutput) {

               uint64_t capturedHostTime = [self GetTickCount];
               int allSpace = capturedHostTime - lastCapturedHostTime;
               NSNumber *spaces = [[NSNumber alloc] initWithInt:allSpace];
               NSNumber *startTime = [[NSNumber alloc] initWithUnsignedLongLong:lastCapturedHostTime];
               lastCapturedHostTime = capturedHostTime;

               CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

               CVPixelBufferLockBaseAddress(pixelBuffer, 0);

               uint8_t  *baseAddress0 = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
               uint8_t  *baseAddress1 = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);

               size_t width = CVPixelBufferGetWidth(pixelBuffer);
               size_t height = CVPixelBufferGetHeight(pixelBuffer);

               NSData *baseAddress0Data = [[NSData alloc] initWithBytes:baseAddress0 length:width*height];
               NSData *baseAddress1Data = [[NSData alloc] initWithBytes:baseAddress1 length:width*height/2];
               CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

               NSArray *array = [[NSArray alloc] initWithObjects:baseAddress0Data,baseAddress1Data,spaces,startTime ,nil];
               [baseAddress0Data release];
               [baseAddress1Data release];
               [spaces release];
               [startTime release];

               @synchronized([Application sharedInstance].pearVideoQueue){
                   [[Application sharedInstance] enqueuePearVideo:[Application sharedInstance].pearVideoQueue withData:array];
                   [array release];
               }            
           }
       }
    }

    Now I run an operation that takes the data from the queue and encodes it with x264. I destroy the array after encoding.

    - (void)main{


       while ([Application sharedInstance].pearVideoQueue) {
           if (![Application sharedInstance].isRecording) {
               NSLog(@"encode operation break");
               break;
           }
           if (![[Application sharedInstance].pearVideoQueue isQueueEmpty]) {
               NSArray *pearVideoArray;
               @synchronized([Application sharedInstance].pearVideoQueue){

                  pearVideoArray = [[Application sharedInstance].pearVideoQueue dequeue];
                   [[Application sharedInstance] encodeToH264:pearVideoArray];
                   [pearVideoArray release];
                   pearVideoArray = nil;
               }
           } else{
             [NSThread sleepForTimeInterval:0.01];  
           }
       }

    }

    This is the encoding method:

    - (void)encodeX264:(NSArray *)array{

       int         i264Nal;
       x264_picture_t pic_out;
       x264_nal_t  *p264Nal;


       NSNumber *st = [array lastObject];
       NSNumber *sp = [array objectAtIndex:2];
       uint64_t startTime = [st unsignedLongLongValue];
       int spaces = [sp intValue];

       NSData *baseAddress0Data = [array objectAtIndex:0];
       NSData *baseAddress1Data = [array objectAtIndex:1];

       const char *baseAddress0 = baseAddress0Data.bytes;
       const char *baseAddress1 = baseAddress1Data.bytes;


       if (baseAddress0 == nil) {
           return;
       }

       memcpy(p264Pic->img.plane[0], baseAddress0, PRESENT_FRAME_WIDTH*PRESENT_FRAME_HEIGHT);

       uint8_t * pDst1 = p264Pic->img.plane[1];
       uint8_t * pDst2 = p264Pic->img.plane[2];
       for( int i = 0; i < PRESENT_FRAME_WIDTH*PRESENT_FRAME_HEIGHT/4; i ++ )
       {
           *pDst1++ = *baseAddress1++;
           *pDst2++ = *baseAddress1++;
       }

       if( x264_encoder_encode( p264Handle, &p264Nal, &i264Nal, p264Pic, &pic_out ) < 0 )
       {
           fprintf( stderr, "x264_encoder_encode failed\n" );
       }

       if (i264Nal > 0) {

           int i_size;
           int spslen =0;
           unsigned char spsData[1024];        
           char * data = (char *)szBuffer+100;
           memset(szBuffer, 0, sizeof(szBuffer));
           if (ifFirstSps) {
               ifFirstSps = NO;
               if (![Application sharedInstance].ifAudioStarted) {
                   NSLog(@"video first");
                   [Application sharedInstance].startTick = startTime;
                   NSLog(@"startTick: %llu",startTime);
                   [Application sharedInstance].ifVideoStarted = YES;
               }
           }        
           for (int i = 0; i < i264Nal; i++) {
               // grow the shared NAL buffer if this payload will not fit
               // (loop header and condition reconstructed; the original post
               //  was mangled by HTML escaping)
               if (p264Handle->nal_buffer_size < p264Nal[i].i_payload*3/2+4) {
                   p264Handle->nal_buffer_size = p264Nal[i].i_payload*2+4;
                   x264_free( p264Handle->nal_buffer );
                   p264Handle->nal_buffer = x264_malloc( p264Handle->nal_buffer_size );
               }

               i_size = p264Nal[i].i_payload;
               memcpy(data, p264Nal[i].p_payload, p264Nal[i].i_payload);
               // the next few lines were mangled by HTML escaping in the
               // original post; reconstructed approximately
               int splitNum = 0;
               while (data[splitNum] == 0x00) splitNum++;  // skip start-code zeros
               splitNum++;                                 // step past the trailing 0x01
               int type = data[splitNum] & 0x1F;           // NAL unit type

               int timeSpace;
               if (i264Nal > 1) {
                   timeSpace = spaces/(i264Nal-1)*i;
               } else {
                   timeSpace = spaces/i264Nal*i;
               }
               int timeStamp = startTime - [Application sharedInstance].startTick + timeSpace;

               switch (type) {
                   case NALU_TYPE_SPS:
                       spslen = i_size-splitNum;
                       memcpy(spsData, data, spslen);                    
                       break;
                   case NALU_TYPE_PPS:
                       timeStamp  = timeStamp - timeSpace;
                       [self pushSpsAndPpsQueue:(char *)spsData andppsData:(char *)data withPPSlength:spslen andPPSlength:(i_size-splitNum) andTimeStamp:timeStamp];
                       break;
                   case NALU_TYPE_IDR:
                       [self pushVideoNALU:(char *)data withLength:(i_size-splitNum) ifIDR:YES andTimeStamp:timeStamp];
                       break;
                   case NALU_TYPE_SLICE:
                   case NALU_TYPE_SEI:
                       [self pushVideoNALU:(char *)data withLength:(i_size-splitNum) ifIDR:NO andTimeStamp:timeStamp];
                       break;
                   default:
                       break;
               }
           }
       }
    }

    The question is:
    Using Instruments I found that the memory for the captured data keeps increasing, but NSLog
    shows that nothing accumulates between creating the array and releasing it; when I release the array, its retain count is 1, and the retain count of each object it contains is also 1.
    If I skip the encoding, the memory does not increase... I am confused... please help.
    The image size is 640x480 pixels.

    Instruments leaks screenshot:

  • Revision 00d54aa331 : First pass clean up. One of a series of changes to clean up two pass allocation

    9 May 2014, by Paul Wilkins

    Changed Paths:
     Modify /vp9/encoder/vp9_firstpass.c



    First pass clean up.

    One of a series of changes to clean up two pass
    allocation as precursor to support for multiple arf
    or boosted frames per GF/ARF group.

    This change pulls out the calculation of the total bits
    allocated to a GF/ARF group into a function, to aid
    readability and reduce the line count for define_gf_group().

    This change should have no material impact on output.

    Change-Id: I716fba08e26f9ddde3257e7d9b188453791883a3