
Other articles (11)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types.
    It creates "médias", meaning: a "média" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only one document can be linked to a given "média" article;

  • Other interesting software

    12 April 2011

    We do not claim to be the only ones doing what we do... and we certainly do not claim to be the best at it either... We simply try to do it well, and better and better...
    The list below covers software that more or less sets out to do what MediaSPIP does, or that MediaSPIP more or less tries to match, in no particular order...
    We do not know them and have not tried them, but you may want to take a look.
    Videopress
    Website: (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as Ogv and WebM (supported by HTML5) and MP4 (supported by Flash); a rough sketch of equivalent conversions is given just below.
    Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
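
    For readers curious what such conversions involve in practice, here is a minimal sketch of roughly equivalent ffmpeg invocations. This is not MediaSPIP's actual pipeline, and the file names are hypothetical; it only illustrates the target formats listed above.

    #include <cstdlib>

    int main() {
        // One source file, re-encoded to each web-friendly target format.
        std::system("ffmpeg -i in.mov -c:v libvpx    -c:a libvorbis out.webm");  // WebM (HTML5)
        std::system("ffmpeg -i in.mov -c:v libtheora -c:a libvorbis out.ogv");   // Ogv (HTML5)
        std::system("ffmpeg -i in.mov -c:v libx264   -c:a aac       out.mp4");   // MP4 (Flash)
        std::system("ffmpeg -i in.wav -c:a libvorbis out.ogg");                  // Ogg audio (HTML5)
        std::system("ffmpeg -i in.wav -c:a libmp3lame out.mp3");                 // MP3 audio (Flash)
        return 0;
    }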

On other sites (5279)

  • How to seek one frame forward in ffmpeg [closed]

    10 March, by Summit

    I want to seek one frame forward each time I call this function, but it gets stuck on the first frame it seeks to and does not advance.

    void seekFrameUp() {
        if (!fmt_ctx || video_stream_index == -1) return;

        AVRational frame_rate = fmt_ctx->streams[video_stream_index]->r_frame_rate;
        if (frame_rate.num == 0) return;  // Avoid division by zero

        // Compute frame duration in AV_TIME_BASE_Q
        int64_t frame_duration = av_rescale_q(1,
            av_make_q(frame_rate.den, frame_rate.num),
            AV_TIME_BASE_Q);

        int64_t next_pts = requestedTimestamp + frame_duration;

        qDebug() << "Seeking forward: " << next_pts
            << " (Current PTS: " << requestedTimestamp
            << ", Frame Duration: " << frame_duration << ")";

        requestFrameAt(next_pts);

        // Update the requested timestamp after seeking
        requestedTimestamp = next_pts;
    }

    void requestFrameAt(int64_t timestamp) {
        {
            std::lock_guard lock(mtx);
            decoding = true;  // Ensure the thread keeps decoding when needed
        }
        cv.notify_one();
    }

    void decodeLoop() {
        while (!stopThread) {
            std::unique_lock lock(mtx);
            cv.wait(lock, [this] { return decoding || stopThread; });

            if (stopThread) break;

            // Avoid redundant seeking
            if (requestedTimestamp == lastRequestedTimestamp) {
                decoding = false;
                continue;
            }

            lastRequestedTimestamp.store(requestedTimestamp.load());
            int64_t target_pts = av_rescale_q(requestedTimestamp, AV_TIME_BASE_Q,
                fmt_ctx->streams[video_stream_index]->time_base);

            target_pts = FFMAX(target_pts, 0); // Ensure it's not negative

            if (av_seek_frame(fmt_ctx, video_stream_index, target_pts, AVSEEK_FLAG_ANY) >= 0) {
                avcodec_flush_buffers(codec_ctx);  // Clear old frames from the decoder
                qDebug() << "Seek successful to PTS:" << target_pts;
            }
            else {
                qDebug() << "Seeking failed!";
                decoding = false;
                continue;
            }

            lock.unlock();

            // Keep decoding until we receive a valid frame
            bool frameDecoded = false;
            while (av_read_frame(fmt_ctx, pkt) >= 0) {
                if (pkt->stream_index == video_stream_index) {
                    if (avcodec_send_packet(codec_ctx, pkt) == 0) {
                        while (avcodec_receive_frame(codec_ctx, frame) == 0) {
                            qDebug() << "FRAME DECODED ++++++++++++ PTS:" << frame->pts;
                            if (frame->pts != AV_NOPTS_VALUE) {
                                // Rescale PTS to AV_TIME_BASE_Q
                                int64_t pts_in_correct_base = av_rescale_q(frame->pts,
                                    fmt_ctx->streams[video_stream_index]->time_base,
                                    AV_TIME_BASE_Q);

                                // Ensure we don't reset to 0 incorrectly
                                if (pts_in_correct_base > 0) {
                                    current_pts.store(pts_in_correct_base);
                                    qDebug() << "Updated current_pts to:" << current_pts.load();
                                }
                                else {
                                    qDebug() << "Warning: Decoded frame has PTS <= 0, keeping last valid PTS.";
                                }
                            }
                            else {
                                qDebug() << "Invalid frame->pts (AV_NOPTS_VALUE)";
                            }

                            QImage img = convertFrameToImage(frame);
                            emit frameDecodedSignal(img);

                            frameDecoded = true;
                            break;  // Exit after the first valid frame
                        }

                        if (frameDecoded) {
                            decoding = (requestedTimestamp != lastRequestedTimestamp);
                            break;
                        }
                    }
                }
                av_packet_unref(pkt);
            }
        }
    }
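
    A likely explanation, for readers who hit the same wall: requestFrameAt() never uses its timestamp parameter, and seekFrameUp() assigns requestedTimestamp only after notifying the decode thread. If the thread wakes first, it still sees requestedTimestamp == lastRequestedTimestamp, sets decoding = false and goes back to waiting; the later assignment is never followed by another notify, so playback stalls on the first seeked frame. A minimal sketch of a fix, assuming the same members as in the code above (with requestedTimestamp and lastRequestedTimestamp as std::atomic<int64_t>):

    void requestFrameAt(int64_t timestamp) {
        {
            std::lock_guard lock(mtx);
            requestedTimestamp = timestamp;  // Publish the target before waking the thread
            decoding = true;
        }
        cv.notify_one();
        // seekFrameUp() should then drop its trailing assignment to
        // requestedTimestamp, since the target is published here instead.
    }

    Two further suggestions rather than a definitive answer: AVSEEK_FLAG_ANY can land on a non-keyframe and leave the decoder without a valid reference, so seeking with AVSEEK_FLAG_BACKWARD and then decoding forward, discarding frames whose rescaled PTS is still below the target, is the usual pattern; and the packet from av_read_frame() should also be unreferenced on the early-break path, or it will leak.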

  • Creating a file from native C code on Android

    17 October 2013, by Azhagiri

    I am trying to demux a video in an Android application using the FFmpeg library. Everything works fine, but when I try to create a file from the C code to write the different streams to, fopen returns NULL, and I can't proceed after that. My code is:

    int ret , got_frame;
    av_register_all();
    LOGE("Registered formats");
    err = av_open_input_file(&pFormatCtx, "file:/mnt/sdcard/input.mp4", NULL, 0, NULL);
    LOGE("Called open file");
    if(err!=0) {
       LOGE("Couldn't open file");
       return;
    }
    LOGE("Opened file");

    if(av_find_stream_info(pFormatCtx)<0) {
       LOGE("Unable to get stream info");
       return;
    }

    videoStream = -1;
    audioStream = -1;
    for (i=0; i < pFormatCtx->nb_streams; i++) {
       if(pFormatCtx->streams[i]->codec->codec_type==CODEC_TYPE_VIDEO) {
           videoStream = i;
           if(audioStream != -1)
               break;
       }

       if(pFormatCtx->streams[i]->codec->codec_type==CODEC_TYPE_AUDIO) {
           audioStream = i;
           if(videoStream != -1)
               break;
       }
    }
    if(videoStream==-1) {
       LOGE("Unable to find video stream");
       return;
    }
    if(audioStream==-1) {
       LOGE("Unable to find audio stream");
       return;
    }

    LOGI("Video stream is [%d] Audio stream [%d]", videoStream,audioStream);

    pCodecCtx=pFormatCtx->streams[videoStream]->codec;

    pCodecCtxAudio=pFormatCtx->streams[audioStream]->codec;

    LOGI("Video size is [%d x %d]", pCodecCtx->width, pCodecCtx->height);

    videoOut = fopen("file:/mnt/sdcard/videoout.mp4","wb");
    if (videoOut == NULL) {
       LOGE("Unable to open output video");
       return;
    }

    What is the problem with my code? I enabled the permission for the application to write to external storage, and the path I specified is correct. Help me.
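
    For readers with the same symptom, the most likely culprit is the "file:" prefix: fopen() takes a plain filesystem path, not a URL (only FFmpeg's own I/O layer, as used by av_open_input_file above, understands the "file:" scheme), so it returns NULL. A minimal sketch of the corrected call; the errno logging is an addition for diagnosis and needs <errno.h> and <string.h>:

    videoOut = fopen("/mnt/sdcard/videoout.mp4", "wb");  /* no "file:" scheme for fopen() */
    if (videoOut == NULL) {
       LOGE("Unable to open output video: %s", strerror(errno));
       return;
    }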

  • iOS allocations grow when using x264 encoding

    19 July 2013, by cssmhyl

    I get the video YUV data in a callback and wrap the image data in NSData objects. I then put the data into an NSArray and push that array onto a queue (NSMutableArray). Here is the code:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{

       if ([Application sharedInstance].isRecording) {
           if (captureOutput == self.captureManager.videOutput) {

               uint64_t capturedHostTime = [self GetTickCount];
               int allSpace = capturedHostTime - lastCapturedHostTime;
               NSNumber *spaces = [[NSNumber alloc] initWithInt:allSpace];
               NSNumber *startTime = [[NSNumber alloc] initWithUnsignedLongLong:lastCapturedHostTime];
               lastCapturedHostTime = capturedHostTime;

               CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

               CVPixelBufferLockBaseAddress(pixelBuffer, 0);

               uint8_t  *baseAddress0 = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
               uint8_t  *baseAddress1 = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);

               size_t width = CVPixelBufferGetWidth(pixelBuffer);
               size_t height = CVPixelBufferGetHeight(pixelBuffer);

               NSData *baseAddress0Data = [[NSData alloc] initWithBytes:baseAddress0 length:width*height];
               NSData *baseAddress1Data = [[NSData alloc] initWithBytes:baseAddress1 length:width*height/2];
               CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

               NSArray *array = [[NSArray alloc] initWithObjects:baseAddress0Data,baseAddress1Data,spaces,startTime ,nil];
               [baseAddress0Data release];
               [baseAddress1Data release];
               [spaces release];
               [startTime release];

               @synchronized([Application sharedInstance].pearVideoQueue){
                   [[Application sharedInstance] enqueuePearVideo:[Application sharedInstance].pearVideoQueue withData:array];
                   [array release];
               }            
           }
       }
    }
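
    An aside on the callback above, independent of the leak: copying width*height bytes assumes each plane is tightly packed, but CVPixelBuffer planes are often padded, so bytes-per-row can be larger than the width. A stride-aware copy is the safer pattern; in this sketch, luma is a hypothetical caller-owned buffer, not a variable from the original code:

    size_t bpr0 = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    uint8_t *luma = (uint8_t *)malloc(width * height);  // hypothetical destination; free() it when done
    for (size_t row = 0; row < height; row++) {
        // Copy one row of valid pixels, skipping the padding at the end of each source row.
        memcpy(luma + row * width, baseAddress0 + row * bpr0, width);
    }
    // Plane 1 (interleaved CbCr in NV12) has height/2 rows; handle it the same
    // way with CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1).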

    Now I run an operation that takes data from the queue and encodes it with x264. I destroy the array after encoding.

    - (void)main{


       while ([Application sharedInstance].pearVideoQueue) {
           if (![Application sharedInstance].isRecording) {
               NSLog(@"encode operation break");
               break;
           }
           if (![[Application sharedInstance].pearVideoQueue isQueueEmpty]) {
               NSArray *pearVideoArray;
               @synchronized([Application sharedInstance].pearVideoQueue){

                    pearVideoArray = [[Application sharedInstance].pearVideoQueue dequeue];
                   [[Application sharedInstance] encodeToH264:pearVideoArray];
                   [pearVideoArray release];
                   pearVideoArray = nil;
               }
           } else{
             [NSThread sleepForTimeInterval:0.01];  
           }
       }

    }

    This is the encoding method:

    - (void)encodeX264:(NSArray *)array{

       int         i264Nal;
       x264_picture_t pic_out;
       x264_nal_t  *p264Nal;


       NSNumber *st = [array lastObject];
       NSNumber *sp = [array objectAtIndex:2];
       uint64_t startTime = [st unsignedLongLongValue];
       int spaces = [sp intValue];

       NSData *baseAddress0Data = [array objectAtIndex:0];
       NSData *baseAddress1Data = [array objectAtIndex:1];

       const char *baseAddress0 = baseAddress0Data.bytes;
       const char *baseAddress1 = baseAddress1Data.bytes;


       if (baseAddress0 == nil) {
           return;
       }

       memcpy(p264Pic->img.plane[0], baseAddress0, PRESENT_FRAME_WIDTH*PRESENT_FRAME_HEIGHT);

       uint8_t * pDst1 = p264Pic->img.plane[1];
       uint8_t * pDst2 = p264Pic->img.plane[2];
       for( int i = 0; i < PRESENT_FRAME_WIDTH*PRESENT_FRAME_HEIGHT/4; i ++ )
       {
           *pDst1++ = *baseAddress1++;
           *pDst2++ = *baseAddress1++;
       }

       if( x264_encoder_encode( p264Handle, &p264Nal, &i264Nal, p264Pic, &pic_out ) < 0 )
       {
           fprintf( stderr, "x264_encoder_encode failed\n" );
       }

       i264Nal = 0;
       if (i264Nal > 0) {

           int i_size;
           int spslen =0;
           unsigned char spsData[1024];        
           char * data = (char *)szBuffer+100;
           memset(szBuffer, 0, sizeof(szBuffer));
           if (ifFirstSps) {
               ifFirstSps = NO;
               if (![Application sharedInstance].ifAudioStarted) {
                   NSLog(@"video first");
                   [Application sharedInstance].startTick = startTime;
                   NSLog(@"startTick: %llu",startTime);
                   [Application sharedInstance].ifVideoStarted = YES;
               }
           }        
           for (int i = 0; i < i264Nal; i++) {
               if (p264Handle->nal_buffer_size < p264Nal[i].i_payload*3/2+4) {
                   p264Handle->nal_buffer_size = p264Nal[i].i_payload*2+4;
                   x264_free( p264Handle->nal_buffer );
                   p264Handle->nal_buffer = x264_malloc( p264Handle->nal_buffer_size );
               }

               i_size = p264Nal[i].i_payload;
               memcpy(data, p264Nal[i].p_payload, p264Nal[i].i_payload);
               int splitNum = 0;
               for (int j = 0; j < i_size; j++) {   // Find the end of the Annex-B start code
                   if (data[j] == 0x01) {
                       splitNum = j + 1;
                       break;
                   }
               }
               int type = data[splitNum] & 0x1f;    // NAL unit type
               int timeSpace;
               if (i264Nal - 1 >= 1) {
                   timeSpace = spaces/(i264Nal-1)*i;
               } else {
                   timeSpace = spaces/i264Nal*i;
               }
               int timeStamp = startTime - [Application sharedInstance].startTick + timeSpace;

               switch (type) {
                   case NALU_TYPE_SPS:
                       spslen = i_size-splitNum;
                       memcpy(spsData, data, spslen);                    
                       break;
                   case NALU_TYPE_PPS:
                       timeStamp  = timeStamp - timeSpace;
                       [self pushSpsAndPpsQueue:(char *)spsData andppsData:(char *)data withPPSlength:spslen andPPSlength:(i_size-splitNum) andTimeStamp:timeStamp];
                       break;
                   case NALU_TYPE_IDR:
                       [self pushVideoNALU:(char *)data withLength:(i_size-splitNum) ifIDR:YES andTimeStamp:timeStamp];
                       break;
                   case NALU_TYPE_SLICE:
                   case NALU_TYPE_SEI:
                       [self pushVideoNALU:(char *)data withLength:(i_size-splitNum) ifIDR:NO andTimeStamp:timeStamp];
                       break;
                   default:
                       break;
               }
           }
       }
    }

    The question is: using Instruments I found that the captured data keeps growing, but NSLog shows that the span between creating the array and releasing it does not grow; when I release the array, its retain count is 1, and the retain count of the objects it contains is also one.
    When I skip the encoding, the memory does not increase... I am confused... please help.
    The image size is 640x480.

    Instruments leaks picture: