Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Set presentation timestamps on sample buffers before giving them to AVSampleBufferDisplayLayer

    6 October 2016, by Moustik

    I am trying to decode and render an H264 video network stream using AVSampleBufferDisplayLayer on iOS 10.

    I get the frame packets using ffmpeg, convert the NALUs to AVCC format, and create sample buffers. Finally I pass the buffers to the AVSampleBufferDisplayLayer for rendering. The stream displays correctly when kCMSampleAttachmentKey_DisplayImmediately is set to kCFBooleanTrue. However, when I try to use a controlTimebase to define the presentation timestamps, playback stalls.

    Any idea or help with the handling of presentation timestamps?

    AVPacket packet;
    av_read_frame(_formatCtx, &packet);
    
    // ...
    // Parse NALUs and create blockbuffer
    // ...
    
    AVStream *st = _formatCtx->streams[_videoStream];
    
    // Timing values are in the stream's time base; CMTimeMake(value, timescale)
    // assumes st->time_base.num == 1, otherwise scale pts/dts/duration
    // by time_base.num first.
    CMSampleTimingInfo timing;
    timing.presentationTimeStamp = CMTimeMake(packet.pts, st->time_base.den);
    timing.duration = CMTimeMake(packet.duration, st->time_base.den);
    timing.decodeTimeStamp = CMTimeMake(packet.dts, st->time_base.den);
    
    const size_t sampleSize = blockLength;
    _status = CMSampleBufferCreate(kCFAllocatorDefault,
                                   blockBuffer, true, NULL, NULL,
                                   _formatDescriptionRef, 1, 1, &timing, 1,
                                   &sampleSize, &sampleBuffer);
    
    [self.renderer enqueueSampleBuffer:sampleBuffer];
    
  • Android cropping and decreasing video size

    6 October 2016, by hellsayenci

    We are building an app that captures videos or picks them from the gallery. The videos will be exactly 30 seconds long, so users can crop or trim them like in a video editor.

    At this point we have two main problems. The first is video size: videos can be huge to upload, so we need to reduce their size. The second is that when an iOS device captures a video and uploads it to the server, the Android app has trouble playing it back.

    We used the “ffmpeg” library in an earlier project, but it has various problems, such as slow compression and the size of its .so files, and it also has compatibility issues with SDK 24 (Nougat).

    Does anyone have experience with this, or any ideas to overcome these problems? Thank you all.

  • how to recognize video codec of a file with ffmpeg

    6 October 2016, by GerryMulligan

    I often have problems reading AVI files on my TV DVD player when they are not DivX or XviD (DX50, for example, is not readable).

    I'd like to write a quick script to recognize the video codec of these files before burning them to CD-ROM/DVD.

    The command :

    ffmpeg -i file.avi
    

    give the "container" of the video stream (mpeg4,mpeg2,etc), not the codec.

    Any hint?

    Thanks

  • ffmpeg recording of a live stream stops if the connection is interrupted

    6 October 2016, by Farhan Shahid

    I am facing an issue with an ffmpeg stream: I am trying to record my live stream to a File_Name.ts file. It works fine with the following command:

    ffmpeg -i "http://clientportal.link:8080/live/tmalik/Tanveer/9026.m3u8" -c copy abc.ts -y
    

    But the actual issue is that my input stream is not very stable: on average it drops out for 4-6 seconds about once an hour.

    Is there any way to reconnect automatically once the stream from the link above (used as input in the command) comes back?

    One important thing: I am working on an Ubuntu machine, so a bash script would be great.
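    A simple way to get automatic reconnection on Ubuntu is a bash wrapper that restarts ffmpeg whenever it exits, writing each attempt to a new numbered .ts segment (MPEG-TS segments can later be concatenated with cat). This is a sketch using the URL from the question; the segment naming scheme is my own assumption:

```shell
URL="http://clientportal.link:8080/live/tmalik/Tanveer/9026.m3u8"

record_forever() {
    n=0
    while true; do
        # -c copy as in the original command; each run writes a new segment
        ffmpeg -i "$URL" -c copy "abc_$(printf '%04d' "$n").ts" -y
        n=$((n + 1))
        sleep 4   # brief pause before trying to reconnect
    done
}

# Usage:
#   record_forever
# and afterwards join the parts:
#   cat abc_*.ts > abc_full.ts
```

    Depending on the ffmpeg build, the HTTP input options -reconnect 1 -reconnect_streamed 1 may also help, but a wrapper loop like this also covers the case where ffmpeg exits entirely.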

  • include ffmpeg-command in c++ program

    6 October 2016, by Sweetspell

    I have a C++ program that reads rendered images from shared memory and writes them into a pipe (mkfifo), so that I can capture them with ffmpeg and stream them as live video over ffserver. For the stream to work, I have to start the program and the ffmpeg command separately. I wonder whether there is a way to include ffmpeg in the program and avoid the pipe.

    My ffmpeg command:

    ffmpeg -re -f rawvideo -s 800x600 -pix_fmt rgb24 -i myfifo http://localhost:8090/feed1.ffm
    

    My question is:

    What would be the best way to include the ffmpeg command in the C++ program? Is there some other way to improve this solution?

    Any help is greatly appreciated. Thanks in advance.