Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Single-line FFMPEG cmd to Merge Video/Audio and retain both audios

    9 January 2017, by Deepak Prakash

    I have a project that requires merging a video file with a separate audio file. The expected output is a video file that contains both the audio from the original video and the merged audio file. The length of the output video file will be the same as that of the original video file.

    Is there a single-line FFMPEG command to achieve this using the copy and -map parameters?

    The video format I will be using is either FLV or MP4, and the audio file format will be MP3.
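    One common approach (a sketch only; the file names video.mp4, extra.mp3 and out.mp4 are placeholders) is to copy the video stream untouched and mix the two audio tracks with the amix filter, trimming the mix to the length of the first audio input. Built from Python here for illustration:

```python
import subprocess

# Sketch of a possible command line: copy the video stream and mix the
# original audio with the extra audio file.  duration=first trims the
# mix to the length of the first audio stream (the video's own audio).
cmd = [
    "ffmpeg",
    "-i", "video.mp4",   # input 0: video with its own audio
    "-i", "extra.mp3",   # input 1: the audio file to merge in
    "-filter_complex", "amix=inputs=2:duration=first",
    "-c:v", "copy",      # do not re-encode the video stream
    "out.mp4",
]
# subprocess.run(cmd, check=True)  # uncomment where ffmpeg is installed
print(" ".join(cmd))
```

    If the goal is instead to keep the two audios as separate tracks rather than mixing them, replacing the amix filter with -map 0:v -map 0:a -map 1:a should do it.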

  • ffmpeg - detect subject (colored pixels) in a video with chroma key [on hold]

    9 January 2017, by DrH

    I have a video with a green-screen background. Is it possible with ffmpeg to detect the subject in the video and get its position (X, Y) in order to crop around the subject? I tried experimenting with cropdetect, but to no avail. Is there a better way?

    Example: Mario detected and cropped

    Thank you in advance.
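    One approach worth trying (a sketch; input.mp4 and the 1280x720 frame size are assumptions): key out the green with chromakey, composite the result over black, and let cropdetect report the bounding box of the remaining non-black pixels in ffmpeg's log. Built from Python here for illustration:

```python
import subprocess

# Hypothetical filter chain: green pixels become transparent, the frame
# is composited over a black background, and cropdetect then reports a
# "crop=w:h:x:y" rectangle around the non-black subject on stderr.
filter_chain = (
    "color=c=black:s=1280x720[bg];"        # assumed frame size
    "[0:v]chromakey=green:0.15:0.05[fg];"
    "[bg][fg]overlay=shortest=1,cropdetect"
)
cmd = [
    "ffmpeg", "-i", "input.mp4",
    "-filter_complex", filter_chain,
    "-f", "null", "-",                     # decode only, keep the log
]
# subprocess.run(cmd)  # look for "crop=..." lines in the stderr output
print(filter_chain)
```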

  • Python stream H.264 data over socket

    9 January 2017, by Ezra Knobloch

    I am creating an H.264 encoded stream on my Raspberry Pi using the tool 'raspivid' and sending that stream to stdout. What I want is to send that stream to another computer and feed it to ffmpeg, which in turn feeds it to MPlayer.

    When I do this using netcat and pipes it works really well; I get a clean stream without any graphical glitches.

    I try to read raspivid's stdout via the subprocess module; see the code below. I'm fairly sure the problem lies somewhere in there, because I did something similar a while back and it worked without many problems, and the only thing I'm doing differently now is using subprocess.

    My question is: does anyone see what is causing my problems?

    Some Notes:

    • Using streamObj.get_data(pcktSize) has never worked so far (MPlayer and ffmpeg can't open the stream)

    • process.stdout.readline() seems to be a bit faster than a plain read()

    • this code seems to be slower than just piping through netcat; how could I make it faster (while still using Python)?


    import subprocess, socket, sys, time
    from thread import start_new_thread
    
    class streamObject:
        def __init__(self):
            global data
            data = ""
    
        def start_stream(self, rot, t, w, h, fps):
            global data
    
            # Do NOT merge stderr into stdout: raspivid's diagnostic output
            # would be interleaved with the raw H.264 bytes and corrupt the
            # stream.  Discard stderr instead.
            process = subprocess.Popen(['/usr/bin/raspivid', '-rot', str(rot), '-t', str(t), '-w', str(w), '-h', str(h), '-fps', str(fps), '-o', '-'], stdout=subprocess.PIPE, stderr=open('/dev/null', 'wb'))
    
            nodata_counter = 0
    
            while True:
                if process.poll() is None:
                    # Read fixed-size binary chunks: readline() is
                    # meaningless on a raw H.264 byte stream.
                    data += process.stdout.read(4096)
    
                    sys.stdout.write("buffered %i bytes.     \r" % len(data))
                    sys.stdout.flush()
                elif nodata_counter < 200:
                    nodata_counter += 1
    
                    time.sleep(0.1)
                else:
                    break
    
        def get_alldata(self):
            global data
    
            return data
    
        def get_data(self, data_len):
            global data
    
            if len(data) > data_len:
                temp = data[0:data_len]
                data = data[data_len:len(data)]
    
                return temp
            else:
                return None
    
        def clear_cache(self):
            global data
    
            data = ""
    
        def poll(self):
            # Mirrors subprocess.Popen.poll(): returns None while data is
            # still buffered, 0 once the buffer is empty.
            global data

            if len(data) != 0:
                return None
            else:
                return 0
    
    def socket_connect(ip, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((ip, port))
    
        return s
    
    def socket_listen(ip, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((ip, port))
        s.listen(1)
    
        conn, addr = s.accept()
    
        return conn
    
    def socket_close(sock):
        sock.close()
    
    def send_stream(sock, streamObj, pcktSize):
        timeout = 0
    
        while True:
            if streamObj.poll() is None:
                #data = streamObj.get_data(pcktSize)

                data = streamObj.get_alldata()
                streamObj.clear_cache()

                if data:
                    # sendall() keeps writing until the whole buffer has been
                    # sent; a bare send() may transmit only part of it and
                    # silently drop the rest of the H.264 stream.
                    sock.sendall(data)
            elif timeout < 200:
                timeout += 1
    
                time.sleep(0.1)
            else:
                break
    
    stream = streamObject()
    
    start_new_thread(stream.start_stream, (180, 0, 1280, 720, 20))
    
    sock = socket_connect("xxxx.xxxx.xxxx.xxxx", 7777)
    
    send_stream(sock, stream, 256)
    

    Here is a short video of the graphical glitches I encounter.

    I am doing this over a direct Ethernet connection at the moment.
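    For reference, a minimal corrected pattern (a sketch in Python 3; raspivid is replaced here by a stand-in command): read the subprocess's stdout in fixed-size binary chunks, keep stderr out of the stream, and use sendall() so partial sends cannot drop bytes.

```python
import socket
import subprocess

def pipe_process_to_socket(cmd, sock, chunk_size=4096):
    # Discard stderr so diagnostics can never be interleaved with the
    # raw H.264 bytes on stdout.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    while True:
        chunk = proc.stdout.read(chunk_size)  # fixed-size binary read
        if not chunk:                         # EOF: the process exited
            break
        sock.sendall(chunk)                   # retries partial sends
    proc.wait()

# Demo over a local socket pair with a stand-in command; on the Pi the
# command would be ['/usr/bin/raspivid', ..., '-o', '-'].
a, b = socket.socketpair()
pipe_process_to_socket(["printf", "demo-bytes"], a)
a.close()
print(b.recv(1024))  # → b'demo-bytes'
```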

  • FFmpeg, videotoolbox and avplayer in iOS

    9 January 2017, by Hwangho Kim

    I have a question about how these things are connected and what exactly they do.

    FYI, I have some experience with video players and with encoding and decoding.

    In my job I deal with UDP streams from a server: I receive them with ffmpeg, decode them, and draw them with OpenGL. I also use ffmpeg for a video player.

    These are the questions...

    1. Can only ffmpeg decode the UDP stream (encoded with ffmpeg on the server), or not?

    I found some useful information about VideoToolbox, which can decode a stream with hardware acceleration on iOS. So could I also decode the stream from the server with VideoToolbox?

    2. If it is possible to decode with VideoToolbox (I mean, if VideoToolbox could be a replacement for ffmpeg), then what is the VideoToolbox source code in ffmpeg for? Why is it there?

    In my decoder I create an AVCodecContext from the stream; it has hwaccel and hwaccel_context fields, both of which are set to null. I thought VideoToolbox was a kind of API that helps ffmpeg use the hardware acceleration of iOS, but that does not seem to be the case so far...

    3. If VideoToolbox can decode a stream, can it also decode local H.264 files, or is only streaming possible?

    AVPlayer is a good tool for playing a video, but if VideoToolbox could replace AVPlayer, what would the benefit be? Or is that impossible?

    4. Does FFmpeg use only the CPU for decoding (a software decoder), or hwaccel as well?

    When I play a video with an ffmpeg-based player, CPU usage goes over 100%. Does that mean ffmpeg uses only a software decoder, or is there a way to use hwaccel?
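    One part of question 4 can be probed from the command line (a sketch; in.mp4 is a placeholder, and the option requires an ffmpeg build with VideoToolbox support, i.e. macOS/iOS): ffmpeg's CLI can be asked to decode through VideoToolbox instead of its software decoder, which makes it easy to compare CPU usage.

```python
import subprocess

# Request hardware-accelerated decoding via VideoToolbox and discard
# the decoded frames; useful just to compare CPU load against a pure
# software decode of the same file.
cmd = [
    "ffmpeg",
    "-hwaccel", "videotoolbox",  # use the VideoToolbox hwaccel
    "-i", "in.mp4",
    "-f", "null", "-",           # decode only, no output file
]
# subprocess.run(cmd, check=True)  # on a machine with such a build
print(" ".join(cmd))
```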

    Please excuse my poor English; any answer would be appreciated.

    Thanks.

  • How to upload a JPEG to an RTMP server using C#?

    9 January 2017, by maxchehab

    How does one upload a JPEG to an RTMP server using C#?

    I have investigated ffmpeg, but I can only find how to stream videos to a server.

    Thank you for your time.
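    One commonly suggested route (a sketch; the file name and RTMP URL are placeholders): loop the still JPEG as a video source and push it to the RTMP server as an FLV/H.264 stream. The command is built in Python here for illustration; from C#, the same ffmpeg command line could be launched with System.Diagnostics.Process.

```python
import subprocess

# Loop a single JPEG as video frames and publish it over RTMP.
cmd = [
    "ffmpeg",
    "-loop", "1",           # repeat the still image as a video source
    "-i", "image.jpg",
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",  # widely compatible pixel format
    "-t", "10",             # stream for 10 seconds, then stop
    "-f", "flv",            # RTMP carries FLV
    "rtmp://example.com/live/stream",
]
# subprocess.run(cmd, check=True)
print(" ".join(cmd))
```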