Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Command-line streaming webcam with audio from Ubuntu server in WebM format

    28 April 2017, by mjtb

    I am trying to stream video and audio from my webcam connected to my headless Ubuntu server (running Maverick 10.10). I want to be able to stream in WebM format (VP8 video + OGG). Bandwidth is limited, and so the stream must be below 1Mbps.

    I have tried using FFmpeg. I am able to record WebM video from the webcam with the following:

    ffmpeg -s 640x360 \
    -f video4linux2 -i /dev/video0 -isync -vcodec libvpx -vb 768000 -r 10 -vsync 1 \
    -f alsa -ac 1 -i hw:1,0 -acodec libvorbis -ab 32000 -ar 11025 \
    -f webm /var/www/telemed/test.webm 
    

    However, despite experimenting with all manner of vsync and async options, I either get out-of-sync audio, or Benny Hill-style fast-forward video with matching fast audio. I have also been unable to get this actually working with ffserver (by replacing the test.webm path and filename with the relevant feed filename).
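
    For what it's worth, here is a variant worth trying (a sketch only, not verified, and assuming a reasonably recent ffmpeg; the stock Maverick build would need the older -vcodec/-vb/-acodec/-ab spellings). The idea is to attach the capture options to their respective inputs, so each device is read at its native rate, and let -async/-vsync reconcile the two clocks on the output side:

    # Sketch, not verified: per-input capture options, sync handled on output.
    ffmpeg \
        -f video4linux2 -framerate 10 -video_size 640x360 -i /dev/video0 \
        -f alsa -ac 1 -i hw:1,0 \
        -c:v libvpx -b:v 768k -deadline realtime \
        -c:a libvorbis -b:a 32k -ar 22050 \
        -async 1 -vsync 1 \
        -f webm /var/www/telemed/test.webm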

    The objective is to get a live audio + video feed which is viewable in a modern browser, within a tight bandwidth budget, using only open-source components. (None of that MP3 format legal chaff.)

    My questions are therefore: how would you go about streaming WebM from a webcam via Linux with in-sync audio? What software would you use?

    Have you succeeded in encoding webm from a webcam with in-sync audio via FFmpeg? If so, what command did you issue?

    Is it worth persevering with FFmpeg + FFserver, or are there other more suitable command-line tools around (e.g. VLC, which doesn't seem too well suited to encoding)?

    Is something like GStreamer + Flumotion configurable from the command line? If so, where do I find command-line documentation, because the Flumotion docs are rather light on command-line details?

    Thanks in advance!

  • Web page with script

    28 April 2017, by Peter

    Please be patient with me, I have never been very much into coding (only a little at university), but here is one thing I would love to have. I haven't done any research before asking because I don't even know what to search for :) but here is what I would like to do:

    I have a Synology device with a multicast TV stream connected to it. I have installed ffmpeg, and every time I want to record a TV show I have to do it from the CLI, so I was thinking it would be nice to run ffmpeg from a web page. I mean I would pick the TV channel from a drop-down menu, set the name of the file, set the time, click RUN, and the command (ffmpeg -i http://MULTICAST_STREAM-variable -acodec copy -vcodec copy /var/services/homes/xxx/NAME_OF_TV_SHOW.mpg) would execute on my Synology... Is something like this possible? (I hope it is)... What exactly should I search for?
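
    What you are describing is usually built as a small web front-end (a form) that shells out to ffmpeg, for example through a CGI script on the Synology's web server. Below is a minimal sketch with hypothetical field names; the channel list, paths and time handling are placeholders, and a real page must validate every field (and restrict access) to avoid command injection:

    #!/bin/sh
    # Hypothetical CGI sketch: the form submits channel, name and minutes,
    # e.g. QUERY_STRING = "channel=1&name=NAME_OF_TV_SHOW&minutes=60".
    echo "Content-Type: text/plain"
    echo ""

    CHANNEL=$(echo "$QUERY_STRING" | sed -n 's/.*channel=\([^&]*\).*/\1/p')
    NAME=$(echo "$QUERY_STRING"    | sed -n 's/.*name=\([^&]*\).*/\1/p')
    MINUTES=$(echo "$QUERY_STRING" | sed -n 's/.*minutes=\([^&]*\).*/\1/p')

    # Map the drop-down value to the actual multicast address (example values).
    case "$CHANNEL" in
        1) URL="udp://239.0.0.1:1234" ;;
        2) URL="udp://239.0.0.2:1234" ;;
        *) echo "unknown channel"; exit 0 ;;
    esac

    # -t limits the recording length (seconds); stream copy avoids re-encoding.
    ffmpeg -i "$URL" -t $((MINUTES * 60)) -acodec copy -vcodec copy \
        "/var/services/homes/xxx/${NAME}.mpg" > /dev/null 2>&1 &

    echo "Recording of ${NAME} started"

    Useful search terms: "CGI script", "Synology Web Station", and "PHP shell_exec ffmpeg" for the PHP equivalent.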

    Thank you very much

  • How do I increase the IO buffer size in Crystal?

    28 April 2017, by timur

    I have an IO class that sends the data it receives across all connected web sockets. The data comes from a Process call running an FFmpeg command. The class is as follows:

    class FfmpegBinaryIO
        include IO
    
        def initialize(@sockets = [] of HTTP::WebSocket)
        end
    
        def read(slice : Bytes)
            raise "FfmpegBinaryIO does not support reads"
        end
    
        def write(slice : Bytes)
            @sockets.each { |socket| socket.send Base64.encode(slice) }
        end
    end
    

    The write function iterates over the array of sockets I pass in and sends the chunks encoded in base64. The chunks are received via the following Process call, where binary_output is an instance of the FfmpegBinaryIO class:

    spawn do
        Process.run(command: ffmpeg_command, output: binary_output, error: ffmpeg, shell: true)
    end
    

    The class is working as it should, and I am able to display the chunks using the following JavaScript code:

    const img = document.getElementById('view')
    const ws = new WebSocket('ws://127.0.0.1:8080/ffmpeg')
    ws.onmessage = msg => {
        console.dir(msg)
        img.src = `data:image/jpeg;base64,${msg.data}`
    }
    

    However, the frames appear to be cut in half when displayed, and every other frame is skipped, presumably because the data is corrupted. What I found out when writing the same Bytes to JPG files is that the chunks were incomplete, and as a result every other file written was corrupted.

    This is not the kind of behavior I experienced when using a similar approach in other programming languages. My hypothesis is that the chunks are being cut due to an imposed buffer size limit. Is there any way I can have every write call represent a single frame chunk from FFmpeg by increasing this hypothetical buffer size?

    UPDATE

    My suspicion that this has to do with the size of the slices FfmpegBinaryIO receives appears to be right. I tried modifying the FFmpeg command to reduce the output byte size by reducing the scaling, like so:

    ffmpeg_command = "ffmpeg -i /dev/video0 -framerate 6 -vf scale=320:240 -r 24 -qscale:v 35 -updatefirst 1 -f image2 -y pipe:1"
    

    Previously, the command I used did not include the -vf scale=320:240 option.
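
    If it helps, this matches how pipes behave in general: a read on the child's stdout returns whatever bytes are currently available, not one image per call, so no buffer-size setting will make every write() receive exactly one JPEG. A more robust route is to treat the pipe as a byte stream and cut it on the JPEG markers (frames start with FF D8 and end with FF D9) before base64-encoding. On the ffmpeg side, a continuous MJPEG stream to stdout keeps those markers intact (a sketch, untested, reusing the options from the command above):

    # Sketch, untested: emit one continuous MJPEG stream instead of image2 files;
    # the consumer buffers bytes until it sees the EOI marker FF D9.
    ffmpeg -i /dev/video0 -vf scale=320:240 -r 24 -q:v 35 -f mjpeg pipe:1

    The write override in FfmpegBinaryIO would then append each slice to an internal buffer and only send a websocket message once a complete FF D8 ... FF D9 frame has accumulated.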

  • Tensorflow Video Decoding on a Separate Thread in a Distributed System

    28 April 2017, by Issa Ayoub

    In a distributed system, I need to enqueue frames on a CPU device, while processing the frames (that is, training the network) has to be done on a GPU device. Can this be performed in parallel (simultaneously) in TensorFlow?

    Currently, TensorFlow enables audio coding through FFmpeg (contrib); are there any features for video encoding and decoding that are multi-threaded?

    Again, the purpose is to perform an enqueue operation in one thread on a CPU device and, through another thread, dequeue the frames and feed them to the graph which lies on a GPU device.

    Currently I need to process more than 100 videos with a minimum duration of 10 minutes each.

  • HLS to MPEG DASH

    28 April 2017, by Maximilian

    I'm currently working on a platform that relies on MPEG-DASH to deliver audio and video to the browser. For on-demand content I'm using ffmpeg to encode videos to H.264/AAC and MP4Box to create the manifest.mpd file. Now I'm trying to figure out how to create live MPEG-DASH streams, more specifically how to encode HLS live streams to MPEG-DASH.

    1. Do I need to re-encode all the .ts segments to .mp4 (H.264/AAC) segments, since Chrome doesn't support MPEG-2 TS? (A rough sketch of this step follows below.)
    2. If so, how do I continuously re-encode all the segments (different resolutions, different bitrates)?
    3. How do I create a dynamic manifest with MP4Box, and what would the input parameters look like?
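
    A rough sketch of the remuxing step mentioned in question 1 (untested, example URL): if the HLS variants already carry H.264/AAC, ffmpeg's dash muxer can pull the live playlist and rewrap the segments into fragmented MP4 plus a live manifest.mpd without re-encoding:

    # Sketch, untested: remux a live HLS input into live DASH.
    # -c copy only works when the segments are already H.264/AAC;
    # otherwise replace it with -c:v libx264 -c:a aac to re-encode.
    ffmpeg -i https://example.com/live/master.m3u8 \
        -c copy \
        -f dash -window_size 5 -use_template 1 -use_timeline 1 \
        manifest.mpd

    In other words, as long as the .ts segments already contain H.264/AAC, they only need to be remuxed (not re-encoded) into fragmented MP4 for Chrome; re-encoding is only required to change resolutions or bitrates.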