Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
FFMPEG - Convert UInt16 Data to .264
May 22, 2018, by Lukas Marschall
At the moment, I'm trying to use FFmpeg to convert my raw data, in uint16 format from an infrared camera, to MP4 format or at least to .h264.
My current ffmpeg command is:
ffmpeg -f rawvideo -pix_fmt gray16be -s:v 140x110 -r 30 -i binaryMarianData.bin -c:v libx264 -f rawvideo -pix_fmt yuv420p output.264
But my output file does not look right :(
Here is my Input File: http://fileshare.link/91a43a238e0de75b/binaryMarianData.bin
Update 1: Little Endian
Hey guys, it would be great if it were possible to get the video output in little-endian byte order.
- This is a frame shown with ImageJ with the following settings
- Settings of the shown frame above in ImageJ
Unfortunately, my output doesn't look like this.
This is my command used to convert the RAW File:
ffmpeg -f rawvideo -pixel_format gray16le -video_size 110x140 -framerate 30 -i binaryMarianData.bin -vf transpose=clock -c:v libx264 -pix_fmt yuv420p output.264
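For what it's worth, this updated command already fixes the two likely problems in the first attempt: the big-endian pixel format and the stray output-side `-f rawvideo` (which conflicts with the H.264 elementary stream libx264 produces). To reach the stated goal of an actual MP4 rather than a bare `.264` stream, the same command can write straight into an MP4 container; a sketch, untested against the linked file:

```shell
# Read 110x140 16-bit little-endian grayscale frames, rotate, encode H.264.
# -f rawvideo applies to the INPUT only; the output muxer is inferred
# from the .mp4 extension.
ffmpeg -f rawvideo -pixel_format gray16le -video_size 110x140 -framerate 30 \
  -i binaryMarianData.bin \
  -vf "transpose=clock" \
  -c:v libx264 -pix_fmt yuv420p \
  output.mp4
```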
-
Node.js, stream pipe output data to client with socket io-stream
May 22, 2018, by Empha
Sorry for the repeating topic, but I've searched and experimented for two days now and haven't been able to solve the problem.
I am trying to live-stream pictures to a client every second via socket.io-stream, using the following code:
var args = [
  "-i", "/dev/video0", "-s", "1280x720", "-qscale", 1, "-vf", "fps=1", config.imagePath,
  "-s", config.imageStream.resolution[0], "-f", "image2pipe", "-qscale", 1, "-vf", "fps=1", "pipe:1"
];
camera = spawn("avconv", args); // avconv = ffmpeg
The settings are good, and the process writes to stdout successfully. I capture all outgoing image data using this simplified code:
var ss = require("socket.io-stream");
camera.stdout.on("data", function(data) {
  var stream = ss.createStream();
  ss(socket).emit("img", stream, "newImg");
  // how do I write the data object to the stream?
  // fs.createReadStream(imagePath).pipe(stream);
});
"socket" comes from the client using the socket.io package, no problem there. What I am doing is listening on the stdout pipe for the "data" event, and that data gets passed to the function above. At this stage "data" is not a stream, it's a "Buffer" object, and therefore I cannot stream it like I previously could with the commented createReadStream statement, where I read the image from disk.
How do I stream the data (a Buffer at this stage) to the client? Can I do this differently, perhaps without socket.io-stream? "data" is just one part of the whole image, so perhaps two or three "data" objects need to be put together to form the complete image.
I tried using stream.write(data, "binary"), which did transfer the Buffer objects; the problem is that there is no end-of-stream event, so I do not know when an image is complete. I tried registering for stdout's "close", "end" and "finish" events, but nothing triggers. Am I missing something? Am I making it overly complex? The reasoning behind my implementation is that I need a new stream for each complete image; is that right?
Thanks a lot!
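One way to know when an image is complete, assuming the pipe carries JPEG frames (which is what `image2pipe` typically emits with an MJPEG encoder): accumulate the Buffer chunks and split on the JPEG start/end markers. A sketch; the marker scan is simplistic but works for typical MJPEG output, where 0xFFD8 opens a frame and 0xFFD9 closes it:

```javascript
// Accumulate stdout chunks and call onFrame once per complete JPEG image.
// JPEG frames begin with the SOI marker (0xFF 0xD8) and end with EOI (0xFF 0xD9).
function createJpegSplitter(onFrame) {
  let pending = Buffer.alloc(0);
  return function (chunk) {
    pending = Buffer.concat([pending, chunk]);
    while (true) {
      const start = pending.indexOf(Buffer.from([0xff, 0xd8]));
      if (start === -1) break; // no frame start seen yet
      const end = pending.indexOf(Buffer.from([0xff, 0xd9]), start + 2);
      if (end === -1) break; // frame not finished; wait for more chunks
      onFrame(pending.slice(start, end + 2)); // one complete image
      pending = pending.slice(end + 2);
    }
  };
}
```

Hooked up as `camera.stdout.on("data", createJpegSplitter(function (frame) { /* open a fresh socket.io-stream and write the frame */ }));`, each complete frame can get its own stream, which matches the one-stream-per-image design above.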
-
Transcode video and upload to Amazon S3
May 22, 2018, by dev
I want to transcode a video to 360p, 480p and 720p and then upload it to Amazon S3.
Currently we are using the PHP FFMPEG library.
I have successfully transcoded video on my server, but I don't understand how to achieve the same on Amazon S3.
Do I need to first upload the original video to S3, then fetch it, transcode it into the different formats and send them back to Amazon S3? Is that possible?
Or if there is any other way, please suggest it.
Thanks in advance!
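S3 itself is plain object storage and cannot transcode anything, so the work happens either on your own server or in a managed service (e.g. AWS Elastic Transcoder / Elemental MediaConvert). The common self-hosted workflow, sketched with the ffmpeg and AWS CLIs (both assumed installed; the bucket name and paths are placeholders): transcode the renditions locally from the original you already have, then upload each result:

```shell
# Transcode the local original into three renditions, then upload each one.
# "my-bucket" and the file paths are hypothetical.
for h in 360 480 720; do
  ffmpeg -i original.mp4 -vf "scale=-2:${h}" -c:v libx264 -c:a aac "video_${h}p.mp4"
  aws s3 cp "video_${h}p.mp4" "s3://my-bucket/videos/video_${h}p.mp4"
done
```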
-
FFmpeg error count in C/C++
May 22, 2018, by dave
In my C/C++ application I need to count decoding errors. I'm not familiar with the ffmpeg library. Is there any way to do it without grabbing the errors from the terminal? For example, in the terminal I get:
[h264 @ 008df020] error while decoding MB 34 0, bytestream 3152
Any ideas? Thanks in advance.
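A sketch of the usual approach: register a custom log callback with `av_log_set_callback()` from `<libavutil/log.h>` and count messages at `AV_LOG_ERROR` severity or worse; ffmpeg then routes every log line (including "error while decoding MB ...") through your function instead of the terminal. The ffmpeg-specific pieces are stubbed here (the level constant and the `emit` feeder are local stand-ins) so the counting logic is shown self-contained:

```c
#include <stdarg.h>
#include <stdio.h>

/* In the real application: #include <libavutil/log.h> and call
 * av_log_set_callback(count_errors); at startup. */

#define MY_AV_LOG_ERROR 16  /* local stand-in for libavutil's AV_LOG_ERROR */

static int decode_error_count = 0;

/* Matches the callback signature av_log_set_callback() expects:
 * void (*)(void *avcl, int level, const char *fmt, va_list vl) */
static void count_errors(void *avcl, int level, const char *fmt, va_list vl)
{
    (void)avcl;
    if (level <= MY_AV_LOG_ERROR)   /* lower value = more severe */
        decode_error_count++;
    vfprintf(stderr, fmt, vl);      /* keep printing, like the default handler */
}

/* Helper used only in this standalone sketch: feeds a formatted message
 * to the callback the way av_log() would. */
static void emit(int level, const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    count_errors(NULL, level, fmt, ap);
    va_end(ap);
}
```

In the real program the only ffmpeg-specific lines needed are the `av_log_set_callback(count_errors);` registration and reading `decode_error_count` after decoding.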
-
ffmpeg - How to add multiple drawtext filters to one input video
May 22, 2018, by Raven
I need to add two texts to a video. The first text appears in the bottom right for the first 6 seconds, and the second text at the center of the video for the last 3 seconds.
Below is my code:
ffmpeg -i input.mp4 -vf drawtext="text='Stack Overflow': fontcolor=white: borderw=2: fontfile=Arial Black: fontsize=w*0.04: x=(w-text_w)-(w*0.04): y=(h-text_h)-(w*0.04): enable='between(t,0,6)'", -vf drawtext="text='Stack Overflow': fontcolor=white: borderw=2: fontfile=Arial Black: fontsize=w*0.04: x=(w-text_w)/2: y=(h-text_h)/2: enable='between(t,7,10)'" -codec:a copy output2.mp4
I don't get any errors running the above command, but in the output file only the second drawtext is applied.
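The likely cause: when `-vf` is given more than once, only the last one is applied. Both drawtext filters can instead be chained with a comma inside a single `-vf`. A sketch of the corrected command; note that `fontfile` should point at an actual font file path (the path below is a placeholder), not a family name:

```shell
# Two drawtext filters chained with "," inside ONE -vf option.
ffmpeg -i input.mp4 -vf "\
drawtext=text='Stack Overflow':fontcolor=white:borderw=2:\
fontfile=/path/to/ArialBlack.ttf:fontsize=w*0.04:\
x=(w-text_w)-(w*0.04):y=(h-text_h)-(w*0.04):enable='between(t,0,6)',\
drawtext=text='Stack Overflow':fontcolor=white:borderw=2:\
fontfile=/path/to/ArialBlack.ttf:fontsize=w*0.04:\
x=(w-text_w)/2:y=(h-text_h)/2:enable='between(t,7,10)'" \
-codec:a copy output2.mp4
```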