Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Memory leak in C++/CLI application
10 December 2013, by Ankush
I am passing a bitmap from my C# app to a C++/CLI DLL that adds it to a video. The problem is that the program slowly leaks memory. I tried _CrtDumpMemoryLeaks(); it shows a leak of the bitmap and another 40-byte leak, but I am disposing of the bitmap. Can anyone spot the memory leak? Here is the code.
Flow:
1) Capture screenshot by takescreenshot()
2) pass it to c++/cli function
3) dispose bitmap
Lines from my C# app:
Bitmap snap = takescreeshot();
vencoder.AddBitmap(snap);
snap.Dispose();
vencoder.printleak();

private static Bitmap takescreeshot()
{
    System.Drawing.Bitmap bitmap = null;
    System.Drawing.Graphics graphics = null;
    bitmap = new Bitmap(
        System.Windows.Forms.Screen.PrimaryScreen.Bounds.Width,
        System.Windows.Forms.Screen.PrimaryScreen.Bounds.Height,
        System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    graphics = System.Drawing.Graphics.FromImage(bitmap);
    graphics.CopyFromScreen(Screen.PrimaryScreen.Bounds.X, Screen.PrimaryScreen.Bounds.Y,
        0, 0, Screen.PrimaryScreen.Bounds.Size);

    // Write timestamp
    Rectangle rect = new Rectangle(1166, 738, 200, 20);
    String datetime = System.String.Format("{0:dd:MM:yy hh:mm:ss}", DateTime.Now);
    System.Drawing.Font sysfont = new System.Drawing.Font("Times New Roman", 14, FontStyle.Bold);
    graphics.DrawString(datetime, sysfont, Brushes.Red, rect);

    // Grayscale
    Grayscale filter = new Grayscale(0.2125, 0.7154, 0.0721);
    Bitmap grayImage = filter.Apply(bitmap);

    // Dispose
    bitmap.Dispose();
    graphics.Dispose();
    return grayImage;
}
Now in the C++/CLI DLL:
bool VideoEncoder::AddBitmap(Bitmap^ bitmap)
{
    BitmapData^ bitmapData = bitmap->LockBits(
        System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height),
        ImageLockMode::ReadOnly, PixelFormat::Format8bppIndexed);
    uint8_t* ptr = reinterpret_cast<uint8_t*>(static_cast<void*>(bitmapData->Scan0));

    uint8_t* srcData[4] = { ptr, NULL, NULL, NULL };
    int srcLinesize[4] = { bitmapData->Stride, 0, 0, 0 };

    pCurrentPicture = CreateFFmpegPicture(pVideoStream->codec->pix_fmt,
        pVideoStream->codec->width, pVideoStream->codec->height);

    sws_scale(pImgConvertCtx, srcData, srcLinesize, 0, bitmap->Height,
        pCurrentPicture->data, pCurrentPicture->linesize);

    bitmap->UnlockBits(bitmapData);
    write_video_frame();

    bitmapData = nullptr;
    ptr = NULL;
    return true;
}

AVFrame* VideoEncoder::CreateFFmpegPicture(int pix_fmt, int nWidth, int nHeight)
{
    AVFrame* picture = NULL;
    uint8_t* picture_buf = NULL;
    int size;

    picture = avcodec_alloc_frame();
    if (!picture)
    {
        printf("Cannot create frame\n");
        return NULL;
    }
    size = avpicture_get_size((AVPixelFormat)pix_fmt, nWidth, nHeight);
    picture_buf = (uint8_t*)av_malloc(size);
    if (!picture_buf)
    {
        av_free(picture);
        printf("Cannot allocate buffer\n");
        return NULL;
    }
    avpicture_fill((AVPicture*)picture, picture_buf, (AVPixelFormat)pix_fmt, nWidth, nHeight);
    return picture;
}

void VideoEncoder::write_video_frame()
{
    AVCodecContext* codecContext = pVideoStream->codec;
    int out_size, ret = 0;

    if (pFormatContext->oformat->flags & AVFMT_RAWPICTURE)
    {
        printf("raw picture must be written");
    }
    else
    {
        out_size = avcodec_encode_video(codecContext, pVideoEncodeBuffer,
            nSizeVideoEncodeBuffer, pCurrentPicture);
        if (out_size > 0)
        {
            AVPacket packet;
            av_init_packet(&packet);
            if (codecContext->coded_frame->pts != AV_NOPTS_VALUE)
            {
                packet.pts = av_rescale_q(packet.pts, codecContext->time_base,
                    pVideoStream->time_base);
            }
            if (codecContext->coded_frame->pkt_dts != AV_NOPTS_VALUE)
            {
                packet.dts = av_rescale_q(packet.dts, codecContext->time_base,
                    pVideoStream->time_base);
            }
            if (codecContext->coded_frame->key_frame)
            {
                packet.flags |= AV_PKT_FLAG_KEY;
            }
            packet.stream_index = pVideoStream->index;
            packet.data = pVideoEncodeBuffer;
            packet.size = out_size;

            ret = av_interleaved_write_frame(pFormatContext, &packet);
            av_free_packet(&packet);
            av_freep(pCurrentPicture);
        }
        else
        {
            // image was buffered
        }
    }
    if (ret != 0)
    {
        throw gcnew Exception("Error while writing video frame.");
    }
}

void VideoEncoder::printleak()
{
    printf("No of leaks: %d", _CrtDumpMemoryLeaks());
    printf("\n");
}
-
Command to find and convert using ffmpeg
10 December 2013, by molwiko
I would like to combine the following two commands to find .mp4 files, convert them to .mp3, and save them under the same name. Thanks in advance. The two command lines:
find ./ -name '*.mp4'
ffmpeg -i video.mp4 -vn -acodec libmp3lame -ac 2 -ab 160k -ar 48000 audio.mp3
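One way to combine them is to let find invoke ffmpeg for each match and derive the output name with shell parameter expansion (a sketch; it assumes ffmpeg is on the PATH):

```shell
# Convert every .mp4 under the current directory to a .mp3 with the same base name
find . -name '*.mp4' -exec sh -c '
  for f do
    ffmpeg -i "$f" -vn -acodec libmp3lame -ac 2 -ab 160k -ar 48000 "${f%.mp4}.mp3"
  done
' sh {} +
```

Here `${f%.mp4}.mp3` strips the `.mp4` suffix and appends `.mp3`, so `./clips/video.mp4` becomes `./clips/video.mp3`.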
-
Muxing two media files in .NET
10 December 2013, by Newton Sheikh
I have two media files, xyz.wav (audio) and xyz.webm (video), and I want to mux them into a single file, xyz.mp4, in C#.
I googled a bit and found this tool for C#: http://www.ffmpeg-csharp.com/ but there is no documentation for merging audio and video files.
Can you suggest a way to achieve this, or any other library that you have used?
So far I have been using the command-line way of doing it with ffmpeg.exe:
ffmpeg.exe -i 180523220.wav -i 180523220.webm 1.mp4
but this is not what is required.
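For reference, even the command-line form usually needs explicit stream mapping and codecs: MP4 players generally expect H.264/AAC, not the VP8 video that comes out of a .webm. A sketch of a more explicit invocation (an assumption, not a verified recipe; from C# it could be launched via System.Diagnostics.Process):

```shell
# Take video from the .webm, audio from the .wav,
# and transcode both to MP4-friendly codecs
ffmpeg -i 180523220.wav -i 180523220.webm \
  -map 1:v -map 0:a \
  -c:v libx264 -c:a aac -strict experimental \
  -shortest 1.mp4
# -strict experimental is needed by older ffmpeg builds
# for the native AAC encoder; newer builds ignore it
```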
-
Node.js WebM live stream server: issues with the <video> tag
10 December 2013, by breathe0
I'm using Node.js as a stream server to stream realtime WebM video that is sent by FFMPEG (executed from another application; the stream goes over HTTP) and received by a webapp that uses the <video> tag.
This is what I'm doing: FFMPEG streams the received frames using the following command:
ffmpeg -r 30 -f rawvideo -pix_fmt bgra -s 640x480 -i \\.\pipe\STREAM_PIPE -r 60 -f segment -s 240x160 -codec:v libvpx -f webm http://my.domain.com/video_stream.webm
(the stream comes from an application that uses the Kinect as source and communicates with FFMPEG through a pipe, sending one frame after another)
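Since the <video> tag is far less forgiving than VLC about cluster boundaries and keyframe placement, it may be worth forcing frequent keyframes and small clusters. A variant of the command to experiment with (the extra options are standard libvpx and matroska/webm-muxer flags, suggested here as an assumption, not a verified fix):

```shell
ffmpeg -r 30 -f rawvideo -pix_fmt bgra -s 640x480 -i \\.\pipe\STREAM_PIPE \
  -codec:v libvpx -deadline realtime -g 30 \
  -cluster_size_limit 512000 -cluster_time_limit 5000 \
  -s 240x160 -f webm http://my.domain.com/video_stream.webm
# -g 30 forces a keyframe every 30 frames; the cluster limits keep
# clusters short so a client joining mid-stream syncs sooner.
# Note: the '-f segment' from the original command is dropped here,
# on the assumption that the webm muxer should write the output directly.
```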
When the webapp connects, it immediately receives this response from the server:
HTTP/1.1 200 OK
X-Powered-By: Express
content-type: video/webm
cache-control: private
connection: close
Date: Fri, 06 Dec 2013 14:36:31 GMT
and a Webm header (previously stored on the server, with the same resolution and frame rate of the source stream and tested as working on VLC) is immediately appended. Then the webapp starts to receive the data streamed by FFMPEG. Here is a screenshot of Mkvinfo GUI showing the fields of the header:
However, even though the Network tab of the Chrome console shows an actual stream of data (meaning that what is streamed is not complete garbage, otherwise the connection would be dropped), the player doesn't display anything. We tried manually prepending our header to the video dumped by the webapp, and VLC plays it just fine, but this does not happen with the <video> tag.
What can cause this problem? Are we missing something about the encoding on the FFMPEG side, or did we store wrong values in the header (or are they not enough)?
PS: I cannot rely on an external streaming server.
PPS: We tried the following experiments:
- substituting the video header with the one stored on the server makes the video playable in both VLC and the <video> tag
- if we dump a video that has already started (without a header) and prepend the video header stored on the server, or even its original header, the video is playable in VLC but not in the <video> tag (we are careful to prepend the header just before the beginning of the first cluster).
-
Bash scripting FFMPEG: how to wait for the process to complete
9 December 2013, by user1738671
I have a strange problem. I have a folder monitored with incrontab that launches an automatic transcoding script on the
CLOSE_WRITE
state of a file I dropped in. The problem is that the script doesn't wait until the ffmpeg process finishes before continuing with the rest of its commands. This means the original file gets deleted before the transcoding is finished, which is bad.
First question: what is the root cause of this behaviour?
Second question: in a bash script, what is the best way to make sure an ffmpeg process is done before getting on with the rest of the script?
Script:
#/bin/bash
#transcoding
/usr/bin/ffmpeg -i "sourcefile push with incron as $1" -vcodec somecode -acodec somecodec "destination file"
#delete source
rm "path/to/file$1"
Should I encapsulate my FFMPEG in a while statement?
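No while loop should be needed: ffmpeg runs in the foreground, so a bash script normally blocks on it until it exits. A more defensive sketch (codec names are placeholders, as in the question, and the output filename is invented for illustration) redirects stdin away from ffmpeg, which otherwise can consume the script's input or stall when run non-interactively under incron, and only deletes the source when the encode succeeds:

```shell
#!/bin/bash
src="path/to/file$1"
# </dev/null stops ffmpeg from reading the script's stdin;
# the if checks ffmpeg's exit status before removing the source.
# "${src%.*}.out.mkv" is a placeholder output name: the extension
# is stripped and replaced.
if /usr/bin/ffmpeg -i "$src" -vcodec somecodec -acodec somecodec \
    "${src%.*}.out.mkv" </dev/null; then
    rm -- "$src"
else
    echo "transcode failed, keeping $src" >&2
fi
```

If ffmpeg were backgrounded with `&`, `wait $!` would block until it finished, but in a plain sequential script the call already blocks on its own.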