Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
issue when I run ffmpeg
29 May 2018, by MGM
ffmpeg was working fine until I got the following error message:
ffmpeg: error while loading shared libraries: libopencv_core.so.2.4: cannot open shared object file: No such file or directory
I tried to install OpenCV again; I followed this script.
$ opencv_version
3.4.1
Any ideas?
Thanks
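A hedged diagnostic sketch, assuming the usual cause: ffmpeg was built against OpenCV 2.4, and reinstalling OpenCV as 3.4.1 left the 2.4 runtime library missing. Checking what the binary is linked against and refreshing the linker cache narrows it down (paths below are illustrative):

$ ldd $(which ffmpeg) | grep opencv   # lists the exact libopencv_* versions ffmpeg was linked against
$ sudo ldconfig                        # rebuilds the shared-library cache after (re)installing OpenCV

If ldd still reports libopencv_core.so.2.4 as "not found", the ffmpeg binary itself is linked against OpenCV 2.4 and needs to be rebuilt against the installed 3.4.1, or OpenCV 2.4 reinstalled alongside it.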
-
Needed: list of steps to make a desktop a video streaming server using ffmpeg
29 May 2018, by vijayky88
I am trying to run my Ubuntu machine as an ffmpeg server, where I want to stream my local video over HTTP.
Please suggest the complete list of commands and steps as well.
Thanks in advance!
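A minimal sketch of one possible approach, not a complete recipe: reasonably recent ffmpeg builds can act as a bare-bones HTTP server themselves via the http protocol's -listen option. The input file name, port, and codec choices below are placeholder assumptions:

ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac -f mpegts -listen 1 http://0.0.0.0:8080/live.ts

A client could then play it with, e.g., ffplay http://SERVER_IP:8080/live.ts. This serves a single connection; for multiple simultaneous viewers, a dedicated server (for example nginx serving HLS segments produced by ffmpeg) is the more common setup.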
-
Converting video formats and copying tags with ffmpeg
29 May 2018, by Scott
I've been trying to convert some videos I took on my camera to a compressed format in order to save some storage space. I figured out how to use ffmpeg to convert the videos to the format I want, but what I haven't been able to figure out is how to copy the metadata. I'd like to copy the original metadata from when the video was taken (most importantly the creation time). I've tried running ffmpeg with the -map_meta_data 0:0 option, but that didn't seem to work. Any ideas?
It looks like the data I want to copy in this case is in the format section of the video. Using ffprobe with the show_format option, I get this output:
[FORMAT]
filename=video.AVI
nb_streams=2
format_name=avi
format_long_name=AVI format
start_time=0.000000
duration=124.565421
size=237722700
bit_rate=15267331
TAG:creation_time=2012-02-07 12:15:27
TAG:encoder=CanonMVI06
[/FORMAT]
I would like to copy the two tags to my new video.
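A hedged sketch of the usual approach: the current spelling of the option is -map_metadata (taking an input index), and a tag that the target muxer drops can still be set explicitly with -metadata. The output name and codec choices below are assumptions:

ffmpeg -i video.AVI -map_metadata 0 -c:v libx264 -c:a aac compressed.mp4

If creation_time still goes missing after that, it can be set by hand with the value ffprobe reported:

ffmpeg -i video.AVI -c:v libx264 -c:a aac -metadata creation_time="2012-02-07 12:15:27" compressed.mp4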
-
FFmpeg avcodec_decode_video2 decodes RTSP H264 HD-video packets with errors
29 May 2018, by Nguyen Ba Thi
I used the FFmpeg library, version 4.0, in a simple C++ program in which a thread receives RTSP H264 video data from an IP camera and displays it in the program window. The code of this thread follows:
DWORD WINAPI GrabbProcess(LPVOID lpParam) // Grabbing thread
{
  DWORD i;
  int ret = 0, nPacket = 0;
  FILE *pktFile;
  // Open video file
  pFormatCtx = avformat_alloc_context();
  if (avformat_open_input(&pFormatCtx, nameVideoStream, NULL, NULL) != 0)
    fGrabb = -1; // Couldn't open file
  else
  // Retrieve stream information
  if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
    fGrabb = -2; // Couldn't find stream information
  else
  {
    // Find the first video stream
    videoStream = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++)
      if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
      {
        videoStream = i;
        break;
      }
    if (videoStream == -1)
      fGrabb = -3; // Didn't find a video stream
    else
    {
      // Get a pointer to the codec context for the video stream
      pCodecCtxOrig = pFormatCtx->streams[videoStream]->codec;
      // Find the decoder for the video stream
      pCodec = avcodec_find_decoder(pCodecCtxOrig->codec_id);
      if (pCodec == NULL)
        fGrabb = -4; // Codec not found
      else
      {
        // Copy context
        pCodecCtx = avcodec_alloc_context3(pCodec);
        if (avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0)
          fGrabb = -5; // Error copying codec context
        else
        {
          // Open codec
          if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
            fGrabb = -6; // Could not open codec
          else
            // Allocate video frame for input
            pFrame = av_frame_alloc();
          // Determine required buffer size and allocate buffer
          numBytes = avpicture_get_size(pCodecCtx->pix_fmt,
                                        pCodecCtx->width, pCodecCtx->height);
          buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
          // Assign appropriate parts of buffer to image planes in pFrame
          // Note that pFrame is an AVFrame, but AVFrame is a superset
          // of AVPicture
          avpicture_fill((AVPicture *)pFrame, buffer, pCodecCtx->pix_fmt,
                         pCodecCtx->width, pCodecCtx->height);
          // Allocate video frame for display
          pFrameRGB = av_frame_alloc();
          // Determine required buffer size and allocate buffer
          numBytes = avpicture_get_size(AV_PIX_FMT_RGB24,
                                        pCodecCtx->width, pCodecCtx->height);
          bufferRGB = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
          // Assign appropriate parts of buffer to image planes in pFrameRGB
          // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
          // of AVPicture
          avpicture_fill((AVPicture *)pFrameRGB, bufferRGB, AV_PIX_FMT_RGB24,
                         pCodecCtx->width, pCodecCtx->height);
          // initialize SWS context for software scaling to FMT_RGB24
          sws_ctx_to_RGB = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                                          pCodecCtx->pix_fmt,
                                          pCodecCtx->width, pCodecCtx->height,
                                          AV_PIX_FMT_RGB24,
                                          SWS_BILINEAR, NULL, NULL, NULL);
          // Allocate video frame (grayscale YUV420P) for processing
          pFrameYUV = av_frame_alloc();
          // Determine required buffer size and allocate buffer
          numBytes = avpicture_get_size(AV_PIX_FMT_YUV420P,
                                        pCodecCtx->width, pCodecCtx->height);
          bufferYUV = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
          // Assign appropriate parts of buffer to image planes in pFrameYUV
          // Note that pFrameYUV is an AVFrame, but AVFrame is a superset
          // of AVPicture
          avpicture_fill((AVPicture *)pFrameYUV, bufferYUV, AV_PIX_FMT_YUV420P,
                         pCodecCtx->width, pCodecCtx->height);
          // initialize SWS context for software scaling to FMT_YUV420P
          sws_ctx_to_YUV = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                                          pCodecCtx->pix_fmt,
                                          pCodecCtx->width, pCodecCtx->height,
                                          AV_PIX_FMT_YUV420P,
                                          SWS_BILINEAR, NULL, NULL, NULL);
          RealBsqHdr.biWidth = pCodecCtx->width;
          RealBsqHdr.biHeight = -pCodecCtx->height;
        }
      }
    }
  }
  while ((fGrabb == 1) || (fGrabb == 100))
  {
    // Grabb a frame
    if (av_read_frame(pFormatCtx, &packet) >= 0)
    {
      // Is this a packet from the video stream?
      if (packet.stream_index == videoStream)
      {
        // Decode video frame
        int len = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        nPacket++;
        // Did we get a video frame?
        if (frameFinished)
        {
          // Convert the image from its native format to YUV
          sws_scale(sws_ctx_to_YUV, (uint8_t const * const *)pFrame->data,
                    pFrame->linesize, 0, pCodecCtx->height,
                    pFrameYUV->data, pFrameYUV->linesize);
          // Convert the image from its native format to RGB
          sws_scale(sws_ctx_to_RGB, (uint8_t const * const *)pFrame->data,
                    pFrame->linesize, 0, pCodecCtx->height,
                    pFrameRGB->data, pFrameRGB->linesize);
          HDC hdc = GetDC(hWndM);
          SetDIBitsToDevice(hdc, 0, 0, pCodecCtx->width, pCodecCtx->height,
                            0, 0, 0, pCodecCtx->height, pFrameRGB->data[0],
                            (LPBITMAPINFO)&RealBsqHdr, DIB_RGB_COLORS);
          ReleaseDC(hWndM, hdc);
          av_frame_unref(pFrame);
        }
      }
      // Free the packet that was allocated by av_read_frame
      av_free_packet(&packet);
    }
  }
  // Free the org frame
  av_frame_free(&pFrame);
  // Free the RGB frame
  av_frame_free(&pFrameRGB);
  // Free the YUV frame
  av_frame_free(&pFrameYUV);
  // Close the codec
  avcodec_close(pCodecCtx);
  avcodec_close(pCodecCtxOrig);
  // Close the video file
  avformat_close_input(&pFormatCtx);
  avformat_free_context(pFormatCtx);
  if (fGrabb == 1)
    sprintf(tmpstr, "Grabbing Completed %d frames", nCntTotal);
  else if (fGrabb == 2)
    sprintf(tmpstr, "User break on %d frames", nCntTotal);
  else if (fGrabb == 3)
    sprintf(tmpstr, "Can't Grabb at frame %d", nCntTotal);
  else if (fGrabb == -1)
    sprintf(tmpstr, "Couldn't open file");
  else if (fGrabb == -2)
    sprintf(tmpstr, "Couldn't find stream information");
  else if (fGrabb == -3)
    sprintf(tmpstr, "Didn't find a video stream");
  else if (fGrabb == -4)
    sprintf(tmpstr, "Codec not found");
  else if (fGrabb == -5)
    sprintf(tmpstr, "Error copying codec context");
  else if (fGrabb == -6)
    sprintf(tmpstr, "Could not open codec");
  i = (UINT)fGrabb;
  fGrabb = 0;
  SetWindowText(hWndM, tmpstr);
  ExitThread(i);
  return 0;
} // End Grabbing thread
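An aside on the code above, offered as a hedged sketch rather than the asker's method: in FFmpeg 4.0 avcodec_decode_video2() is already deprecated in favor of the send/receive API, which is stricter about packets that yield zero or several frames. A minimal replacement for the decode step, reusing the question's variable names, could look like this:

// Hedged sketch: FFmpeg 4.0 send/receive decoding in place of avcodec_decode_video2().
// pCodecCtx, pFrame and packet are the variables from the question's code.
int rc = avcodec_send_packet(pCodecCtx, &packet);   // feed one compressed packet
if (rc >= 0)
{
  for (;;)
  {
    rc = avcodec_receive_frame(pCodecCtx, pFrame);  // drain every frame the packet produced
    if (rc == AVERROR(EAGAIN) || rc == AVERROR_EOF)
      break;                                        // decoder needs more input / is drained
    if (rc < 0)
      break;                                        // genuine decoding error
    // ... sws_scale + SetDIBitsToDevice as in the original loop ...
    av_frame_unref(pFrame);
  }
}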
When the program receives RTSP H264 video data at resolution 704x576, the decoded video pictures are OK. When it receives RTSP H264 HD video data at resolution 1280x720, it looks like the first video picture is decoded OK, but the following pictures are always decoded with some error. Please help me fix this problem!
Here is a brief summary of the problem:
I have an IP camera, model HI3518E_50H10L_S39 (a product of China).
The camera can provide an H264 video stream at either resolution 704x576 (with RTSP URI "rtsp://192.168.1.18:554/user=admin_password=tlJwpbo6_channel=1_stream=1.sdp?real_stream") or 1280x720 (with RTSP URI "rtsp://192.168.1.18:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream").
Using the FFplay utility, I can access and display both streams with good picture quality.
To test grabbing from this camera, I have the simple program mentioned above, built in VC-2005. In the "Grabbing thread" the program uses FFmpeg library version 4.0 to open the camera's RTSP stream, retrieve stream information, find the first video stream, and prepare some variables.
The center of this thread is a loop: grab a frame (function av_read_frame), decode it if it is video (function avcodec_decode_video2), convert it to RGB format (function sws_scale), and display it in the program window (GDI function SetDIBitsToDevice).
When the program runs with the camera RTSP stream at resolution 704x576, I get a good video picture. Here is a sample:
[704x576 sample]
When the program runs with the camera RTSP stream at resolution 1280x720, the first video picture is good:
[First good picture at res. 1280x720]
but the following ones are not:
[Not good picture at res. 1280x720]
It seems that my call to the FFmpeg function avcodec_decode_video2 cannot fully decode certain packets for some reason.
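A hedged guess at the cause, not a confirmed fix: corruption that appears only on the HD stream is a classic symptom of RTP-over-UDP packet loss, since 1280x720 slices no longer fit in single datagrams. FFmpeg's RTSP demuxer has an rtsp_transport option that forces interleaved TCP instead; a minimal change to the question's open call would be:

// Sketch: force RTSP over TCP before avformat_open_input() to rule out UDP loss.
AVDictionary *opts = NULL;
av_dict_set(&opts, "rtsp_transport", "tcp", 0); // option of FFmpeg's RTSP demuxer
if (avformat_open_input(&pFormatCtx, nameVideoStream, NULL, &opts) != 0)
    fGrabb = -1; // Couldn't open file
av_dict_free(&opts);
-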
ffmpeg : flushing output file every chunk
29 May 2018, by 2080
I'm using ffmpeg to generate a sine tone in real time for 10 seconds. Unfortunately, ffmpeg seems to flush the output file only rarely, every few seconds. I'd like it to flush every 2048 bytes (= 2-byte sample width × 1024 samples, my custom chunk size).
The output of the following script:
import os
import time
import subprocess

cmd = 'ffmpeg -y -re -f lavfi -i "sine=frequency=440:duration=10" -blocksize 2048 test.wav'
subprocess.Popen(cmd, shell=True)
time.sleep(0.1)
while True:
    print(os.path.getsize("test.wav"))
    time.sleep(0.1)
looks like:
[...]
78
78
78
262222
262222
262222
[...]
A user on the #ffmpeg IRC proposed using
ffmpeg -re -f lavfi -i "sine=frequency=1000:duration=10" -f wav pipe: > test.wav
which works. But can this be achieved just using ffmpeg?
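One option worth trying, as a hedged sketch: ffmpeg has a format-level -flush_packets option that asks the muxer to flush after each packet it writes. Combined with the custom -blocksize it may give the per-chunk flushing wanted here; whether the WAV muxer honors both is an assumption to verify:

ffmpeg -y -re -f lavfi -i "sine=frequency=440:duration=10" -blocksize 2048 -flush_packets 1 test.wav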