
Media (91)
-
Chuck D with Fine Arts Militia - No Meaning No
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Paul Westerberg - Looking Up in Heaven
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Le Tigre - Fake French
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Thievery Corporation - DC 3000
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Dan the Automator - Relaxation Spa Treatment
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Gilberto Gil - Oslodum
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (68)
-
Customize by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Write a news item
21 June 2013, by
Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP site via the news section.
In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of type "news item", the default fields are: publication date (customize the publication date) (...)
-
User profiles
12 April 2011, by
Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialized, visible only when the visitor is logged in on the site.
Users can access profile editing from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)
On other sites (6648)
-
Screenrecorder application output video resolution issues [closed]
23 June 2022, by JessieK
Using GitHub code for ScreenRecorder on Linux.
Everything works fine except the resolution of the output video.
I tried playing with the settings; the quality improved significantly, but there is still no way to change the resolution.
I need the output video to be the same size as the input video.


using namespace std;

 /* initialize the resources*/
 ScreenRecorder::ScreenRecorder()
 {
 
 av_register_all();// the *_register_all() calls are deprecated no-ops since FFmpeg 4.0, but harmless here
 avcodec_register_all();
 avdevice_register_all();
 cout<<"\nall required functions are registered successfully";
 }
 
 /* uninitialize the resources */
 ScreenRecorder::~ScreenRecorder()
 {
 
 avformat_close_input(&pAVFormatContext);
 if( !pAVFormatContext )
 {
 cout<<"\nfile closed successfully";
 }
 else
 {
 cout<<"\nunable to close the file";
 exit(1);
 }
 
 avformat_free_context(pAVFormatContext);
 if( !pAVFormatContext )
 {
 cout<<"\navformat free successfully";
 }
 else
 {
 cout<<"\nunable to free avformat context";
 exit(1);
 }
 
 }
 
 /* function to capture and store data in frames by allocating required memory and auto deallocating the memory. */
 int ScreenRecorder::CaptureVideoFrames()
 {
 int flag;
 int frameFinished;// set by the decoder once enough packets have been consumed to produce a complete frame; for some codecs a single packet is enough, for others it is not
 
 int frame_index = 0;
 value = 0;
 
 pAVPacket = (AVPacket *)av_malloc(sizeof(AVPacket));
 av_init_packet(pAVPacket);
 
 pAVFrame = av_frame_alloc();
 if( !pAVFrame )
 {
 cout<<"\nunable to allocate the avframe";
 exit(1);
 }
 
 outFrame = av_frame_alloc();//Allocate an AVFrame and set its fields to default values.
 if( !outFrame )
 {
 cout<<"\nunable to allocate the avframe for outframe";
 exit(1);
 }
 
 int video_outbuf_size;
 int nbytes = av_image_get_buffer_size(outAVCodecContext->pix_fmt,outAVCodecContext->width,outAVCodecContext->height,1);// align must match the av_image_fill_arrays call below
 uint8_t *video_outbuf = (uint8_t*)av_malloc(nbytes);
 if( video_outbuf == NULL )
 {
 cout<<"\nunable to allocate memory";
 exit(1);
 }
 
 // Setup the data pointers and linesizes based on the specified image parameters and the provided array.
 value = av_image_fill_arrays( outFrame->data, outFrame->linesize, video_outbuf , AV_PIX_FMT_YUV420P, outAVCodecContext->width,outAVCodecContext->height,1 ); // returns : the size in bytes required for src
 if(value < 0)
 {
 cout<<"\nerror in filling image array";
 }
 
 SwsContext* swsCtx_ ;
 
 // Allocate and return swsContext.
 // a pointer to an allocated context, or NULL in case of error
 // Deprecated : Use sws_getCachedContext() instead.
 swsCtx_ = sws_getContext(pAVCodecContext->width,
 pAVCodecContext->height,
 pAVCodecContext->pix_fmt,
 outAVCodecContext->width,
 outAVCodecContext->height,
 outAVCodecContext->pix_fmt,
 SWS_BICUBIC, NULL, NULL, NULL);
 
 
 int ii = 0;
 int no_frames = 100;
 cout<<"\nenter No. of frames to capture : ";
 cin>>no_frames;
 
 AVPacket outPacket;
 int j = 0;
 
 int got_picture;
 
 while( av_read_frame( pAVFormatContext , pAVPacket ) >= 0 )
 {
 if( ii++ == no_frames )break;
 if(pAVPacket->stream_index == VideoStreamIndx)
 {
 value = avcodec_decode_video2( pAVCodecContext , pAVFrame , &frameFinished , pAVPacket );
 if( value < 0)
 {
 cout<<"unable to decode video";
 }
 
 if(frameFinished)// Frame successfully decoded :)
 {
 sws_scale(swsCtx_, pAVFrame->data, pAVFrame->linesize,0, pAVCodecContext->height, outFrame->data,outFrame->linesize);
 av_init_packet(&outPacket);
 outPacket.data = NULL; // packet data will be allocated by the encoder
 outPacket.size = 0;
 
 avcodec_encode_video2(outAVCodecContext , &outPacket ,outFrame , &got_picture);
 
 if(got_picture)
 {
 if(outPacket.pts != AV_NOPTS_VALUE)
 outPacket.pts = av_rescale_q(outPacket.pts, video_st->codec->time_base, video_st->time_base);
 if(outPacket.dts != AV_NOPTS_VALUE)
 outPacket.dts = av_rescale_q(outPacket.dts, video_st->codec->time_base, video_st->time_base);
 
 printf("Write frame %3d (size = %4d kB)\n", j++, outPacket.size/1000);
 if(av_write_frame(outAVFormatContext , &outPacket) != 0)
 {
 cout<<"\nerror in writing video frame";
 }
 
 } // got_picture
 
 av_packet_unref(&outPacket);
 } // frameFinished
 
 }
 }// End of while-loop



That is the first of the two parts. The original app actually seems to record video of the same size as my application does, but that is still of no use.



The second part of the code:


av_free(video_outbuf);

}

/* establishing the connection between camera or screen through its respective folder */
int ScreenRecorder::openCamera()
{

 value = 0;
 options = NULL;
 pAVFormatContext = NULL;

 pAVFormatContext = avformat_alloc_context();//Allocate an AVFormatContext.
/*

X11 video input device.
To enable this input device during configuration you need libxcb installed on your system. It will be automatically detected during configuration.
This device allows one to capture a region of an X11 display. 
refer : https://www.ffmpeg.org/ffmpeg-devices.html#x11grab
*/
 /* current below is for screen recording. to connect with camera use v4l2 as a input parameter for av_find_input_format */ 
 pAVInputFormat = av_find_input_format("x11grab");
 value = avformat_open_input(&pAVFormatContext, ":0.0+10,250", pAVInputFormat, NULL);
 if(value != 0)
 {
 cout<<"\nerror in opening input device";
 exit(1);
 }

 /* set frames per second (note: for these options to affect the x11grab input, the dictionary must be passed to avformat_open_input above, not populated afterwards) */
 value = av_dict_set( &options,"framerate","30",0 );
 if(value < 0)
 {
 cout<<"\nerror in setting dictionary value";
 exit(1);
 }

 value = av_dict_set( &options, "preset", "medium", 0 );
 if(value < 0)
 {
 cout<<"\nerror in setting preset values";
 exit(1);
 }

 value = avformat_find_stream_info(pAVFormatContext,NULL);// was commented out, which left 'value' holding the result of the previous av_dict_set
 if(value < 0)
 {
 cout<<"\nunable to find the stream information";
 exit(1);
 }

 VideoStreamIndx = -1;

 /* find the first video stream index (av_find_best_stream could also be used) */
 for(int i = 0; i < pAVFormatContext->nb_streams; i++ ) // find video stream position/index
 {
 if( pAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO )
 {
 VideoStreamIndx = i;
 break;
 }

 } 

 if( VideoStreamIndx == -1)
 {
 cout<<"\nunable to find the video stream index. (-1)";
 exit(1);
 }

 // fetch the codec context of the video stream (deprecated AVStream.codec API; newer FFmpeg fills an AVCodecContext from streams[i]->codecpar via avcodec_parameters_to_context)
 pAVCodecContext = pAVFormatContext->streams[VideoStreamIndx]->codec;

 pAVCodec = avcodec_find_decoder(pAVCodecContext->codec_id);
 if( pAVCodec == NULL )
 {
 cout<<"\nunable to find the decoder";
 exit(1);
 }

 value = avcodec_open2(pAVCodecContext , pAVCodec , NULL);//Initialize the AVCodecContext to use the given AVCodec.
 if( value < 0 )
 {
 cout<<"\nunable to open the av codec";
 exit(1);
 }
 return 0;
}

/* initialize the video output file and its properties */
int ScreenRecorder::init_outputfile()
{
 outAVFormatContext = NULL;
 value = 0;
 output_file = "../media/output.mp4";

 avformat_alloc_output_context2(&outAVFormatContext, NULL, NULL, output_file);
 if (!outAVFormatContext)
 {
 cout<<"\nerror in allocating av format output context";
 exit(1);
 }

/* Returns the output format in the list of registered output formats which best matches the provided parameters, or returns NULL if there is no match. */
 output_format = av_guess_format(NULL, output_file ,NULL);
 if( !output_format )
 {
 cout<<"\nerror in guessing the video format. try with correct format";
 exit(1);
 }

 video_st = avformat_new_stream(outAVFormatContext ,NULL);
 if( !video_st )
 {
 cout<<"\nerror in creating a av format new stream";
 exit(1);
 }

 outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);// find the encoder before allocating a context for it
 if( !outAVCodec )
 {
 cout<<"\nerror in finding the av codecs. try again with correct codec";
 exit(1);
 }

 outAVCodecContext = avcodec_alloc_context3(outAVCodec);
 if( !outAVCodecContext )
 {
 cout<<"\nerror in allocating the codec contexts";
 exit(1);
 }

 /* set properties of the video file */
 outAVCodecContext = video_st->codec;// deprecated AVStream.codec API; note this discards the context allocated just above
 outAVCodecContext->codec_id = AV_CODEC_ID_MPEG4; // AV_CODEC_ID_H264 // AV_CODEC_ID_MPEG1VIDEO
 outAVCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
 outAVCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
 outAVCodecContext->bit_rate = 2500000;
 outAVCodecContext->width = 1920;// hardcoded output size; to match the input, use pAVCodecContext->width instead
 outAVCodecContext->height = 1080;// likewise pAVCodecContext->height
 outAVCodecContext->gop_size = 3;
 outAVCodecContext->max_b_frames = 2;
 outAVCodecContext->time_base.num = 1;
 outAVCodecContext->time_base.den = 30; // 30 fps
 av_opt_set(outAVCodecContext->priv_data, "preset", "slow", 0);

 /* Some container formats (like MP4) require global headers to be present
 Mark the encoder so that it behaves accordingly. */

 if ( outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
 {
 outAVCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 }

 value = avcodec_open2(outAVCodecContext, outAVCodec, NULL);
 if( value < 0)
 {
 cout<<"\nerror in opening the avcodec";
 exit(1);
 }

 /* create empty video file */
 if ( !(outAVFormatContext->flags & AVFMT_NOFILE) )
 {
 if( avio_open2(&outAVFormatContext->pb , output_file , AVIO_FLAG_WRITE ,NULL, NULL) < 0 )
 {
 cout<<"\nerror in creating the video file";
 exit(1);
 }
 }

 if(!outAVFormatContext->nb_streams)
 {
 cout<<"\noutput file does not contain any stream";
 exit(1);
 }

 /* important: MP4 and some other container formats require header information */
 value = avformat_write_header(outAVFormatContext , &options);
 if(value < 0)
 {
 cout<<"\nerror in writing the header context";
 exit(1);
 }


 cout<<"\n\nOutput file information :\n\n";
 av_dump_format(outAVFormatContext , 0 ,output_file ,1);



GitHub link: https://github.com/abdullahfarwees/screen-recorder-ffmpeg-cpp


-
Desktop grabbing with FFmpeg at 60 fps using NVENC codec
5 July 2016, by Akatosh
I'm having trouble recording my desktop at 60 fps using the latest Windows build of FFmpeg with the NVENC codec. The metadata says the file is 60 fps, but when I play it I can clearly see it is not 60 fps.
The command line I use is the following:
ffmpeg -y -rtbufsize 2000M -f gdigrab -framerate 60 -offset_x 0 -offset_y 0 -video_size 1920x1080 -i desktop -c:v h264_nvenc -preset:v fast -pix_fmt nv12 out.mp4
I tried using a real time buffer, using another DirectShow device, changing the profile or forcing a bitrate, but the video always seems to be at 30fps.
Recording the screen using NVIDIA’s ShadowPlay works well, so I know it’s feasible on my machine.
Using FFprobe to check the ShadowPlay’s output file I can see :
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt470m), 1920x1080 [SAR 1:1 DAR 16:9], 4573 kb/s, 59.38 fps, 240 tbr, 60k tbn, 120 tbc (default)
But if I force my output to have the same bitrate and profile, I get:
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 5519 kb/s, 60 fps, 60 tbr, 15360 tbn, 120 tbc (default)
I can see tbr and tbn are different, so I know my output is duplicating frames. For testing, all of my recordings had this 60 frame rate test page in the background, and I could clearly see the differences.
I know ShadowPlay probably does a lot more under the hood than FFmpeg using the same codec. I know OBS can do it quite easily, but I want to understand what I am doing wrong. Maybe it's some FFmpeg limitation?
Full console output
Using the -v trace option:
[gdigrab @ 0000000002572cc0] Capturing whole desktop as 1920x1080x32 at (0,0)
[gdigrab @ 0000000002572cc0] Cursor pos (1850,750) -> (1842,741)
[gdigrab @ 0000000002572cc0] Probe buffer size limit of 5000000 bytes reached
[gdigrab @ 0000000002572cc0] Stream #0: not enough frames to estimate rate; consider increasing probesize
[gdigrab @ 0000000002572cc0] stream 0: start_time: 1467123648.275 duration: -9223372036854.775
[gdigrab @ 0000000002572cc0] format: start_time: 1467123648.275 duration: -9223372036854.775 bitrate=3981337 kb/s
Input #0, gdigrab, from 'desktop':
Duration: N/A, start: 1467123648.275484, bitrate: 3981337 kb/s
Stream #0:0, 1, 1/1000000: Video: bmp, 1 reference frame, bgra, 1920x1080 (0x0), 0/1, 3981337 kb/s, 60 fps, 1000k tbr, 1000k tbn, 1000k tbc
Successfully opened the file.
Parsing a group of options: output file out.mp4.
Applying option c:v (codec name) with argument h264_nvenc.
Applying option pix_fmt (set pixel format) with argument nv12.
Successfully parsed a group of options.
Opening an output file: out.mp4.
[file @ 0000000000e3a7c0] Setting default whitelist 'file,crypto'
Successfully opened the file.
detected 8 logical cores
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'video_size' to value '1920x1080'
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'pix_fmt' to value '30'
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'time_base' to value '1/1000000'
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'pixel_aspect' to value '0/1'
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'sws_param' to value 'flags=2'
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'frame_rate' to value '60/1'
[graph 0 input from stream 0:0 @ 000000000257ec00] w:1920 h:1080 pixfmt:bgra tb:1/1000000 fr:60/1 sar:0/1 sws_param:flags=2
[format @ 000000000257ffc0] compat: called with args=[nv12]
[format @ 000000000257ffc0] Setting 'pix_fmts' to value 'nv12'
[auto-inserted scaler 0 @ 00000000025802c0] Setting 'flags' to value 'bicubic'
[auto-inserted scaler 0 @ 00000000025802c0] w:iw h:ih flags:'bicubic' interl:0
[format @ 000000000257ffc0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'Parsed_null_0' and the filter 'format'
[AVFilterGraph @ 0000000000e373c0] query_formats: 4 queried, 2 merged, 1 already done, 0 delayed
[auto-inserted scaler 0 @ 00000000025802c0] w:1920 h:1080 fmt:bgra sar:0/1 -> w:1920 h:1080 fmt:nv12 sar:0/1 flags:0x4
[h264_nvenc @ 0000000000e3ca20] Nvenc initialized successfully
[h264_nvenc @ 0000000000e3ca20] 1 CUDA capable devices found
[h264_nvenc @ 0000000000e3ca20] [ GPU #0 - < GeForce GTX 670 > has Compute SM 3.0 ]
[h264_nvenc @ 0000000000e3ca20] supports NVENC
[mp4 @ 0000000000e3b580] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
Output #0, mp4, to 'out.mp4':
Metadata:
encoder : Lavf57.40.101
Stream #0:0, 0, 1/15360: Video: h264 (h264_nvenc) (Main), 1 reference frame ([33][0][0][0] / 0x0021), nv12, 1920x1080, 0/1, q=-1--1, 2000 kb/s, 60 fps, 15360 tbn, 60 tbc
Metadata:
encoder : Lavc57.47.100 h264_nvenc
Side data:
cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: -1
Stream mapping:
Stream #0:0 -> #0:0 (bmp (native) -> h264 (h264_nvenc))
Press [q] to stop, [?] for help
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
Clipping frame in rate conversion by 0.000008
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[gdigrab @ 0000000002572cc0] Cursor pos (1850,750) -> (1842,741)
*** 35 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1850,750) -> (1842,741)
*** 7 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1850,649) -> (1850,649)
*** 1 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1858,535) -> (1858,535)
*** 3 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1859,454) -> (1859,454)
*** 2 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1865,384) -> (1865,384)
*** 2 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1846,348) -> (1846,348)
*** 3 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1770,347) -> (1770,347)
*** 2 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1545,388) -> (1545,388)
*** 4 dup!
frame= 69 fps=0.0 q=35.0 size= 184kB time=00:00:00.63 bitrate=2384.0kbits/[gdigrab @ 0000000002572cc0] Cursor pos (1523,389) -> (1519,378)
-
Video Concat using ffmpeg [closed]
17 June 2022, by Milan K Jain
ffmpeg -i url1 -i url2 -i url3 -i url4 -filter_complex "[0:v:0]scale=1920:1080[c1] ; [1:v:0]scale=1920:1080[c2] ; [2:v:0]scale=1920:1080[c3] ; [3:v:0]scale=1920:1080[c4], [c1] [0:a:0] [c2] [1:a:0] [c3] [2:a:0] [c4] [3:a:0] concat=n=4:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" /Users/myname/Downloads/f1-2017-07-12.mp4 -y


In place of each url I want to use the link I get after storing my video in an Amazon S3 bucket.
Can someone please help?