
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (51)
-
Organising by category
17 May 2013, by
In MediaSPIP, a section has two names: category and section.
The various documents stored in MediaSPIP can be filed under different categories. You can create a category by clicking on "publish a category" in the publish menu at the top right (after logging in). A category can itself be filed under another category, so you can build a tree of categories.
When a document is next published, the newly created category will be offered (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it is activated, a preconfiguration is set up automatically by MediaSPIP init, making the new feature immediately operational. A configuration step is therefore not required. -
Authorisations overridden by plugins
27 April 2010, by
Mediaspip core
autoriser_auteur_modifier() so that visitors can edit their information on the authors page
On other sites (7628)
-
Stream webm to node.js from c# application in chunks
7 August 2015, by Dan-Levi Tømta
I am in the process of learning about streaming between node.js with socket.io and c#.
I have code that successfully records the screen with ffmpeg, redirects its StandardOutput.BaseStream to a MemoryStream, and, when I click stop in my application, sends the memory stream as a byte array to the node.js server, which stores the file so the clients can play it. This works just fine, and here is my setup for that:
C#
bool ffWorkerIsWorking = false;
private void btnFFMpeg_Click(object sender, RoutedEventArgs e)
{
BackgroundWorker ffWorker = new BackgroundWorker();
ffWorker.WorkerSupportsCancellation = true;
ffWorker.DoWork += ((ffWorkerObj,ffWorkerEventArgs) =>
{
ffWorkerIsWorking = true;
using (var FFProcess = new Process())
{
var processStartInfo = new ProcessStartInfo
{
FileName = "ffmpeg.exe",
RedirectStandardInput = true,
RedirectStandardOutput = true,
UseShellExecute = false,
CreateNoWindow = false,
Arguments = " -loglevel panic -hide_banner -y -f gdigrab -draw_mouse 1 -i desktop -threads 2 -deadline realtime -f webm -"
};
FFProcess.StartInfo = processStartInfo;
FFProcess.Start();
byte[] buffer = new byte[32768];
using (MemoryStream ms = new MemoryStream())
{
while (!FFProcess.HasExited)
{
int read = FFProcess.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
if (read <= 0)
break;
ms.Write(buffer, 0, read);
Console.WriteLine(ms.Length);
if (!ffWorkerIsWorking)
{
clientSocket.Emit("video", ms.ToArray());
ffWorker.CancelAsync();
break;
}
}
}
}
});
ffWorker.RunWorkerAsync();
}

JS (Server)
socket.on('video', function(data) {
fs.appendFile('public/fooTest.webm', data, function (err) {
if (err) throw err;
console.log('File uploaded');
});
});

Now I need to change this code so that instead of sending the whole file it sends chunks of byte arrays, and node will then initially create a file and append those chunks of byte arrays as they are received. OK, sounds easy enough, but apparently not.
I need to somehow instruct the code to use an offset, send just the bytes after that offset, and then update the offset.
On the server side I think the best approach is to create a file and append the byte arrays to that file as they are received.
On the server side I would do something like this:
JS (Server)
var buffer = new Buffer(32768);
var isBuffering = false;
socket.on('video', function(data) {
//concatenate the buffer with the incoming data and broadcast.emit to clients
});

How am I able to set up the offset for the bytes to be sent and update that offset, and how should I approach concatenating the incoming data onto the initialized buffer?
I have tried to write some code that only reads from the offset to the end, and it seems to work, although the video, once assembled in node, is just black:
C#
while (!FFProcess.HasExited)
{
int read = FFProcess.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
if (read <= 0)
break;
int offset = (read - buffer.Length > 0 ? read - buffer.Length : 0);
ms.Write(buffer, offset, read);
clientSocket.Emit("videoChunk", buffer.ToArray());
if (!ffWorkerIsWorking)
{
ffWorker.CancelAsync();
break;
}
}

Node console output
JS (Server)
socket.on('videoChunk', function(data) {
if (!isBufferingDone) {
buffer = Buffer.concat([buffer, data]);
console.log(data.length);
}
});
socket.on('cancelVideo', function() {
isBufferingDone = true;
setTimeout(function() {
fs.writeFile("public/test.webm", buffer, function(err) {
if(err) {
return console.log(err);
}
console.log("The file was saved!");
buffer = new Buffer(32768);
});
}, 1000);
});

JS (Client)
socket.on('video', function(filePath) {
console.log('path: ' + filePath);
$('#videoSource').attr('src',filePath);
$('#video').play();
});

Thanks!
-
Nginx Rtmp Module - How to check resolution when pushing rtmp stream to server before redirecting stream to another application? [closed]
22 August 2022, by bui the vuong
I have a problem developing a livestream system with nginx-rtmp-module. I have looked at some systems that have a feature where, when an RTMP stream is pushed, the system detects the stream's resolution and then encodes to HLS with the corresponding profiles. For example, a 720p stream produces HLS renditions from 360p up to 720p, and a 1080p stream produces renditions from 360p up to 1080p. I have tried several approaches without success. So how can I check the resolution and redirect the RTMP stream to the appropriate application for encoding? Looking forward to everyone's advice.


-
Screenrecorder application output video resolution issues [closed]
23 June 2022, by JessieK
I am using GitHub code for a ScreenRecorder on Linux.
Everything works fine besides the resolution of the output video.
I tried playing with the settings; the quality has improved significantly, but there is still no way to change the resolution.
I need the output video to be the same size as the input video.


using namespace std;

 /* initialize the resources*/
 ScreenRecorder::ScreenRecorder()
 {
 
 av_register_all();
 avcodec_register_all();
 avdevice_register_all();
 cout<<"\nall required functions are registered successfully";
 }
 
 /* uninitialize the resources */
 ScreenRecorder::~ScreenRecorder()
 {
 
 avformat_close_input(&pAVFormatContext);
 if( !pAVFormatContext )
 {
 cout<<"\nfile closed successfully";
 }
 else
 {
 cout<<"\nunable to close the file";
 exit(1);
 }
 
 avformat_free_context(pAVFormatContext);
 if( !pAVFormatContext )
 {
 cout<<"\navformat free successfully";
 }
 else
 {
 cout<<"\nunable to free avformat context";
 exit(1);
 }
 
 }
 
 /* function to capture and store data in frames by allocating required memory and auto deallocating the memory. */
 int ScreenRecorder::CaptureVideoFrames()
 {
 int flag;
 int frameFinished; // when you decode a single packet you may not yet have enough information for a frame (depending on the codec, sometimes you do); once you have decoded the group of packets that represents a frame, you have a picture. frameFinished signals that enough was decoded to produce a frame.
 
 int frame_index = 0;
 value = 0;
 
 pAVPacket = (AVPacket *)av_malloc(sizeof(AVPacket));
 av_init_packet(pAVPacket);
 
 pAVFrame = av_frame_alloc();
 if( !pAVFrame )
 {
 cout<<"\nunable to allocate the avframe";
 exit(1);
 }
 
 outFrame = av_frame_alloc();//Allocate an AVFrame and set its fields to default values.
 if( !outFrame )
 {
 cout<<"\nunable to allocate the avframe for outframe";
 exit(1);
 }
 
 int video_outbuf_size;
 int nbytes = av_image_get_buffer_size(outAVCodecContext->pix_fmt,outAVCodecContext->width,outAVCodecContext->height,32);
 uint8_t *video_outbuf = (uint8_t*)av_malloc(nbytes);
 if( video_outbuf == NULL )
 {
 cout<<"\nunable to allocate memory";
 exit(1);
 }
 
 // Setup the data pointers and linesizes based on the specified image parameters and the provided array.
 value = av_image_fill_arrays( outFrame->data, outFrame->linesize, video_outbuf , AV_PIX_FMT_YUV420P, outAVCodecContext->width,outAVCodecContext->height,1 ); // returns : the size in bytes required for src
 if(value < 0)
 {
 cout<<"\nerror in filling image array";
 }
 
 SwsContext* swsCtx_ ;
 
 // Allocate and return swsContext.
 // a pointer to an allocated context, or NULL in case of error
 // Deprecated : Use sws_getCachedContext() instead.
 swsCtx_ = sws_getContext(pAVCodecContext->width,
 pAVCodecContext->height,
 pAVCodecContext->pix_fmt,
 outAVCodecContext->width,
 outAVCodecContext->height,
 outAVCodecContext->pix_fmt,
 SWS_BICUBIC, NULL, NULL, NULL);
 
 
 int ii = 0;
 int no_frames = 100;
 cout<<"\nenter No. of frames to capture : ";
 cin>>no_frames;
 
 AVPacket outPacket;
 int j = 0;
 
 int got_picture;
 
 while( av_read_frame( pAVFormatContext , pAVPacket ) >= 0 )
 {
 if( ii++ == no_frames )break;
 if(pAVPacket->stream_index == VideoStreamIndx)
 {
 value = avcodec_decode_video2( pAVCodecContext , pAVFrame , &frameFinished , pAVPacket );
 if( value < 0)
 {
 cout<<"unable to decode video";
 }
 
 if(frameFinished)// Frame successfully decoded :)
 {
 sws_scale(swsCtx_, pAVFrame->data, pAVFrame->linesize,0, pAVCodecContext->height, outFrame->data,outFrame->linesize);
 av_init_packet(&outPacket);
 outPacket.data = NULL; // packet data will be allocated by the encoder
 outPacket.size = 0;
 
 avcodec_encode_video2(outAVCodecContext , &outPacket ,outFrame , &got_picture);
 
 if(got_picture)
 {
 if(outPacket.pts != AV_NOPTS_VALUE)
 outPacket.pts = av_rescale_q(outPacket.pts, video_st->codec->time_base, video_st->time_base);
 if(outPacket.dts != AV_NOPTS_VALUE)
 outPacket.dts = av_rescale_q(outPacket.dts, video_st->codec->time_base, video_st->time_base);
 
 printf("Write frame %3d (size= %2d)\n", j++, outPacket.size/1000);
 if(av_write_frame(outAVFormatContext , &outPacket) != 0)
 {
 cout<<"\nerror in writing video frame";
 }
 
 av_packet_unref(&outPacket);
 } // got_picture
 
 av_packet_unref(&outPacket);
 } // frameFinished
 
 }
 }// End of while-loop



The first of two parts is above. Actually, the original app seems to record video of the same size as my application does, but that still doesn't help.



The second part of the code:


av_free(video_outbuf);

}

/* establishing the connection between camera or screen through its respective folder */
int ScreenRecorder::openCamera()
{

 value = 0;
 options = NULL;
 pAVFormatContext = NULL;

 pAVFormatContext = avformat_alloc_context();//Allocate an AVFormatContext.
/*

X11 video input device.
To enable this input device during configuration you need libxcb installed on your system. It will be automatically detected during configuration.
This device allows one to capture a region of an X11 display. 
refer : https://www.ffmpeg.org/ffmpeg-devices.html#x11grab
*/
 /* current below is for screen recording. to connect with camera use v4l2 as a input parameter for av_find_input_format */ 
 pAVInputFormat = av_find_input_format("x11grab");
 value = avformat_open_input(&pAVFormatContext, ":0.0+10,250", pAVInputFormat, NULL);
 if(value != 0)
 {
 cout<<"\nerror in opening input device";
 exit(1);
 }

 /* set frame per second */
 value = av_dict_set( &options,"framerate","30",0 );
 if(value < 0)
 {
 cout<<"\nerror in setting dictionary value";
 exit(1);
 }

 value = av_dict_set( &options, "preset", "medium", 0 );
 if(value < 0)
 {
 cout<<"\nerror in setting preset values";
 exit(1);
 }

// value = avformat_find_stream_info(pAVFormatContext,NULL);
 if(value < 0)
 {
 cout<<"\nunable to find the stream information";
 exit(1);
 }

 VideoStreamIndx = -1;

 /* find the first video stream index . Also there is an API available to do the below operations */
 for(int i = 0; i < pAVFormatContext->nb_streams; i++ ) // find video stream position/index.
 {
 if( pAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO )
 {
 VideoStreamIndx = i;
 break;
 }

 } 

 if( VideoStreamIndx == -1)
 {
 cout<<"\nunable to find the video stream index. (-1)";
 exit(1);
 }

 // assign pAVFormatContext to VideoStreamIndx
 pAVCodecContext = pAVFormatContext->streams[VideoStreamIndx]->codec;

 pAVCodec = avcodec_find_decoder(pAVCodecContext->codec_id);
 if( pAVCodec == NULL )
 {
 cout<<"\nunable to find the decoder";
 exit(1);
 }

 value = avcodec_open2(pAVCodecContext , pAVCodec , NULL);//Initialize the AVCodecContext to use the given AVCodec.
 if( value < 0 )
 {
 cout<<"\nunable to open the av codec";
 exit(1);
 }
}

/* initialize the video output file and its properties */
int ScreenRecorder::init_outputfile()
{
 outAVFormatContext = NULL;
 value = 0;
 output_file = "../media/output.mp4";

 avformat_alloc_output_context2(&outAVFormatContext, NULL, NULL, output_file);
 if (!outAVFormatContext)
 {
 cout<<"\nerror in allocating av format output context";
 exit(1);
 }

/* Returns the output format in the list of registered output formats which best matches the provided parameters, or returns NULL if there is no match. */
 output_format = av_guess_format(NULL, output_file ,NULL);
 if( !output_format )
 {
 cout<<"\nerror in guessing the video format. try with correct format";
 exit(1);
 }

 video_st = avformat_new_stream(outAVFormatContext ,NULL);
 if( !video_st )
 {
 cout<<"\nerror in creating a av format new stream";
 exit(1);
 }

 outAVCodecContext = avcodec_alloc_context3(outAVCodec);
 if( !outAVCodecContext )
 {
 cout<<"\nerror in allocating the codec contexts";
 exit(1);
 }

 /* set property of the video file */
 outAVCodecContext = video_st->codec;
 outAVCodecContext->codec_id = AV_CODEC_ID_MPEG4;// AV_CODEC_ID_MPEG4; // AV_CODEC_ID_H264 // AV_CODEC_ID_MPEG1VIDEO
 outAVCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
 outAVCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
 outAVCodecContext->bit_rate = 2500000; // 2500000
 outAVCodecContext->width = 1920; // output size is hardcoded here; to match the input,
 outAVCodecContext->height = 1080; // use pAVCodecContext->width and pAVCodecContext->height instead
 outAVCodecContext->gop_size = 3;
 outAVCodecContext->max_b_frames = 2;
 outAVCodecContext->time_base.num = 1;
 outAVCodecContext->time_base.den = 30; // 30fps

 {
 av_opt_set(outAVCodecContext->priv_data, "preset", "slow", 0);
 }

 outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
 if( !outAVCodec )
 {
 cout<<"\nerror in finding the av codecs. try again with correct codec";
 exit(1);
 }

 /* Some container formats (like MP4) require global headers to be present
 Mark the encoder so that it behaves accordingly. */

 if ( outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
 {
 outAVCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 }

 value = avcodec_open2(outAVCodecContext, outAVCodec, NULL);
 if( value < 0)
 {
 cout<<"\nerror in opening the avcodec";
 exit(1);
 }

 /* create empty video file */
 if ( !(outAVFormatContext->flags & AVFMT_NOFILE) )
 {
 if( avio_open2(&outAVFormatContext->pb , output_file , AVIO_FLAG_WRITE ,NULL, NULL) < 0 )
 {
 cout<<"\nerror in creating the video file";
 exit(1);
 }
 }

 if(!outAVFormatContext->nb_streams)
 {
 cout<<"\noutput file does not contain any stream";
 exit(1);
 }

 /* imp: mp4 container or some advanced container file required header information*/
 value = avformat_write_header(outAVFormatContext , &options);
 if(value < 0)
 {
 cout<<"\nerror in writing the header context";
 exit(1);
 }


 cout<<"\n\nOutput file information :\n\n";
 av_dump_format(outAVFormatContext , 0 ,output_file ,1);



Github link https://github.com/abdullahfarwees/screen-recorder-ffmpeg-cpp