
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (73)
-
Adding user-specific information and other author-related behaviour changes
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier(), so that visitors can edit their own information on the authors page
-
Customizing categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (section).
For a category-type document, the fields offered by default are: Texte.
This form can be edited under: Administration > Configuration des masques de formulaire.
For a media-type document, the fields not displayed by default are: Descriptif rapide.
It is also in this configuration section that you can specify the (...)
On other sites (12143)
-
ffmpeg video encoder skips first frames [duplicate]
19 October 2022, by Eduard Barnoviciu
I am new to FFmpeg. I am trying to run this simple video encoding example:



#include <iostream>
#include <vector>
// FFmpeg
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
}
// OpenCV
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>


int main(int argc, char* argv[])
{
    if (argc < 2) {
        std::cout << "Usage: cv2ff <outfile>" << std::endl;
        return 1;
    }
    const char* outfile = argv[1];

    // av_log_set_level(AV_LOG_DEBUG);
    int ret;

    const int dst_width = 640;
    const int dst_height = 480;
    const AVRational dst_fps = {30, 1};

    // initialize OpenCV capture as input frame generator
    cv::VideoCapture cvcap(0);
    if (!cvcap.isOpened()) {
        std::cerr << "fail to open cv::VideoCapture";
        return 2;
    }
    cvcap.set(cv::CAP_PROP_FRAME_WIDTH, dst_width);
    cvcap.set(cv::CAP_PROP_FRAME_HEIGHT, dst_height);
    cvcap.set(cv::CAP_PROP_FPS, dst_fps.num);
    // some devices ignore the above parameters when capturing images,
    // so we query the actual parameters for the image rescaler
    const int cv_width = cvcap.get(cv::CAP_PROP_FRAME_WIDTH);
    const int cv_height = cvcap.get(cv::CAP_PROP_FRAME_HEIGHT);
    const int cv_fps = cvcap.get(cv::CAP_PROP_FPS);

    // open output format context
    AVFormatContext* outctx = nullptr;
    ret = avformat_alloc_output_context2(&outctx, nullptr, nullptr, outfile);
    if (ret < 0) {
        std::cerr << "fail to avformat_alloc_output_context2(" << outfile << "): ret=" << ret;
        return 2;
    }

    // create new video stream
    AVCodec* vcodec = avcodec_find_encoder(outctx->oformat->video_codec);
    AVStream* vstrm = avformat_new_stream(outctx, vcodec);
    if (!vstrm) {
        std::cerr << "fail to avformat_new_stream";
        return 2;
    }

    // open video encoder
    AVCodecContext* cctx = avcodec_alloc_context3(vcodec);
    if (!cctx) {
        std::cerr << "fail to avcodec_alloc_context3";
        return 2;
    }
    cctx->width = dst_width;
    cctx->height = dst_height;
    cctx->pix_fmt = vcodec->pix_fmts[0];
    cctx->time_base = av_inv_q(dst_fps);
    cctx->framerate = dst_fps;
    if (outctx->oformat->flags & AVFMT_GLOBALHEADER)
        cctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    ret = avcodec_open2(cctx, vcodec, nullptr);
    if (ret < 0) {
        std::cerr << "fail to avcodec_open2: ret=" << ret;
        return 2;
    }
    avcodec_parameters_from_context(vstrm->codecpar, cctx);

    // initialize sample scaler
    SwsContext* swsctx = sws_getContext(
        cv_width, cv_height, AV_PIX_FMT_BGR24,
        dst_width, dst_height, cctx->pix_fmt,
        SWS_BILINEAR, nullptr, nullptr, nullptr);
    if (!swsctx) {
        std::cerr << "fail to sws_getContext";
        return 2;
    }

    // allocate frame buffer for encoding
    AVFrame* frame = av_frame_alloc();
    frame->width = dst_width;
    frame->height = dst_height;
    frame->format = static_cast<int>(cctx->pix_fmt);
    ret = av_frame_get_buffer(frame, 32);
    if (ret < 0) {
        std::cerr << "fail to av_frame_get_buffer: ret=" << ret;
        return 2;
    }

    // allocate packet to retrieve encoded frame
    AVPacket* pkt = av_packet_alloc();

    // open output IO context
    ret = avio_open2(&outctx->pb, outfile, AVIO_FLAG_WRITE, nullptr, nullptr);
    if (ret < 0) {
        std::cerr << "fail to avio_open2: ret=" << ret;
        return 2;
    }

    std::cout
        << "camera: " << cv_width << 'x' << cv_height << '@' << cv_fps << "\n"
        << "outfile: " << outfile << "\n"
        << "format: " << outctx->oformat->name << "\n"
        << "vcodec: " << vcodec->name << "\n"
        << "size: " << dst_width << 'x' << dst_height << "\n"
        << "fps: " << av_q2d(cctx->framerate) << "\n"
        << "pixfmt: " << av_get_pix_fmt_name(cctx->pix_fmt) << "\n"
        << std::flush;

    // write media container header (if any)
    ret = avformat_write_header(outctx, nullptr);
    if (ret < 0) {
        std::cerr << "fail to avformat_write_header: ret=" << ret;
        return 2;
    }

    cv::Mat image;

    // encoding loop
    int64_t frame_pts = 0;
    unsigned nb_frames = 0;
    bool end_of_stream = false;
    for (;;) {
        if (!end_of_stream) {
            // retrieve source image
            cvcap >> image;
            cv::imshow("press ESC to exit", image);
            if (cv::waitKey(33) == 0x1b) {
                // flush encoder
                avcodec_send_frame(cctx, nullptr);
                end_of_stream = true;
            }
        }
        if (!end_of_stream) {
            // convert cv::Mat (OpenCV) to AVFrame (FFmpeg)
            const int stride[4] = { static_cast<int>(image.step[0]) };
            sws_scale(swsctx, &image.data, stride, 0, image.rows, frame->data, frame->linesize);
            frame->pts = frame_pts++;
            // encode video frame
            ret = avcodec_send_frame(cctx, frame);
            if (ret < 0) {
                std::cerr << "fail to avcodec_send_frame: ret=" << ret << "\n";
                break;
            }
        }
        while ((ret = avcodec_receive_packet(cctx, pkt)) >= 0) {
            // rescale packet timestamp
            pkt->duration = 1;
            av_packet_rescale_ts(pkt, cctx->time_base, vstrm->time_base);
            // write encoded packet
            av_write_frame(outctx, pkt);
            av_packet_unref(pkt);
            std::cout << nb_frames << '\r' << std::flush; // dump progress
            ++nb_frames;
        }
        if (ret == AVERROR_EOF)
            break;
    }
    std::cout << nb_frames << " frames encoded" << std::endl;

    // write trailer and close file
    av_write_trailer(outctx);
    avio_close(outctx->pb);

    av_packet_free(&pkt);
    av_frame_free(&frame);
    sws_freeContext(swsctx);
    avcodec_free_context(&cctx);
    avformat_free_context(outctx);
    return 0;
}


The problem is that with codecs such as HEVC/H.265 or VP9, the encoder always drops the first 27 frames.


More exactly, at this line:


while ((ret = avcodec_receive_packet(cctx, pkt)) >= 0) {



ret is equal to -11, so execution never enters the while loop. From that point onward it is always equal to 0 and no issues occur.


If I use MPEG-4, for example, ret is 0 from the start and no frames are dropped.
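A note on that return value: -11 is AVERROR(EAGAIN), which from avcodec_receive_packet() means the encoder has accepted the frame but needs more input before it can emit a packet. Encoders with lookahead or B-frame reordering (x265, libvpx-vp9) typically buffer a few dozen frames this way; the packets are not lost, they come out later and during the final flush. As a minimal sketch (drain_packets is a hypothetical helper name; cctx, vstrm, outctx and pkt are assumed to be set up as in the code above), the receive loop can treat EAGAIN explicitly as "send more frames" rather than as a failure:

// Drain whatever the encoder can emit right now. AVERROR(EAGAIN) (-11 on
// Linux) is not an error here: it means "feed more frames before asking
// for packets". AVERROR_EOF means the flush triggered by sending a null
// frame has completed.
static int drain_packets(AVCodecContext* cctx, AVStream* vstrm,
                         AVFormatContext* outctx, AVPacket* pkt)
{
    int ret;
    while ((ret = avcodec_receive_packet(cctx, pkt)) >= 0) {
        pkt->duration = 1;
        av_packet_rescale_ts(pkt, cctx->time_base, vstrm->time_base);
        av_write_frame(outctx, pkt); // av_write_frame does not take ownership
        av_packet_unref(pkt);
    }
    return ret == AVERROR(EAGAIN) ? 0 : ret; // 0: keep feeding frames
}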


-
Encode amr with ffmpeg (libavcodec)
22 August 2022, by Mohammadreza Rostam
I would like to encode simple PCM to AMR using libavcodec.


To begin with, I compiled FFmpeg and ran the encode_audio example, and it worked fine for the default MP2 codec.

Then I modified the encode_audio example, replaced AV_CODEC_ID_MP2 with AV_CODEC_ID_AMR_NB, and changed bit_rate, sample_rate, and channel_layout to 12200, 8000, and AV_CH_LAYOUT_MONO respectively. Now I am getting an "invalid fastbin entry (free)" error.
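For reference, the changes described would look roughly like this in the stock encode_audio example (a sketch only; c and codec follow that example's variable names):

// Sketch: encode_audio example parameters adapted for AMR-NB.
const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_AMR_NB);
AVCodecContext* c = avcodec_alloc_context3(codec);
c->bit_rate       = 12200;              // 12.2 kbit/s, the highest AMR-NB mode
c->sample_fmt     = AV_SAMPLE_FMT_S16;  // interleaved signed 16-bit
c->sample_rate    = 8000;               // AMR-NB is 8 kHz only
c->channel_layout = AV_CH_LAYOUT_MONO;  // AMR is mono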

The compiled ffmpeg CLI binary worked as expected and encoded audio files to amr successfully, so the issue is not in the compilation or linking step.

Any help would be much appreciated.


Update:


As mentioned in the comments, the issue is not specific to amr: the encode_audio example fails for all codecs if AV_CH_LAYOUT_MONO is selected for the channel layout. To get the demo to work, you also need to change the signal-generator code block, in which two channels of audio are assumed and hard-coded. Thanks @ronald-s-bultje for helping me here.
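A sketch of that fix follows (t, tincr, frame and c are the variables from the stock example; channels stands for the context's channel count, c->channels in older FFmpeg releases and c->ch_layout.nb_channels in newer ones). The stock generator indexes the sample buffer with a hard-coded stereo stride of 2, which for a mono frame writes past the allocated buffer, matching the heap-corruption error above:

// Sketch of the encode_audio signal generator with a generic channel
// count. The stock example writes samples[2*j + k], assuming interleaved
// stereo S16; with AV_CH_LAYOUT_MONO that overruns the frame buffer.
uint16_t* samples = (uint16_t*)frame->data[0];
for (int j = 0; j < c->frame_size; j++) {
    int v = (int)(sin(t) * 10000);
    for (int k = 0; k < channels; k++)
        samples[channels * j + k] = v;
    t += tincr;
}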

-
Can't Record Stream Using MediaRecorder on the Server?
13 April 2022, by Riyad Zaigirdar
First, I am trying to make a WebRTC peer connection from the browser to the server using the SFU model.


Here is the POST request handler that makes the WebRTC peer connection from the browser to the server (SFU):


app.post("/broadcast", async ({ body }, res) => {
  const peer = new webrtc.RTCPeerConnection({
    iceServers: [
      {
        urls: "stun:stun.stunprotocol.org",
      },
    ],
  });
  peer.ontrack = (e) => handleTrackEvent(e, peer); // <-- important
  const desc = new webrtc.RTCSessionDescription(body.sdp);
  await peer.setRemoteDescription(desc);
  const answer = await peer.createAnswer();
  await peer.setLocalDescription(answer);
  const payload = {
    sdp: peer.localDescription,
  };

  res.json(payload);
});



In the handleTrackEvent function, I get the stream, which I want to start recording and saving to the server's local storage.


function handleTrackEvent(e, peer) {
  console.log(e.streams);
  senderStream = e.streams[0];
  var recorder = new MediaStreamRecorder(e.streams);
  recorder.recorderType = MediaRecorderWrapper;
  recorder.mimeType = "video/webm";
  recorder.ondataavailable = (blob) => {
    console.log(blob);
  };
  recorder.start(5 * 1000); // <-- error is thrown here
}



But when I try to start the recording and get the blob in 5-second intervals, it gives me "MediaRecorder Not Found":


Passing following params over MediaRecorder API. { mimeType: 'video/webm' }
/Users/tecbackup/webrtc-peer/node_modules/msr/MediaStreamRecorder.js:672
 mediaRecorder = new MediaRecorder(mediaStream);
 ^

ReferenceError: MediaRecorder is not defined



I am very new to WebRTC and need suggestions on how to save the live stream from the browser to the server. In the future, once I get the blobs, I will save them sequentially into an mp4 file on the server. Then, at runtime, I will run ffmpeg on that mp4 file to get 240p, 360p, and 720p .ts files for HLS streaming.