
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (64)
-
APPENDIX: The plugins used specifically for the farm
5 March 2010, by
Beyond the channels' plugins, the central/master site of the farm needs several additional plugins to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a mutualisation instance as soon as a user signs up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
Emballe médias: what is it for?
4 February 2011, by
This plugin is intended to manage sites that publish documents of all types.
It creates "media" items, namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;
-
Managing creation and editing rights for objects
8 February 2011, by
By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;
On other sites (10173)
-
Cannot add tmcd stream using libavcodec to replicate behavior of ffmpeg -timecode option
2 August, by Sailor Jerry
I'm trying to replicate the ffmpeg -timecode command-line option in my C/C++ code. For some reason the tmcd stream is not written to the output file, although av_dump_format shows it at run time.


Here is my minimal test


#include <iostream>
#include <cstring> // for memset
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
}
bool checkProResAvailability() {
 const AVCodec* codec = avcodec_find_encoder_by_name("prores_ks");
 if (!codec) {
 std::cerr << "ProRes codec not available. Please install FFmpeg with ProRes support." << std::endl;
 return false;
 }
 return true;
}

int main(){
 av_log_set_level(AV_LOG_INFO);

 const char* outputFileName = "test_tmcd.mov";
 AVFormatContext* formatContext = nullptr;
 AVCodecContext* videoCodecContext = nullptr;

 if (!checkProResAvailability()) {
 return -1;
 }

 std::cout << "Creating test file with tmcd stream: " << outputFileName << std::endl;

 // Allocate the output format context
 if (avformat_alloc_output_context2(&formatContext, nullptr, "mov", outputFileName) < 0) {
 std::cerr << "Failed to allocate output context!" << std::endl;
 return -1;
 }

 if (avio_open(&formatContext->pb, outputFileName, AVIO_FLAG_WRITE) < 0) {
 std::cerr << "Failed to open output file!" << std::endl;
 avformat_free_context(formatContext);
 return -1;
 }

 // Find ProRes encoder
 const AVCodec* videoCodec = avcodec_find_encoder_by_name("prores_ks");
 if (!videoCodec) {
 std::cerr << "Failed to find the ProRes encoder!" << std::endl;
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Video stream setup
 AVStream* videoStream = avformat_new_stream(formatContext, nullptr);
 if (!videoStream) {
 std::cerr << "Failed to create video stream!" << std::endl;
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 videoCodecContext = avcodec_alloc_context3(videoCodec);
 if (!videoCodecContext) {
 std::cerr << "Failed to allocate video codec context!" << std::endl;
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 videoCodecContext->width = 1920;
 videoCodecContext->height = 1080;
 videoCodecContext->pix_fmt = AV_PIX_FMT_YUV422P10;
 videoCodecContext->time_base = (AVRational){1, 30}; // Set FPS: 30
 videoCodecContext->bit_rate = 2000000;

 if (avcodec_open2(videoCodecContext, videoCodec, nullptr) < 0) {
 std::cerr << "Failed to open ProRes codec!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 if (avcodec_parameters_from_context(videoStream->codecpar, videoCodecContext) < 0) {
 std::cerr << "Failed to copy codec parameters to video stream!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 videoStream->time_base = videoCodecContext->time_base;

 // Timecode stream setup
 AVStream* timecodeStream = avformat_new_stream(formatContext, nullptr);
 if (!timecodeStream) {
 std::cerr << "Failed to create timecode stream!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 timecodeStream->codecpar->codec_type = AVMEDIA_TYPE_DATA;
 timecodeStream->codecpar->codec_id = AV_CODEC_ID_TIMED_ID3;
 timecodeStream->codecpar->codec_tag = MKTAG('t', 'm', 'c', 'd'); // Timecode tag
 timecodeStream->time_base = (AVRational){1, 30}; // FPS: 30

 if (av_dict_set(&timecodeStream->metadata, "timecode", "00:00:30:00", 0) < 0) {
 std::cerr << "Failed to set timecode metadata!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Write container header
 if (avformat_write_header(formatContext, nullptr) < 0) {
 std::cerr << "Failed to write file header!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Encode a dummy video frame
 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 std::cerr << "Failed to allocate video frame!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 frame->format = videoCodecContext->pix_fmt;
 frame->width = videoCodecContext->width;
 frame->height = videoCodecContext->height;

 if (av_image_alloc(frame->data, frame->linesize, frame->width, frame->height, videoCodecContext->pix_fmt, 32) < 0) {
 std::cerr << "Failed to allocate frame buffer!" << std::endl;
 av_frame_free(&frame);
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Fill frame with black (note: 4:2:2 chroma planes are half width but full height)
 memset(frame->data[0], 0, frame->linesize[0] * frame->height); // Y plane
 memset(frame->data[1], 128, frame->linesize[1] * frame->height); // U plane
 memset(frame->data[2], 128, frame->linesize[2] * frame->height); // V plane

 // Encode the frame
 AVPacket packet;
 av_init_packet(&packet);
 packet.data = nullptr;
 packet.size = 0;

 if (avcodec_send_frame(videoCodecContext, frame) == 0) {
 if (avcodec_receive_packet(videoCodecContext, &packet) == 0) {
 packet.stream_index = videoStream->index;
 av_interleaved_write_frame(formatContext, &packet);
 av_packet_unref(&packet);
 }
 }

 av_frame_free(&frame);

 // Write a dummy packet for the timecode stream
 AVPacket tmcdPacket;
 av_init_packet(&tmcdPacket);
 tmcdPacket.stream_index = timecodeStream->index;
 tmcdPacket.flags |= AV_PKT_FLAG_KEY;
 tmcdPacket.data = nullptr; // Empty packet for timecode
 tmcdPacket.size = 0;
 tmcdPacket.pts = 0; // Set necessary PTS
 tmcdPacket.dts = 0;
 av_interleaved_write_frame(formatContext, &tmcdPacket);

 // Write trailer
 if (av_write_trailer(formatContext) < 0) {
 std::cerr << "Failed to write file trailer!" << std::endl;
 }

 av_dump_format(formatContext, 0, "test.mov", 1);

 // Cleanup
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);

 std::cout << "Test file with timecode created successfully: " << outputFileName << std::endl;

 return 0;
}


The code output is:


Creating test file with tmcd stream: test_tmcd.mov
[prores_ks @ 0x11ce05790] Autoselected HQ profile to keep best quality. It can be overridden through -profile option.
[mov @ 0x11ce04f20] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mov @ 0x11ce04f20] Encoder did not produce proper pts, making some up.
Output #0, mov, to 'test.mov':
 Metadata:
 encoder : Lavf61.7.100
 Stream #0:0: Video: prores (HQ) (apch / 0x68637061), yuv422p10le, 1920x1080, q=2-31, 2000 kb/s, 15360 tbn
 Stream #0:1: Data: timed_id3 (tmcd / 0x64636D74)
 Metadata:
 timecode : 00:00:30:00
Test file with timecode created successfully: test_tmcd.mov



The ffprobe output is:


$ ffprobe test_tmcd.mov
ffprobe version 7.1.1 Copyright (c) 2007-2025 the FFmpeg developers
 built with Apple clang version 16.0.0 (clang-1600.0.26.6)
 configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/7.1.1_3 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon
 libavutil 59. 39.100 / 59. 39.100
 libavcodec 61. 19.101 / 61. 19.101
 libavformat 61. 7.100 / 61. 7.100
 libavdevice 61. 3.100 / 61. 3.100
 libavfilter 10. 4.100 / 10. 4.100
 libswscale 8. 3.100 / 8. 3.100
 libswresample 5. 3.100 / 5. 3.100
 libpostproc 58. 3.100 / 58. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_tmcd.mov':
 Metadata:
 major_brand : qt 
 minor_version : 512
 compatible_brands: qt 
 encoder : Lavf61.7.100
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0[0x1]: Video: prores (HQ) (apch / 0x68637061), yuv422p10le, 1920x1080, 15360 tbn (default)
 Metadata:
 handler_name : VideoHandler
 vendor_id : FFMP
$ 




I've spent hours with all the AI models, to no avail. Appealing to human intelligence now.
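In case a future reader wants a working variant: the CLI does not add a tmcd stream by hand the way the test above does. For MOV output, the mov muxer creates the timecode track itself at header-writing time when it finds a "timecode" metadata entry on the file or on a video stream, which is what -timecode ends up feeding it. Below is a minimal sketch of that route, with the caveat that the automatic-track behavior and the need for avg_frame_rate are assumptions to verify against your FFmpeg build. (The "Timestamps are unset" warning in the log is a separate issue: frame->pts is never set before avcodec_send_frame().)

// Sketch: drop the manual timecode stream and the dummy tmcd packet.
// Instead, tag the *video* stream before avformat_write_header() and let
// the mov muxer synthesize the tmcd track (assumed movenc behavior).
videoStream->avg_frame_rate = (AVRational){30, 1}; // assumed: muxer derives the timecode rate from this
if (av_dict_set(&videoStream->metadata, "timecode", "00:00:30:00", 0) < 0) {
    std::cerr << "Failed to set timecode metadata!" << std::endl;
    return -1;
}
if (avformat_write_header(formatContext, nullptr) < 0) {
    std::cerr << "Failed to write file header!" << std::endl;
    return -1;
}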


-
Using libavformat to mux H.264 frames into RTP
22 November 2016, by DanielB6
I have an encoder that produces a series of H.264 I-frames and P-frames. I'm trying to use libavformat to mux and transmit these frames over RTP, but I'm stuck.
My program sends RTP data, but the RTP timestamp increments by 1 with each successive frame, instead of by 90000/fps. It also doesn't look like it's doing the proper H.264 NAL framing, since I can't decode the stream as H.264 in Wireshark.
I suspect that I'm not setting up the codec information properly, but it appears in many places in the output format context, so it's unclear what exactly needs to be set up. The examples all seem to copy codec context info from encoders, which isn't my use case.
This is what I'm trying:
// context and stream are kept at file scope so write_packet() can use them
AVFormatContext *context;
AVStream *stream;

int main() {
    context = avformat_alloc_context();
    if (!context) {
        printf("avformat_alloc_context failed\n");
        return -1;
    }
    AVOutputFormat *format = av_guess_format("rtp", NULL, NULL);
    if (!format) {
        printf("av_guess_format failed\n");
        return -1;
    }
    context->oformat = format;
    snprintf(context->filename, sizeof(context->filename), "rtp://%s:%d", "192.168.2.16", 10000);
    if (avio_open(&(context->pb), context->filename, AVIO_FLAG_READ_WRITE) < 0) {
        printf("avio_open failed\n");
        return -1;
    }
    stream = avformat_new_stream(context, NULL);
    if (!stream) {
        printf("avformat_new_stream failed\n");
        return -1;
    }
    stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    stream->codecpar->codec_id = AV_CODEC_ID_H264;
    stream->codecpar->width = 1920;
    stream->codecpar->height = 1080;
    avformat_write_header(context, NULL);
    ...
    write packets
    ...
}

Example write packet:
int write_packet(uint8_t *data, int size) {
    AVPacket p;
    av_init_packet(&p);
    p.data = data;
    p.size = size;
    p.stream_index = stream->index;
    return av_interleaved_write_frame(context, &p);
}

I've even gone so far as to build in libx264, find the encoder, and copy the codec context info from there into the stream codecpar, with the same result. My goal is to build without libx264 and any other libs that aren't required, but it isn't clear whether libx264 is needed for defaults such as the time base.
How can the libavformat RTP muxer be initialized to properly send H.264 frames over RTCP+RTP?
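For anyone hitting the same wall: the timestamp symptom (an increment of 1 per frame) is consistent with packets that carry no pts at all, so the muxer just counts frames. Here is a hedged sketch of the timing side, assuming a 30 fps source; the 90 kHz stream time_base and the pts arithmetic are conventions of RTP video, not something taken from the snippet above:

// In main(), before avformat_write_header(): give the stream a real
// time_base. RTP video conventionally runs on a 90 kHz clock.
stream->time_base = (AVRational){1, 90000};

// Hypothetical frame counter for the pts arithmetic below.
static int64_t frame_index = 0;

int write_packet(uint8_t *data, int size) {
    AVPacket p;
    av_init_packet(&p);
    p.data = data;
    p.size = size;
    p.stream_index = stream->index;
    // 90000 ticks per second / 30 fps = 3000 ticks per frame (assumed rate)
    p.pts = frame_index++ * 3000;
    p.dts = p.pts;
    return av_interleaved_write_frame(context, &p);
}

On the framing side, the packetizer needs to see complete NAL units in each packet, and the SDP generation pulls the sprop-parameter-sets from SPS/PPS placed in stream->codecpar->extradata; both of those are assumptions to verify rather than guaranteed fixes.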
-
FFmpeg API: combine Camera Stream and Screen Capture or Video File stream into one stream (C/C++)
31 December 2016, by lostin2010
I have one big question that has cost me two full days without a solution.
I want to combine a camera stream with another stream (.flv, .mpg) into one stream, just like the picture below: the camera is one part of the live view, the background is another stream.
My camera device is:

[dshow @ 000373e0] "TTQ HD Camera"
[dshow @ 000373e0] Alternative name "@device_pnp_\\?\usb#vid_114d&pid_8455&mi_00#6&1e9bcf33&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"

I decode my camera stream, whose format is YUYV422, and decode another flv file, whose format is YUV420P.
I use each stream's own decoder to build its own avfilter input: the camera is in0, the flv file is in1. I use this filter_spec:

color=c=black@1:s=1920x1080[x0];[in0]null[ine0];[ine0]scale=w=960:h=540[inn0];[x0][inn0]overlay=1920*0/2:1080*0/2[x1];[in1]null[ine1];[ine1]scale=w=1160:h=740[inn1];[x1][inn1]overlay=1920*1/2:1080*0/2[x2];[x2]null[out]
I build a filter_graph, then I read packets out separately and add frames to the filter:

for (i = 0; i < video_num; i++) // i = 0: camera packets, i = 1: flv file packets
{
    while ((read_frame_done = av_read_frame(ifmt_ctx[i], &packet)) >= 0)
    {
        // ... decode packet into frame[i] ...
        ret = av_buffersrc_add_frame(filter_ctx[stream_index].buffersrc_ctx[i], frame[i]);
    }
}

Then I get the frame out into picref:
while (1) {
    ret = av_buffersink_get_frame_flags(filter_ctx[stream_index].buffersink_ctx, picref, 0);
}

I encode picref or display it with SDL, but only the flv stream shows; there is no camera stream. I don't know why.
But if I change the source from the camera stream to another flv file, meaning two flv files as the source streams, then it is correct, like the demo picture above. This confuses me a lot.
Whoever can help me, I will really thank you.
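One thing worth checking, shown as a sketch below: overlay only produces output once it has frames on both of its inputs, and the loop above drains input 0 to completion before it ever feeds input 1, so one branch can sit starved or massively buffered. Interleaving the two inputs and draining the sink as you go is the usual pattern. The names (ifmt_ctx, filter_ctx, buffersrc_ctx, buffersink_ctx, frame, picref) follow the question, and the decode step is elided just as it is there:

// Sketch: feed camera (i = 0) and flv (i = 1) alternately instead of
// draining one input to EOF first, and pull from the sink after each push
// so overlay can pair frames from both branches as they arrive.
int eof[2] = {0, 0};
while (!eof[0] || !eof[1]) {
    for (int i = 0; i < 2; i++) {
        if (eof[i])
            continue;
        if (av_read_frame(ifmt_ctx[i], &packet) < 0) {
            eof[i] = 1;
            // a NULL frame signals EOF on this branch so the graph can flush
            av_buffersrc_add_frame(filter_ctx[stream_index].buffersrc_ctx[i], NULL);
            continue;
        }
        // ... decode packet into frame[i] as in the original code ...
        av_buffersrc_add_frame(filter_ctx[stream_index].buffersrc_ctx[i], frame[i]);
        av_packet_unref(&packet);

        // pull whatever the graph can produce right now
        while (av_buffersink_get_frame_flags(filter_ctx[stream_index].buffersink_ctx,
                                             picref, 0) >= 0) {
            // encode picref or display it with SDL, then release it
            av_frame_unref(picref);
        }
    }
}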