
Other articles (98)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Write a news item

    21 June 2013, by

    Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of the news type, the default fields are: Publication date (customize the publication date) (...)

On other sites (12182)

  • Using libavformat to mux H.264 frames into RTP

    22 November 2016, by DanielB6

    I have an encoder that produces a series of H.264 I-frames and P-frames. I’m trying to use libavformat to mux and transmit these frames over RTP, but I’m stuck.

    My program sends RTP data, but the RTP timestamp increments by 1 each successive frame, instead of 90000/fps. It also doesn’t look like it’s doing the proper framing for H.264 NAL, since I can’t decode the stream as H.264 in Wireshark.

    I suspect that I'm not setting up the codec information properly, but it appears in many places in the output format context, so it's unclear exactly what needs to be set up. The examples all seem to copy codec context info from encoders, which isn't my use case.

    This is what I'm trying:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    /* context and stream are file-scope so that write_packet() below can use them */
    static AVFormatContext *context;
    static AVStream *stream;

    int main() {
       context = avformat_alloc_context();

       if (!context) {
           printf("avformat_alloc_context failed\n");
           return 1;
       }

       AVOutputFormat *format = av_guess_format("rtp", NULL, NULL);

       if (!format) {
           printf("av_guess_format failed\n");
           return 1;
       }

       context->oformat = format;

       snprintf(context->filename, sizeof(context->filename), "rtp://%s:%d", "192.168.2.16", 10000);

       if (avio_open(&(context->pb), context->filename, AVIO_FLAG_READ_WRITE) < 0) {
           printf("avio_open failed\n");
           return 1;
       }

       stream = avformat_new_stream(context, NULL);

       if (!stream) {
           printf("avformat_new_stream failed\n");
           return 1;
       }

       stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
       stream->codecpar->codec_id = AV_CODEC_ID_H264;
       stream->codecpar->width = 1920;
       stream->codecpar->height = 1080;

       avformat_write_header(context, NULL);

       ...
       write packets
       ...
    }

    Example write packet:

    int write_packet(uint8_t *data, int size) {
       AVPacket p;
       av_init_packet(&p);
       p.data = data;
       p.size = size;
       p.stream_index = stream->index;

       return av_interleaved_write_frame(context, &p);
    }
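
    For what it's worth, this is roughly how I imagine the timestamps would have to be filled in before writing, assuming the muxer wants pts/dts in the stream's time base (which I believe becomes the 90 kHz RTP clock after avformat_write_header). The frame_index counter and the 30 fps value below are just placeholders of mine, not something from my real code:

    int write_packet_with_pts(uint8_t *data, int size, int64_t frame_index) {
       AVPacket p;
       av_init_packet(&p);
       p.data = data;
       p.size = size;
       p.stream_index = stream->index;

       /* placeholder: rescale a plain frame counter from 1/30 s units into the
          stream time base, so consecutive frames advance by 90000/fps ticks */
       p.pts = av_rescale_q(frame_index, (AVRational){1, 30}, stream->time_base);
       p.dts = p.pts;

       return av_interleaved_write_frame(context, &p);
    }

    Is something along these lines what the RTP muxer expects?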

    I've even gone so far as to build in libx264, find the encoder, and copy the codec context info from there into the stream codecpar, with the same result. My goal is to build without libx264 and any other libs that aren't required, but it isn't clear whether libx264 is needed for defaults such as the time base.

    How can the libavformat RTP muxer be initialized to properly send H.264 frames over RTCP+RTP?

  • FFmpeg API: combine Camera Stream and Screen Capture or Video File stream into one stream (C/C++)

    31 December 2016, by lostin2010

    I have a big question that I have spent two full days trying to solve, without success.

    I want to combine a camera stream with another stream (.flv, .mpg) into one stream, just like the picture below: the camera is part of the live view and the background is the other stream.

    [image: mock-up of the desired layout, with the camera feed overlaid on a background stream]

    My camera device is:

    [dshow @ 000373e0]  "TTQ HD Camera"
    [dshow @ 000373e0]     Alternative name "@device_pnp_\\?\usb#vid_114d&pid_8455&mi_00#6&1e9bcf33&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"

    I decode my camera stream, whose format is YUYV422, and I decode the flv file, whose format is YUV420P.
    Each input uses its own decoder to build its own buffer source in the filter graph: the camera is in0, the flv file is in1, and I use this filter_spec:

    color=c=black@1:s=1920x1080[x0];[in0]null[ine0];[ine0]scale=w=960:h=540[inn0];[x0][inn0]overlay=1920*0/2:1080*0/2[x1];[in1]null[ine1];[ine1]scale=w=1160:h=740[inn1];[x1][inn1]overlay=1920*1/2:1080*0/2[x2];[x2]null[out]

    I build a filter graph, then read packets from each input separately and add the decoded frames to the filter:

    for (i = 0; i < video_num; i++) // i = 0: camera packets, i = 1: flv file packets
    {
       while ((read_frame_done = av_read_frame(ifmt_ctx[i], &packet)) >= 0)
       {
          ret = av_buffersrc_add_frame(filter_ctx[stream_index].buffersrc_ctx[i], frame[i]);
       }
    }
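
    (To be precise, between av_read_frame and av_buffersrc_add_frame I decode the packet into frame[i] first, roughly like this; dec_ctx[i] is just my shorthand here for the decoder context of input i.)

    if (avcodec_send_packet(dec_ctx[i], &packet) >= 0)
    {
       /* drain every decoded frame for this packet and feed it to the filter graph */
       while (avcodec_receive_frame(dec_ctx[i], frame[i]) >= 0)
       {
          ret = av_buffersrc_add_frame(filter_ctx[stream_index].buffersrc_ctx[i], frame[i]);
       }
    }
    av_packet_unref(&packet);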

    Then I pull the filtered frames out into picref:

    while (1) {
       ret = av_buffersink_get_frame_flags(filter_ctx[stream_index].buffersink_ctx, picref, 0);
    }

    I encode picref, or display it with SDL, and I find that only the flv stream shows up; the camera stream never appears, and I don't know why.
    But if I change the camera source to another flv file, so that both source streams are flv files, then the result is correct, like the picture above. This confuses me a lot.
    If anyone can help me, I would really appreciate it.

  • FFMPEG encode audio and forced subtitles at the same time?

    8 January 2017, by Nick Bell

    I'm using the latest static build of ffmpeg for Windows.

    My input file (.mkv) is:

    [video] - 1080, V_MPEG4/ISO/AVC, 14.6 Mbps, ID#0
    [audio] - DTS 5.1, 1510 Kbps, ID#1
    [subtitles] - S_TEXT/ASS Lossless English, ID#14

    My problem is this: I convert the audio so that my target player, an XB1 console (media support faq), is able to play the audio/video. However, it is sometimes rather difficult to hear, or parts may be in a foreign language, so I want to force the English subtitles into the mix at the same time as I convert the audio.

    Currently, for the audio, I use the following command:

    ffmpeg -i input.mkv -codec copy -acodec ac3 output.mkv

    Can I somehow tie in the forced subtitles (burned onto the video) in the same pass, to save the extra step of taking output.mkv and trying to force the subtitles on afterwards?

    Edit: I've tried using the following command to extract the subtitles so that I can edit them:

    ffmpeg -i Movie.mkv -map 0:s:14 subs.srt

    However, I get the error: Stream map '0:s:14' matches no streams

    Edit 2: I managed to extract the subtitles with the command below (it turns out 0:s:14 would mean the fifteenth subtitle stream, whereas 0:14 is the absolute stream index):

    ffmpeg -i input.mkv -map 0:14 -c copy subtitles.ass

    but I'm still looking to force the subtitles onto the video, nonetheless!

    Also, as a little bonus to this question: can I somehow edit the extracted .ass file so that it only contains subtitles for the foreign parts, so that English audio has no subtitles during the movie but foreign audio does?

    Cheers

    Edit 3:

    When I try to use both commands at once (my audio conversion from above and the subtitles filter from the ffmpeg wiki):

    ffmpeg -i input.mkv -codec copy -acodec ac3 -vf "ass=subs.ass" output.mkv

    I get the following error from ffmpeg:

    Filtergraph 'ass=subs.ass' was defined for video output stream 0:0 but codec copy was selected.
    Filtering and streamcopy cannot be used together.
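
    So presumably I have to give up -codec copy for the video and re-encode it while burning the subtitles in, something along the lines of the command below (assuming libx264 is available in the static build, and that subs.ass is the track I extracted in Edit 2)?

    ffmpeg -i input.mkv -vf "ass=subs.ass" -c:v libx264 -c:a ac3 output.mkv

    Or is there a smarter way to do both in one pass?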