Other articles (69)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes when moving from version 0.1 of MediaSPIP to version 0.3. What's new?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for retrieving metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes take into account three customisation elements: adding a logo; adding a banner; adding a background image.

  • Authorisations overridden by plugins

    27 April 2010

    MediaSPIP core:
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (11763)

  • Stream H264 To Android Using FFMPEG

    26 February 2013, by GroovyDotCom

    I'm trying to stream a .ts file containing H.264 and AAC as an RTP stream to an Android device.

    I tried:

    .\ffmpeg -fflags +genpts -re -i 1.ts -vcodec copy -an -f rtp rtp://127.0.0.1:10000 -vn -acodec copy -f rtp rtp://127.0.0.1:20000 -newaudio

    FFMPEG displays what should be in your SDP file; I copied this into an SDP file and tried playing it from VLC and FFPLAY. VLC plays the audio but just gives errors about bad NAL unit types for the video. FFPLAY doesn't play anything.
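
    For reference, the description FFMPEG prints has the usual SDP shape. The sketch below is only illustrative of that shape; the real payload types, ports and codec parameters (including any fmtp lines) have to come from FFMPEG's actual output, not from this example.

    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    c=IN IP4 127.0.0.1
    t=0 0
    m=video 10000 RTP/AVP 96
    a=rtpmap:96 H264/90000
    m=audio 20000 RTP/AVP 97
    a=rtpmap:97 MPEG4-GENERIC/48000/2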

    My best guess is that the FFMPEG H.264 RTP implementation is broken, or at least that it doesn't work in video passthrough mode (i.e. using -vcodec copy).

    I need a fix for FFMPEG or an alternative simple open-source solution. I don't want to install FFMPEG on my Android client.

    Thanks.

  • c++, FFMPEG, H264, creating zero-delay stream

    5 February 2015, by Mat

    I'm trying to encode video (using the h264 codec at the moment, but other codecs would be fine too if better suited for my needs) such that the data needed for decoding is available directly after each frame (including the first frame) has been encoded; in other words, I want only I and P frames, no B frames.

    How do I need to set up the AVCodecContext to get such a stream? So far, my experimenting with the values has always resulted in avcodec_encode_video() returning 0 on the first frame.

    // edit: this is currently my setup code for the AVCodecContext:

    static AVStream* add_video_stream(AVFormatContext *oc, enum CodecID codec_id, int w, int h, int fps)
    {
       AVCodecContext *c;
       AVStream *st;
       AVCodec *codec;

       /* find the video encoder */
       codec = avcodec_find_encoder(codec_id);
       if (!codec) {
           fprintf(stderr, "codec not found\n");
           exit(1);
       }

       st = avformat_new_stream(oc, codec);
       if (!st) {
           fprintf(stderr, "Could not alloc stream\n");
           exit(1);
       }

       c = st->codec;

       /* Put sample parameters. */
       c->bit_rate = 400000;
       /* Resolution must be a multiple of two. */
       c->width    = w;
       c->height   = h;
       /* timebase: This is the fundamental unit of time (in seconds) in terms
        * of which frame timestamps are represented. For fixed-fps content,
        * timebase should be 1/framerate and timestamp increments should be
        * identical to 1. */
       c->time_base.den = fps;
       c->time_base.num = 1;
       c->gop_size      = 12; /* emit one intra frame every twelve frames at most */

       c->codec = codec;
       c->codec_type = AVMEDIA_TYPE_VIDEO;
       c->coder_type = FF_CODER_TYPE_VLC;
       c->me_method = 7; //motion estimation algorithm
       c->me_subpel_quality = 4;
       c->delay = 0;
       c->max_b_frames = 0;
       c->thread_count = 1; // more than one threads seem to increase delay
       c->refs = 3;

       c->pix_fmt       = PIX_FMT_YUV420P;

       /* Some formats want stream headers to be separate. */
       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           c->flags |= CODEC_FLAG_GLOBAL_HEADER;

       return st;
    }

    But with this setup, avcodec_encode_video() will buffer 13 frames before returning any bytes (after that, it returns bytes on every frame). If I set gop_size to 0, then avcodec_encode_video() will return bytes only after the second frame has been passed to it. I need zero delay, though.

    This guy apparently was successful (even with a larger GOP): http://mailman.videolan.org/pipermail/x264-devel/2009-May/005880.html but I don't see what he is doing differently.
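
    As a hedged aside, the buffering described above usually comes from libx264's look-ahead and frame threading rather than from the GOP length. Below is a minimal sketch of the settings commonly used to make the encoder emit a packet for every input frame; it assumes the libx264 encoder and a libavcodec new enough to expose its private "preset"/"tune" options via av_opt_set(), and the helper name setup_low_delay is only illustrative, not part of the code above.

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* Sketch: configure an H.264 context so that a packet comes out for
     * every frame fed in (no B-frames, no look-ahead buffering).
     * c must already have been allocated for the libx264 encoder so
     * that its priv_data exists. */
    void setup_low_delay(AVCodecContext *c)
    {
        c->max_b_frames = 0;   /* only I and P frames                         */
        c->gop_size     = 12;  /* GOP length itself does not add delay        */
        c->thread_count = 1;   /* frame threads add roughly one frame of
                                  delay per extra thread                      */

        /* libx264 private options: "zerolatency" disables look-ahead,
         * "veryfast" keeps per-frame encoding time low. */
        av_opt_set(c->priv_data, "preset", "veryfast",    0);
        av_opt_set(c->priv_data, "tune",   "zerolatency", 0);
    }

    With "zerolatency" set, avcodec_encode_video() should return data for the very first frame even with a non-zero gop_size, because the delay comes from B-frames and look-ahead rather than from the GOP size.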

  • streaming H.264 over RTP with libavformat

    16 April 2012, by Jacob Peddicord

    I've been trying over the past week to implement H.264 streaming over RTP, using x264 as an encoder and libavformat to pack and send the stream. Problem is, as far as I can tell it's not working correctly.

    Right now I'm just encoding random data (x264_picture_alloc) and extracting NAL frames from libx264. This is fairly simple:

    x264_picture_t pic_out;
    x264_nal_t* nals;
    int num_nals;
    int frame_size = x264_encoder_encode(this->encoder, &nals, &num_nals, this->pic_in, &pic_out);

    if (frame_size <= 0)
    {
       return frame_size;
    }

    // push NALs into the queue
    for (int i = 0; i < num_nals; i++)
    {
       // create a NAL storage unit
       NAL nal;
       nal.size = nals[i].i_payload;
       nal.payload = new uint8_t[nal.size];
       memcpy(nal.payload, nals[i].p_payload, nal.size);

       // push the storage into the NAL queue
       {
           // lock and push the NAL to the queue
           boost::mutex::scoped_lock lock(this->nal_lock);
           this->nal_queue.push(nal);
       }
    }

    nal_queue is used for safely passing frames over to a Streamer class which will then send the frames out. Right now it's not threaded, as I'm just testing to try to get this to work. Before encoding individual frames, I've made sure to initialize the encoder.

    But I don't believe x264 is the issue, as I can see frame data in the NALs it returns.
    Streaming the data is accomplished with libavformat, which is first initialized in a Streamer class:

    Streamer::Streamer(Encoder* encoder, string rtp_address, int rtp_port, int width, int height, int fps, int bitrate)
    {
       this->encoder = encoder;

       // initialize the AV context
       this->ctx = avformat_alloc_context();
       if (!this->ctx)
       {
           throw runtime_error("Couldn't initalize AVFormat output context");
       }

       // get the output format
       this->fmt = av_guess_format("rtp", NULL, NULL);
       if (!this->fmt)
       {
           throw runtime_error("Unsuitable output format");
       }
       this->ctx->oformat = this->fmt;

       // try to open the RTP stream
       snprintf(this->ctx->filename, sizeof(this->ctx->filename), "rtp://%s:%d", rtp_address.c_str(), rtp_port);
       if (url_fopen(&(this->ctx->pb), this->ctx->filename, URL_WRONLY) < 0)
       {
           throw runtime_error("Couldn't open RTP output stream");
       }

       // add an H.264 stream
       this->stream = av_new_stream(this->ctx, 1);
       if (!this->stream)
       {
           throw runtime_error("Couldn't allocate H.264 stream");
       }

       // initialize codec
       AVCodecContext* c = this->stream->codec;
       c->codec_id = CODEC_ID_H264;
       c->codec_type = AVMEDIA_TYPE_VIDEO;
       c->bit_rate = bitrate;
       c->width = width;
       c->height = height;
       c->time_base.den = fps;
       c->time_base.num = 1;

       // write the header
       av_write_header(this->ctx);
    }

    This is where things seem to go wrong. av_write_header above seems to do absolutely nothing; I've used Wireshark to verify this. For reference, I use Streamer streamer(&enc, "10.89.6.3", 49990, 800, 600, 30, 40000); to initialize the Streamer instance, with enc being a reference to an Encoder object used to handle x264 previously.

    Now when I want to stream out a NAL, I use this:

    // grab a NAL
    NAL nal = this->encoder->nal_pop();
    cout << "NAL popped with size " << nal.size << endl;

    // initialize a packet
    AVPacket p;
    av_init_packet(&p);
    p.data = nal.payload;
    p.size = nal.size;
    p.stream_index = this->stream->index;

    // send it out
    av_write_frame(this->ctx, &p);

    At this point, I can see RTP data appearing over the network, and it looks like the frames I've been sending, even including a little copyright blob from x264. But no player I've used has been able to make any sense of the data. VLC quits, wanting an SDP description, which apparently isn't required.
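
    As a hedged aside, an RTP session is normally described out of band, so a receiver generally does need an SDP even though nothing in the code above produces one. A minimal sketch of how it could be generated from the same AVFormatContext is shown below; it assumes a libavformat new enough to provide av_sdp_create() (older releases called it avf_sdp_create()), and the helper name write_sdp is only illustrative, not part of the code above.

    #include <libavformat/avformat.h>
    #include <stdio.h>

    /* Sketch: dump an SDP description of the RTP context so that VLC or
     * ffplay can open a .sdp file instead of a bare rtp:// URL. */
    void write_sdp(AVFormatContext *ctx, const char *path)
    {
        char sdp[2048];
        AVFormatContext *contexts[1] = { ctx };

        if (av_sdp_create(contexts, 1, sdp, sizeof(sdp)) == 0) {
            FILE *f = fopen(path, "w");
            if (f) {
                fputs(sdp, f);
                fclose(f);
            }
        }
    }

    The resulting file carries the payload type, clock rate and, when extradata is set on the stream, the sprop-parameter-sets that the raw RTP packets alone don't convey to the player.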

    I then tried to play it through gst-launch:

    gst-launch udpsrc port=49990 ! rtph264depay ! decodebin ! xvimagesink

    This will sit waiting for UDP data, but when it is received, I get:

    ERROR: element /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: No RTP
    format was negotiated. Additional debug info:
    gstbasertpdepayload.c(372): gst_base_rtp_depayload_chain ():
    /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: Input buffers
    need to have RTP caps set on them. This is usually achieved by setting
    the 'caps' property of the upstream source element (often udpsrc or
    appsrc), or by putting a capsfilter element before the depayloader and
    setting the 'caps' property on that. Also see
    http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtp/README

    As I'm not using GStreamer to do the streaming itself, I'm not quite sure what it means by RTP caps. But it makes me wonder whether I'm sending enough information over RTP to describe the stream. I'm pretty new to video and I feel like there's some key thing I'm missing here. Any hints?