Advanced search

Media (0)

Keyword: - Tags -/latitude

No media matching your criteria is available on the site.

Other articles (56)

  • Add notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Apache-specific configuration

    4 February 2011, by

    Specific modules
    For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); mod_expires, to handle the expiry of hits correctly (see this tutorial).
    It is also advisable to add Apache support for the WebM mime-type, as described in this tutorial.
    Creating a (...)
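    As a rough sketch of the configuration described above (illustrative only, not taken from the article; a Debian-style Apache with the a2enmod helper is assumed):

    # Enable the performance modules mentioned above
    a2enmod deflate headers expires

    # Compress common text responses
    <IfModule mod_deflate.c>
      AddOutputFilterByType DEFLATE text/html text/css application/javascript
    </IfModule>

    # Handle expiry of static media correctly
    <IfModule mod_expires.c>
      ExpiresActive On
      ExpiresByType image/png "access plus 1 month"
    </IfModule>

    # Declare the WebM mime-type
    AddType video/webm .webm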

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a particular theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)
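    As a rough illustration of this approach (not MediaSPIP’s actual markup; file names are hypothetical), an HTML5 media element carries its Flash fallback inside the tag, so older browsers that ignore <video> render the nested object instead:

    <video width="640" height="360" controls>
      <source src="clip.mp4" type="video/mp4" />
      <source src="clip.webm" type="video/webm" />
      <!-- Browsers without HTML5 video fall through to the Flash player -->
      <object type="application/x-shockwave-flash" data="flowplayer.swf" width="640" height="360">
        <param name="movie" value="flowplayer.swf" />
      </object>
    </video>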

On other sites (6859)

  • FFServer streaming H.264 from Logitech C920 without re-encoding

    29 November 2016, by Zoltan Fedor

    I’m trying to broadcast the native H.264 feed of a Logitech C920 webcam in real time from an Odroid device (a robot), via ffserver running on a separate server (CentOS 7.1), to users’ browsers, without re-encoding the H.264 video feed.

    Having a real-time video feed in the browser is a challenge on its own, so for now I’m just trying to get the Logitech C920 webcam on the Odroid to stream its native H.264 real-time video feed as mp4 via ffserver to users, without re-encoding the video along the way.
    Obviously I want to avoid re-encoding, as that would take too much CPU time and would kill the real-time video feed. Later I might need to change the container to .flv or rtp so it can be played from the browser in a real-time fashion. I’m using the Logitech C920 webcam because it does the H.264 encoding in hardware. (This has been tested by saving a file directly; it works, apart from the well-known ’jerkiness’ issue related to a Linux kernel bug: http://sourceforge.net/p/linux-uvc/mailman/message/33164469/ , but that is a different story.)

    The problem is that however I set up ffmpeg and ffserver, as soon as ffserver is in the picture the feed gets re-encoded (even from h264 (native) to h264 (libx264)), taking up 100% of the CPU on the Odroid device and introducing a huge delay in the video feed.

    Below are my ffmpeg and ffserver settings.

    FFmpeg on the Odroid device, streaming the H.264 feed to ffserver:

    $ ffmpeg -s 1920x1080 -f v4l2 -vcodec h264 -i /dev/video0 -copyinkf -vcodec copy http://xxxyyyy.com:8090/feed1.ffm
    ffmpeg version N-72744-g653bf3c Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.8 (Ubuntu/Linaro 4.8.2-19ubuntu1)
     configuration: --prefix=/home/odroid/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/odroid/ffmpeg_build/include --extra-ldflags=-L/home/odroid/ffmpeg_build/lib --bindir=/home/odroid/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
     libavutil      54. 27.100 / 54. 27.100
     libavcodec     56. 41.100 / 56. 41.100
     libavformat    56. 36.100 / 56. 36.100
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 16.101 /  5. 16.101
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  2.100 /  1.  2.100
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, video4linux2,v4l2, from '/dev/video0':
     Duration: N/A, start: 6581.606726, bitrate: N/A
       Stream #0:0: Video: h264 (Constrained Baseline), yuvj420p(pc), 1920x1080 [SAR 1:1 DAR 16:9], -5 kb/s, 30 fps, 30 tbr, 1000k tbn, 60 tbc
    [swscaler @ 0x11bf0b0] deprecated pixel format used, make sure you did set range correctly
    No pixel format specified, yuvj420p for H.264 encoding chosen.
    Use -pix_fmt yuv420p for compatibility with outdated media players.
    [libx264 @ 0x12590e0] using SAR=64/45
    [libx264 @ 0x12590e0] using cpu capabilities: ARMv6 NEON
    [libx264 @ 0x12590e0] profile High, level 1b
    Output #0, ffm, to 'http://robo-car.int.thomsonreuters.com:8090/feed1.ffm':
     Metadata:
       creation_time   : now
       encoder         : Lavf56.36.100
       Stream #0:0: Video: h264 (libx264), yuvj420p(pc), 160x128 [SAR 64:45 DAR 16:9], q=-1--1, 64 kb/s, 30 fps, 1000k tbn, 5 tbc
       Metadata:
         encoder         : Lavc56.41.100 libx264
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    ^Cav_interleaved_write_frame(): Immediate exit requested00 bitrate=N/A dup=0 drop=97    
       Last message repeated 2140 times
    frame= 3723 fps=301 q=-1.0 Lsize=     396kB time=00:12:14.20 bitrate=   4.4kbits/s dup=3699 drop=103    
    video:321kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 23.500496%

    And the /etc/ffserver.conf on the server running ffserver:

    HTTPPort 8090                      # Port to bind the server to
    HTTPBindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 10000             # Maximum bandwidth per client
                                  # set this high enough to exceed stream bitrate
    CustomLog -

    <Feed feed1.ffm>            # This is the input feed where FFmpeg will send
      File ./feed1.ffm          # the video stream.
      FileMaxSize 1G            # Maximum file size for buffering video
    </Feed>

    <Stream>
     Feed feed1.ffm
     Format mp4
     NoAudio
    </Stream>
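    (For reference: ffserver derives the feed’s encoding parameters from the <Stream> definition, and when no video parameters are given it falls back to its historical defaults of 160x128 at 64 kb/s, which is what the ffmpeg log above shows. An explicit definition, sketched here with a hypothetical stream name, would look like:)

    <Stream camera.mp4>
     Feed feed1.ffm
     Format mp4
     VideoCodec libx264
     VideoFrameRate 30
     VideoSize 1920x1080
     NoAudio
    </Stream>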

    As you can see in the ffmpeg output above, a re-encoding is happening on the Odroid device, maxing out its CPUs:

    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))

    I have already tried setting the VideoCodec value in the ffserver config directly to libx264, tried the -re option of ffmpeg, tried different ffmpeg syntaxes, etc. Nothing helps: the re-encoding is always there, so I can’t make ffmpeg and ffserver simply broadcast the video stream as-is.
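    (For comparison, a codec-copy pipeline that bypasses ffserver entirely can be sketched as follows; the UDP address is a placeholder:)

    $ ffmpeg -f v4l2 -input_format h264 -video_size 1920x1080 -i /dev/video0 -c:v copy -f mpegts udp://192.0.2.1:1234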

    Both ffmpeg builds (on the Odroid and on the server) were compiled yesterday (2015-06-09) from source, so they are the latest (and the same) version.

    Any ideas?

    EDIT:
    In summary, the issue is: I cannot find a way to get ffserver to broadcast the h264 (native) feed coming from the Logitech C920 webcam without re-encoding it.

  • Mobile Camera live audio/video streaming and encoding

    7 June 2015, by Strikecounter2

    I know this question has been asked a couple of times, but I still haven’t found the right answer for my question.

    I would like to code an app that can live-stream audio and video while the content is being recorded and uploaded to a server. I’d prefer to run my own back-end using Parse, because I want high scalability. I know that the video has to be encoded with the H.264 codec and the audio with the AAC codec, but I don’t know how to achieve this. I have heard of the FFmpeg framework, but I am not sure whether I would violate its license if I distributed my app or even sold it to somebody else.
    I would then like to receive the stream from the server and play it on an iPhone or Android phone.

    The key requirements would be:

    • Low Latency
    • About 24 fps
    • Audio/Video in sync
    • No buffering while watching

    I would like to use Swift as the programming language, but if there is no Swift wrapper for the necessary frameworks I would fall back to Objective-C.

    I am willing to learn everything that is needed, but I don’t know where to start.
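    (For orientation, the H.264/AAC encode such a pipeline targets is commonly expressed with the ffmpeg command line; a sketch with a hypothetical RTMP endpoint:)

    $ ffmpeg -i input.mp4 -c:v libx264 -preset veryfast -tune zerolatency -c:a aac -b:a 128k -f flv rtmp://example.com/live/stream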

  • libav error message with H264 codec: "non-strictly-monotonic PTS"

    13 March 2018, by user1496491

    I have almost zero experience with libav/FFmpeg. I wrote this piece of code, which captures the screen and writes it to a file, and I’m facing some problems with it. I was working with the AV_CODEC_ID_MPEG4 codec at first and it worked just fine, but very quickly the application started to spam messages like this:

    [dshow @ 02da1c80] real-time buffer [screen-capture-recorder] [video input] too full or near too full (64% of size: 128000000 [rtbufsize parameter])! frame dropped!

    So I googled for some time and found that the encoder is probably too slow and that I should switch to a faster one. So I changed it to AV_CODEC_ID_H264. Suddenly the written file became unreadable, and the application started to spam messages like this:

    [libx264 @ 0455ff40] non-strictly-monotonic PTS

    I looked everywhere, and all I found was a suggestion to add these two lines:

    if(outPacket.pts != AV_NOPTS_VALUE) outPacket.pts = av_rescale_q(outPacket.pts, videoStream->codec->time_base, videoStream->time_base);
    if(outPacket.dts != AV_NOPTS_VALUE) outPacket.dts = av_rescale_q(outPacket.dts, videoStream->codec->time_base, videoStream->time_base);

    So I added them, and the result was the same.

    So, what should I do? How do I configure the output correctly?
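    (For reference, libx264 rejects frames whose frame->pts does not strictly increase, so the usual pattern is to stamp each frame in the encoder time base before avcodec_send_frame() and to rescale the resulting packets into the stream time base with av_packet_rescale_ts(); a sketch, with frameIndex a hypothetical running counter:)

    // Sketch: give each frame a strictly increasing PTS in the encoder's
    // time base ({1, 30} here), then rescale packet timestamps into the
    // stream time base before muxing.
    outFrame->pts = frameIndex++;

    if (avcodec_send_frame(outCodecContext, outFrame) == 0)
    {
        AVPacket pkt;
        av_init_packet(&pkt);
        while (avcodec_receive_packet(outCodecContext, &pkt) == 0)
        {
            av_packet_rescale_ts(&pkt, outCodecContext->time_base, videoStream->time_base);
            pkt.stream_index = videoStream->index;
            av_interleaved_write_frame(outFormatContext, &pkt);
            av_packet_unref(&pkt);
        }
    }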

    Here’s my code:

    #include "MainWindow.h"

    #include <QGuiApplication>
    #include <QLabel>
    #include <QScreen>
    #include <QTimer>
    #include <QLayout>
    #include <QImage>
    #include <QtConcurrent>
    #include <QThreadPool>
    #include <QVideoFrame>

    #include "ScreenCapture.h"

    MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent)
    {
       resize(800, 600);

       label = new QLabel();
       label->setAlignment(Qt::AlignHCenter | Qt::AlignVCenter);

       auto layout = new QHBoxLayout();
       layout->addWidget(label);

       auto widget = new QWidget();
       widget->setLayout(layout);
       setCentralWidget(widget);

       connect(this, &MainWindow::imageReady, [=](QImage image) {label->setPixmap(QPixmap::fromImage(image).scaled(label->size(), Qt::KeepAspectRatio));});

       init();
       initOutFile();
       collectFrames();
    }

    MainWindow::~MainWindow()
    {
       isRunning = false;

       QThreadPool::globalInstance()->waitForDone();

       avformat_close_input(&inputFormatContext);
       avformat_free_context(inputFormatContext);
    }

    void MainWindow::init()
    {
       av_register_all();
       avcodec_register_all();
       avdevice_register_all();

       auto screen = QGuiApplication::screens()[0];
       QRect geometry = screen->geometry();

       inputFormatContext = avformat_alloc_context();

    //    AVDictionary* options = NULL;
    //    av_dict_set(&options, "framerate", "30", NULL);
    //    av_dict_set(&options, "offset_x", QString::number(geometry.x()).toLatin1().data(), NULL);
    //    av_dict_set(&options, "offset_y", QString::number(geometry.y()).toLatin1().data(), NULL);
    //    av_dict_set(&options, "preset", "ultrafast", NULL);
    //    av_dict_set(&options, "probesize", "10MB", NULL);
    //    av_dict_set(&options, "pix_fmt", "yuv420p", NULL);
    //    av_dict_set(&options, "video_size", QString(QString::number(geometry.width()) + "x" + QString::number(geometry.height())).toLatin1().data(), NULL);

    //    AVInputFormat* inputFormat = av_find_input_format("gdigrab");
    //    avformat_open_input(&inputFormatContext, "desktop", inputFormat, &options);

       QSettings settings("HKEY_CURRENT_USER\\Software\\screen-capture-recorder", QSettings::NativeFormat);
       settings.setValue("start_x", geometry.x());
       settings.setValue("start_y", geometry.y());
       settings.setValue("capture_width", geometry.width());
       settings.setValue("capture_height", geometry.height());

       AVDictionary* options = NULL;
       av_dict_set(&options, "preset", "ultrafast", NULL);
       av_dict_set(&options, "vcodec", "h264", NULL);
       av_dict_set(&options, "video_size", "1920x1080", NULL);
       av_dict_set(&options, "crf", "0", NULL);
       av_dict_set(&options, "tune", "zerolatency", NULL);
       av_dict_set(&options, "rtbufsize", "128M", NULL);

       AVInputFormat *format = av_find_input_format("dshow");
       avformat_open_input(&inputFormatContext, "video=screen-capture-recorder", format, &options);

       av_dict_free(&options);
       avformat_find_stream_info(inputFormatContext, NULL);

       videoStreamIndex = av_find_best_stream(inputFormatContext, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);

       AVStream* inStream = inputFormatContext->streams[videoStreamIndex];

       inputCodec = avcodec_find_decoder(inStream->codecpar->codec_id);
       if(!inputCodec) qDebug() << "Can't find input codec!";

       inputCodecContext = avcodec_alloc_context3(inputCodec);

       qDebug() << "IN_FORMAT" << av_get_pix_fmt_name(inStream->codec->pix_fmt);

       avcodec_parameters_to_context(inputCodecContext, inStream->codecpar);

       if(avcodec_open2(inputCodecContext, inputCodec, NULL)) qDebug() << "Can't open input codec!";
    }

    void MainWindow::initOutFile()
    {
       const char* filename = "C:/Temp/output.mp4";

       if(avformat_alloc_output_context2(&outFormatContext, NULL, NULL, filename) < 0) qDebug() << "Can't create out context!";

       outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if(!outCodec) qDebug() << "Can't find codec!";

       videoStream = avformat_new_stream(outFormatContext, outCodec);
       videoStream->time_base = {1, 30};

       const AVPixelFormat* pixelFormat = outCodec->pix_fmts;
       while (*pixelFormat != AV_PIX_FMT_NONE)
       {
           qDebug() << "OUT_FORMAT" << av_get_pix_fmt_name(*pixelFormat);
           ++pixelFormat;
       }

       outCodecContext = videoStream->codec;
       outCodecContext->bit_rate = 16000000;
       outCodecContext->rc_max_rate = 0;
       outCodecContext->rc_buffer_size = 0;
       outCodecContext->qmin = 10;
       outCodecContext->qmax = 51;
       outCodecContext->qcompress = 0.6f;
       outCodecContext->width = inputCodecContext->width;
       outCodecContext->height = inputCodecContext->height;
       outCodecContext->time_base = videoStream->time_base;
       outCodecContext->gop_size = 10;
       outCodecContext->max_b_frames = 1;
       outCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;

       if (outFormatContext->oformat->flags & AVFMT_GLOBALHEADER) outCodecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;

       if(avcodec_open2(outCodecContext, outCodec, NULL)) qDebug() << "Can't open out codec!";

       swsContext = sws_getContext(inputCodecContext->width,
                                   inputCodecContext->height,
                                   inputCodecContext->pix_fmt,
                                   outCodecContext->width,
                                   outCodecContext->height,
                                   outCodecContext->pix_fmt,
                                   SWS_BICUBIC, NULL, NULL, NULL);

       if(avio_open(&outFormatContext->pb, filename, AVIO_FLAG_WRITE) < 0) qDebug() << "Can't open file!";
       if(avformat_write_header(outFormatContext, NULL) < 0) qDebug() << "Can't write header!";
    }

    void MainWindow::collectFrames()
    {
       QtConcurrent::run([this](){

           AVFrame* inFrame = av_frame_alloc();
           inFrame->format = inputCodecContext->pix_fmt;
           inFrame->width = inputCodecContext->width;
           inFrame->height = inputCodecContext->height;

           int size = av_image_alloc(inFrame->data, inFrame->linesize, inFrame->width, inFrame->height, inputCodecContext->pix_fmt, 1);

           AVFrame* outFrame = av_frame_alloc();
           outFrame->format = outCodecContext->pix_fmt;
           outFrame->width = outCodecContext->width;
           outFrame->height = outCodecContext->height;

           av_image_alloc(outFrame->data, outFrame->linesize, outFrame->width, outFrame->height, outCodecContext->pix_fmt, 1);

           AVPacket packet;
           av_init_packet(&packet);

           while(isRunning && (av_read_frame(inputFormatContext, &packet) >= 0))
           {
               if(packet.stream_index == videoStream->index)
               {
                   //for gdigrab
    //                uint8_t* result = new uint8_t[inFrame->width * inFrame->height * 4];
    //                for (int i = 0; i < inFrame->height * inFrame->width * 4; i += 4)
    //                {
    //                    result[i + 0] = packet.data[i + 2]; //B
    //                    result[i + 1] = packet.data[i + 3]; //G
    //                    result[i + 2] = packet.data[i + 0]; //R
    //                    result[i + 3] = packet.data[i + 1]; //A
    //                }

    //                memcpy(inFrame->data[0], result, size);
    //                delete result;

                   QImage image(packet.data, inFrame->width, inFrame->height, QImage::Format_ARGB32);
                   QImage mirrored = image.mirrored(false, true);
                   emit imageReady(mirrored);

                   memcpy(inFrame->data[0], mirrored.bits(), size);

                   sws_scale(swsContext, inFrame->data, inFrame->linesize, 0, inputCodecContext->height, outFrame->data, outFrame->linesize);

                   av_packet_unref(&packet);

                   AVPacket outPacket;
                   av_init_packet(&outPacket);

                   int encodeResult = AVERROR(EAGAIN);
                   while(encodeResult == AVERROR(EAGAIN))
                   {
                       if(avcodec_send_frame(outCodecContext, outFrame)) qDebug() << "Send frame error!";

                       encodeResult = avcodec_receive_packet(outCodecContext, &outPacket);
                   }
                   if(encodeResult != 0) qDebug() << "Encoding error!" << encodeResult;

                   if(outPacket.pts != AV_NOPTS_VALUE) outPacket.pts = av_rescale_q(outPacket.pts, videoStream->codec->time_base, videoStream->time_base);
                   if(outPacket.dts != AV_NOPTS_VALUE) outPacket.dts = av_rescale_q(outPacket.dts, videoStream->codec->time_base, videoStream->time_base);

                   av_interleaved_write_frame(outFormatContext, &outPacket);

                   av_packet_unref(&outPacket);
               }
           }

           av_freep(inFrame->data);
           av_freep(outFrame->data);

           av_write_trailer(outFormatContext);
           avio_close(outFormatContext->pb);
       });

    }