
Other articles (53)

  • Automatic backup of SPIP channels

    April 1, 2010, by

    As part of setting up an open platform, it is important for hosts to have fairly regular backups available in order to cope with any problem that might arise.
    To accomplish this task, two SPIP plugins are used: Saveauto, which makes a regular backup of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site’s important data (the documents, the elements (...)

  • Automatic installation script for MediaSPIP

    April 25, 2011, by

    To work around the installation difficulties caused mainly by server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which makes it possible to install the dependencies. Contact your host if you do not have these.
    The documentation on using the installation script (...)

  • Support for all media types

    April 10, 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (7687)

  • libav error message with H264 codec. "non-strictly-monotonic PTS"

    March 13, 2018, by user1496491

    I have almost zero experience with libav/FFmpeg. I wrote this piece of code, which captures the screen and writes it to a file, and I’m facing some problems with it. I was working with the AV_CODEC_ID_MPEG4 codec at first and it worked just fine, but very quickly the application started to spam messages like this:

    [dshow @ 02da1c80] real-time buffer [screen-capture-recorder] [video input] too full or near too full (64% of size: 128000000 [rtbufsize parameter])! frame dropped!

    So I googled for some time and found that the encoder is probably too slow, and that I need to change it to a faster one. So I changed it to AV_CODEC_ID_H264. Suddenly the written file became unreadable, and the application started to spam messages:

    [libx264 @ 0455ff40] non-strictly-monotonic PTS

    I looked everywhere, and all I found was a suggestion to add these two lines:

    if(outPacket.pts != AV_NOPTS_VALUE) outPacket.pts = av_rescale_q(outPacket.pts, videoStream->codec->time_base, videoStream->time_base);
    if(outPacket.dts != AV_NOPTS_VALUE) outPacket.dts = av_rescale_q(outPacket.dts, videoStream->codec->time_base, videoStream->time_base);

    So I added them, and the result was the same.
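
    (An aside, not part of the original question: in recent FFmpeg versions the pair of av_rescale_q() calls above can be collapsed into a single helper that rescales pts, dts and duration together:

    av_packet_rescale_ts(&outPacket, videoStream->codec->time_base, videoStream->time_base);

    Rescaling, however, only converts timestamps that already exist; if the frames sent to the encoder never get a pts assigned, libx264 still sees the same timestamp on every frame, which is what this warning means. A sketch of that fix follows the code below.)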

    So, what should I do? How do I configure the output correctly?

    Here’s my code:

    #include "MainWindow.h"

    #include <qguiapplication>
    #include <qlabel>
    #include <qscreen>
    #include <qtimer>
    #include <qlayout>
    #include <qimage>
    #include <qtconcurrent></qtconcurrent>QtConcurrent>
    #include <qthreadpool>
    #include <qvideoframe>

    #include "ScreenCapture.h"

    MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent)
    {
       resize(800, 600);

       label = new QLabel();
       label->setAlignment(Qt::AlignHCenter | Qt::AlignVCenter);

       auto layout = new QHBoxLayout();
       layout->addWidget(label);

       auto widget = new QWidget();
       widget->setLayout(layout);
       setCentralWidget(widget);

       connect(this, &amp;MainWindow::imageReady, [=](QImage image) {label->setPixmap(QPixmap::fromImage(image).scaled(label->size(), Qt::KeepAspectRatio));});

       init();
       initOutFile();
       collectFrames();
    }

    MainWindow::~MainWindow()
    {
       isRunning = false;

       QThreadPool::globalInstance()->waitForDone();

       avformat_close_input(&amp;inputFormatContext);
       avformat_free_context(inputFormatContext);
    }

    void MainWindow::init()
    {
       av_register_all();
       avcodec_register_all();
       avdevice_register_all();

       auto screen = QGuiApplication::screens()[0];
       QRect geometry = screen->geometry();

       inputFormatContext = avformat_alloc_context();

    //    AVDictionary* options = NULL;
    //    av_dict_set(&options, "framerate", "30", NULL);
    //    av_dict_set(&options, "offset_x", QString::number(geometry.x()).toLatin1().data(), NULL);
    //    av_dict_set(&options, "offset_y", QString::number(geometry.y()).toLatin1().data(), NULL);
    //    av_dict_set(&options, "preset", "ultrafast", NULL);
    //    av_dict_set(&options, "probesize", "10MB", NULL);
    //    av_dict_set(&options, "pix_fmt", "yuv420p", NULL);
    //    av_dict_set(&options, "video_size", QString(QString::number(geometry.width()) + "x" + QString::number(geometry.height())).toLatin1().data(), NULL);

    //    AVInputFormat* inputFormat = av_find_input_format("gdigrab");
    //    avformat_open_input(&inputFormatContext, "desktop", inputFormat, &options);

       QSettings settings("HKEY_CURRENT_USER\\Software\\screen-capture-recorder", QSettings::NativeFormat);
       settings.setValue("start_x", geometry.x());
       settings.setValue("start_y", geometry.y());
       settings.setValue("capture_width", geometry.width());
       settings.setValue("capture_height", geometry.height());

       AVDictionary* options = NULL;
       av_dict_set(&options, "preset", "ultrafast", NULL);
       av_dict_set(&options, "vcodec", "h264", NULL);
       av_dict_set(&options, "video_size", "1920x1080", NULL);
       av_dict_set(&options, "crf", "0", NULL);
       av_dict_set(&options, "tune", "zerolatency", NULL);
       av_dict_set(&options, "rtbufsize", "128M", NULL);

       AVInputFormat *format = av_find_input_format("dshow");
       avformat_open_input(&inputFormatContext, "video=screen-capture-recorder", format, &options);

       av_dict_free(&options);
       avformat_find_stream_info(inputFormatContext, NULL);

       videoStreamIndex = av_find_best_stream(inputFormatContext, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);

       AVStream* inStream = inputFormatContext->streams[videoStreamIndex];

       inputCodec = avcodec_find_decoder(inStream->codecpar->codec_id);
       if(!inputCodec) qDebug() << "Can't find input codec!";

       inputCodecContext = avcodec_alloc_context3(inputCodec);

       qDebug() << "IN_FORMAT" << av_get_pix_fmt_name(inStream->codec->pix_fmt);

       avcodec_parameters_to_context(inputCodecContext, inStream->codecpar);

       if(avcodec_open2(inputCodecContext, inputCodec, NULL)) qDebug() << "Can't open input codec!";
    }

    void MainWindow::initOutFile()
    {
       const char* filename = "C:/Temp/output.mp4";

       if(avformat_alloc_output_context2(&amp;outFormatContext, NULL, NULL, filename) &lt; 0) qDebug() &lt;&lt; "Can't create out context!";

       outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if(!outCodec) qDebug() &lt;&lt; "Can't find codec!";

       videoStream = avformat_new_stream(outFormatContext, outCodec);
       videoStream->time_base = {1, 30};

       const AVPixelFormat* pixelFormat = outCodec->pix_fmts;
       while (*pixelFormat != AV_PIX_FMT_NONE)
       {
           qDebug() &lt;&lt; "OUT_FORMAT" &lt;&lt; av_get_pix_fmt_name(*pixelFormat);
           ++pixelFormat;
       }

       outCodecContext = videoStream->codec;
       outCodecContext->bit_rate = 16000000;
       outCodecContext->rc_max_rate = 0;
       outCodecContext->rc_buffer_size = 0;
       outCodecContext->qmin = 10;
       outCodecContext->qmax = 51;
       outCodecContext->qcompress = 0.6f;
       outCodecContext->width = inputCodecContext->width;
       outCodecContext->height = inputCodecContext->height;
       outCodecContext->time_base = videoStream->time_base;
       outCodecContext->gop_size = 10;
       outCodecContext->max_b_frames = 1;
       outCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;

       if (outFormatContext->oformat->flags & AVFMT_GLOBALHEADER) outCodecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;

       if(avcodec_open2(outCodecContext, outCodec, NULL)) qDebug() << "Can't open out codec!";

       swsContext = sws_getContext(inputCodecContext->width,
                                   inputCodecContext->height,
                                   inputCodecContext->pix_fmt,
                                   outCodecContext->width,
                                   outCodecContext->height,
                                   outCodecContext->pix_fmt,
                                   SWS_BICUBIC, NULL, NULL, NULL);

       if(avio_open(&amp;outFormatContext->pb, filename, AVIO_FLAG_WRITE) &lt; 0) qDebug() &lt;&lt; "Can't open file!";
       if(avformat_write_header(outFormatContext, NULL) &lt; 0) qDebug() &lt;&lt; "Can't write header!";
    }

    void MainWindow::collectFrames()
    {
       QtConcurrent::run([this](){

           AVFrame* inFrame = av_frame_alloc();
           inFrame->format = inputCodecContext->pix_fmt;
           inFrame->width = inputCodecContext->width;
           inFrame->height = inputCodecContext->height;

           int size = av_image_alloc(inFrame->data, inFrame->linesize, inFrame->width, inFrame->height, inputCodecContext->pix_fmt, 1);

           AVFrame* outFrame = av_frame_alloc();
           outFrame->format = outCodecContext->pix_fmt;
           outFrame->width = outCodecContext->width;
           outFrame->height = outCodecContext->height;

           av_image_alloc(outFrame->data, outFrame->linesize, outFrame->width, outFrame->height, outCodecContext->pix_fmt, 1);

           AVPacket packet;
           av_init_packet(&amp;packet);

           while(isRunning &amp;&amp; (av_read_frame(inputFormatContext, &amp;packet) >= 0))
           {
               if(packet.stream_index == videoStream->index)
               {
                   //for gdigrab
    //                uint8_t* result = new uint8_t[inFrame->width * inFrame->height * 4];
    //                for (int i = 0; i < inFrame->height * inFrame->width * 4; i += 4)
    //                {
    //                    result[i + 0] = packet.data[i + 2]; //B
    //                    result[i + 1] = packet.data[i + 3]; //G
    //                    result[i + 2] = packet.data[i + 0]; //R
    //                    result[i + 3] = packet.data[i + 1]; //A
    //                }

    //                memcpy(inFrame->data[0], result, size);
    //                delete result;

                   QImage image(packet.data, inFrame->width, inFrame->height, QImage::Format_ARGB32);
                   QImage mirrored = image.mirrored(false, true);
                   emit imageReady(mirrored);

                   memcpy(inFrame->data[0], mirrored.bits(), size);

                   sws_scale(swsContext, inFrame->data, inFrame->linesize, 0, inputCodecContext->height, outFrame->data, outFrame->linesize);

                   av_packet_unref(&amp;packet);

                   AVPacket outPacket;
                   av_init_packet(&amp;outPacket);

                   int encodeResult = AVERROR(EAGAIN);
                   while(encodeResult == AVERROR(EAGAIN))
                   {
                       if(avcodec_send_frame(outCodecContext, outFrame)) qDebug() << "Send frame error!";

                       encodeResult = avcodec_receive_packet(outCodecContext, &outPacket);
                   }
                   if(encodeResult != 0) qDebug() << "Encoding error!" << encodeResult;

                   if(outPacket.pts != AV_NOPTS_VALUE) outPacket.pts = av_rescale_q(outPacket.pts, videoStream->codec->time_base, videoStream->time_base);
                   if(outPacket.dts != AV_NOPTS_VALUE) outPacket.dts = av_rescale_q(outPacket.dts, videoStream->codec->time_base, videoStream->time_base);

                   av_interleaved_write_frame(outFormatContext, &amp;outPacket);

                   av_packet_unref(&amp;outPacket);
               }
           }

           av_freep(inFrame->data);
           av_freep(outFrame->data);

           av_write_trailer(outFormatContext);
           avio_close(outFormatContext->pb);
       });

    }
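
    A minimal sketch of the usual fix, under the assumption (consistent with the code above, where outFrame->pts is never set) that libx264 is receiving frames without increasing timestamps. It reuses the names from the code above; frameIndex is a new, hypothetical counter that is not in the original post.

    // Hedged sketch, not from the original post: give libx264 strictly
    // increasing timestamps and rescale the resulting packets.
    static int64_t frameIndex = 0;          // hypothetical frame counter

    // With outCodecContext->time_base = {1, 30}, pts == N means frame N is
    // presented at N/30 seconds: strictly increasing, as libx264 requires.
    outFrame->pts = frameIndex++;

    if(avcodec_send_frame(outCodecContext, outFrame) == 0)
    {
        AVPacket outPacket;
        av_init_packet(&outPacket);
        while(avcodec_receive_packet(outCodecContext, &outPacket) == 0)
        {
            // Convert pts/dts/duration from the encoder time_base to the
            // stream time_base in one call, then write the packet.
            av_packet_rescale_ts(&outPacket, outCodecContext->time_base, videoStream->time_base);
            outPacket.stream_index = videoStream->index;
            av_interleaved_write_frame(outFormatContext, &outPacket);
            av_packet_unref(&outPacket);
        }
    }

    With frames stamped this way, the two av_rescale_q() lines quoted earlier become unnecessary, as av_packet_rescale_ts() performs the same conversion.
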
  • Android java.lang.UnsatisfiedLinkError - couldn't find "libffmpeg.so"

    October 13, 2016, by Achin

    I have built this project with Eclipse (https://github.com/youtube/yt-watchme) and it runs fine, but when I try to build the project in Android Studio I get an error in my Ffmpeg class. I have copied all the files from my working demo, made in Eclipse, into my Android Studio project directory. I will post my directory structure and build.gradle; can anyone guide me? Please see below:

    Process: com.google.android.apps.watchme, PID: 6330
       java.lang.UnsatisfiedLinkError: dalvik.system.PathClassLoader[DexPathList[[zip file "/data/app/com.google.android.apps.watchme-2/base.apk"],nativeLibraryDirectories=[/vendor/lib, /system/lib]]] couldn't find "libffmpeg.so"
               at java.lang.Runtime.loadLibrary(Runtime.java:366)
               at java.lang.System.loadLibrary(System.java:988)
               at com.google.android.apps.watchme.Ffmpeg.<clinit>(Ffmpeg.java:22)
               at com.google.android.apps.watchme.VideoStreamingConnection.open(VideoStreamingConnection.java:71)
               at com.google.android.apps.watchme.StreamerService.startStreaming(StreamerService.java:73)
               at com.google.android.apps.watchme.StreamerActivity.startStreaming(StreamerActivity.java:161)
               at com.google.android.apps.watchme.StreamerActivity.access$200(StreamerActivity.java:39)
               at com.google.android.apps.watchme.StreamerActivity$1.onServiceConnected(StreamerActivity.java:55)
               at android.app.LoadedApk$ServiceDispatcher.doConnected(LoadedApk.java:1208)
               at android.app.LoadedApk$ServiceDispatcher$RunConnection.run(LoadedApk.java:1225)
               at android.os.Handler.handleCallback(Handler.java:739)
               at android.os.Handler.dispatchMessage(Handler.java:95)
               at android.os.Looper.loop(Looper.java:135)
               at android.app.ActivityThread.main(ActivityThread.java:5343)
               at java.lang.reflect.Method.invoke(Native Method)
               at java.lang.reflect.Method.invoke(Method.java:372)
               at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:905)
               at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:700)

    [screenshot: project directory structure and build.gradle]

    and in the JNI function:

    [screenshot: error in the JNI function]

  • Minimal "hello world" for WebRTC real-time streaming ?

    November 4, 2018, by d33tah

    I’d like to learn how to set up HTML5 live streaming. The use case I have in mind involves controlling a Lego Mindstorms robot, which means I want minimal latency. So far I have experimented with RTMP using this Docker repository, but found that I can’t seem to tune it for real-time streaming. After a bit of research, I found that WebRTC might fit my use case instead.

    Let’s say I have an ffmpeg-compatible source, such as a webcam or x11grab data, that I would like to stream using WebRTC. What would a "hello, world" that achieves this goal look like?