
Other articles (59)

  • Support for all media types

    10 April 2011

    Unlike many other modern document-sharing software packages and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (Open Office, Microsoft Office (spreadsheet, presentation), web (HTML, CSS), LaTeX, Google Earth) (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9446)

  • OpenCV/ffmpeg: VideoCapture("Filename.avi") crashes on a non-development machine - Mac OS X 10.8.2

    8 March 2013, by Jerry

    I was trying to deploy a simple Qt OpenCV application; the following is the code:

    Qt .pro file:

    QT       += core gui
    greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
    TARGET = opencvVideoTest
    TEMPLATE = app

    SOURCES += main.cpp\
       mainwindow.cpp

    HEADERS  += mainwindow.h

    FORMS    += mainwindow.ui

    INCLUDEPATH += /usr/local/include

    LIBS += -L/usr/local/lib -lm -lopencv_core -lopencv_highgui -lopencv_imgproc

    mainwindow.cpp:

    #include "mainwindow.h"
    #include "ui_mainwindow.h"
    #include <opencv2></opencv2>core/core.hpp>
    #include <opencv></opencv>cv.h>
    #include <opencv2></opencv2>highgui/highgui.hpp>
    #include <opencv2></opencv2>imgproc/imgproc.hpp>


    MainWindow::MainWindow(QWidget *parent) :
        QMainWindow(parent),
        ui(new Ui::MainWindow)
    {
        ui->setupUi(this);

        cv::Mat img;
        cv::VideoCapture cap("BMWM5.avi");

        if (cap.isOpened()) {
            for (;;) {
                // stop at end-of-stream instead of resizing an empty frame
                if (!cap.read(img))
                    break;
                cv::resize(img, img, cv::Size(604, 480));
                cv::imshow("Opencv", img);
                cv::waitKey(33);
            }
        }
    }

    MainWindow::~MainWindow()
    {
        delete ui;
    }

    The above snippet works fine on the development machine, which by the way has OpenCV 2.4.3, ffmpeg 1.1.2 and Qt 5.0.1. This is what happens when I try to deploy. Running otool before macdeployqt gives the following output:

    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 169.3.0)
    /usr/local/opt/opencv/lib/libopencv_core.2.4.3.dylib (compatibility version 2.4.0, current version 2.4.3)
    /usr/local/opt/opencv/lib/libopencv_highgui.2.4.3.dylib (compatibility version 2.4.0, current version 2.4.3)
    /usr/local/opt/opencv/lib/libopencv_imgproc.2.4.3.dylib (compatibility version 2.4.0, current version 2.4.3)
    /usr/local/Qt5.0.1/5.0.1/clang_64/lib/QtWidgets.framework/Versions/5/QtWidgets (compatibility version 5.0.0, current version 5.0.1)
    /usr/local/Qt5.0.1/5.0.1/clang_64/lib/QtGui.framework/Versions/5/QtGui (compatibility version 5.0.0, current version 5.0.1)
    /usr/local/Qt5.0.1/5.0.1/clang_64/lib/QtCore.framework/Versions/5/QtCore (compatibility version 5.0.0, current version 5.0.1)
    /System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL (compatibility version 1.0.0, current version 1.0.0)
    /System/Library/Frameworks/AGL.framework/Versions/A/AGL (compatibility version 1.0.0, current version 1.0.0)
    /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 56.0.0)

    otool output after running macdeployqt:

    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 169.3.0)
    @executable_path/../Frameworks/libopencv_core.2.4.3.dylib (compatibility version 2.4.0, current version 2.4.3)
    @executable_path/../Frameworks/libopencv_highgui.2.4.3.dylib (compatibility version 2.4.0, current version 2.4.3)
    @executable_path/../Frameworks/libopencv_imgproc.2.4.3.dylib (compatibility version 2.4.0, current version 2.4.3)
    @executable_path/../Frameworks/QtWidgets.framework/Versions/5/QtWidgets (compatibility version 5.0.0, current version 5.0.1)
    @executable_path/../Frameworks/QtGui.framework/Versions/5/QtGui (compatibility version 5.0.0, current version 5.0.1)
    @executable_path/../Frameworks/QtCore.framework/Versions/5/QtCore (compatibility version 5.0.0, current version 5.0.1)
    /System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL (compatibility version 1.0.0, current version 1.0.0)
    /System/Library/Frameworks/AGL.framework/Versions/A/AGL (compatibility version 1.0.0, current version 1.0.0)
    /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 56.0.0)

    But after deploying it, when I try to run the app on the user's machine it crashes, with no error reported either. The exact same thing used to happen on Windows, but when I copied opencv_ffmpeg.dll into the exe folder it worked fine. I tried to apply the same logic here and failed; I then manually added each of the ffmpeg library files and their dependencies into the project folder and still failed. Is there any other solution? Please help. I'm a newbie to Mac deployment. Am I missing or overlooking something?
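
    One plausible explanation, for what it's worth: macdeployqt only follows the executable's direct dependencies, while the ffmpeg dylibs are loaded indirectly by libopencv_highgui, so they may never reach the bundle; note that neither otool listing above shows any libav* entries. Below is a hedged diagnostic sketch; the bundle name opencvVideoTest.app and the libavcodec version suffix are illustrative assumptions, not taken from the question.

    # list the transitive dependencies that macdeployqt does not see
    otool -L opencvVideoTest.app/Contents/Frameworks/libopencv_highgui.2.4.3.dylib

    # copy each missing ffmpeg dylib into the bundle, then repoint its
    # install name inside libopencv_highgui (repeat per libav*/libsw* entry;
    # the exact version suffix depends on your ffmpeg build)
    cp /usr/local/lib/libavcodec.54.dylib opencvVideoTest.app/Contents/Frameworks/
    install_name_tool -change /usr/local/lib/libavcodec.54.dylib \
        @executable_path/../Frameworks/libavcodec.54.dylib \
        opencvVideoTest.app/Contents/Frameworks/libopencv_highgui.2.4.3.dylib

    If that is the cause, this is the Mac analogue of dropping opencv_ffmpeg.dll next to the Windows exe: the dependency has to travel with the application instead of being resolved from /usr/local at run time.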

  • Live video encoding using...?

    27 December 2013, by Basic

    I'm attempting to write a fairly simplistic application that will stream video/audio from a webcam to someone else across the internet (à la Skype, but with more control).

    There seems to be very little useful or relevant information on the subject, and what I can find is largely outdated. From my research so far, x264 seems to be the way to go, as it offers an ultrafast preset designed for exactly this situation (a minimal sketch of those encoder settings follows this question).

    I'm able to turn on the webcam and receive a stream of images. I can also listen on an audio device and get samples.

    Where I'm failing is encoding that information in such a way as to be able to stream it with a minimum of latency (from what I've read, a 200 ms delay is the goal for no obvious lag, including network latency, so let's aim for 100-150 ms).

    Things I've tried

    ffmpeg

    This seems to be the most widely used option for encoding. I've had two real issues using it. Firstly, even using x264 with no look-ahead and the bare minimum buffers for stability, the delay seems to be on the order of 700 ms when piping images in via image2pipe. Secondly, it requires ffmpeg to be installed; being able to do this without an external dependency would be nice.

    VLC

    As with ffmpeg, this requires an external program, which is a negative. Even worse, I can't seem to get the latency under 2 seconds, and it seems to increase over time. I've also only been able to get VLC to capture the camera itself rather than accept a stream of images, which means I don't get a chance to pre-process them.

    DirectShow

    I've seen a number of sites recommending the Windows DirectShow encoders, but I haven't been able to find one that works at anything like real time. In fact, the only one I've managed to get going reliably is a Windows Media codec that has massive latency and a fairly large output size.

    Other considerations

    None of the above address the problem of adding an audio stream to the video. I'm not sure if I should attempt to encode them together or send a separate stream alongside the video.

    In short, I've been Googling for a week or so now and haven't found a decent way to do this. Can someone please point me at a decent example/guide?
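
    For reference, the low-latency x264 configuration mentioned above can be set up directly through the libavcodec API, which also removes the dependency on an external ffmpeg binary. A minimal sketch, assuming an ffmpeg build with libx264 enabled; the resolution, frame rate and bitrate are illustrative, and avcodec_register_all() is assumed to have been called once at startup:

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* Returns an opened H.264 encoder context tuned for low latency,
       or NULL on failure. */
    AVCodecContext *open_low_latency_h264(void)
    {
        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!codec)
            return NULL;

        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->width     = 640;                  /* illustrative values */
        ctx->height    = 480;
        ctx->time_base = (AVRational){1, 30};
        ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
        ctx->bit_rate  = 500000;

        /* "ultrafast" minimises per-frame encoding cost; "zerolatency"
           disables look-ahead and B-frame buffering, the main sources
           of encoder-side delay. */
        av_opt_set(ctx->priv_data, "preset", "ultrafast", 0);
        av_opt_set(ctx->priv_data, "tune",   "zerolatency", 0);

        if (avcodec_open2(ctx, codec, NULL) < 0)
            return NULL;
        return ctx;
    }

    As for the audio question: the usual approach is to encode audio as a separate stream (AAC, for example) and mux both streams into a single container or transport such as MPEG-TS, so the receiver can resynchronise them from their shared timestamps rather than managing two independent connections.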

  • Confusion about PTS in video files and media streams

    11 November 2014, by user2452253

    Is it possible for the PTS of a particular frame in a file to differ from the PTS of the same frame when that same file is being streamed?

    When I read a frame using av_read_frame I store the video stream in an AVStream. After I decode the frame with avcodec_decode_video2, I store the time stamp of that frame in an int64_t using av_frame_get_best_effort_timestamp. Now if the program is getting its input from a file I get a different timestamp from when I stream the input (from the same file) to the program.

    To change the input type I simply change the argv argument from "/path/to/file.mp4" to something like "udp://localhost:1234", then I stream the file with ffmpeg on the command line: "ffmpeg -re -i /path/to/file.mp4 -f mpegts udp://localhost:1234". Could it be that the "-f mpegts" argument changes some characteristics of the media?

    Below is my code (simplified). By reading the ffmpeg mailing list archives I realised that the time_base I'm looking for is in the AVStream, not the AVCodecContext. Instead of av_frame_get_best_effort_timestamp I have also tried packet.pts, but the results don't change.
    I need the timestamps to give me a notion of frame number in a streaming video as it is received.
    I would really appreciate any sort of help.

    //..
    //argv[1]="/file.mp4";
    argv[1]="udp://localhost:7777";
    // define AVFormatContext, AVFrame, etc.
    // register av, avcodec, avformat_network_init(), etc.
    avformat_open_input(&pFormatCtx, argv[1], NULL, NULL);
    avformat_find_stream_info(pFormatCtx, NULL);
    // find the video stream...
    // pointer to the codec context...
    // open codec...
    pFrame = av_frame_alloc();
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        AVStream *strem = pFormatCtx->streams[videoStream];
        if (packet.stream_index == videoStream) {
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
            if (frameFinished) {
                int64_t perts = av_frame_get_best_effort_timestamp(pFrame);
                if (isMyFrame(pFrame)) {
                    cout << perts * av_q2d(strem->time_base) << "\n";
                }
            }
        }
    }
    // free allocated space
    //..
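
    To the question itself: yes, the raw values can legitimately differ. Remuxing with "-f mpegts" rescales all timestamps into MPEG-TS's 90 kHz clock and normally adds a start offset, so the numbers change even though the frames are identical; dividing by the stream time_base alone does not remove that offset. Below is a hedged sketch of one way to recover a stable frame index for both inputs, subtracting the stream's start_time; the constant-frame-rate assumption is mine, not from the question.

    // drop-in replacement for the body of the if(frameFinished) block
    int64_t pts = av_frame_get_best_effort_timestamp(pFrame);

    // MPEG-TS input usually starts at a non-zero timestamp; shift it out
    // so the first frame maps to t = 0 for file and UDP input alike
    int64_t start = (strem->start_time != AV_NOPTS_VALUE) ? strem->start_time : 0;
    double seconds = (pts - start) * av_q2d(strem->time_base);

    // approximate frame number, assuming a constant frame rate
    double fps = av_q2d(strem->avg_frame_rate);
    int64_t frame_number = (int64_t)(seconds * fps + 0.5);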