Other articles (34)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can, of course, add yours using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The HTML5 player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (4672)

  • How to add network caching to ffmpeg

    10 November 2018, by JohnnyF

    I have a network with random latency of 200-400 ms per message.

    When I use VLC, I add this option on the receiver node:

    --network-caching=1000

    This adds a 1-second delay on the receiver, which absorbs the random network delays.

    How can I do the same in FFmpeg? (Searching only turns up solutions for achieving LOW delays.)
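
    One possible direction, offered as a hedged sketch rather than a definitive answer: libavformat exposes demuxer-side buffering options such as max_delay (in microseconds) and, for RTSP sources, reorder_queue_size, which play a role roughly similar to VLC's --network-caching. The snippet below assumes an RTSP/UDP receiver built on libavformat; the function name and the 1-second figure are illustrative. ffplay generally accepts the same libavformat options on its command line (e.g. -max_delay 1000000), though that should be verified against your build.

    /* Hedged sketch: open a network stream with extra demuxer buffering.
     * Assumes an RTSP/UDP source; URL and values are examples only. */
    #include <libavformat/avformat.h>

    int open_with_buffering(AVFormatContext **fmt_ctx, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;

        avformat_network_init();

        /* max_delay is expressed in microseconds: allow ~1 s of demuxer delay,
         * mirroring VLC's --network-caching=1000 (milliseconds). */
        av_dict_set(&opts, "max_delay", "1000000", 0);

        /* RTSP-specific option (assumption): buffer more packets to absorb
         * reordering on a jittery link. Ignored by other demuxers. */
        av_dict_set(&opts, "reorder_queue_size", "1000", 0);

        /* *fmt_ctx should be NULL (or come from avformat_alloc_context()). */
        ret = avformat_open_input(fmt_ctx, url, NULL, &opts);
        av_dict_free(&opts);
        return ret; /* 0 on success, negative AVERROR on failure */
    }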

  • How does OpenSL ES on Android control the audio playback speed?

    6 December 2013, by user1882379

    I use the NDK + FFmpeg on Android to decode a video file and pass the decoded PCM data to OpenSL ES for audio playback. I register OpenSL ES with this PCM format:

     SLDataFormat_PCM format_pcm;
     format_pcm.formatType    = SL_DATAFORMAT_PCM;
     format_pcm.numChannels   = channel;
     format_pcm.samplesPerSec = rate * 1000;   // OpenSL ES expects millihertz
     format_pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
     format_pcm.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;

    I also convert the decoded PCM data to AV_SAMPLE_FMT_S16 with swr_convert(...), but the audio plays back far too fast. With SDL the playback speed is normal. Does SDL control the playback speed while OpenSL ES does not, or am I using OpenSL ES incorrectly (i.e. does the application have to control the rate itself)?
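
    One common cause of this symptom, offered here as a hedged sketch rather than a diagnosis of the code above: the output of swr_convert must match the format registered with OpenSL ES exactly (sample rate, channel count, S16 samples). Note that samplesPerSec is in millihertz, so rate * 1000 is correct as long as the resampler really outputs rate Hz. The helper below (make_resampler is a hypothetical name, FFmpeg-2013-era API) shows one way to configure libswresample so that its output matches the SLDataFormat_PCM fields.

    /* Hedged sketch: make libswresample's output match the OpenSL ES PCM format
     * above (S16 interleaved, `out_channels` channels, `out_rate` Hz). */
    #include <libswresample/swresample.h>
    #include <libavutil/channel_layout.h>
    #include <libavcodec/avcodec.h>

    SwrContext *make_resampler(AVCodecContext *dec, int out_channels, int out_rate)
    {
        SwrContext *swr = swr_alloc_set_opts(
            NULL,
            av_get_default_channel_layout(out_channels), /* match format_pcm.numChannels */
            AV_SAMPLE_FMT_S16,                           /* match bitsPerSample = 16     */
            out_rate,                                    /* match samplesPerSec / 1000   */
            av_get_default_channel_layout(dec->channels),
            dec->sample_fmt,
            dec->sample_rate,
            0, NULL);
        if (swr && swr_init(swr) < 0) {
            swr_free(&swr);
            return NULL;
        }
        return swr;
    }

    Another frequent cause of too-fast playback (again an assumption, not a diagnosis) is enqueueing the wrong byte count: the size passed to the OpenSL ES buffer queue should be samples_out * out_channels * 2 bytes, where samples_out is the value returned by swr_convert, not the input frame's nb_samples.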

  • OpenCV's VideoCapture::open Video Source Dialog

    13 November 2015, by swtdrgn

    In my current project, when I call VideoCapture::open(camera device index) while the camera is in use by another program, a Video Source dialog appears, and open returns true when I select a device that is already in use.

    However, in my previous, experimental project, when I called VideoCapture::open(camera device index), this dialog did not appear.

    I want to know what causes the Video Source dialog to appear and why the program behaves differently from the experimental project.

    This is the source code of the experimental project:

    // Headers and declarations the snippet relies on (not included in the original post)
    #include <opencv2/opencv.hpp>
    #include <boost/thread.hpp>
    #include <boost/date_time/posix_time/posix_time.hpp>
    #include <iostream>

    using namespace cv;
    using namespace std;
    using namespace boost::posix_time;

    // Hypothetical signature: captureFunc is referenced below but defined
    // elsewhere in the project (not shown in the post).
    void captureFunc(Mat *frame, VideoCapture *capture);

    int main (int argc, char *argv[])
    {

       //vars
       time_duration td, td1;
       ptime nextFrameTimestamp, currentFrameTimestamp, initialLoopTimestamp, finalLoopTimestamp;
       int delayFound = 0;
       int totalDelay= 0;

       // initialize capture on default source
       VideoCapture capture;
       std::cout << "capture.open(0): " << capture.open(0) << std::endl;
       std::cout << "NOOO" << std::endl;
       namedWindow("video", 1);

       // set framerate to record and capture at
       int framerate = 15;

       // Get the properties from the camera
       double width = capture.get(CV_CAP_PROP_FRAME_WIDTH);
       double height = capture.get(CV_CAP_PROP_FRAME_HEIGHT);

       // print camera frame size
       //cout << "Camera properties\n";
       //cout << "width = " << width << endl <<"height = "<< height << endl;

       // Create a matrix to keep the retrieved frame
       Mat frame;

       // Create the video writer
       VideoWriter video("capture.avi",0, framerate, cvSize((int)width,(int)height) );

       // initialize initial timestamps
       nextFrameTimestamp = microsec_clock::local_time();
       currentFrameTimestamp = nextFrameTimestamp;
       td = (currentFrameTimestamp - nextFrameTimestamp);

       // start thread to begin capture and populate Mat frame
       boost::thread captureThread(captureFunc, &frame, &capture);
       // loop infinitely
       for(bool q=true;q;)
       {
           if(frame.empty()){continue;}
           //if(cvWaitKey( 5 ) == 'q'){ q=false; }
           // wait for X microseconds until 1second/framerate time has passed after previous frame write
           while(td.total_microseconds() < 1000000/framerate){
               //determine current elapsed time
               currentFrameTimestamp = microsec_clock::local_time();
               td = (currentFrameTimestamp - nextFrameTimestamp);
               if(cvWaitKey( 5 ) == 'q'){
                   std::cout << "B" << std::endl;
                   q=false;
                   boost::posix_time::time_duration timeout = boost::posix_time::milliseconds(0);
                   captureThread.timed_join(timeout);
                   break;
               }
           }

           // determine time at start of write
           initialLoopTimestamp = microsec_clock::local_time();

           // Save frame to video
           video << frame;
           imshow("video", frame);

           //write previous and current frame timestamp to console
           cout << nextFrameTimestamp << " " << currentFrameTimestamp << " ";

           // add 1second/framerate time for next loop pause
           nextFrameTimestamp = nextFrameTimestamp + microsec(1000000/framerate);

           // reset time_duration so while loop engages
           td = (currentFrameTimestamp - nextFrameTimestamp);

           //determine and print out delay in ms, should be less than 1000/FPS
           //occasionally, if delay is larger than said value, correction will occur
           //if delay is consistently larger than said value, then CPU is not powerful
           // enough to capture/decompress/record/compress that fast.
           finalLoopTimestamp = microsec_clock::local_time();
           td1 = (finalLoopTimestamp - initialLoopTimestamp);
           delayFound = td1.total_milliseconds();
           cout << delayFound << endl;

           //output will be in following format
           //[TIMESTAMP OF PREVIOUS FRAME] [TIMESTAMP OF NEW FRAME] [TIME DELAY OF WRITING]
           if(!q || cvWaitKey( 5 ) == 'q'){
               std::cout << "C" << std::endl;
               q=false;
               boost::posix_time::time_duration timeout = boost::posix_time::milliseconds(0);
               captureThread.timed_join(timeout);
               break;
           }
       }

       // Exit
       return 0;
    }

    [screenshot: the Video Source dialog]
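
    A hedged guess at the cause, with a sketch rather than a confirmed fix: on Windows, the Video Source dialog belongs to the legacy VFW capture backend, so the two projects probably end up on different capture backends. In OpenCV 2.x the backend can be forced explicitly by adding its domain constant to the device index, which normally avoids the VFW dialog.

    // Hedged sketch, assuming Windows and OpenCV 2.x: force the DirectShow
    // backend instead of VFW (the backend that pops the Video Source dialog).
    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        cv::VideoCapture capture;

        // CV_CAP_DSHOW (700) selects DirectShow; CV_CAP_VFW (200) would select
        // Video for Windows. Plain open(0) lets OpenCV pick a default backend.
        if (!capture.open(0 + CV_CAP_DSHOW)) {
            std::cerr << "Failed to open camera via DirectShow" << std::endl;
            return 1;
        }

        cv::Mat frame;
        capture >> frame;
        std::cout << "Got a " << frame.cols << "x" << frame.rows << " frame" << std::endl;
        return 0;
    }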