
Other articles (87)

  • Submitting bugs and patches

    10 April 2011

    Unfortunately, no piece of software is ever perfect...
    If you think you have found a bug, report it in our ticket system, taking care to include the relevant information: the browser type and exact version with which you encountered the anomaly; as precise a description as possible of the problem; if possible, the steps to reproduce it; and a link to the site/page in question.
    If you think you have fixed the bug yourself (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Support for all media types

    10 April 2011

    Unlike many other software packages and modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

On other sites (7133)

  • OpenCV 4.5.2 takes a long time (>100ms) to retrieve a single frame from a webcam, C++ on Windows 10

    9 June 2021, by Mustard Tiger

    I've been having a tough time getting my webcam working quickly with OpenCV. Frames take a very long time to read (a recorded average of 124 ms across 500 frames). I've tried on three different computers (running Windows 10) with a Logitech C922 webcam. The most recent machine I tested on has a Ryzen 9 3950X with 32 GB of RAM; there is no lack of power.

    


    Here is the code:

    


    cv::VideoCapture cap = cv::VideoCapture(m_cameraNum);

    // Check if camera opened successfully
    if (!cap.isOpened())
    {
        m_logger->critical("Error opening video stream or file\n\r");
        return -1;
    }

    bool result = true;
    result &= cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    result &= cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

    bool ready = false;
    std::vector<std::string> timeLog;
    timeLog.reserve(50000);
    int i = 0;

    while (i < 500)
    {
        auto start = std::chrono::system_clock::now();

        cv::Mat img;
        ready = cap.read(img);

        // If the frame is empty, break immediately
        if (!ready)
        {
            timeLog.push_back("continue");
            continue;
        }

        i++;
        auto end = std::chrono::system_clock::now();
        timeLog.push_back(std::to_string(std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()));
    }

    for (auto& entry : timeLog)
        m_logger->info(entry);

    cap.release();
    return 0;


    Notice that I write the elapsed time to a log file at the end of execution. The average time is 124 ms in both debug and release, and there was not one instance of "continue" across half a dozen runs.


    It doesn't matter whether I use USB 2 or USB 3 ports (the camera is USB 2) or whether I run a debug or a release build; the log file shows anywhere from 110 ms to 130 ms per frame. The camera works fine in other apps: OBS can get a smooth 1080p@30fps or 720p@60fps.


    Stepping through the debugger and doing a lot of Googling, I've learned the following about my system:

    • The backend chosen by default is DSHOW; GStreamer and FFMPEG are also available (see the sketch after this list for confirming the selection at runtime).

    • DSHOW uses FFMPEG somehow (it needs the FFMPEG DLL), but I cannot use FFMPEG directly through OpenCV: attempting cv::VideoCapture(m_cameraNum, cv::CAP_FFMPEG) always fails. It seems OpenCV's interface to FFMPEG is only capable of opening video files.

    • Microsoft really screwed up camera devices in Windows a few years back; I'm not sure if this is related to my problem.
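
    For reference, the backend actually selected can be confirmed at runtime; a minimal sketch (the explicit CAP_MSMF open is only an experiment I have not timed):

    #include <opencv2/videoio.hpp>
    #include <iostream>

    // Print which capture backend OpenCV picks for a given device index.
    void reportBackend(int cameraNum)
    {
        // Default selection (DSHOW on this machine, according to the debugger).
        cv::VideoCapture capDefault(cameraNum);
        if (capDefault.isOpened())
            std::cout << "default backend: " << capDefault.getBackendName() << std::endl;

        // Force Media Foundation instead of DirectShow, purely as an experiment.
        cv::VideoCapture capMsmf(cameraNum, cv::CAP_MSMF);
        if (capMsmf.isOpened())
            std::cout << "MSMF backend:    " << capMsmf.getBackendName() << std::endl;
    }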

    Here's a short list of the fixes I have tried, most taken from older SO posts:

    • result &= cap.set(cv::CAP_PROP_FRAME_COUNT, 30); // Returns false, does nothing

    • result &= cap.set(cv::CAP_PROP_CONVERT_RGB, 0); // Returns true, does nothing

    • result &= cap.set(cv::CAP_PROP_MODE, cv::VideoWriter::fourcc('M', 'J', 'P', 'G')); // Returns false, does nothing

    • Set the registry key from http://alax.info/blog/1693 that should disable the new Windows camera server.

    • Updated from 4.5.0 to 4.5.2, no change.

    • Asked Device Manager to find a newer driver; no newer driver found.

    I'm out of ideas. Any help?
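
    For completeness, one variant not in the list above would be to request MJPG frames explicitly through CAP_PROP_FOURCC together with a target frame rate. This is only a sketch under the assumption that the slow reads come from an uncompressed default format; I have not verified that the backend honors these properties:

    // Sketch: ask the DSHOW backend for MJPG and an explicit frame rate before reading.
    // m_cameraNum is the same member variable used in the code above.
    cv::VideoCapture cap(m_cameraNum, cv::CAP_DSHOW);
    if (!cap.isOpened())
        return -1;

    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
    cap.set(cv::CAP_PROP_FPS, 60);

    cv::Mat img;
    cap.read(img);   // time this the same way as in the loop above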


  • avcodec_decode_video2 fails to decode after frame resolution change

    10 April 2021, by Krzysztof Kansy

    I'm using ffmpeg in an Android project via JNI to decode a real-time H264 video stream. On the Java side I'm only sending the byte arrays into the native module. The native code runs a loop and checks the data buffers for new data to decode. Each data chunk is processed with:


    int bytesLeft = data->GetSize();
    int paserLength = 0;
    int decodeDataLength = 0;
    int gotPicture = 0;
    const uint8_t* buffer = data->GetData();
    while (bytesLeft > 0) {
        AVPacket packet;
        av_init_packet(&packet);
        paserLength = av_parser_parse2(_codecPaser, _codecCtx, &packet.data, &packet.size,
                                       buffer, bytesLeft, AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
        bytesLeft -= paserLength;
        buffer += paserLength;

        if (packet.size > 0) {
            decodeDataLength = avcodec_decode_video2(_codecCtx, _frame, &gotPicture, &packet);
        }
        else {
            break;
        }
        av_free_packet(&packet);
    }

    if (gotPicture) {
        // pass the frame to rendering
    }


    The system works pretty well until the incoming video's resolution changes. I need to handle transitions between 4:3 and 16:9 aspect ratios. The AVCodecContext is configured as follows:


    _codecCtx->flags2 |= CODEC_FLAG2_FAST;
    _codecCtx->thread_count = 2;
    _codecCtx->thread_type = FF_THREAD_FRAME;

    if (_codec->capabilities & CODEC_FLAG_LOW_DELAY) {
        _codecCtx->flags |= CODEC_FLAG_LOW_DELAY;
    }


    I wasn't able to continue decoding new frames after a video resolution change. The got_picture_ptr flag that avcodec_decode_video2 sets when a whole frame is available was never true after that.
    This ticket made me wonder whether the issue is connected with multithreading. The only useful thing I've noticed is that when I change thread_type to FF_THREAD_SLICE the decoder is not always blocked after a resolution change; about half of my attempts were successful. Switching to single-threaded processing is not an option, as I need more computing power; setting the context to one thread does not solve the problem anyway, and the decoder then cannot keep up with the incoming data.
    Everything works well after an app restart.
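
    Since frame threading delays output by roughly thread_count frames, one check would be to drain the decoder with empty packets and see whether the missing pictures are merely buffered. This is a minimal sketch only; I have not confirmed it helps here:

    // Drain frames delayed by frame threading by feeding empty packets
    // until the decoder stops returning pictures. Reuses _codecCtx and _frame
    // from the loop above.
    AVPacket flushPacket;
    av_init_packet(&flushPacket);
    flushPacket.data = NULL;
    flushPacket.size = 0;

    int gotPicture = 0;
    do {
        avcodec_decode_video2(_codecCtx, _frame, &gotPicture, &flushPacket);
        if (gotPicture) {
            // pass the frame to rendering
        }
    } while (gotPicture);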


    I can only think of one workaround (it doesn't really solve the problem): unloading and reloading the whole library after a stream resolution change (e.g. as mentioned here). I don't think it's a good approach, though; it will probably introduce other bugs and take a lot of time (from the user's viewpoint).
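
    A lighter-weight variant of the same idea, sketched under the assumption that rebuilding only the codec context (rather than reloading the whole library) is enough once a dimension change is detected:

    #include <libavcodec/avcodec.h>

    // Sketch: tear down and recreate just the decoder when the parsed
    // dimensions no longer match the context. Hypothetical helper;
    // error handling omitted.
    void reinitDecoder(AVCodecContext** codecCtx, AVCodec* codec)
    {
        avcodec_free_context(codecCtx);            // frees and nulls *codecCtx
        *codecCtx = avcodec_alloc_context3(codec);
        (*codecCtx)->thread_count = 2;
        (*codecCtx)->thread_type = FF_THREAD_FRAME;
        avcodec_open2(*codecCtx, codec, NULL);
    }

    // In the decode loop, after av_parser_parse2():
    // if (_codecPaser->width && (_codecPaser->width  != _codecCtx->width ||
    //                            _codecPaser->height != _codecCtx->height))
    //     reinitDecoder(&_codecCtx, _codec);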


    Is it possible to fix this issue?


    EDIT:
    I've dumped the stream data that is passed to the decoding pipeline. I changed the resolution a few times while the stream was being captured. Playing it back with ffplay showed that, at the moment the resolution changed and the preview in my application froze, ffplay managed to continue, although its preview was glitchy for a second or so. You can see the full ffplay log here. In this case the video preview stopped when I changed the resolution to 960x720 for the second time (Reinit context to 960x720, pix_fmt: yuv420p in the log).


  • VP8 for Real-time Video Applications

    15 February 2011, by noreply@blogger.com (John Luther)

    With the growing interest in videoconferencing on the web platform, it’s a good time to explore the features of VP8 that make it an exceptionally good codec for real-time applications like videoconferencing.

    VP8 Design History & Features

    Real-time applications were a primary use case when VP8 was designed. The VP8 encoder has features specifically engineered to overcome the challenges inherent in compressing and transmitting real-time video data.

    • Processor-adaptive encoding. 16 encoder complexity levels automatically (or manually) adjust encoder features such as motion search strategy, quantizer optimizations, and loop filtering strength.
    • Encoder can be configured to use a target percentage of the host CPU.
    • Ability to measure the time taken to encode each frame and adjust encoder complexity dynamically to keep the encoding time per frame constant (see the configuration sketch after this list).
    • Robust error recovery (packet retransmission, forward error correction, recovery frame/new keyframe requests)
    • Temporal scalability (i.e., a single video bitstream that can degrade as needed depending on a participant’s available bandwidth)
    • Highly efficient decoding performance on low-power devices. Conventional video technology has grown to a state of complexity where dedicated hardware chips are needed to make it work well. With VP8, software-based solutions have proven to meet customer needs without requiring specialized hardware.
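
    As a rough illustration of how these knobs surface in libvpx, a minimal real-time encoder setup might look like the sketch below; the values are placeholders rather than recommendations from the WebM team:

    #include <vpx/vpx_encoder.h>
    #include <vpx/vp8cx.h>

    // Sketch: configure a VP8 encoder for low-latency, error-resilient use.
    static int init_realtime_vp8(vpx_codec_ctx_t* encoder, unsigned int width, unsigned int height)
    {
        vpx_codec_enc_cfg_t cfg;
        if (vpx_codec_enc_config_default(vpx_codec_vp8_cx(), &cfg, 0) != VPX_CODEC_OK)
            return -1;

        cfg.g_w = width;
        cfg.g_h = height;
        cfg.rc_target_bitrate = 500;   // kbit/s
        cfg.g_error_resilient = 1;     // tolerate packet loss
        cfg.g_lag_in_frames = 0;       // no lookahead, keep latency low

        if (vpx_codec_enc_init(encoder, vpx_codec_vp8_cx(), &cfg, 0) != VPX_CODEC_OK)
            return -1;

        // Complexity level ("cpu-used"): higher values trade quality for speed.
        vpx_codec_control(encoder, VP8E_SET_CPUUSED, 8);
        return 0;
    }

    // Per frame, the VPX_DL_REALTIME deadline asks the encoder to stay real-time:
    // vpx_codec_encode(encoder, raw_image, pts, 1 /* duration */, 0 /* flags */, VPX_DL_REALTIME);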

    For more information about the real-time video features in VP8, see the slide presentation by WebM Project engineer Paul Wilkins (PDF file).

    Commercially Available Products

    Millions of people around the world have been using VP7/8 for video chat for years. VP8 is deployed in some of today’s most popular consumer videoconferencing applications, including Skype (group video calling), Sightspeed, ooVoo and Logitech Vid. All of these vendors are active WebM project supporters. VP8’s predecessor, VP7, has been used in Skype video calling since 2005 and is supported in the new Skype app for iPhone. Other real-time VP8 implementations are coming soon, including ooVoo, and VP8 will play a leading role in Google’s plans for real-time applications on the web platform.

    Real-time applications will be extremely important as the web platform matures. The WebM community has made significant improvements in VP8 for real-time use cases since our launch and will continue to do so in the future.

    John Luther is Product Manager of the WebM Project.