

Other articles (79)

  • Customising categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as the equivalent of a rubrique (section).
    For a document of type category, the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration section that you can specify the (...)

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work in Internet Explorer
    On Internet Explorer (at least 8 and 7), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of that Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly: /** * GeSHi (C) 2004 - 2007 Nigel McNie, (...)

  • Authorisations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (5958)

  • Plex DVR File Rename on FFMPEG Encoding

    11 July 2021, by Brent Johnson

    I'm currently using a bash shell script to encode all of my Plex DVR recordings to H.264 using FFMPEG. I'm using this little for loop I found online to encode all of the files in a single directory:

    for i in *.ts;
    do echo "$i" && ffmpeg -i "$i" -vf yadif -c:v libx264 -preset veryslow -crf 22 -y "/mnt/d/Video/DVR Stash/Seinfeld/${i%.*}.mp4";
    done

    This has served its purpose well, but in the process I would also like to rename each file to my preferred naming convention, so that the original filename Seinfeld (1989) - S01E01 - Pilot.ts becomes Seinfeld S01 E01 Pilot.mp4 for the encoded file. I'm already changing the .ts extension to .mp4 with a parameter expansion, but I'm no expert with regex, especially in the bash shell, so any help would be appreciated.
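
    A minimal sketch of one way the rename could be done, assuming every recording follows the Show (Year) - SxxEyy - Title.ts pattern shown above (the regex, the skip branch, and the variable names are illustrative; only the ffmpeg options and target directory are taken from the loop above):

    re='^(.+) \(([0-9]{4})\) - S([0-9]{2})E([0-9]{2}) - (.+)\.ts$'
    for i in *.ts; do
        if [[ "$i" =~ $re ]]; then
            # BASH_REMATCH: 1=show, 2=year, 3=season, 4=episode, 5=title
            out="${BASH_REMATCH[1]} S${BASH_REMATCH[3]} E${BASH_REMATCH[4]} ${BASH_REMATCH[5]}.mp4"
            echo "$i -> $out"
            ffmpeg -i "$i" -vf yadif -c:v libx264 -preset veryslow -crf 22 -y \
                "/mnt/d/Video/DVR Stash/Seinfeld/$out"
        else
            echo "Skipping $i (name does not match the expected pattern)"
        fi
    done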

    


    For anyone interested in my Plex setup: I'm using an old machine running Linux Mint as my dedicated DVR and accessing it over the network from my daily driver, which is a gaming machine, so there's more processing power for video encodes. While that one is a Windows machine, I'm using the Ubuntu bash under WSL2 to run my script, as I prefer it over the Windows command prompt or PowerShell (my day job is as a web developer on a company-issued Mac). So here's a sample of my code for anyone that might consider doing something similar.

    


    if [[ -d "/mnt/sambashare/Seinfeld (1989)" ]]
then
    cd "/mnt/sambashare/Seinfeld (1989)"
    echo "Seinfeld"
    for dir in */; do
        echo "$dir/"
        cd "$dir"
        for i in *.ts;
            do echo "$i" && ffmpeg -i "$i" -vf yadif -c:v libx264 -preset veryslow -crf 22 -y "/mnt/d/Video/DVR Stash/Seinfeld/${i%.*}.mp4";
        done
        cd ..
    done
fi


    


    While I've considered writing a for loop over all shows, for now I'm handling each show individually like this, as there are a few shows I have custom encoding settings for.
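
    A hypothetical sketch of what such an all-shows loop might look like, with a case statement holding per-show overrides; the directory layout mirrors the script above, but the default CRF value and the non-Seinfeld branch are made up for illustration:

    shopt -s nullglob   # skip shows or seasons with no .ts files
    for showdir in /mnt/sambashare/*/; do
        show="$(basename "$showdir")"
        case "$show" in
            "Seinfeld (1989)") crf=22 ;;   # settings from the script above
            *)                 crf=23 ;;   # hypothetical default for everything else
        esac
        mkdir -p "/mnt/d/Video/DVR Stash/$show"
        for i in "$showdir"*/*.ts; do
            ffmpeg -i "$i" -vf yadif -c:v libx264 -preset veryslow -crf "$crf" -y \
                "/mnt/d/Video/DVR Stash/$show/$(basename "${i%.*}").mp4"
        done
    done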

    


  • OpenCV 4.5.2 takes a long time (>100ms) to retrieve a single frame from a webcam, C++ on Windows 10

    9 June 2021, by Mustard Tiger

    I've been having a tough time getting my webcam working quickly with OpenCV. Frames take a very long time to read (a recorded average of 124 ms across 500 frames). I've tried on three different computers (running Windows 10) with a Logitech C922 webcam. The most recent machine I tested on has a Ryzen 9 3950X with 32 GB of RAM; no lack of power.

    


    Here is the code:

    


    cv::VideoCapture cap = cv::VideoCapture(m_cameraNum);

    // Check if camera opened successfully
    if (!cap.isOpened())
    {
        m_logger->critical("Error opening video stream or file\n\r");
        return -1;
    }

    bool result = true;
    result &= cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    result &= cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

    bool ready = false;
    std::vector<std::string> timeLog;
    timeLog.reserve(50000);
    int i = 0;

    while (i < 500)
    {
        auto start = std::chrono::system_clock::now();

        cv::Mat img;
        ready = cap.read(img);

        // If the frame is empty, break immediately
        if (!ready)
        {
            timeLog.push_back("continue");
            continue;
        }

        i++;
        auto end = std::chrono::system_clock::now();
        timeLog.push_back(std::to_string(
            std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()));
    }

    for (auto& entry : timeLog)
        m_logger->info(entry);

    cap.release();
    return 0;


    Notice that I write the elapsed time to a log file at the end of execution. The average time is 124 ms for both debug and release builds, and there was not one instance of "continue" after half a dozen runs.


    It doesn't matter whether I use USB 2 or USB 3 ports (the camera is USB 2) or whether I run a debug or a release build; the log file shows anywhere from 110 ms to 130 ms per frame. The camera works fine in other apps; OBS can get a smooth 1080p@30fps or 720p@60fps.


    Stepping through the debugger and doing a lot of Googling, I've learned the following about my system:


    • The backend chosen by default is DSHOW. GStreamer and FFMPEG are also available.

    • DSHOW uses FFMPEG somehow (it needs the FFMPEG DLL), but I cannot use FFMPEG directly through OpenCV. Attempting to use cv::VideoCapture(m_cameraNum, cv::CAP_FFMPEG) always fails. It seems like OpenCV's interface to FFMPEG is only capable of opening video files.

    • Microsoft really screwed up camera devices in Windows a few years back; I'm not sure whether this is related to my problem.

    Here's a short list of the fixes I have tried, most taken from older SO posts:

    • result &= cap.set(cv::CAP_PROP_FRAME_COUNT, 30); // Returns false, does nothing

    • result &= cap.set(cv::CAP_PROP_CONVERT_RGB, 0); // Returns true, does nothing

    • result &= cap.set(cv::CAP_PROP_MODE, cv::VideoWriter::fourcc('M', 'J', 'P', 'G')); // Returns false, does nothing

    • Set the registry key from http://alax.info/blog/1693 that should disable the new Windows camera server.

    • Updated from 4.5.0 to 4.5.2, no change.

    • Asked Device Manager to find a newer driver; no newer driver was found.

    I'm out of ideas. Any help?
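
    Not from the original post, and not a confirmed fix, but for reference here is a minimal sketch of the variant commonly suggested for slow DirectShow/MSMF webcam reads: open the device with an explicit backend and request MJPG frames through CAP_PROP_FOURCC (rather than CAP_PROP_MODE) before setting the resolution and frame rate:

    #include <opencv2/opencv.hpp>

    // Standalone illustration; the camera index 0 stands in for m_cameraNum.
    int main()
    {
        // Explicitly pick the DirectShow backend (cv::CAP_MSMF is the other common Windows choice).
        cv::VideoCapture cap(0, cv::CAP_DSHOW);
        if (!cap.isOpened())
            return -1;

        // Ask the driver for compressed MJPG frames instead of raw YUY2, then set size and FPS.
        cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
        cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
        cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
        cap.set(cv::CAP_PROP_FPS, 60);

        // Read a batch of frames; timing them would mirror the loop in the question.
        cv::Mat img;
        for (int i = 0; i < 100; ++i)
            if (!cap.read(img))
                break;

        cap.release();
        return 0;
    }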


  • ffmpeg 4.4 problem with image2 combined with stream loop -1 and overlay

    14 May 2021, by codeSam

    I have a Python app, and my code to stream with ffmpeg is:


    'ffmpeg',
    '-thread_queue_size', '1024',
    '-i', 'rtsp://...',
    '-f', 'image2',
    '-stream_loop', '-1',
    '-i', 'image.png',
    '-filter_complex', 'overlay=(main_w-overlay_w)/2:main_h*0.1-overlay_h',
    '-acodec', 'aac',
    '-ar', '44100',
    '-ab', '128k',
    '-f', 'flv',
    '-g', '30',
    '-vcodec', 'libx264',
    '-preset', 'ultrafast',
    '-crf', '30',
    'rtmp://...'
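
    For readability, the same arguments written as a single shell command line (a plain transcription of the list above, keeping the option order; the rtsp:// and rtmp:// URLs are elided exactly as in the original):

    ffmpeg -thread_queue_size 1024 -i 'rtsp://...' \
           -f image2 -stream_loop -1 -i image.png \
           -filter_complex 'overlay=(main_w-overlay_w)/2:main_h*0.1-overlay_h' \
           -acodec aac -ar 44100 -ab 128k \
           -f flv -g 30 -vcodec libx264 -preset ultrafast -crf 30 \
           'rtmp://...'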


    This works fine on ffmpeg 4.3.2, but after ffmpeg was updated to 4.4 the stream doesn't start at all. If I change -stream_loop -1 to -loop 1 the stream starts, but since I want to update image.png every 10 seconds or so, the overlay stops being updated on the stream. That is probably because the new image.png is being saved at the same time it is being read; -stream_loop doesn't mind this, as far as I have understood.
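
    As an aside on that last point, a hypothetical sketch of how a partially written file could be avoided on the producer side: write the new overlay to a temporary name and rename it into place, since a rename on the same filesystem is atomic and a reader then sees either the old or the new file, never a half-written one (generate_overlay.py is a made-up stand-in for however image.png is actually produced):

    python generate_overlay.py --out image.png.tmp   # hypothetical producer of the overlay
    mv image.png.tmp image.png                       # atomic replace on the same filesystem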


    Also, if I delay the image.png input with -ss -5, the stream starts with the main video from rtsp://..., but it stops when image.png starts to be read.


    Also, if I remove -f image2 from the code, the stream starts fine, but image.png is not updated on the stream.


    It would be easy to downgrade ffmpeg to the older 4.3.2 version, but that is not possible, as I want this to run on an Android device in Termux and Termux only has the latest ffmpeg 4.4 available.


    Any ideas how to make this work on ffmpeg 4.4?


    Here is what is printed out when I run the command above; the stream does not start. This is in Termux on an Android device. The stream does not start with ffmpeg 4.4 on my MacBook Pro either, whereas 4.3.2 is fine there.


    ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
      built with Android (6454773 based on r365631c2) clang version 9.0.8 
    (https://android.googlesource.com/toolchain/llvm-project 98c855489587874b2a325e7a516b99d838599c6f) (based on LLVM 9.0.8svn)
      configuration: --arch=aarch64 --as=aarch64-linux-android-clang --cc=aarch64-linux-android-clang
      --cxx=aarch64-linux-android-clang++ --cross-prefix=aarch64-linux-android- --disable-indevs 
      --disable-outdevs --enable-indev=lavfi --disable-static --disable-symver --enable-cross-compile 
      --enable-gnutls --enable-gpl --enable-libass --enable-libdav1d --enable-libmp3lame 
      --enable-libfreetype --enable-libvorbis --enable-libopus --enable-libx264 --enable-libx265 
      --enable-libxvid --enable-libvpx --enable-shared --enable-libsoxr --enable-libvidstab 
      --enable-libwebp --prefix=/data/data/com.termux/files/usr --target-os=android 
      --extra-libs=-landroid-glob --enable-neon
      libavutil      56. 70.100 / 56. 70.100
      libavcodec     58.134.100 / 58.134.100
      libavformat    58. 76.100 / 58. 76.100
      libavdevice    58. 13.100 / 58. 13.100
      libavfilter     7.110.100 /  7.110.100
      libswscale      5.  9.100 /  5.  9.100
      libswresample   3.  9.100 /  3.  9.100
      libpostproc    55.  9.100 / 55.  9.100
    [udp @ 0x7ebe823840] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)
    [udp @ 0x7ebe8238e0] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)
    [udp @ 0x7ebe823a20] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)
    [udp @ 0x7ebe823ac0] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)

    Guessed Channel Layout for Input Stream #0.1 : mono
    Input #0, rtsp, from 'rtsp://...':
      Metadata:
        title           : Session streamed by "TP-LINK RTSP Server"
        comment         : stream1
      Duration: N/A, start: 0.000000, bitrate: N/A
      Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080, 15 fps, 15 tbr, 90k tbn, 30 tbc
      Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
    Input #1, image2, from 'image.png':
      Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
      Stream #1:0: Video: png, rgba(pc), 1152x87 [SAR 8504:8504 DAR 384:29], 25 fps, 25 tbr, 25 tbn, 25 tbc
    Stream mapping:
      Stream #0:0 (h264) -> overlay:main (graph 0)
      Stream #1:0 (png) -> overlay:overlay (graph 0)
      overlay (graph 0) -> Stream #0:0 (libx264)
      Stream #0:1 -> #0:1 (pcm_alaw (native) -> aac (native))
    Press [q] to stop, [?] for help
