
Other articles (97)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to perform other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it has been activated, MediaSPIP init automatically applies a preconfiguration so that the new feature is operational right away. No configuration step is therefore required.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, a Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (12085)

  • Bash removes leading / in path [duplicate]

    1 August 2020, by Keizer

    I have a simple script to resize images:

    


    #!/usr/bin/bash

right_size='/home/scripts/images_correct_size'
user=$(whoami)

case $user in
    'sonarr')
        folder='/var/lib/sonarr/MediaCover'
        ;;
    'radarr')
        folder='/var/lib/radarr/MediaCover'
        ;;
    'jellyfin')
        folder='/var/lib/jellyfin/metadata'
        ;;
    *)
        echo "This user cannot run the script."
        ;;
esac

echo "Running script as $user."

find $folder -type f -name "*.jpg" | while IFS= read -r line
do
    echo "Current file: $line"
    # Grep has a match
    if grep -q $line $right_size; then
        :
    else
        size=$(ffprobe -hide_banner -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 $line)
        width=$(cut -d ',' -f 1 <<<$size)
        height=$(cut -d ',' -f 2 <<<$size)

        if [[ $width -gt 1024 ]] && [[ $width -ge $height ]]; then
            echo "File width too large, reducing."
            ffmpeg -hide_banner -y -i $line -vf scale=1024:-1 $line > /dev/null 2>&1
            echo $line >> $right_size
            continue
        elif [[ $height -gt 1024 ]] && [[ $height -ge $width ]]; then
            echo "File width too large, reducing."
            ffmpeg -hide_banner -y -i $line -vf scale=-1:1024 $line > /dev/null 2>&1
            echo $line >> $right_size
            continue
        else 
            echo "File is right, nothing to do."
            echo $line >> $right_size
        fi
    fi
done


    


    When it runs and ffmpeg gets executed, the variable line is missing the initial / on the next iteration, rendering the script useless for that pass. On the iteration after that, everything is back to normal.
This is part of the output, where the problem is visible:

    


    Current file: /var/lib/radarr/MediaCover/154/fanart-180.jpg
File is right, nothing to do.
Current file: /var/lib/radarr/MediaCover/154/fanart.jpg
File width too large, reducing.
Input #0, image2, from '/var/lib/radarr/MediaCover/154/fanart.jpg':
  Duration: 00:00:00.04, start: 0.000000, bitrate: 78855 kb/s
    Stream #0:0: Video: mjpeg (Progressive), yuvj420p(pc, bt470bg/unknown/unknown), 3840x2160 [SAR 1:1 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[swscaler @ 0x5603ac596680] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to '/var/lib/radarr/MediaCover/154/fanart.jpg':
  Metadata:
    encoder         : Lavf58.45.100
    Stream #0:0: Video: mjpeg, yuvj420p(pc), 1024x576 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
    Metadata:
      encoder         : Lavc58.91.100 mjpeg
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
frame=    1 fps=0.0 q=7.0 Lsize=N/A time=00:00:00.04 bitrate=N/A speed=0.754x    
video:38kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Current file: var/lib/radarr/MediaCover/154/poster-500.jpg
var/lib/radarr/MediaCover/154/poster-500.jpg: No such file or directory
File is right, nothing to do.
Current file: /var/lib/radarr/MediaCover/3/poster.jpg


    


    Why is this happening?
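
    Not part of the original post, but a likely explanation: ffmpeg reads keyboard commands from stdin by default (hence the "Press [q] to stop" line in the log), and inside a while read loop fed by a pipe it consumes bytes intended for read; here, the leading / of the next path. A minimal sketch of the usual fix, assuming the same loop as above (-nostdin is a standard ffmpeg flag; redirecting stdin from /dev/null works as well):

while IFS= read -r line; do
    # -nostdin keeps ffmpeg away from the loop's stdin, so `read` sees every byte
    ffmpeg -nostdin -hide_banner -y -i "$line" -vf scale=1024:-1 "$line" > /dev/null 2>&1
    # Alternative: add `< /dev/null` to the ffmpeg call instead of -nostdin
done < <(find "$folder" -type f -name "*.jpg")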

    


  • FFmpeg and video4linux2 parameters - how to capture still images faster?

    13 August 2021, by mcgregor94086

    Problem Summary

    


    I have built an 18-camera array of USB webcams, attached to a Raspberry Pi 400 as the controller. My Python 3.8 code for capturing an image from each webcam is slow, and I am trying to find ways to speed it up.

    


    The FFMPEG and video4linux2 command line options are confusing to me, so I'm not sure whether the delays are due to my poor choice of parameters and whether a better set of options would solve the problem.

    


    The Goal

    


    I am trying to capture one image from each camera as quickly as possible.

    


    I am using FFMPEG and video4linux2 command line options to capture each image within a loop of all the cameras as shown below.

    


    Expected results

    


    I just want a single frame from each camera. The frame rate is 30 fps, so I was expecting that capture time would be on the order of 1/30th to 1/10th of a second worst case. But the performance timer is telling me that each capture is taking 2-3 seconds.

    


    Additionally, I don't really understand the ffmpeg output, but this output worries me:

    


    frame=    0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed=   0x    
frame=    0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed=   0x    
frame=    0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed=   0x    
frame=    1 fps=0.5 q=8.3 Lsize=N/A time=00:00:00.06 bitrate=N/A speed=0.0318x    
video:149kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing  


    


    I don't understand why the "frame=" line is repeated 4 times. And in the 4th repetition, the fps says 0.5, which I would interpret as one frame every 2 seconds, not the 30 fps that I specified.

    


    Specific Questions:

    


    Can anyone explain to me what this ffmpeg output means, and why each capture is taking 2 seconds rather than something closer to 1/30th of a second?

    


    Can anyone explain to me how to capture the images in less time per capture?

    


    Should I be spawning a separate thread for each ffmpeg call so they run asynchronously instead of serially? Or would that not really save time in practice?

    


    Actual results

    


      Input #0, video4linux2,v4l2, from '/dev/video0':
  Duration: N/A, start: 6004.168748, bitrate: N/A
    Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1920x1080, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
Stream mapping:
  Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
Press [q] to stop, [?] for help
Output #0, image2, to '/tmp/video1.jpg':
  Metadata:
    encoder         : Lavf58.20.100
    Stream #0:0: Video: mjpeg, yuvj422p(pc), 1920x1080, q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc
    Metadata:
      encoder         : Lavc58.35.100 mjpeg
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame=    0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed=   0x    
frame=    0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed=   0x    
frame=    0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed=   0x    
frame=    1 fps=0.5 q=8.3 Lsize=N/A time=00:00:00.06 bitrate=N/A speed=0.0318x    
video:149kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

Captured /dev/video0 image in: 3 seconds
Input #0, video4linux2,v4l2, from '/dev/video2':
  Duration: N/A, start: 6007.240871, bitrate: N/A
    Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1920x1080, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
Stream mapping:
  Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
Press [q] to stop, [?] for help
Output #0, image2, to '/tmp/video2.jpg':
  Metadata:
    encoder         : Lavf58.20.100
    Stream #0:0: Video: mjpeg, yuvj422p(pc), 1920x1080, q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc
    Metadata:
      encoder         : Lavc58.35.100 mjpeg
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame=    0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed=   0x    
frame=    0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed=   0x    
frame=    0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed=   0x    
frame=    1 fps=0.5 q=8.3 Lsize=N/A time=00:00:00.06 bitrate=N/A speed=0.0318x    
video:133kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

Captured /dev/video2 image in: 3 seconds
...


    


    The code:

    


import os
import subprocess
import time

list_of_camera_ids = ["/dev/video1", "/dev/video2", "/dev/video3", "/dev/video4",
                      "/dev/video5", "/dev/video6", "/dev/video7", "/dev/video8",
                      "/dev/video9", "/dev/video10", "/dev/video11", "/dev/video12",
                      "/dev/video13", "/dev/video14", "/dev/video15", "/dev/video16",
                      "/dev/video17", "/dev/video18"
                     ]
for this_camera_id in list_of_camera_ids:
    # '.jpg' needs the leading dot, otherwise the file is named e.g. 'video1jpg'
    full_image_file_name = '/tmp/' + os.path.basename(this_camera_id) + '.jpg'
    image_capture_tic = time.perf_counter()

    run_cmd = subprocess.run([
                              '/usr/bin/ffmpeg', '-y', '-hide_banner',
                              '-f', 'video4linux2',
                              '-input_format', 'mjpeg',
                              '-framerate', '30',
                              '-i', this_camera_id,
                              '-frames', '1',
                              '-f', 'image2',
                              full_image_file_name
                             ],
                             universal_newlines=True,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE
                            )
    print(run_cmd.stderr)
    image_capture_toc = time.perf_counter()
    print(f"Captured {this_camera_id} image in: {image_capture_toc - image_capture_tic:0.0f} seconds")


    


    ADDITIONAL DATA:
In response to an answer by Mark Setchell saying that more information is needed to answer this question, I elaborate on the requested information here:

    


    Cameras: the cameras are USB-3 cameras that identify themselves as:

    


    idVendor           0x0bda Realtek Semiconductor Corp.
idProduct          0x5829 


    


    I tried to add the lengthy lsusb dump for one of the cameras, but that made this post exceed the 30000 character limit.

    


    How the cameras are attached: the Pi's USB-3 port feeds a master 7-port USB-3 hub, with 3 spur 7-port hubs (not all ports on the spur hubs are occupied).

    


    Camera resolution: HD format, 1920x1080

    


    Why am I setting a frame rate if I only want 1 image?

    


    Setting a frame rate seems odd, given that it specifies the time between frames while I only want a single frame. I did that because I don't know how to get a single image from FFMPEG. This was the one example of FFMPEG command options discussed on the web that I could get to capture a single image successfully. I've discovered innumerable sets of options that don't work! I wrote this post because my web searches did not yield an example that works for me. I am hoping that someone much better informed than I am will show me a way that works!
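
    Not from the original post: if input probing is part of the startup cost, a hedged variant of the same command pins the capture mode and trims probing. -video_size, -framerate, -probesize, -analyzeduration and -frames:v are all standard ffmpeg options; the device path is a placeholder.

# Pin the capture mode and shrink input probing; grab exactly one frame
ffmpeg -hide_banner -y \
       -f video4linux2 -input_format mjpeg -video_size 1920x1080 -framerate 30 \
       -probesize 32 -analyzeduration 0 \
       -i /dev/video0 \
       -frames:v 1 /tmp/video0.jpg

    Whether this removes the 2-3 second delay is not guaranteed; USB webcams often need time to power up the sensor regardless of the options used.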

    


    Why am I scanning the cameras sequentially rather than in parallel?

    


    I did this just to keep things simple at first, and a loop over the list seemed easy and pythonic. It was clear to me that I might later be able to spawn a separate thread for each FFMPEG call, and maybe get a parallel speed-up that way. Indeed, I would welcome an example of how to do that; a sketch follows.
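
    Not the author's code; just a hedged sketch of the parallel variant mentioned above, reusing the same ffmpeg arguments as the earlier snippet. It launches one ffmpeg per camera with subprocess.Popen and then waits for all of them, so the slow per-camera startup overlaps. Whether it helps in practice depends on how much USB bandwidth the hubs share.

import os
import subprocess
import time

# Same device names as the question's list
list_of_camera_ids = [f'/dev/video{n}' for n in range(1, 19)]

def ffmpeg_args(camera_id):
    # Same arguments as the sequential snippet above
    return ['/usr/bin/ffmpeg', '-y', '-hide_banner',
            '-f', 'video4linux2', '-input_format', 'mjpeg',
            '-framerate', '30', '-i', camera_id,
            '-frames', '1', '-f', 'image2',
            '/tmp/' + os.path.basename(camera_id) + '.jpg']

tic = time.perf_counter()
# Launch every capture without waiting...
procs = [subprocess.Popen(ffmpeg_args(cam),
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
         for cam in list_of_camera_ids]
# ...then wait for all of them; total time is roughly the slowest capture
for p in procs:
    p.wait()
print(f"Captured {len(procs)} images in: {time.perf_counter() - tic:0.1f} seconds")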

    


    But in any case, 3 seconds for a single image capture seems far too long anyway.

    


    Why am I only using one of the 4 cores on my Raspberry Pi?

    


    The sample code I posted is just a snippet from my entire program. Image capturing currently takes place in a child thread, while a windowed GUI with an event loop runs in the main thread, so that user input isn't blocked during imaging.

    


    I am not knowledgeable enough about the cores of the Raspberry Pi 400, about how the Raspberry Pi OS (aka Raspbian) allocates threads to cores, or about whether Python can or should explicitly direct threads to specific cores.

I would welcome suggestions from Mark Setchell (or anyone else knowledgeable about these issues) on best practice, with example code.

    


  • Ffmpeg change aspect ratio setsar setdar overriding

    20 November 2016, by Ngoral

    I'm trying to change the aspect ratio of a video, because it is displayed the wrong way (it should be 16:9 but shows 3:4).
    I've tried a lot of things, and none of them worked.
    E.g. I've tried to set the SAR, but that changes the DAR too, so the displayed picture stays the same. Here's an example:

    ffmpeg -y -i rtmp://localhost/in/air-hdmi -vf "setsar=sar=16/9" -f flv rtmp://localhost/in/ngoraltestffmpeg



    [flv @ 0x38143c0] audio stream discovered after head already parsed
    [aac @ 0x3818f20] element type mismatch 1 != 0
    [flv @ 0x38143c0] video stream discovered after head already parsed
    Input #0, flv, from 'rtmp://localhost/in/air-hdmi':
     Metadata:
       Server          : NGINX RTMP (github.com/arut/nginx-rtmp-module)
       displayWidth    : 720
       displayHeight   : 576
       fps             : 0
       profile         :
       level           :
     Duration: 00:00:00.00, start: 181748.084000, bitrate: N/A
       Stream #0:0: Audio: aac (HE-AAC), 44100 Hz, stereo, fltp
       Stream #0:1: Video: h264 (High), yuv420p, 720x576, 25 fps, 25 tbr, 1k tbn, 50 tbc
    [flv @ 0x39bf5a0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
       Last message repeated 1 times
    Output #0, flv, to 'rtmp://localhost/in/ngoraltest':
     Metadata:
       Server          : NGINX RTMP (github.com/arut/nginx-rtmp-module)
       displayWidth    : 720
       displayHeight   : 576
       fps             : 0
       profile         :
       level           :
       encoder         : Lavf57.38.101
       Stream #0:0: Video: flv1 (flv) ([2][0][0][0] / 0x0002), yuv420p, 720x576 [SAR 16:9 DAR 20:9], q=2-31, 200 kb/s, 25 fps, 1k tbn, 25 tbc
       Metadata:
         encoder         : Lavc57.46.100 flv
       Side data:
         cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
       Stream #0:1: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 44100 Hz, stereo, fltp
       Metadata:
         encoder         : Lavc57.46.100 libmp3lame
    Stream mapping:
     Stream #0:1 -> #0:0 (h264 (native) -> flv1 (flv))
     Stream #0:0 -> #0:1 (aac (native) -> mp3 (libmp3lame))
    Press [q] to stop, [?] for help
    [aac @ 0x3a37000] element type mismatch 1 != 0
       Last message repeated 7 times
    [flv @ 0x39bf5a0] Failed to update header with correct duration.ate= 942.7kbits/s speed=2.37x    
    [flv @ 0x39bf5a0] Failed to update header with correct filesize.
    frame=  112 fps= 48 q=31.0 Lsize=     633kB time=00:00:05.18 bitrate= 999.9kbits/s speed=2.23x    
    video:546kB audio:82kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.846813%
    Exiting normally, received signal 2.

    I've also tried applying scale=720:-1 and -aspect 16:9, and everything stayed the same.

    BUT! When I write ffplay -vf setsar=16/9 rtmp://localhost/in/ngoraltest it shows exactly what I need.
    What could be the problem and how do I solve it?

    P.S. I'm a little confused that there is no information about the SAR and DAR of the input signal, but there is nothing I can do about it.
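
    A note not from the original post: DAR = (width/height) × SAR, so on a 720x576 frame setsar=16/9 gives DAR = (720/576) × (16/9) = 20:9, which is exactly what the log above reports. To display 720x576 as 16:9, the SAR must be 64:45 (the standard anamorphic PAL widescreen ratio), or the DAR can be set directly. A sketch under those assumptions, reusing the RTMP endpoints from the question (setdar and setsar are stock ffmpeg filters):

# Set the display aspect ratio directly...
ffmpeg -y -i rtmp://localhost/in/air-hdmi -vf "setdar=16/9" -f flv rtmp://localhost/in/ngoraltest

# ...or set the sample aspect ratio that yields 16:9 at 720x576
ffmpeg -y -i rtmp://localhost/in/air-hdmi -vf "setsar=64/45" -f flv rtmp://localhost/in/ngoraltest

    Whether the flv muxer and the downstream player honour these aspect flags is a separate question.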