
Other articles (36)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work on Internet Explorer
    On Internet Explorer (versions 7 and 8 at least), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate.
    If the configuration of this Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly: (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site and around MediaSPIP in general aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (6607)

  • FFmpeg dnn_processing with TensorFlow backend: difficulties applying a filter to an image

    9 April 2023, by ArnoBen

    I am trying to perform video segmentation for background blurring, similar to Google Meet or Zoom, using FFmpeg, and I'm not very familiar with it.

    


    Google's MediaPipe model is available as a TensorFlow .pb file here (using download_hhxwww.sh).

    


    I can load it in Python and it works as expected, though I do need to format the input frames: scaling to the model's input dimensions, adding a batch dimension, and dividing the pixel values by 255 so they fall in the 0-1 range.

    


    FFmpeg has a filter, dnn_processing, that can run TensorFlow models, but I'm wondering how it handles these preprocessing steps. I tried to read the dnn_backend_tf.c file in FFmpeg's GitHub repo, but C is not my forte. I'm guessing it adds a batch dimension somewhere, otherwise the model wouldn't run, but I'm not sure about the rest.

    


    Here is my current command:

    


    ffmpeg \
    -i $FILE -filter_complex \
    "[0:v]scale=160:96,format=rgb24,dnn_processing=dnn_backend=tensorflow:model=$MODEL:input=input_1:output=segment[masks];[masks]extractplanes=2[mask]" \
    -map "[mask]" output.png


    


      

    • I'm already applying scaling to match the input dimensions.

    • I wrote [masks]extractplanes=2[mask] because the model outputs an HxWx2 tensor (background mask and foreground mask) and I want to keep the foreground mask.
    


    The result I get with this command is the following (input and output):

    


    [image: output example]

    


    I'm not sure how to interpret the problems in this output. In Python I can easily get a clean grayscale output:

    


    [image: the grayscale mask obtained in Python]

    


    I'm trying to obtain something similar with FFmpeg.

    


    Any suggestions or insights on obtaining a correct output with FFmpeg would be greatly appreciated.

    


    PS: If I try to apply this to a video file, it hits a segmentation fault somewhere before producing any output, so I'm sticking with testing on an image for now.

    


  • I am streaming MP3 music via ffmpeg to a local RTMP server then converting to HLS, but am having difficulties with end-to-end testing

    12 April 2020, by SquirrelSenpai

    I am streaming MP3 music via ffmpeg to a local RTMP server and then converting to HLS, but I am having difficulties with end-to-end testing. I know a test.m3u8 playlist should be produced, however I am unable to check inside /nginx/hls/ as it is locked by www-data during operation. I have tried multiple permutations of what I thought the output HLS stream URL would be in VLC, with no luck: localhost:8080/live/test.m3u8, localhost:8080/hls/test.m3u8.

    



    Any tips on effective testing would be much appreciated.

    



    Technologies involved:

    



      

    • FFMPEG

    • NGINX (this and the below 3 are part of a module)

    • HLS

    • RTMP


    



    Working:

    



    ffmpeg -hide_banner -i http://149.255.59.164:8138 -f mp3 test.mp3


    



    Seemingly working; it correctly reads the file and shows a conversion of some kind:
    
size= 362kB time=00:00:23.09 bitrate= 128.3kbits/s speed=3.21x

    



    ffmpeg -hide_banner -i http://x.x.x.x:8138 -f mp3 rtmp://localhost:1935/live/test
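
    For what it's worth, here is a hedged variant of that publish command to try, on the assumption that the RTMP endpoint expects an FLV-muxed stream rather than raw MP3 (the source URL, stream name and port are the ones already used in this post; this is a sketch, not a confirmed fix):

    # Assumption: mux as FLV (the container RTMP normally carries) and
    # re-encode the audio to AAC so the resulting HLS segments stay widely
    # playable. The source URL and stream name are taken from this post.
    ffmpeg -hide_banner -i http://x.x.x.x:8138 \
        -vn -c:a aac -b:a 128k \
        -f flv rtmp://localhost:1935/live/test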


    



    Nginx.conf

    



    user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
        # multi_accept on;
}

rtmp_auto_push on;

rtmp{

        server{

                listen 1935;

                chunk_size 4000;

                #one publisher, many subscribers

                application live {

                        # enable live streaming
                        live on;
                        record off;

                        # publish only from localhost
                        allow publish 127.0.0.1;
                        deny publish all;

                        # hls - required for web browser consumption
                        hls on;
                        hls_path /tmp/hls;
                        hls_fragment 3;
                        hls_playlist_length 60;

                        # disable consuming the streaming from nginx as rtmp
                        deny play all;

                }

        }

}

# HTTP can be used for accessing RTMP stats
http {

    server {

        listen      8080;

        # This URL provides RTMP statistics in XML
        location /stat {
            rtmp_stat all;

            # Use this stylesheet to view XML as web page
            # in browser
            rtmp_stat_stylesheet stat.xsl;
        }

        location /stat.xsl {
            # XML stylesheet to view RTMP stats.
            # Copy stat.xsl wherever you want
            # and put the full directory path here
            root /path/to/stat.xsl/;
        }

        location /hls {
            # Serve HLS fragments
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp;
            add_header Cache-Control no-cache;
        }

        location /dash {
            # Serve DASH fragments
            root /tmp;
            add_header Cache-Control no-cache;
        }
    }
}
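
    Given the configuration above (hls_path /tmp/hls, served by the /hls location with root /tmp on port 8080), a minimal sanity-check sequence might look like the sketch below; the stream name test comes from the publish command, and sudo is only there because the directory belongs to www-data:

    # 1. While the publish command is running, confirm that segments are
    #    actually being written (the directory is owned by www-data).
    sudo ls -l /tmp/hls

    # 2. Fetch the playlist through nginx; it should list .ts segment names.
    curl -v http://localhost:8080/hls/test.m3u8

    # 3. Play it back; VLC and ffplay both accept the playlist URL directly.
    ffplay http://localhost:8080/hls/test.m3u8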


    


  • ffmpeg tile screenshot file with uniform distribution over video length

    23 October 2017, by Michael Yousef

    I'm trying to take a video and create a tiled screenshot file for it. I want it to cover the entire duration of the video, with frames distributed uniformly over its length. I have a command right now that I found online; it produces a screenshot file, but it doesn't cover the entire duration.

    ffmpeg -ss 00:05:00 -i video.mp4 -frames 1 -vf "select=not(mod(n\,8000)),scale=320:240,tile=4x8" out.png

    Instead of having it capture a frame every 80 seconds, I want it to determine the video's duration and use that. It should capture the first frame at my initial offset, then step as far ahead as necessary.
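
    As a rough sketch of that idea (assuming a bash shell and the 4x8 tile, i.e. 32 thumbnails): count the frames with ffprobe and derive the select step from the total. Note that nb_read_frames requires decoding the whole file once, which is slow but reliable, and that the frames skipped by the 5-minute seek are not subtracted from the count, so the last row may come out short.

    # Count the video frames, derive the step so 32 selected frames span the
    # clip, then build the tile sheet as in the original command.
    FRAMES=$(ffprobe -v error -select_streams v:0 -count_frames \
        -show_entries stream=nb_read_frames -of default=nw=1:nk=1 video.mp4)
    STEP=$(( FRAMES / 32 ))
    ffmpeg -ss 00:05:00 -i video.mp4 -frames:v 1 \
        -vf "select=not(mod(n\,$STEP)),scale=320:240,tile=4x8" out.png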

    Also, if anyone knows how to add information at the top of the output file, like filename, duration, bitrate, etc., that's something I want as well.
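
    For that second part, a hedged sketch using pad and drawtext: it reserves a 40-pixel banner above the tile sheet and draws the filename into it. The font path is an assumption (point it at any font on your system, and the build needs drawtext/freetype); values such as duration or bitrate would have to be read with ffprobe first and substituted into the text string.

    # pad places the tiled image 40 px down, leaving a black banner at the
    # top; drawtext then writes the filename into that banner.
    ffmpeg -ss 00:05:00 -i video.mp4 -frames:v 1 \
        -vf "select=not(mod(n\,8000)),scale=320:240,tile=4x8,pad=iw:ih+40:0:40:color=black,drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='video.mp4':x=10:y=10:fontsize=24:fontcolor=white" \
        out.png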