Keyword: Tags / vidéo

Other articles (36)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Customizing categories

    21 June 2013

    Category creation form
    For those familiar with SPIP, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration section that one can specify the (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (8963)

  • Using ffmpeg to display a static image if an RTMP input source is missing

    19 March 2016, by iameli

    Here is what I would like ffmpeg to output:

    • If I am streaming from my iPhone to my RTMP server, ffmpeg should output the live video from my iPhone.
    • If not, ffmpeg should output a blank red screen.

    Here’s what I have so far. It sort of works.

    ffmpeg \
     -f lavfi \
     -re \
     -i 'color=s=320x240:r=30:c=red' \
     -thread_queue_size 512 \
     -i 'rtmp://localhost/stream/iphone' \
     -c:v libx264 \
     -f flv \
     -filter_complex "\
       [1:v]scale=320:240[stream]; \
       [0:v][stream]overlay=0:0:eof_action=pass[output] \
     "\
     -map '[output]' \
     -tune zerolatency \
     'rtmp://localhost/stream/output'

    What happens: it boots up and starts streaming my iPhone’s output, no problem. When I disconnect, it hangs for a long time, perhaps 20 seconds. Then it starts outputting red, okay. But then if I reconnect my phone, it doesn’t resume. It’s still red. Two questions:

    • Is there a way to configure the buffering so that it starts outputting red as soon as it stops getting data from the RTMP stream?
    • Is there a way to have it auto-retry, so that after the RTMP stream returns, it switches back?

    Full verbose output, if that’s helpful. I’m using the latest git version of ffmpeg as of 2016-03-18 on Ubuntu Wily. The RTMP server is nginx-rtmp.
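
    One possible workaround is to supervise ffmpeg from a small script instead of keeping a single long-running filter graph: probe the RTMP input with ffprobe, relay it while it is up, and publish the red color source while it is down. The sketch below is only an outline; it reuses the stream URLs from the command above and assumes Python 3.

    import subprocess
    import time

    # Endpoints reused from the ffmpeg command above.
    INPUT_URL = "rtmp://localhost/stream/iphone"
    OUTPUT_URL = "rtmp://localhost/stream/output"

    def input_is_live(url, timeout=5):
        # True if ffprobe can open the RTMP input within `timeout` seconds.
        try:
            subprocess.check_call(["ffprobe", "-v", "error", url],
                                  timeout=timeout,
                                  stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL)
            return True
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
            return False

    def relay_live():
        # Re-encode the live input; returns when the publisher disconnects.
        subprocess.call(["ffmpeg", "-i", INPUT_URL,
                         "-c:v", "libx264", "-tune", "zerolatency",
                         "-f", "flv", OUTPUT_URL])

    def send_red(seconds=5):
        # Publish a few seconds of the red placeholder, then re-check the input.
        subprocess.call(["ffmpeg", "-re", "-f", "lavfi",
                         "-i", "color=s=320x240:r=30:c=red",
                         "-t", str(seconds),
                         "-c:v", "libx264", "-tune", "zerolatency",
                         "-f", "flv", OUTPUT_URL])

    while True:
        if input_is_live(INPUT_URL):
            relay_live()
        else:
            send_red()
        time.sleep(1)

    This trades seamless switching for simplicity: the output is re-published each time the source changes, which nginx-rtmp accepts but some players may treat as a new stream.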

  • OpenGLES glReadPixels exc_bad_access

    29 November 2011, by Yanny

    I'm trying to create a video from images using OpenGLES and ffmpeg, but on the iPad (4.3) I get a crash in glReadPixels

    -(NSData *) glToUIImage {

       int numberOfComponents = NUMBER_OF_COMPONENTS; //4
       int width = PICTURE_WIDTH;
       int height = PICTURE_HEIGHT;

       NSInteger myDataLength = width * height * numberOfComponents;  

       NSMutableData * buffer= [NSMutableData dataWithLength :myDataLength];    

       [self checkForGLError];

       GLenum type = NUMBER_OF_COMPONENTS == 3 ? GL_RGB : GL_RGBA; //RGBA
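       // Possible causes of the EXC_BAD_ACCESS worth checking: OpenGL ES only
       // guarantees the GL_RGBA / GL_UNSIGNED_BYTE combination for glReadPixels,
       // and with GL_RGB the default GL_PACK_ALIGNMENT of 4 pads each row, so a
       // buffer sized width * height * 3 can be too small; the GL context must
       // also be current and the framebuffer complete on this thread.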
       glReadPixels(0, 0, width, height, type, GL_UNSIGNED_BYTE, [buffer mutableBytes]);   //EXC_BAD_ACCESS here

       return buffer;
    }

    It works on the iPhone 4 (4.3) and iPod Touch, but has problems on the iPhone 3G (3.0) and iPad (4.3). Can you help me with this issue?

    Also, on the iPhone 3G (3.0) and iPad (4.3) I have problems with the video: the first 5-20 frames contain garbage. Maybe an issue with optimization? Or architecture?

  • ffmpeg : video to images with pts in filename

    12 February 2016, by ntg

    I am trying to extract images at exact times (say every second) from an mp4 video of an experiment. There are a lot of methods for doing that with ffmpeg out there, but surprisingly the time accuracy is off.

    To measure accuracy, I first time-stamped the video using pts, e.g.:

    -vf "[in] scale=640:-2 , drawtext=fontcolor=white:fontsize=22:fontfile='times.ttf':timecode='22\:10\:55\:00:text='03/12/15__':r=23.976023976:x=0:y=0 [out]"

    And as a result I got a millisecond precision time-stamp on the video. I checked the video and it seems the time-stamps are very accurate. I then tried all the methods I could find out there including :

    - Using -ss [timestamp] to go to an exact time and -vframes 1 to get the first frame at that time: this method is extremely slow since it involves calling ffmpeg once for each second of the video. Furthermore, it seems to work fine for the first minutes, but then gets out of sync.

    - Using fps=1 and out_%05d.jpg as the output. This was probably the most inaccurate, as it went off by whole seconds, and it never hit exactly the 0th millisecond.

    - Using a fast fps and then selecting only the frames I need, e.g. -vf "fps=10, framestep=10, select=not(mod(n\,40))", was promising for the first minutes, but also became inaccurate after that.

    - I tried writing the pts/date as metadata, but I do not know how to (or cannot) write to the metadata of a .jpg from ffmpeg...

    The problem is that after some time, if we are using out_%05d.jpg, the numbers get completely out of sync, while -ss gets inaccurate and takes forever.

    Ideally there should be a way to write the %pts or the date as part of the filename... Does anyone know a method to extract images from an .mp4 file with millisecond precision, preferably using ffmpeg or its library? (I am using Python and getting desperate...)

    [Edit: as explained in the comment by Mulvya, the pts is calculated using the fps of the video, which ffmpeg can give you. In my case some of the videos have 30 fps and others 24*(1000/1001) fps. Below is an example, which was produced by:

    args = ['ffmpeg',
            '-i', 'c:\\Temp\\scr_cam.mp4',
            '-y',
            '-vf',
            "[in] drawtext=fontcolor=black:fontsize=22:fontfile='times.ttf':timecode='17\\:00\\:29\\:00':text='09/02/16__':r=30.0:x=0:y=0, drawtext=fontcolor=black:fontsize=22:fontfile='times.ttf':timecode='00\\:00\\:00\\:00':text='':r=30.0 :x=0:y=30, drawtext=fontcolor=black:fontsize=22:fontfile='times.ttf':text='n\\: %{n}   pts\\:%{pts}':r=30.0:x=0:y=60 [out]",
            '-c:a', 'copy',
            '-metadata', 'creation_time=2016-02-09T17:00:29',
            '-preset', 'ultrafast',
            '-threads', '3',
            'c:\\Temp\\stamped_scr_cam.mp4']
    subprocess.call(args)

    In it we see that indeed pts = n/30 (n is the frame number). I have tried many combinations of the parameters of the commands mentioned at the beginning, so listing all my attempts would take too much space. As we can see, the drawtext seems to be very accurate, so it does not appear to be a problem of an incorrect fps.

    (image: sample of the drawtext output)

    To get the fps I am using:

    import os
    import subprocess
    import sys

    import pandas as pd

    def get_frame_rate_and_duration(filename):
        if not os.path.exists(filename):
            sys.stderr.write("ERROR: filename %r was not found!" % (filename,))
            return -1
        # Ask ffprobe for the video stream's frame rate and the container duration.
        args = ["ffprobe", filename, "-v", "0", "-select_streams", "v", "-print_format", "flat"]
        args.extend(["-show_entries", "stream=r_frame_rate"])
        args.extend(["-show_entries", "format=duration"])
        out = subprocess.check_output(args, universal_newlines=True).split("\n")
        # r_frame_rate is reported as a quoted fraction, e.g. "30000/1001".
        rate = out[0].split('=')[1].strip()[1:-1].split('/')
        duration = pd.Timedelta("{0} sec".format(out[1].split('=')[1].strip()[1:-1]))
        if len(rate) == 1:
            rate = float(rate[0])
        elif len(rate) == 2:  # fractional rates such as 30000/1001
            rate = float(rate[0]) / float(rate[1])
        else:
            rate = -1
        return rate, duration
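
    One way to tackle the filename problem is to avoid resampling altogether: dump every frame once with -vsync 0, ask ffprobe for each frame's timestamp, and rename the numbered images from that list. The sketch below is only an outline; the paths are placeholders and Python 3 is assumed.

    import os
    import subprocess

    def extract_frames_with_pts(video, out_dir):
        """Extract every frame of `video` and name each JPEG by its timestamp in ms."""
        os.makedirs(out_dir, exist_ok=True)
        pattern = os.path.join(out_dir, "frame_%06d.jpg")
        # -vsync 0 keeps a 1:1 mapping between decoded frames and written images.
        subprocess.check_call(["ffmpeg", "-i", video, "-vsync", "0", "-q:v", "2", pattern])
        # One timestamp (in seconds) per video frame, in decoding order.
        out = subprocess.check_output(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "frame=best_effort_timestamp_time",
             "-of", "csv=p=0", video],
            universal_newlines=True)
        times = [float(t) for t in out.split() if t and t != "N/A"]
        # Assumes ffmpeg and ffprobe agree on the frame count, which they should
        # when no frames are dropped or duplicated.
        for i, t in enumerate(times, start=1):
            src = os.path.join(out_dir, "frame_%06d.jpg" % i)
            dst = os.path.join(out_dir, "frame_%012dms.jpg" % round(t * 1000))
            os.rename(src, dst)

    # extract_frames_with_pts("c:\\Temp\\scr_cam.mp4", "c:\\Temp\\frames")

    To get one image per second from the renamed files, pick the frame whose timestamp is closest to each whole second instead of letting fps=1 resample.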