
Media (1)

Keyword: - Tags -/sintel

Other articles (56)

  • La file d’attente de SPIPmotion

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)
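
    Once the plugin is installed, that queue table can be inspected directly; a one-line sketch, assuming SPIP's usual MySQL backend (the database name "spip" and user "spip_user" below are placeholders):

     # List the columns of the queue table created by SPIPmotion
     mysql -u spip_user -p spip -e "DESCRIBE spip_spipmotion_attentes;"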

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

On other sites (7799)

  • Decoding a h264 (High) stream with OpenCV's ffmpeg on Ubuntu

    5 June 2018, by arvids

    I am working with a video stream from an IP camera on Ubuntu 14.04. Everything was going well with a camera that has these parameters (from FFmpeg):

       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 352x192, 29.97 tbr, 90k tbn, 180k tbc

    But then I changed to a newer camera, which has these parameters:

       Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1280x720, 25 fps, 25 tbr, 90k tbn, 50 tbc

    My C++ program uses OpenCV 3 to process the stream. By default, OpenCV uses ffmpeg to decode and display the stream through the VideoCapture class.

    VideoCapture vc;
    vc.open(input_stream);              // FFmpeg backend is used by default
    Mat frame;
    while (vc.read(frame) && !frame.empty()) {
      // do work on the frame
    }

    With the new camera stream I get errors like these (from ffmpeg):

    [h264 @ 0x7c6980] cabac decode of qscale diff failed at 41 38
    [h264 @ 0x7c6980] error while decoding MB 41 38, bytestream (3572)
    [h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 44
    [h264 @ 0x7bc2c0] SEI type 25 truncated at 208

    The image is sometimes glitched, sometimes completely frozen. However, in VLC it plays perfectly. I installed the newest version (3.2.2) of ffmpeg, built with

    ./configure --enable-gpl --enable-libx264

    Now, playing directly with ffplay (instead of launching from my code through OpenCV's VideoCapture), the stream plays better, but warnings are still sometimes displayed:

    [NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320=1/1  
    [h264 @ 0x7f834c0d5d20] SEI type 25 size 896 truncated at 319=1/1  
    [rtsp @ 0x7f834c0008c0] max delay reached. need to consume packet  
    [rtsp @ 0x7f834c0008c0] RTP: missed 1 packets
    [h264 @ 0x7f834c094740] concealing 675 DC, 675 AC, 675 MV errors in P frame

    Changing the camera hardware is not an option. The camera can be set to encode to H.265 or MJPEG. When encoding to MJPEG it can only output 5 fps, which is not enough. Decoding to a static video file is not an option either, because I need to display real-time results from the stream. Here is a list of API backends that can be used with VideoCapture. Maybe I should switch to some other decoder and player?
    From my research I conclude that I have these options:

    • Somehow get OpenCV to use libVlc instead of ffmpeg

    One example of switching to VLC is here, but I don't understand it well enough to say whether that is what I need. Or maybe I should be parsing the stream in code?

    • Use vlc to preprocess the stream, as suggested here.

    This is probably slow, which again is bad for real-time results.
    Any suggestions and comments will be appreciated.
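
    Since both logs show missed RTP packets over the default UDP transport, one low-effort thing to try before swapping decoders is forcing TCP for the RTSP session; a minimal sketch, assuming the camera exposes an rtsp:// URL (the address below is a placeholder):

     # Force interleaved TCP transport so RTP packets are not lost to UDP drops
     ffplay -rtsp_transport tcp rtsp://192.168.1.64:554/stream1

     # Newer OpenCV builds (3.4 and later) read FFmpeg demuxer options from this
     # environment variable, so the same setting can reach VideoCapture unchanged
     export OPENCV_FFMPEG_CAPTURE_OPTIONS="rtsp_transport;tcp"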

  • What is the difference between ffmpeg command with -codec and without -codec

    26 April 2018, by Sat

    I am trying to calculate video freeze time. I created an MP4 file with 23 seconds of frozen video.
    I am converting the MP4 file into a segment (.ts) file using the following command:

    ffmpeg -i Palivala.mp4 -codec copy -vbsf h264_mp4toannexb -map 0 palivaalaa.ts

    When I directly use the video file (Palivala.mp4) or the segment file (palivaalaa.ts) generated by the command above, I get the expected result, i.e. a video freeze time of 23 seconds.

    But when I use the following command

    ffmpeg -i Palivala.mp4  -map 0 palivaalaa.ts

    I see 1 second of frozen frames, then one non-frozen frame, then the next frame is frozen again and the freeze continues for 6 seconds, then one more non-frozen frame, and so on.

    1) What is the difference between the above two commands?

    2) Do both commands choose libx264?
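
    A quick way to see what each command actually produced is to inspect the resulting streams with ffprobe; a minimal sketch, assuming the two outputs are saved under different names (palivaalaa2.ts below is a placeholder for the re-encoded one):

     # Stream-copied output: codecs should match the source MP4 exactly
     ffprobe -v error -show_entries stream=index,codec_type,codec_name -of compact palivaalaa.ts

     # Re-encoded output: shows whichever default encoders this FFmpeg build
     # picked for the MPEG-TS muxer
     ffprobe -v error -show_entries stream=index,codec_type,codec_name -of compact palivaalaa2.ts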

  • Append black frames to video when audio is longer in ffmpeg

    8 April 2018, by Edward

    I'm trying to use ffmpeg as a video editor, mostly because my regular video editor dropped more frames than I was comfortable with.

    ffmpeg -i "videoplayback1" -t 00:09:51 -i "audioplayback1" -t 00:09:54.38 -vcodec libx264 -crf 20 -acodec copy "playback1.mp4"

    As you can see, I'm trimming the video shorter than the audio, but what I want is the opposite of the -shortest switch: the file should continue for the duration of the audio's -t, with actual black frames added for the remainder of that time.

    As it is now, the video is still clipped as if I were using the -shortest switch. I tried some -vf and -filter_complex variants, but either I get errors, or the audio is still chopped and the video frozen, even though the duration matches the longest -t.

    How would I go about adding black frames for as long as the audio is playing?
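
    One way to get the opposite of -shortest is to pad the video stream itself; a sketch using the tpad filter (FFmpeg 4.2 or later), where the 3.4-second stop_duration is an assumed figure matching the gap between the two -t values above:

     # Limit each input as before, then append ~3.4s of black frames to the video
     # so it lasts as long as the audio (stop_mode=add inserts frames of 'color')
     ffmpeg -t 00:09:51 -i "videoplayback1" -t 00:09:54.38 -i "audioplayback1" \
            -filter_complex "[0:v]tpad=stop_duration=3.4:stop_mode=add:color=black[v]" \
            -map "[v]" -map 1:a \
            -vcodec libx264 -crf 20 -acodec copy "playback1.mp4"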