Other articles (100)

  • Customise by adding your own logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News-item creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)

  • Diogene: creating specific masks for content-editing forms

    26 October 2010

    Diogene is one of the SPIP plugins enabled by default (as an extension) when MediaSPIP is initialised.
    What this plugin is for
    Creating form masks
    The Diogène plugin lets you create form masks that are specific to each section, for the three core SPIP objects: articles; sections (rubriques); sites
    It thus makes it possible to define, for a given section, one form mask per object, adding or removing fields so as to make the form (...)

On other sites (14928)

  • Generating High Quality Video from Images with Audio and Transitions (C#)

    18 October 2016, by M Imtiaz

    I am making a Windows application to create a video from a set of images, and I also want to add audio and transition effects.
    I have searched the Internet and found some libraries such as
    AForge.NET (FFmpeg based), Splicer (https://splicer.codeplex.com/), and FFmpeg itself driven from the command line (here).

    I am able to create a video with all of the above libraries, but each has certain limitations, or maybe I don't know how to use them fully.
    For example:

    1) AForge.Video.FFMPEG can create a video, but there is no option to add audio.

    2) Command-line FFmpeg can create a video from images along with audio, but the only transitions I found were fade in and fade out (http://www.bogotobogo.com/FFMpeg/ffmpeg_fade_in_fade_out_transitions_effects_filters.php).

    3) Splicer.dll works well and has all the options for adding transitions and audio, but the resulting video quality is not satisfactory.

    My question is: how can I achieve the desired video quality, along with transitions and audio, from a set of images?
    Thank you
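
    If the command-line FFmpeg route is acceptable, audio and simple transitions can be combined in one pass by fading each looped image and stitching the results with the concat filter. A minimal sketch, assuming two images (img1.jpg, img2.jpg), five seconds per image, and a soundtrack (music.mp3); all file names and durations are placeholders:

    ffmpeg -loop 1 -t 5 -i img1.jpg -loop 1 -t 5 -i img2.jpg -i music.mp3
    -filter_complex "
       [0:v] scale=1280:720, setsar=1, fade=t=out:st=4:d=1 [v0];
       [1:v] scale=1280:720, setsar=1, fade=t=in:st=0:d=1 [v1];
       [v0][v1] concat=n=2:v=1:a=0 [v]
    "
    -map "[v]" -map 2:a -shortest
    -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac
    slideshow.mp4

    This only produces fade-to-black cuts; true cross-fades need an overlay/blend graph (or the xfade filter available in newer FFmpeg releases), so treat this as a starting point rather than a full transition library.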

  • How to get high-quality video after merging into one

    3 April 2018, by roomy

    I followed the page "Create a mosaic out of several input videos" to merge videos, but I got poor video quality. How can I get output that looks the same as the originals?

    ffmpeg
    -i 1.flv -i 2.flv -i 3.flv -i 4.flv
    -filter_complex "
       nullsrc=size=640x480 [base];
       [0:v] setpts=PTS-STARTPTS, scale=320x240 [upperleft];
       [1:v] setpts=PTS-STARTPTS, scale=320x240 [upperright];
       [2:v] setpts=PTS-STARTPTS, scale=320x240 [lowerleft];
       [3:v] setpts=PTS-STARTPTS, scale=320x240 [lowerright];
       [base][upperleft] overlay=shortest=1 [tmp1];
       [tmp1][upperright] overlay=shortest=1:x=320 [tmp2];
       [tmp2][lowerleft] overlay=shortest=1:y=240 [tmp3];
       [tmp3][lowerright] overlay=shortest=1:x=320:y=240
    "
    -f flv rtmp://10.240.209.94:9999/live2
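
    A likely reason for the quality loss is that the command specifies no video encoder, so the FLV output falls back to the muxer's default codec and a low default bitrate, and each input is additionally downscaled to 320x240 to fit the 640x480 canvas. A sketch of the same command with explicit x264/AAC settings appended (the CRF and preset values are illustrative, not tuned):

    ffmpeg -i 1.flv -i 2.flv -i 3.flv -i 4.flv
    -filter_complex " ... same filtergraph as above ... "
    -c:v libx264 -preset veryfast -crf 18 -g 50
    -c:a aac -b:a 128k
    -f flv rtmp://10.240.209.94:9999/live2

    To keep more per-tile detail, the canvas and tile sizes can also be enlarged (for example nullsrc=size=1280x960 with scale=640x480 tiles), at the cost of a higher bitrate.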

    And when I use rtmp://** streams as video input, such as:

       ffmpeg -i rtmp://10.240.209.94:9999/live1 -i rtmp://10.240.209.94:9999/live1 -i rtmp://10.240.209.94:9999/live1 -i rtmp://10.240.209.94:9999/live1
    -filter_complex "
       nullsrc=size=640x480 [base];
       [0:v] setpts=PTS-STARTPTS, scale=320x240 [upperleft];
       [1:v] setpts=PTS-STARTPTS, scale=320x240 [upperright];
       [2:v] setpts=PTS-STARTPTS, scale=320x240 [lowerleft];
       [3:v] setpts=PTS-STARTPTS, scale=320x240 [lowerright];
       [base][upperleft] overlay=shortest=1 [tmp1];
       [tmp1][upperright] overlay=shortest=1:x=320 [tmp2];
       [tmp2][lowerleft] overlay=shortest=1:y=240 [tmp3];
       [tmp3][lowerright] overlay=shortest=1:x=320:y=240
    "
    -f flv rtmp://10.240.209.94:9999/live2

    It tells me:

    Stream specifier ':v' in filtergraph description  nullsrc=size=640x480 [base];[0:v] setpts=PTS-STARTPTS, scale=320x240 [upperleft];[1:v] setpts=PTS-STARTPTS, scale=320x240 [upperright];[2:v] setpts=PTS-STARTPTS, scale=320x240 [lowerleft];[3:v] setpts=PTS-STARTPTS, scale=320x240 [lowerright];[base][upperleft] overlay=shortest=1 [tmp1];[tmp1][upperright] overlay=shortest=1:x=320 [tmp2];[tmp2][lowerleft] overlay=shortest=1:y=240 [tmp3];[tmp3][lowerright] overlay=shortest=1:x=320:y=240 matches no streams.

    Is that a bug? But I am using the newest ffmpeg.

    And I can only use the command:

    ffmpeg -i rtmp://10.240.209.94:9999/live10 -vcodec copy -acodec copy -f flv output.flv

    to transfer the RTMP stream into FLV, and then read the FLV video...
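
    The "matches no streams" message is not necessarily a bug: it usually means that, at the moment the filtergraph was set up, ffmpeg had not yet detected a video stream in one of the live inputs. As a sketch rather than a confirmed fix, giving the demuxer more data to probe each input sometimes helps (the 10M values are arbitrary); also note that this example pulls the same live1 URL four times, which multiplies the load on the source:

    ffmpeg
    -probesize 10M -analyzeduration 10M -i rtmp://10.240.209.94:9999/live1
    -probesize 10M -analyzeduration 10M -i rtmp://10.240.209.94:9999/live1
    -probesize 10M -analyzeduration 10M -i rtmp://10.240.209.94:9999/live1
    -probesize 10M -analyzeduration 10M -i rtmp://10.240.209.94:9999/live1
    -filter_complex " ... same filtergraph as above ... "
    -c:v libx264 -preset veryfast -crf 18
    -f flv rtmp://10.240.209.94:9999/live2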

  • Decoding an h264 (High) stream with OpenCV's ffmpeg on Ubuntu

    9 January 2017, by arvids

    I am working with a video stream (no audio) from an IP camera on Ubuntu 14.04. I am also a beginner with Ubuntu and everything on it. Everything was going great with a camera that has these parameters (from FFmpeg):

    Input #0, rtsp, from 'rtsp://*private*:8900/live.sdp': 0B f=0/0  
     Metadata:
       title           : RTSP server
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 352x192, 29.97 tbr, 90k tbn, 180k tbc

    But then I changed to a newer camera, which has these parameters:

    Input #0, rtsp, from 'rtsp://*private*/media/video2':0B f=0/0  
     Metadata:
       title           : VCP IPC Realtime stream
       Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1280x720, 25 fps, 25 tbr, 90k tbn, 50 tbc

    My C++ program uses OpenCV 3 to process the stream. By default, OpenCV uses ffmpeg to decode and display the stream through the VideoCapture class.

    #include <opencv2/opencv.hpp>

    cv::VideoCapture vc;
    vc.open(input_stream);            // RTSP URL of the camera
    cv::Mat frame;
    while (vc.read(frame) && !frame.empty()) {
        // *do work* on the decoded frame
    }

    With the new camera stream I get errors like these (from ffmpeg):

    [h264 @ 0x7c6980] cabac decode of qscale diff failed at 41 38
    [h264 @ 0x7c6980] error while decoding MB 41 38, bytestream (3572)
    [h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 44
    [h264 @ 0x7c6980] error while decoding MB 0 44, bytestream (4933)
    [h264 @ 0x7bc2c0] SEI type 25 truncated at 208
    [h264 @ 0x7bfaa0] SEI type 25 truncated at 206
    [h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 18
    [h264 @ 0x7c6980] error while decoding MB 0 18, bytestream (14717)

    The image is sometimes glitched, sometimes completely frozen. After a few seconds to a few minutes the stream freezes entirely without an error. In VLC, however, it plays perfectly. I installed the newest version (3.2.2) of ffmpeg with

    ./configure --enable-gpl --enable-libx264

    Now, playing directly with ffplay (instead of launching from source code with OpenCV's VideoCapture), the stream plays better and doesn't freeze, but it sometimes still prints warnings:

    [NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320=1/1  
    [h264 @ 0x7f834c0d5d20] SEI type 25 size 896 truncated at 319=1/1  
    [rtsp @ 0x7f834c0008c0] max delay reached. need to consume packet  
    [rtsp @ 0x7f834c0008c0] RTP: missed 1 packets
    [h264 @ 0x7f834c094740] concealing 675 DC, 675 AC, 675 MV errors in P frame
    [NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320=1/1  

    Changing the camera hardware is not an option. The camera can be set to encode to h265 or mjpeg, but when encoding to mjpeg it can only output 5 fps, which is not enough. Decoding to a static video file is not an option either, because I need to display real-time results about the stream. Here is a list of API backends that can be used with VideoCapture. Maybe I should switch to some other decoder and player? (A backend-selection sketch follows at the end of this question.)
    From my research I conclude that I have these options:

    • Somehow get OpenCV to use the ffmpeg player from another directory, where it is compiled with libx264

    • Somehow get OpenCV to use libVlc instead of ffmpeg

    One example of switching to VLC is here, but I don't understand it well enough to say whether that is what I need. Or maybe I should be parsing the stream in code? I don't rule out that this could be a basic problem caused by missing dependencies, because, as I said, I'm a beginner with Ubuntu.

    • Use vlc to preprocess the stream, as suggested here.

    This is probably slow, which again is bad for real-time results.
    Any suggestions and comments will be appreciated.
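
    As a concrete starting point for the backend experiments above, forcing a specific VideoCapture backend (rather than letting OpenCV choose one) makes it easier to compare decoders. A minimal sketch, assuming an OpenCV build recent enough (3.2 or later) to expose the apiPreference constructor; the RTSP URL is a placeholder:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        const std::string url = "rtsp://user:pass@camera/stream";  // placeholder URL

        // Force the FFmpeg backend explicitly; swap cv::CAP_FFMPEG for
        // cv::CAP_GSTREAMER to compare a GStreamer-based pipeline.
        cv::VideoCapture vc(url, cv::CAP_FFMPEG);
        if (!vc.isOpened()) {
            std::cerr << "Failed to open the stream with the requested backend\n";
            return 1;
        }

        cv::Mat frame;
        while (vc.read(frame) && !frame.empty()) {
            cv::imshow("preview", frame);        // display real-time results
            if (cv::waitKey(1) == 27) break;     // Esc to quit
        }
        return 0;
    }

    Whether a different backend actually avoids the CABAC/SEI errors depends on how that backend was built, so this is only a way to isolate where the problem lies.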