
Other articles (80)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make further modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To use it, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (14065)

  • Combining two live RTMP streams into another RTMP stream, synchronization issues (with FFMPEG)

    12 June 2020, by Evk

    I'm trying to combine (side by side) two live video streams coming over RTMP, using the following ffmpeg command:

    ffmpeg -i "rtmp://first" -i "rtmp://first" -filter_complex "[0:v][1:v]xstack=inputs=2:layout=0_0|1920_0[stacked]" -map "[stacked]" -preset ultrafast -vcodec libx264 -tune zerolatency -an -f flv output.flv
    In this example I actually use the same input stream twice, because the issue is more visible that way. The issue is that in the output the two streams are out of sync by about 2-3 seconds. That is, since I have two identical inputs, I expect the left and right sides of the output to be exactly the same. Instead, the left side is behind the right side by 2-3 seconds.
    What I believe is happening is that ffmpeg connects to the inputs in order (I can see this in the output log), and connecting to each one takes 2-3 seconds (maybe it waits for an I-frame; those streams have an I-frame interval of 3 seconds). It then probably buffers frames received from the first (already connected) input while connecting to the second one. By the time the second input is connected and frames from both inputs are ready to be put through the filter, the first input's buffer already contains 2-3 seconds of video, and the result is out of sync.
    Again, that's just my assumption. So, how can I achieve my goal? What I basically want is for ffmpeg to discard all "old" frames received before BOTH inputs are connected, OR to somehow insert "empty" (black?) frames for the second input while waiting for it to become available. I tried playing with various flags and with PTS (the setpts filter), but to no avail.

  • Ffmpeg x11grab exported video 16:9 is distorted

    31 January 2019, by direxit

    Using fluent-ffmpeg with ffmpeg version 3.4.4.

    Capturing the screen using x11grab with this setup:

    videoCommand
    .addInput(display)
    .addInputOptions('-y', '-f' , 'x11grab' , '-draw_mouse', '0')
    .aspect('16:9')
    .withSize('768x432')
    .withFpsInput(60)
    .withFpsOutput(60)
    .output(base_path+'/'+process.argv[3]+'.mp4')

    It works great except that the video image is distorted, like in the second picture below.

    1 - image that x11 is displaying [image]

    2 - image in the resulting video [image]

    3 - properties of the exported video [image]

    I tried the .keepDAR() option of fluent-ffmpeg, but I got a 4:3 video.

    This is the ffmpeg log (I don't know where that 640x480 is coming from): [image]

  • PTS not set after decoding H264/RTSP stream

    29 September 2017, by Sergio Basurco

    Question: What does the Libav/FFmpeg decoding pipeline need in order to produce valid presentation timestamps (PTS) in the decoded AVFrames?

    I'm decoding an H264 stream received via RTSP. I use Live555 to parse the H264 and feed the stream to my LibAV decoder. Decoding and displaying work fine, except that I'm not using the timestamp info and get some stuttering.

    After getting a frame with avcodec_decode_video2, the presentation timestamp (PTS) is not set.

    I need the PTS in order to find out for how long each frame needs to be displayed, and avoid any stuttering.

    Notes on my pipeline

    • I get the SPS/PPS information via Live555 and copy these values into my AVCodecContext->extradata.
    • I also send the SPS and PPS to my decoder as NAL units, with the 0,0,0,1 start code prepended.
    • Live555 provides presentation timestamps for each packet; these are in most cases not monotonically increasing, since the stream contains B-frames.
    • My AVCodecContext->time_base is not valid; its value is 0/2.

    Unclear:

    • Where exactly should I set the NAL PTS coming from my H264 sink (Live555)? On AVPacket->dts, on pts, on neither, or on both?
    • Why is my time_base value not valid? Where does this information come from?
    • According to the RTP payload spec, it seems that:

    The RTP timestamp is set to the sampling timestamp of the content. A 90 kHz clock rate MUST be used.

    • Does this mean that I must always assume a 1/90000 time base for the decoder? What if some other value is specified in the SPS?