
Other articles (84)

  • Support for all media types

    10 April 2011

    Unlike many modern software packages and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

  • The farm's recurring Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
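The super-Cron mechanism described above can be sketched in a few lines: every minute, build the URL that triggers each hosted instance's own Cron and request it. Note that the `?action=cron` entry point below is a placeholder of ours, not necessarily SPIP's actual one.

```javascript
// Sketch of the "super Cron" idea: derive, for each hosted instance,
// the URL that a minute-by-minute job would request to trigger that
// instance's own Cron. '?action=cron' is a hypothetical entry point.
function cronUrls(instances) {
    return instances.map(function (host) {
        return 'http://' + host + '/?action=cron';
    });
}

// A system Cron on the central site could iterate this list every minute.
console.log(cronUrls(['site1.example.org', 'site2.example.org']));
```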

  • The statuses of shared-hosting instances

    13 March 2010

    For general compatibility between the shared-hosting management plugin and SPIP's native functions, instance statuses are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
    The possible statuses are: prepa (requested), which corresponds to an instance requested by a user. If the site was already created in the past, it is switched to disabled mode. publie (validated), which corresponds to an instance validated by a (...)

On other sites (7277)

  • Transcoding from VP8 to H264 is not working with the fluent-ffmpeg library in node.js

    3 September 2019, by Mihir Patel

    I have tried transcoding a VP8 stream to H264 on the command line and it works fine, but when I try the same thing with fluent-ffmpeg it does not work as expected.

    Version information

    fluent-ffmpeg version : "2.1.2"

    ffmpeg version : "3.4.4-1~16.04.york0"

    OS : "Ubuntu"

    Transcoding from VP8 to H264 works with this command:

    ffmpeg -analyzeduration 300M -probesize 300M -protocol_whitelist file,udp,rtp -i portal-vp8.sdp -c:v libx264 -profile:v high -level:v 3.2 -pix_fmt yuv420p -x264-params keyint=25:scenecut=0 -r 25 -c:a aac -ar 16k -ac 1 -preset ultrafast -tune zerolatency -f flv rtmp://my-server-ip/myapp/testvp8

    My SDP file is:

    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    c=IN IP4 127.0.0.1
    t=0 0
    a=tool:libavformat 55.2.100
    m=audio 5396 TCP 111
    a=rtpmap:111 opus/48000
    m=video 5398 RTP/AVP 100
    a=rtpmap:100 VP8/90000
    a=fmtp:100 packetization-mode=1

    Transcoding from VP8 to H264 does not work using the library:

    // Imports assumed by the original snippet (fluent-ffmpeg and a
    // string-to-stream helper); they were not shown in the question.
    var FfmpegCommand = require('fluent-ffmpeg');
    var stringToStream = require('string-to-stream');

    var sdpString = "v=0\r\no=- 0 0 IN IP4 127.0.0.1\r\ns=No Name\r\nc=IN IP4 127.0.0.1\r\nt=0 0\r\na=tool:libavformat 55.2.100\r\nm=audio 5120 TCP 111\r\na=rtpmap:111 opus/48000\r\nm=video 5122 RTP/AVP 100\r\na=rtpmap:100 VP8/90000";

    let sdp = stringToStream(sdpString);

    var inputOptions = [
        '-analyzeduration', '300M',
        '-probesize', '300M',
        '-protocol_whitelist', 'file,udp,rtp,pipe'
    ];

    var outputOptions = [
        '-c:v', 'libx264',
        '-profile:v', 'high',
        '-level:v', '3.2',
        '-pix_fmt', 'yuv420p',
        '-x264-params', 'keyint=25:scenecut=0',
        '-r', '25',
        '-c:a', 'aac',
        '-ar', '16k',
        '-ac', '1',
        '-preset', 'ultrafast',
        '-tune', 'zerolatency',
        '-f', 'flv'
    ];

    var outputUrl = "rtmp://my-server-ip/myapp/testvp8";

    var command = FfmpegCommand(sdp)
        .inputOptions(inputOptions)
        .outputOptions(outputOptions)
        .output(outputUrl)
        .on('start', function (commandLine) {
            console.log('Spawned Ffmpeg with command: ' + commandLine);
        })
        .on('stderr', function (stderrLine) {
            console.log('FFMPEG Stderr output: ' + stderrLine);
        });

    command.run();

    Command produced by the library:

    ffmpeg -analyzeduration 300M -probesize 300M -protocol_whitelist file,udp,rtp,pipe -i pipe:0 -c:v libx264 -profile:v high -level:v 3.2 -pix_fmt yuv420p -x264-params keyint=25:scenecut=0 -r 25 -c:a aac -ar 16k -ac 1 -preset ultrafast -tune zerolatency -f flv rtmp://my-server-ip/myapp/testvp8

    FFmpeg logs

    FFMPEG Stderr output: ffmpeg version 3.4.4-1~16.04.york0 Copyright (c) 2000-2018 the FFmpeg developers
    FFMPEG Stderr output:   built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
    FFMPEG Stderr output:   configuration: --prefix=/usr --extra-version='1~16.04.york0' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
    FFMPEG Stderr output:   libavutil      55. 78.100 / 55. 78.100
    FFMPEG Stderr output:   libavcodec     57.107.100 / 57.107.100
    FFMPEG Stderr output:   libavformat    57. 83.100 / 57. 83.100
    FFMPEG Stderr output:   libavdevice    57. 10.100 / 57. 10.100
    FFMPEG Stderr output:   libavfilter     6.107.100 /  6.107.100
    FFMPEG Stderr output:   libavresample   3.  7.  0 /  3.  7.  0
    FFMPEG Stderr output:   libswscale      4.  8.100 /  4.  8.100
    FFMPEG Stderr output:   libswresample   2.  9.100 /  2.  9.100
    FFMPEG Stderr output:   libpostproc    54.  7.100 / 54.  7.100
    FFMPEG Stderr output: [sdp @ 0x55c28151c180] max delay reached. need to consume packet
    FFMPEG Stderr output: [sdp @ 0x55c28151c180] RTP: missed 1 packets

    Observed results

    I have observed that the input VP8 stream is not transcoded to H264 when using the fluent-ffmpeg library.

    Expected results

    The input VP8 stream should be transcoded to H264 using the library.

    How can I resolve this issue?

  • Dynamic subtitles by ffmpeg

    8 September 2019, by Saeron Meng

    I would like to add commentary text to my video, but I do not know how to achieve this with ffmpeg. The comments should fly across the screen like screen bullets: appearing at the right margin, scrolling across, and disappearing off the left.

    My idea is to measure the length of each comment and assign it a scrolling speed, and I already have the comments saved as an XML file. Even though I can convert it to an SRT file, the tricky problem is that it is hard to express a subtitle's speed (or anything like it) in an SRT file and apply it through ffmpeg commands or APIs. Here is an example of the comments (XML file):

    <chat timestamp="671.195">
       <ems utctime="1562584080" sender="Bill">
           <richtext></richtext>
       </ems>
    </chat>
    <chat timestamp="677.798">
       <ems utctime="1562584086" sender="Jack">
           <richtext></richtext>
       </ems>
    </chat>
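Since an SRT file cannot carry per-line velocities, one approach (a sketch under assumed parameters, not taken from the question) is to generate one ffmpeg drawtext filter per comment, computing the speed from the text length so every comment crosses the frame in a fixed duration:

```javascript
// Sketch: build a drawtext filter for one comment. Assumptions (ours,
// not from the question): a 1280px-wide video, each comment stays on
// screen for `duration` seconds, and 8px per character approximates
// the rendered text width.
function danmakuFilter(text, startSec, row, opts) {
    const width = (opts && opts.width) || 1280;
    const duration = (opts && opts.duration) || 8;    // seconds on screen
    const charWidth = (opts && opts.charWidth) || 8;  // rough px per char
    const textWidth = text.length * charWidth;
    // Speed so the text travels from x=width to x=-textWidth in `duration` s.
    const speed = (width + textWidth) / duration;     // px per second
    const y = 20 + row * 30;                          // stack rows vertically
    return "drawtext=text='" + text + "'" +
           ":x=" + width + "-(t-" + startSec + ")*" + speed +
           ":y=" + y +
           ":enable='between(t," + startSec + "," + (startSec + duration) + ")'";
}

// Timestamp taken from the XML above; the text is a placeholder,
// since the <richtext> bodies are empty in the excerpt.
console.log(danmakuFilter('hello', 671.195, 0));
```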

    The final result looks like this (I could not find an example on English-language websites; in China such moving subtitles are called "danmu" or "screen bullets"). The colorful characters move horizontally from right to left:

    example

    1. I have searched for solutions on the Internet, most of which explain how to write ass/srt files and add motionless subtitles, like this:

    ffmpeg -i infile.mp4 -i infile.srt -c copy -c:s mov_text outfile.mp4

    3
    00:00:39,770 --> 00:00:41,880
    When I was lying there in the VA hospital ...

    4
    00:00:42,550 --> 00:00:44,690
    ... with a big hole blown through the middle of my life,

    5
    00:00:45,590 --> 00:00:48,120
    ... I started having these dreams of flying.

    But I need another kind of "subtitle", one that can move.

    2. When it comes to scrolling subtitles, there are some existing solutions: Scrolling from RIGHT to LEFT in ffmpeg / drawtext

    So my question is: how can I combine the solutions above to arrange subtitles from top to bottom and make them move according to the timestamps of the comments?
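One way to combine the two: give each comment its own drawtext entry (its own row via y, its own time window via enable) and join the entries with commas into a single -vf filtergraph. A self-contained sketch under assumed geometry (1280 px width, fixed 180 px/s scroll speed, 8 s on screen — our choices, not from the question):

```javascript
// Sketch: turn a list of timed comments into a single ffmpeg -vf value.
// Each comment scrolls right-to-left for 8 seconds in its own row,
// with rows assigned round-robin so comments do not overlap vertically.
function buildFiltergraph(comments, rows) {
    rows = rows || 4;
    return comments.map(function (c, i) {
        const y = 20 + (i % rows) * 30;            // round-robin row
        const end = c.t + 8;                       // 8 s on screen
        return "drawtext=text='" + c.text + "'" +
               ":x=1280-(t-" + c.t + ")*180" +     // fixed 180 px/s scroll
               ":y=" + y +
               ":enable='between(t," + c.t + "," + end + ")'";
    }).join(',');
}

// Timestamps from the XML excerpt above; texts are placeholders.
const vf = buildFiltergraph([
    { t: 671, text: 'first comment' },
    { t: 677, text: 'second comment' }
]);
console.log(vf);
```

The resulting string would then be passed as, e.g., `ffmpeg -i infile.mp4 -vf "<filtergraph>" outfile.mp4`.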

  • How can I display a frame decoded by FFmpeg's cuvid decoder without copying it to host memory?

    21 September 2019, by hefty

    I am currently working on a video display program that uses FFmpeg to decode frames. For h264 input, I chose h264_cuvid for decoding, which yields decoded AVFrames stored in the Nvidia video card's device memory.

    For now I am displaying the frames in an inefficient way: copying each hardware frame to host memory and then displaying it:

    avcodec_receive_frame(decode_ctx, hw_frame); // Get the decoded hardware frame stored in device memory.

    av_hwframe_transfer_data(sw_frame, hw_frame, flag); // Copy the hardware frame to host memory.

    // ...some code to scale and display the sw_frame.

    I want to display the hw_frame directly via a Direct3D surface, but I don't know how to access the data in hw_frame and copy the pixel data to a D3D surface natively (without going through host memory).

    What I know is that the hw_frame's data[0] and data[1] are CUdeviceptr values pointing to the NV12 data stored in device memory. Does anyone know how to use these CUdeviceptr values to transfer the data to a D3D surface entirely within device memory and display it?