Advanced search

Media (0)

Keyword: - Tags -/latitude

No media matching your criteria is available on this site.

Other articles (68)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up for the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Short description (Descriptif rapide)
    It is also in this configuration section that you can specify the (...)

On other sites (12590)

  • ffmpeg: Continuously encode and append base64 data chunks into output file

    11 February 2021, by O.O

    I have a .mov file, saved as input.mov, that is being written to by my iPhone camera. I have a script that reads the file as it is updated, and I am trying to encode the video and audio into a .mkv container.

    I have little knowledge of this tool, but looking at similar Q&As about ffmpeg usage I have found little on using base64 as input. It is documented by ffmpeg for images, though, so I assume it is possible, and I have also used data:video/mp4 since these file types are very similar.

    I have:

    // Read input.mov as base64 in 4 kB chunks and run ffmpeg on every chunk.
    const ifRecordingStream = await fs.readStream('input.mov', 'base64', 4095);
    ifRecordingStream.open();

    ifRecordingStream.onData((chunk) =>
        execute(`ffmpeg -f concat -i "data:video/mp4;base64,${chunk}" -c:v h264 -c:a aac output.mkv`)
    );

    onData() currently throws Line {}: unknown keyword {}

    Is my command wrong?

    ffmpeg -f concat -i "data:video/mp4;base64,${chunk}" -c:v h264 -c:a aac output.mkv

    Any help at all is welcome.
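
    Not part of the original post, but for context: the concat demuxer (-f concat) expects a small text script made of lines such as file 'clip1.mp4', so handing it raw base64 data would produce exactly the Line {}: unknown keyword {} error above. A minimal sketch of an alternative, assuming the recording is reachable as a plain file and that ffmpeg can read from a pipe (the name input.mov is taken from the question):

        # Sketch only: follow the growing file and pipe its raw bytes into
        # ffmpeg's stdin instead of passing base64 chunks to the concat demuxer.
        # (Base64 chunks could equally be decoded with `base64 -d` first.)
        tail -c +1 -f input.mov | ffmpeg -i pipe:0 -c:v libx264 -c:a aac output.mkv

    One caveat: a QuickTime .mov normally writes its index (the moov atom) only when recording stops, so ffmpeg may not be able to parse the stream until the file is finalised; this is a sketch of the piping mechanism, not a guaranteed fix.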

  • ffmpeg stream video file from Ubuntu to YouTube

    14 March 2018, by user3010452

    I’m trying to create a stream to YouTube. I can see the preview button become enabled, but the stream never actually changes from offline.
    And it gives me several errors. What am I doing wrong?

        ffmpeg -i video.flv -f flv rtmp://a.rtmp.youtube.com/live2/XXXXXX


     ffmpeg version 2.8.11-0ubuntu0.16.04.1 Copyright (c) 2000-2017 the FFmpeg developers

         built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
         configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
         libavutil      54. 31.100 / 54. 31.100
         libavcodec     56. 60.100 / 56. 60.100
         libavformat    56. 40.101 / 56. 40.101
         libavdevice    56.  4.100 / 56.  4.100
         libavfilter     5. 40.101 /  5. 40.101
         libavresample   2.  1.  0 /  2.  1.  0
         libswscale      3.  1.101 /  3.  1.101
         libswresample   1.  2.101 /  1.  2.101
         libpostproc    53.  3.100 / 53.  3.100
       Input #0, flv, from 'video.flv':
         Metadata:
           major_brand     : qt  
           minor_version   : 0
           compatible_brands: qt  
           com.apple.quicktime.creationdate: 2017-07-20T21:44:12+0700
           com.apple.quicktime.make: Apple
           com.apple.quicktime.model: iPhone 6s Plus
           com.apple.quicktime.software: 10.3.2
           encoder         : Lavf57.83.100
         Duration: 00:01:15.24, start: 0.000000, bitrate: 4454 kb/s
           Stream #0:0: Video: flv1, yuv420p, 1920x1080, 200 kb/s, 29.97 fps, 29.97 tbr, 1k tbn, 1k tbc
           Stream #0:1: Audio: adpcm_swf, 44100 Hz, mono, s16, 176 kb/s
       Output #0, flv, to 'rtmp://a.rtmp.youtube.com/XXXXXX':
         Metadata:
           major_brand     : qt  
           minor_version   : 0
           compatible_brands: qt  
           com.apple.quicktime.creationdate: 2017-07-20T21:44:12+0700
           com.apple.quicktime.make: Apple
           com.apple.quicktime.model: iPhone 6s Plus
           com.apple.quicktime.software: 10.3.2
           encoder         : Lavf56.40.101
           Stream #0:0: Video: flv1 (flv) ([2][0][0][0] / 0x0002), yuv420p, 1920x1080, q=2-31, 200 kb/s, 29.97 fps, 1k tbn, 29.97 tbc
           Metadata:
             encoder         : Lavc56.60.100 flv
           Stream #0:1: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 44100 Hz, mono, s16p
           Metadata:
             encoder         : Lavc56.60.100 libmp3lame
       Stream mapping:
         Stream #0:0 -> #0:0 (flv1 (flv) -> flv1 (flv))
         Stream #0:1 -> #0:1 (adpcm_swf (native) -> mp3 (libmp3lame))
       Press [q] to stop, [?] for help
       [flv @ 0x162bac0] Failed to update header with correct duration.ate=4125.4kbits/s    
       [flv @ 0x162bac0] Failed to update header with correct filesize.
       frame= 2255 fps=114 q=31.0 Lsize=   37863kB time=00:01:15.24 bitrate=4122.0kbits/s    
       video:37194kB audio:588kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead : 0.213941%
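
    A note beyond the original post: in the log above the file is re-encoded to FLV1 video and MP3 audio and pushed at roughly 114 fps, i.e. much faster than real time, after which ffmpeg exits, so the YouTube dashboard soon shows the stream as offline again. YouTube's RTMP ingest also expects H.264 video with AAC audio. A hedged sketch along those lines (XXXXXX stands for the stream key, as in the question; on this older ffmpeg 2.8 build the native AAC encoder may still need -strict -2):

        # Sketch only: -re throttles reading to the file's native frame rate,
        # and the streams are re-encoded to H.264 + AAC for the RTMP ingest.
        ffmpeg -re -i video.flv \
               -c:v libx264 -preset veryfast -b:v 2500k -g 60 \
               -c:a aac -strict -2 -b:a 128k -ar 44100 \
               -f flv rtmp://a.rtmp.youtube.com/live2/XXXXXX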

  • Why does frame->pts increase by 20 rather than by 1?

    19 March 2013, by user1914692

    Following the ffmpeg examples decoding_encoding.c and filtering_video.c, I am processing a video file taken with an iPhone. The file is a .mov, 480x272, H.264/AVC video, 30 frames per second, 605 kbps.

    I first extract each frame, which is in YUV.
    I convert the YUV to RGB24, process the RGB24 data, and then write it to a .ppm file. The .ppm file is correct.

    Then I plan to encode the processed RGB24 frames into a video file.
    Since MPEG does not support the RGB24 picture format, I used AV_CODEC_ID_HUFFYUV.
    But the output video file (18.5 MB) does not play. Movie Player on Ubuntu reports the error: Could not determine type of stream.
    I also tried it in VLC. It simply does not work, without any error message.

    My second question is:
    For each frame extracted from the input video file, I get its pts as follows, as in filtering_video.c:

    frame->pts = av_frame_get_best_effort_timestamp(frame);

    I print out each frame's pts and find that it increases by 20, as shown below:

    pFrameRGB_count: 0,  frame->pts: 0
    pFrameRGB_count: 1,  frame->pts: 20
    pFrameRGB_count: 2,  frame->pts: 40
    pFrameRGB_count: 3,  frame->pts: 60

    Here frame is the frame extracted from the input video, and pFrameRGB_count is the count of processed frames in RGB24 form.

    Why are these pts values wrong?
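
    A note beyond the original post: av_frame_get_best_effort_timestamp() returns a value counted in the stream's time_base, not in frames. iPhone .mov files typically use a time base of 1/600, and 600 ticks per second divided by 30 frames per second gives 20 ticks per frame, so a pts step of 20 is expected rather than wrong. A quick way to check, assuming the input file is called input.mov (the question does not name it):

        # Sketch only: print the video stream's time_base and frame rate.
        # A time_base of 1/600 at 30 fps means consecutive frames are
        # 600 / 30 = 20 ticks apart.
        ffprobe -v error -select_streams v:0 \
                -show_entries stream=time_base,avg_frame_rate input.mov

    To convert a pts to seconds, multiply it by av_q2d(stream->time_base), or rescale it with av_rescale_q when writing frames out with a different time base.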