Advanced search

Media (0)

Keyword: - Tags -/flash

No media matching your criteria is available on this site.

Other articles (34)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)

  • Videos

    21 April 2011, by

    Like "audio" documents, MediaSPIP displays videos, whenever possible, using the HTML5 tag.
    One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name no names) and that each browser natively handles only certain video formats.
    Its main advantage, on the other hand, is that it benefits from native support for video in browsers, which makes it possible to do without Flash and (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

On other sites (2317)

  • python subprocess ffmpeg return code = 69

    13 June 2023, by Tim Chen

    I'm trying to call ffmpeg through subprocess.run(['ffmpeg', '-i', file_name, output_file_name], capture_output=True, text=True) in Python to convert an audio file coming from the front end into a WAV file. The backend code is as follows, using Python + FastAPI:

    


import os
import subprocess
import uuid

from fastapi import FastAPI, File, Request, UploadFile

app = FastAPI()

@app.post("/api/upload/convert")
async def convert_upload_file(request: Request, file: UploadFile = File(...)):
    # Save the uploaded audio to a uniquely named temporary file next to this module
    token = uuid.uuid4().hex
    tmpFileName = os.path.join(os.path.dirname(__file__), token)
    with open(tmpFileName, "wb") as buffer:
        buffer.write(await file.read())
    await file.seek(0)
    # Convert the saved file to WAV with ffmpeg
    output_path = tmpFileName + '-output.wav'
    command = ['ffmpeg', '-i', tmpFileName, output_path]
    result = subprocess.run(command, capture_output=True, text=True)


    


    This code usually works, but there are some scenarios where it doesn't. The audio is recorded by JS code (specifically navigator.mediaDevices.getUserMedia({audio: true})).
Audio recorded in Chrome on Windows converts normally and produces the WAV file, but audio recorded in Safari on iOS 15 for more than 3 seconds cannot be converted and fails with returncode=69. The error message is as follows:

    


    CompletedProcess(args=['ffmpeg', '-i', '5cfb52c503a646bda0f422b517c8014a', '5cfb52c503a646bda0f422b517c8014a-output.wav'], returncode=69, stdout='', stderr="
ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil      56. 70.100 / 56. 70.100
libavcodec     58.134.100 / 58.134.100
libavformat    58. 76.100 / 58. 76.100
libavdevice    58. 13.100 / 58. 13.100
libavfilter     7.110.100 /  7.110.100
libswscale      5.  9.100 /  5.  9.100
libswresample   3.  9.100 /  3.  9.100
libpostproc    55.  9.100 / 55.  9.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '5cfb52c503a646bda0f422b517c8014a':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    creation_time   : 2023-06-11T16:36:53.000000Z
  Duration: 00:00:07.06, start: 0.000000, bitrate: 187 kb/s
  Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 184 kb/s (default)
    Metadata:
      creation_time   : 2023-06-11T16:36:53.000000Z
      handler_name    : Core Media Audio
      vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (aac (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to '5cfb52c503a646bda0f422b517c8014a-output.wav':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    ISFT            : Lavf58.76.100
  Stream #0:0(und): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, mono, s16, 768 kb/s (default)
    Metadata:
      creation_time   : 2023-06-11T16:36:53.000000Z
      handler_name    : Core Media Audio
      vendor_id       : [0][0][0][0]
      encoder         : Lavc58.134.100 pcm_s16le
size=       2kB time=00:00:00.00 bitrate=N/A speed=N/A    
[aac @ 0x55f1f8f19fc0] Sample rate index in program config element does not match the sample rate index configured by the container.
[aac @ 0x55f1f8f19fc0] Too large remapped id is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[aac @ 0x55f1f8f19fc0] If you want to help, upload a sample of this file to https://streams.videolan.org/upload/ and contact the ffmpeg-devel mailing list. (ffmpeg-devel@ffmpeg.org)
Error while decoding stream #0:0: Not yet implemented in FFmpeg, patches welcome
[aac @ 0x55f1f8f19fc0] Multiple frames in a packet.
[aac @ 0x55f1f8f19fc0] Reserved bit set.
[aac @ 0x55f1f8f19fc0] Number of bands (18) exceeds limit (13).
Error while decoding stream #0:0: Invalid data found when processing input
[aac @ 0x55f1f8f19fc0] Reserved bit set.
[aac @ 0x55f1f8f19fc0] Prediction is not allowed in AAC-LC.
Error while decoding stream #0:0: Invalid data found when processing input
[aac @ 0x55f1f8f19fc0] Reserved bit set.


    


    For the failing recording, I tried running ffmpeg -i input output.wav on the command line after FastAPI had handled the request, and also calling subprocess.run(['ffmpeg', '-i', file_name, output_path], capture_output=True, text=True) separately; both succeeded. This means the file that ends up on disk must be valid, otherwise the later verification step would hit the same error.

    


    This confuses me. Is there some information I'm missing?
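
    One way to narrow this down (an editor's sketch, not part of the original question) is to run ffprobe on the temporary file the endpoint has just written and log its output next to ffmpeg's stderr; comparing that output for a Chrome recording and for an iOS 15 Safari recording shows whether the two uploads already differ before any conversion is attempted. The probe_upload helper below is hypothetical and assumes the same temporary file path used in the endpoint above.

import json
import subprocess

def probe_upload(path: str) -> dict:
    """Hypothetical helper: return ffprobe's view of the saved upload as a dict."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_format", "-show_streams", "-of", "json", path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # ffprobe itself rejects the file: the bytes written by the endpoint are already unusable
        raise RuntimeError(result.stderr)
    return json.loads(result.stdout)

    If ffprobe already fails on the Safari upload, the file is damaged before ffmpeg is ever called; if it succeeds, the difference lies in the encoded AAC stream that the decoder errors above complain about.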

    


  • Failed setup for format dxva2_vld: hwaccel initialisation returned error

    13 June 2023, by james

    My program is a video player built on ffmpeg, with d3d11va and dxva2 hardware acceleration enabled to decode video frames. Most videos play normally; only a small number of videos report this error. The log printed by ffmpeg is as follows:
I:2023-06-13 15:34:53 ms:887:No decoder device for codec found
I:2023-06-13 15:34:53 ms:887:Failed setup for format dxva2_vld: hwaccel initialisation returned error.
I:2023-06-13 15:34:53 ms:888:Format dxva2_vld not usable, retrying get_format() without it.
I:2023-06-13 15:34:53 ms:888:decode_slice_header error
I:2023-06-13 15:34:53 ms:888:no frame!

    


    If ffplay is used and d3d11va and dxva2 hardware acceleration is not enabled, the video can be played normally, and the printed video information is as follows:
PS D:\msys64\home\wangj\ffmpeg-4.4.1\buildout\bin> .\ffplay.exe -i D:\Desktop\13132023061300012_1_C1_34.mp4
ffplay version 4.4.1 Copyright (c) 2003-2021 the FFmpeg developers
configuration: --prefix=./buildout --arch=x86 --toolchain=msvc --enable-shared --disable-debug --enable-sdl2 --enable-dxva2 --enable-d3d11va
libavutil      56. 70.100 / 56. 70.100
libavcodec     58.134.100 / 58.134.100
libavformat    58. 76.100 / 58. 76.100
libavdevice    58. 13.100 / 58. 13.100
libavfilter     7.110.100 /  7.110.100
libswscale      5.  9.100 /  5.  9.100
libswresample   3.  9.100 /  3.  9.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'D:\Desktop\13132023061300012_1_C1_34.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf56.38.102
  Duration: 00:00:44.00, start: 0.000000, bitrate: 1706 kb/s
  Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 1920x1080, 1705 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]

    


    Is this pixel format not supported by d3d11va and dxva2? What can I do to get ffplay to play this video file with hardware acceleration?

    


    When d3d11va and dxva2 hardware acceleration is enabled, most of the videos that do play are also in the yuv420p pixel format, so why can't this video be played?
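
    As a side note (an editor's sketch, not from the original post): hardware decoder setup depends on more than the pixel format, so dumping the exact codec, profile, level and pix_fmt of the failing file and comparing them with the same fields of a file that does decode through dxva2/d3d11va shows what actually differs. The sketch below simply reuses the file path from the question.

import subprocess

# Dump the stream fields that hardware decoders are sensitive to for the failing file.
cmd = [
    "ffprobe", "-v", "error", "-select_streams", "v:0",
    "-show_entries", "stream=codec_name,profile,level,pix_fmt,width,height",
    "-of", "default=noprint_wrappers=1",
    r"D:\Desktop\13132023061300012_1_C1_34.mp4",
]
print(subprocess.run(cmd, capture_output=True, text=True).stdout)

    If these fields match a file that does play with hardware acceleration, the problem is probably on the device/driver side rather than in the stream parameters.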

    


  • How can I concat several videos generated by MediaRecorder slices

    28 May 2023, by Bruno Francisco

    I have the following frontend code:

    


    const mediaRecorder = new MediaRecorder(stream, {
        mimeType: 'video/webm'
    });

    mediaRecorder.start(10000);

    mediaRecorder.ondataavailable = (e) => {
        const formData = new FormData();

        formData.append('video', new Blob([e.data], { 'type' : 'video/webm;' }));

        fetch('http://localhost/api/session/12/video/stream', {
            method: 'POST',
            body: formData,
        }).then(() => {
            console.log('success')
        }).catch((e) => {
            console.log('error')
            console.log(e);
        });
    };


    


    Then, in the backend, I'm saving each 10-second video into a folder. Whenever the user finishes their session, we would like to stitch all the videos together.

    


    If the user has recorded for 20 seconds, we will have 2 videos of 10 seconds.

    


    I have the following files in my folder:

    


    erKa3MVTuDfnuDUQUhUd2huUaCKfihtm8thc0KX0.bin
hAhJfVNxMEJK2MsyR99a7t7UkT3pjHkmdN1j2C9G.mkv


    


    I'm assuming that the first slice generated by MediaRecorder includes the mime type, while the subsequent parts do not, which is why a .bin file is generated.

    


    Now, I run the following command to stitch all parts together:

    


    ffmpeg -i erKa3MVTuDfnuDUQUhUd2huUaCKfihtm8thc0KX0.bin -i hAhJfVNxMEJK2MsyR99a7t7UkT3pjHkmdN1j2C9G.mkv -filter_complex "concat=n=2:v=0:a=1" -vn -y final-video.mp4


    


    Then I get the following error:

    


    ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
  configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --arch=arm64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 70.100 / 56. 70.100
  libavcodec     58.134.100 / 58.134.100
  libavformat    58. 76.100 / 58. 76.100
  libavdevice    58. 13.100 / 58. 13.100
  libavfilter     7.110.100 /  7.110.100
  libswscale      5.  9.100 /  5.  9.100
  libswresample   3.  9.100 /  3.  9.100
  libpostproc    55.  9.100 / 55.  9.100
[h264 @ 0xaaaaf1ad3a70] non-existing PPS 0 referenced
    Last message repeated 1 times

...

Input #0, h264, from 'erKa3MVTuDfnuDUQUhUd2huUaCKfihtm8thc0KX0.bin':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: h264 (Baseline), yuv420p(tv, bt709, progressive), 1920x1080, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Input #1, matroska,webm, from 'hAhJfVNxMEJK2MsyR99a7t7UkT3pjHkmdN1j2C9G.mkv':
  Metadata:
    encoder         : Chrome
  Duration: N/A, start: 0.000000, bitrate: N/A
  Stream #1:0(eng): Video: h264 (Baseline), yuv420p(tv, bt709, progressive), 1920x1080, SAR 1:1 DAR 16:9, 29.33 fps, 29.33 tbr, 1k tbn, 2k tbc (default)
Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_concat_0


    


    Is there any way to stitch all the files together? Do I have to send the mime type each time ondataavailable is called?
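
    One approach that is often used with MediaRecorder slices (an editor's sketch under assumptions, not a confirmed answer): every ondataavailable chunk after the first is a continuation of the same WebM stream rather than a standalone file, so the backend can append the uploaded parts byte for byte in arrival order and then hand the rebuilt stream to ffmpeg once. The sketch below assumes the .mkv slice (the one carrying the container header, per the assumption above) arrived first and that the backend stored the bytes unmodified.

import subprocess

# Slices in the order they were uploaded; only the first one carries the WebM header.
slices = [
    "hAhJfVNxMEJK2MsyR99a7t7UkT3pjHkmdN1j2C9G.mkv",
    "erKa3MVTuDfnuDUQUhUd2huUaCKfihtm8thc0KX0.bin",
]

# Rebuild a single continuous stream by appending the raw bytes in order.
with open("combined.webm", "wb") as out:
    for name in slices:
        with open(name, "rb") as part:
            out.write(part.read())

# Read the rebuilt stream once and write the final file in a single ffmpeg call.
subprocess.run(["ffmpeg", "-y", "-i", "combined.webm", "final-video.mp4"], check=True)

    If the slices really do form one continuous stream, this avoids the concat filter entirely; if ffmpeg still reports missing PPS data, the parts were most likely altered or reordered on upload.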