Advanced search

Media (1)

Keyword: - Tags -/portrait

Other articles (93)

  • Use it, talk about it, criticize it

    10 April 2011

    The first thing to do is to talk about it, either directly with the people involved in its development, or with people around you, to convince new users to adopt it.
    The larger the community, the faster the software will evolve ...
    A discussion list is available for any exchange between users.

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (7064)

  • ffmpeg AudioSegment error when receiving audio chunks in a Socket.IO server in Python

    26 January 2024, by a_crszkvc30Last_NameCol

    I want to send each audio chunk every minute.
    This is my test code: I want to save the full audio file and the chunk files, then combine the two audio files. The blob sent when the stop button is pressed is handled correctly, but the chunks sent on a timer are not handled correctly by the Python server.
    Here is the Python server code, using Socket.IO:

    


    import os
    from io import BytesIO

    from pydub import AudioSegment  # pydub decodes/encodes through ffmpeg


    def handle_voice(sid, data):  # data: the blob received from the client's 'voice' event
        # Load the audio data in memory using BytesIO
        audio_segment = AudioSegment.from_file(BytesIO(data), format="webm")

        directory = "dddd"
        # directory = str(names_sid.get(sid))
        if not os.path.exists(directory):
            os.makedirs(directory)

        # Save as a WAV file
        file_path = os.path.join(directory, f'{sid}.wav')
        audio_segment.export(file_path, format='wav')
        print('audio file saved')
 


    


    And here is the client:

    <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.5.2/socket.io.js"></script>

    <script>
        var socket = io('http://127.0.0.1:5000');
        const record = document.getElementById("record")
        const stop = document.getElementById("stop")
        const soundClips = document.getElementById("sound-clips")
        const chkHearMic = document.getElementById("chk-hear-mic")

        const audioCtx = new(window.AudioContext || window.webkitAudioContext)() // define the audio context

        const analyser = audioCtx.createAnalyser()
        //        const distortion = audioCtx.createWaveShaper()
        //        const gainNode = audioCtx.createGain()
        //        const biquadFilter = audioCtx.createBiquadFilter()

        function makeSound(stream) {
            const source = audioCtx.createMediaStreamSource(stream)
            socket.connect()
            source.connect(analyser)
            //            analyser.connect(distortion)
            //            distortion.connect(biquadFilter)
            //            biquadFilter.connect(gainNode)
            //            gainNode.connect(audioCtx.destination) // connecting the different audio graph nodes together
            analyser.connect(audioCtx.destination)
        }

        if (navigator.mediaDevices) {
            console.log('getUserMedia supported.')

            const constraints = {
                audio: true
            }
            let chunks = []

            navigator.mediaDevices.getUserMedia(constraints)
                .then(stream => {

                    const mediaRecorder = new MediaRecorder(stream)

                    chkHearMic.onchange = e => {
                        if(e.target.checked == true) {
                            audioCtx.resume()
                            makeSound(stream)
                        } else {
                            audioCtx.suspend()
                        }
                    }

                    record.onclick = () => {
                        mediaRecorder.start(1000)
                        console.log(mediaRecorder.state)
                        console.log("recorder started")
                        record.style.background = "red"
                        record.style.color = "black"
                    }

                    stop.onclick = () => {
                        mediaRecorder.stop()
                        console.log(mediaRecorder.state)
                        console.log("recorder stopped")
                        record.style.background = ""
                        record.style.color = ""
                    }

                    mediaRecorder.onstop = e => {
                        console.log("data available after MediaRecorder.stop() called.")
                        const bb = new Blob(chunks, { 'type' : 'audio/wav' })
                        socket.emit('voice', bb)
                        const clipName = prompt("Enter a title for the audio clip.", new Date())

                        const clipContainer = document.createElement('article')
                        const clipLabel = document.createElement('p')
                        const audio = document.createElement('audio')
                        const deleteButton = document.createElement('button')

                        clipContainer.classList.add('clip')
                        audio.setAttribute('controls', '')
                        deleteButton.innerHTML = "Delete"
                        clipLabel.innerHTML = clipName

                        clipContainer.appendChild(audio)
                        clipContainer.appendChild(clipLabel)
                        clipContainer.appendChild(deleteButton)
                        soundClips.appendChild(clipContainer)

                        audio.controls = true
                        const blob = new Blob(chunks, {
                            'type': 'audio/ogg codecs=opus'
                        })

                        chunks = []
                        const audioURL = URL.createObjectURL(blob)
                        audio.src = audioURL
                        console.log("recorder stopped")

                        deleteButton.onclick = e => {
                            evtTgt = e.target
                            evtTgt.parentNode.parentNode.removeChild(evtTgt.parentNode)
                        }
                    }

                    mediaRecorder.ondataavailable = function(e) {
                        chunks.push(e.data)
                        if (chunks.length >= 5)
                        {
                            const bloddb = new Blob(chunks, { 'type' : 'audio/wav' })
                            socket.emit('voice', bloddb)

                            chunks = []
                        }
                        mediaRecorder.sendData = function(buffer) {
                            const bloddb = new Blob(buffer, { 'type' : 'audio/wav' })
                            socket.emit('voice', bloddb)
                        }
                    };
                })
                .catch(err => {
                    console.log('The following error occurred: ' + err)
                })
        }
    </script>

Task exception was never retrieved
future: <Task finished coro=<InstrumentedAsyncServer._handle_event_internal()> exception=CouldntDecodeError(...)>
Traceback (most recent call last):
  File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\socketio\async_admin.py", line 276, in _handle_event_internal
    ret = await self.sio.__handle_event_internal(server, sid, eio_sid,
  File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\socketio\async_server.py", line 597, in _handle_event_internal
    r = await server._trigger_event(data[0], namespace, sid, *data[1:])
  File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\socketio\async_server.py", line 635, in _trigger_event
    ret = handler(*args)
  File "f:\fastapi-socketio-wb38\Python-Javascript-Websocket-Video-Streaming--main\poom2.py", line 153, in handle_voice
    audio_segment = AudioSegment.from_file(BytesIO(data), format="webm")
  File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\pydub\audio_segment.py", line 773, in from_file
    raise CouldntDecodeError(
pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 3199971767

Output from ffmpeg/avlib:

ffmpeg version 6.1.1-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
  libavutil      58. 29.100 / 58. 29.100
  libavcodec     60. 31.102 / 60. 31.102
  libavformat    60. 16.100 / 60. 16.100
  libavdevice    60.  3.100 / 60.  3.100
  libavfilter     9. 12.100 /  9. 12.100
  libswscale      7.  5.100 /  7.  5.100
  libswresample   4. 12.100 /  4. 12.100
  libpostproc    57.  3.100 / 57.  3.100
[cache @ 000001d9828efe40] Inner protocol failed to seekback end : -40
[matroska,webm @ 000001d9828efa00] EBML header parsing failed
[cache @ 000001d9828efe40] Statistics, cache hits:0 cache misses:3
[in#0 @ 000001d9828da3c0] Error opening input: Invalid data found when processing input
Error opening input file cache:pipe:0.
Error opening input files: Invalid data found when processing input

    I'm using ffmpeg-6.1.1-full_build. I don't know why this error occurs: the event sent when the stop button is pressed is handled correctly, but the chunked data is not handled correctly by the Python server. Sorry for my poor English.
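
    One way to read the traceback above (an interpretation, not a confirmed fix): MediaRecorder writes the WebM/EBML header only into the first blob it emits, so a batch of later chunks is a headerless fragment, which matches the "EBML header parsing failed" error when the server calls AudioSegment.from_file(..., format="webm") on it. A minimal client-side sketch of one workaround is below; it keeps the first chunk and prepends it to every upload so each emitted blob is a self-contained WebM stream. It assumes the same socket and mediaRecorder objects as the client code above; headerChunk and pending are names introduced here.

        // TypeScript sketch: prepend the saved header chunk to every batched upload.
        let headerChunk: Blob | null = null;
        let pending: Blob[] = [];

        mediaRecorder.ondataavailable = (e: BlobEvent) => {
            if (headerChunk === null) {
                headerChunk = e.data;      // first blob: carries the EBML header / init segment
            }
            pending.push(e.data);

            if (pending.length >= 5) {     // same batching threshold as the original client
                const parts = pending[0] === headerChunk ? pending : [headerChunk, ...pending];
                socket.emit('voice', new Blob(parts, { type: 'audio/webm' }));
                pending = [];
            }
        };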


  • Transcode webcam blob to RTMP using ffmpeg.wasm

    29 November 2023, by hassan moradnezhad

    I'm trying to transcode webcam blob data to an RTMP server from the browser using ffmpeg.wasm.
    First, I create a MediaStream:

        const stream = await navigator.mediaDevices.getUserMedia({
            video: true,
        });

    Then I create a MediaRecorder:

        const recorder = new MediaRecorder(stream, {mimeType: "video/webm; codecs:vp9"});
        recorder.ondataavailable = handleDataAvailable;
        recorder.start(0)

    When data is available, I call a function named handleDataAvailable. Here is the function:

    const handleDataAvailable = (event: BlobEvent) => {
        console.log("data-available");
        if (event.data.size > 0) {
            recordedChunksRef.current.push(event.data);
            transcode(event.data)
        }
    };

    In the code above I use another function called transcode; its goal is to send the data to the RTMP server using ffmpeg.wasm. Here it is:

    const transcode = async (inputVideo: Blob | undefined) => {
        const ffmpeg = ffmpegRef.current;
        const fetchFileOutput = await fetchFile(inputVideo)
        ffmpeg?.writeFile('input.webm', fetchFileOutput)

        const data = await ffmpeg?.readFile('input.webm');
        if (videoRef.current) {
            videoRef.current.src =
                URL.createObjectURL(new Blob([(data as any)?.buffer], {type: 'video/webm'}));
        }

        // execute by node-media-server config 1
        await ffmpeg?.exec(['-re', '-i', 'input.webm', '-c', 'copy', '-f', 'flv', "rtmp://localhost:1935/live/ttt"])

        // execute by node-media-server config 2
        // await ffmpeg?.exec(['-re', '-i', 'input.webm', '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency', '-c:a', 'aac', '-ar', '44100', '-f', 'flv', 'rtmp://localhost:1935/live/ttt']);

        // execute by stack-over-flow config 1
        // await ffmpeg?.exec(['-re', '-i', 'input.webm', '-c:v', 'h264', '-c:a', 'aac', '-f', 'flv', "rtmp://localhost:1935/live/ttt"]);

        // execute by stack-over-flow config 2
        // await ffmpeg?.exec(['-i', 'input.webm', '-c:v', 'libx264', '-flags:v', '+global_header', '-c:a', 'aac', '-ac', '2', '-f', 'flv', "rtmp://localhost:1935/live/ttt"]);

        // execute by stack-over-flow config 3
        // await ffmpeg?.exec(['-i', 'input.webm', '-acodec', 'aac', '-ac', '2', '-strict', 'experimental', '-ab', '160k', '-vcodec', 'libx264', '-preset', 'slow', '-profile:v', 'baseline', '-level', '30', '-maxrate', '10000000', '-bufsize', '10000000', '-b', '1000k', '-f', 'flv', 'rtmp://localhost:1935/live/ttt']);

    }

    After running the app and starting to stream, the console logs are as below:

ffmpeg >>>  ffmpeg version 5.1.3 Copyright (c) 2000-2022 the FFmpeg developers
ffmpeg >>>    built with emcc (Emscripten gcc/clang-like replacement + linker emulating GNU ld) 3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)
ffmpeg >>>    configuration: --target-os=none --arch=x86_32 --enable-cross-compile --disable-asm --disable-stripping --disable-programs --disable-doc --disable-debug --disable-runtime-cpudetect --disable-autodetect --nm=emnm --ar=emar --ranlib=emranlib --cc=emcc --cxx=em++ --objcc=emcc --dep-cc=emcc --extra-cflags='-I/opt/include -O3 -msimd128' --extra-cxxflags='-I/opt/include -O3 -msimd128' --disable-pthreads --disable-w32threads --disable-os2threads --enable-gpl --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libopus --enable-zlib --enable-libwebp --enable-libfreetype --enable-libfribidi --enable-libass --enable-libzimg
ffmpeg >>>    libavutil      57. 28.100 / 57. 28.100
ffmpeg >>>    libavcodec     59. 37.100 / 59. 37.100
ffmpeg >>>    libavformat    59. 27.100 / 59. 27.100
ffmpeg >>>    libavdevice    59.  7.100 / 59.  7.100
ffmpeg >>>    libavfilter     8. 44.100 /  8. 44.100
ffmpeg >>>    libswscale      6.  7.100 /  6.  7.100
ffmpeg >>>    libswresample   4.  7.100 /  4.  7.100
ffmpeg >>>    libpostproc    56.  6.100 / 56.  6.100
ffmpeg >>>  Input #0, matroska,webm, from 'input.webm':
ffmpeg >>>    Metadata:
ffmpeg >>>      encoder         : QTmuxingAppLibWebM-0.0.1
ffmpeg >>>    Duration: N/A, start: 0.000000, bitrate: N/A
ffmpeg >>>    Stream #0:0(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3, 15.50 tbr, 1k tbn (default)

    The problem occurs when ffmpeg.wasm tries to execute the last command:
    await ffmpeg?.exec(['-re', '-i', 'input.webm', '-c', 'copy', '-f', 'flv', "rtmp://localhost:1935/live/ttt"])
    It just issues a GET request (further details on that request are below). As you can see, I have tried many different sets of arguments with ffmpeg?.exec, but none of them work.

    The network tab in the browser, after ffmpeg.wasm executes the command, looks like this:

    [screenshot: browser network tab]

    It sends a GET request to ws://localhost:1935/ and nothing happens after that.

    For the backend I use node-media-server; here are its output logs while ffmpeg.wasm is executing the args:

11/28/2023 19:33:18 55301 [INFO] [rtmp disconnect] id=JL569YOF
[NodeEvent on doneConnect] id=JL569YOF args=undefined

    Finally, here are my questions:

      • How can I achieve this?
      • Is it possible to stream a webcam to an RTMP server this way? (One possible approach is sketched after this list.)
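
    One likely reading of the behaviour above (an assumption, not confirmed in the question): ffmpeg.wasm runs inside the browser sandbox, which cannot open raw TCP connections, so Emscripten's socket layer falls back to a WebSocket handshake; that would explain why the only visible traffic is a GET to ws://localhost:1935, which node-media-server does not understand as RTMP. A common alternative is to keep ffmpeg.wasm out of the network path: ship the MediaRecorder chunks to a small relay over a WebSocket and let a native ffmpeg on the server push to rtmp://localhost:1935/live/ttt. A browser-side sketch follows; the ws://localhost:8000/ingest endpoint and the relay server itself are hypothetical.

        // TypeScript sketch of the browser side of a WebSocket relay (relay server not shown).
        async function startRelay(): Promise<void> {
            const ws = new WebSocket("ws://localhost:8000/ingest");  // hypothetical relay endpoint

            const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
            const recorder = new MediaRecorder(stream, { mimeType: "video/webm; codecs=vp9" });

            recorder.ondataavailable = (event: BlobEvent) => {
                // Forward each WebM chunk to the relay as soon as it is available.
                if (event.data.size > 0 && ws.readyState === WebSocket.OPEN) {
                    ws.send(event.data);
                }
            };

            ws.onopen = () => recorder.start(1000);  // emit a chunk roughly every second
        }

    On the server, the relay would pipe the incoming bytes into a native ffmpeg run with essentially the same arguments the question already tries (for example -i pipe:0 -c:v libx264 -c:a aac -f flv rtmp://localhost:1935/live/ttt), just executed outside the wasm build.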

  • ffmpeg.wasm - How to do literally anything with a blob url

    24 November 2024, by SeriousLee

    I'm using ffmpeg.wasm for the first time and I can't get anything working beyond loading it. I have this function that does nothing in particular (I took it from the Vite + React example in the docs and modified it slightly), and all I want is to pass it a blob URL like blob:http://localhost:5173/c7a9ea7c-aa26-4f4f-9c80-11b8aef3e81f, run it through the function, and get anything back. Instead, it hangs on the ffmpeg.exec command and never completes. And yes, I've confirmed that the input blob works: it's an 8 MB, 12-second mp4 clip.

    Here's the function:

    const processOutputVideo = async (videoURL) => {
      const ffmpeg = ffmpegRef.current;

      await ffmpeg.writeFile("input.mp4", await fetchFile(videoURL));
      await ffmpeg.exec(["-i", "input.mp4", "output.mp4"]);

      const fileData = await ffmpeg.readFile("output.mp4");
      const blob = new Blob([fileData.buffer], { type: "video/mp4" });
      const blobUrl = URL.createObjectURL(blob);

      return blobUrl;
    };

    And here are the ffmpeg logs from my terminal:

[FFMPEG stderr] ffmpeg version 5.1.4 Copyright (c) 2000-2023 the FFmpeg developers
[FFMPEG stderr]   built with emcc (Emscripten gcc/clang-like replacement + linker emulating GNU ld) 3.1.40 (5c27e79dd0a9c4e27ef2326841698cdd4f6b5784)
[FFMPEG stderr]   configuration: --target-os=none --arch=x86_32 --enable-cross-compile --disable-asm --disable-stripping --disable-programs --disable-doc --disable-debug --disable-runtime-cpudetect --disable-autodetect --nm=emnm --ar=emar --ranlib=emranlib --cc=emcc --cxx=em++ --objcc=emcc --dep-cc=emcc --extra-cflags='-I/opt/include -O3 -msimd128 -sUSE_PTHREADS -pthread' --extra-cxxflags='-I/opt/include -O3 -msimd128 -sUSE_PTHREADS -pthread' --enable-gpl --enable-libx264 --enable-libx265 --enable-libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libopus --enable-zlib --enable-libwebp --enable-libfreetype --enable-libfribidi --enable-libass --enable-libzimg
[FFMPEG stderr]   libavutil      57. 28.100 / 57. 28.100
[FFMPEG stderr]   libavcodec     59. 37.100 / 59. 37.100
[FFMPEG stderr]   libavformat    59. 27.100 / 59. 27.100
[FFMPEG stderr]   libavdevice    59.  7.100 / 59.  7.100
[FFMPEG stderr]   libavfilter     8. 44.100 /  8. 44.100
[FFMPEG stderr]   libswscale      6.  7.100 /  6.  7.100
[FFMPEG stderr]   libswresample   4.  7.100 /  4.  7.100
[FFMPEG stderr]   libpostproc    56.  6.100 / 56.  6.100
[FFMPEG stderr] Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
[FFMPEG stderr]   Metadata:
[FFMPEG stderr]     major_brand     : mp42
[FFMPEG stderr]     minor_version   : 0
[FFMPEG stderr]     compatible_brands: mp42mp41isomavc1
[FFMPEG stderr]     creation_time   : 2019-03-15T17:39:05.000000Z
[FFMPEG stderr]   Duration: 00:00:12.82, start: 0.000000, bitrate: 5124 kb/s
[FFMPEG stderr]   Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1920x1080, 4985 kb/s, 29.97 fps, 29.97 tbr, 30k tbn (default)
[FFMPEG stderr]     Metadata:
[FFMPEG stderr]       creation_time   : 2019-03-15T17:39:05.000000Z
[FFMPEG stderr]       handler_name    : L-SMASH Video Handler
[FFMPEG stderr]       vendor_id       : [0][0][0][0]
[FFMPEG stderr]       encoder         : AVC Coding
[FFMPEG stderr]   Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 137 kb/s (default)
[FFMPEG stderr]     Metadata:
[FFMPEG stderr]       creation_time   : 2019-03-15T17:39:05.000000Z
[FFMPEG stderr]       handler_name    : L-SMASH Audio Handler
[FFMPEG stderr]       vendor_id       : [0][0][0][0]
[FFMPEG stderr] Stream mapping:
[FFMPEG stderr]   Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
[FFMPEG stderr]   Stream #0:1 -> #0:1 (aac (native) -> aac (native))
[FFMPEG stderr] [libx264 @ 0x154e4f0] using cpu capabilities: none!
[FFMPEG stderr] [libx264 @ 0x154e4f0] profile High, level 4.0, 4:2:0, 8-bit
[FFMPEG stderr] [libx264 @ 0x154e4f0] 264 - core 164 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00

    And it just hangs there. When I use the example video URL from the official example (https://raw.githubusercontent.com/ffmpegwasm/testdata/master/video-15s.avi), it doesn't hang: the function completes and returns a blob URL in the same format as the first blob URL I showed above. In that case the ffmpeg output in my console looks like this:

[FFMPEG stderr] [libx264 @ 0xdf3000] frame P:160   Avg QP:23.62  size:  1512
[FFMPEG stderr] [libx264 @ 0xdf3000] frame B:385   Avg QP:26.75  size:   589
[FFMPEG stderr] [libx264 @ 0xdf3000] consecutive B-frames:  5.5%  3.6%  0.0% 90.9%
[FFMPEG stderr] [libx264 @ 0xdf3000] mb I  I16..4: 12.6% 87.4%  0.0%
[FFMPEG stderr] [libx264 @ 0xdf3000] mb P  I16..4:  3.8% 47.5%  1.6%  P16..4: 12.9%  7.4%  5.0%  0.0%  0.0%    skip:21.7%
[FFMPEG stderr] [libx264 @ 0xdf3000] mb B  I16..4:  1.2% 10.3%  0.4%  B16..8: 22.3%  6.9%  1.4%  direct: 2.7%  skip:54.8%  L0:46.9% L1:40.2% BI:12.9%
[FFMPEG stderr] [libx264 @ 0xdf3000] 8x8 transform intra:88.7% inter:74.7%
[FFMPEG stderr] [libx264 @ 0xdf3000] coded y,uvDC,uvAC intra: 68.3% 0.0% 0.0% inter: 11.8% 0.0% 0.0%
[FFMPEG stderr] [libx264 @ 0xdf3000] i16 v,h,dc,p: 33% 40% 24%  3%
[FFMPEG stderr] [libx264 @ 0xdf3000] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 15% 26% 52%  2%  1%  1%  1%  1%  3%
[FFMPEG stderr] [libx264 @ 0xdf3000] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 27% 21% 20%  5%  5%  5%  4%  6%  5%
[FFMPEG stderr] [libx264 @ 0xdf3000] i8c dc,h,v,p: 100%  0%  0%  0%
[FFMPEG stderr] [libx264 @ 0xdf3000] Weighted P-Frames: Y:12.5% UV:0.0%
[FFMPEG stderr] [libx264 @ 0xdf3000] ref P L0: 48.9% 12.5% 22.3% 14.7%  1.6%
[FFMPEG stderr] [libx264 @ 0xdf3000] ref B L0: 77.5% 15.7%  6.8%
[FFMPEG stderr] [libx264 @ 0xdf3000] ref B L1: 90.9%  9.1%
[FFMPEG stderr] [libx264 @ 0xdf3000] kb/s:242.65
[FFMPEG stderr] Aborted()

    Where am I going wrong, and what should I convert my input blob into? Just FYI, ChatGPT has been completely useless at helping me solve this.
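
    Two low-risk things to try, sketched below under the assumption that the hang comes from the full libx264 re-encode the log shows starting (a single-threaded wasm encode of a 1080p clip can take a very long time, and the multi-threaded build needs SharedArrayBuffer with COOP/COEP headers): attach a log listener so progress stays visible, and remux with -c copy instead of re-encoding when a plain copy is enough. The function below is a variation of processOutputVideo above; ffmpegRef is the same ref used there, and the new name processOutputVideoCopy is introduced here.

        // TypeScript sketch: same flow as processOutputVideo, with logging and stream copy.
        import { fetchFile } from "@ffmpeg/util";

        const processOutputVideoCopy = async (videoURL: string): Promise<string> => {
            const ffmpeg = ffmpegRef.current;

            // Surface ffmpeg's own log lines so a long encode is distinguishable from a hang.
            ffmpeg.on("log", ({ message }) => console.log("[FFMPEG]", message));

            await ffmpeg.writeFile("input.mp4", await fetchFile(videoURL));
            // "-c copy" remuxes without touching the h264/aac streams, so no libx264 encode runs.
            await ffmpeg.exec(["-i", "input.mp4", "-c", "copy", "output.mp4"]);

            const fileData = await ffmpeg.readFile("output.mp4");
            const blob = new Blob([(fileData as Uint8Array).buffer], { type: "video/mp4" });
            return URL.createObjectURL(blob);
        };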
