Advanced search

Media (0)

Word: - Tags -/gis

No media matching your criteria is available on this site.

Other articles (109)

  • User profiles

    12 April 2011

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also allows certain user-related behaviours to be changed (see its documentation for more information).
    It is also possible to add fields to authors by installing the plugins Champs Extras 2 and Interface pour Champs Extras.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (11048)

  • ffmpeg - Having trouble syncing up audio and video

    15 February 2017, by zyeek

    I have a webcam and a separate mic. I want to record what is happening.

    It almost works; however, the audio seems to play too quickly, and parts are missing when it plays over the video.

    This is the command I am currently using to get it partially working:

    ffmpeg -thread_queue_size 1024 -f alsa -ac 1 -i plughw:1,0 -f video4linux2 -thread_queue_size 1024 -re -s 1280x720 -i /dev/video0 -r 25 -f avi -q:a 2 -acodec libmp3lame -ab 96k out.mp4

    I have tried other arguments, but I am unsure whether the problem is the formats I am using or incorrect parameter settings.

    Also, the next part would be how to stream it. Every time I try going through RTP it complains about multiple streams. I tried doing HTML as well, but didn't like the format: html://localhost:50000/live_feed or rts://localhost:5000

    Edit:

    I am running this on a Raspberry Pi 3.
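
    One direction that is often suggested for this kind of drift is to drop -re (it is intended for pacing file input, not live capture devices), to let the muxer follow the .mp4 extension instead of forcing -f avi, and to let ffmpeg stretch the audio to match the timestamps. A minimal sketch, keeping the devices, size, frame rate and bitrate from the command above; the remaining options are assumptions to adapt, not a confirmed fix:

    # Sketch only: -re removed, mp4 muxer inferred from the extension,
    # audio resampled so it follows the timestamps instead of drifting.
    ffmpeg -f alsa -thread_queue_size 1024 -ac 1 -i plughw:1,0 \
           -f video4linux2 -thread_queue_size 1024 -s 1280x720 -i /dev/video0 \
           -r 25 -c:v libx264 -preset veryfast \
           -c:a libmp3lame -b:a 96k -af aresample=async=1 \
           out.mp4

    If the drift persists, recording audio and video to separate files and muxing them afterwards is another common fallback, especially on a Raspberry Pi where encoding headroom is limited.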

  • FFmpeg: split a video with an equal number of frames in each split?

    17 January 2017, by Gurinderbeer Singh

    I am using FFmpeg to split a video file with the following command:

    ffmpeg -i input.mp4 -c copy -segment_times 600,600 -f segment out%d.mp4

    This command divides the video based on time. But even though the output splits have the same duration, the number of frames differs between them.

    Is there a way of splitting a video file without re-encoding so that the number of frames is equal in the split files?

    Even if it requires some tool other than FFmpeg.

    For example: input.mp4, with a duration of 200 seconds, has 5000 frames.

    Can we split this into:

    input1.mp4 having 2500 frames
    and input2.mp4 having 2500 frames.

    It doesn't matter if the durations of the output splits are different.

    Please help!
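
    For what it is worth, ffmpeg's segment muxer also has a -segment_frames option that splits on frame numbers rather than times, with the caveat that a new segment can only start at a keyframe, so exact 2500-frame halves are not guaranteed without re-encoding. A sketch using the 2500-frame example above; the options should be checked against the segment muxer documentation for the ffmpeg version in use:

    # Stream copy: a new segment starts at the first keyframe at or after frame 2500,
    # so the split is only approximate unless a keyframe happens to fall exactly there.
    ffmpeg -i input.mp4 -c copy -f segment -segment_frames 2500 out%d.mp4

    # Re-encoding the video makes the boundary exact by forcing a keyframe at frame 2500.
    ffmpeg -i input.mp4 -c:v libx264 -force_key_frames "expr:eq(n,2500)" -c:a copy \
           -f segment -segment_frames 2500 out%d.mp4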

  • ffmpeg AudioSegment error when getting audio chunks in a Socket.IO server in Python

    26 January 2024, by a_crszkvc30Last_NameCol

    I want to send an audio chunk every minute.
    This is test code: I want to save the full audio file as well as the chunk files, and then combine the two audio files. The stop button works correctly, but the timed sending of chunks does not work on the Python server.
    Here is the Python server code, using Socket.IO:

    


    import os
    from io import BytesIO

    from pydub import AudioSegment

    def handle_voice(sid, data):  # the data comes in as a blob
        # Load the audio data in memory using BytesIO
        audio_segment = AudioSegment.from_file(BytesIO(data), format="webm")
        directory = "dddd"
        # directory = str(names_sid.get(sid))
        if not os.path.exists(directory):
            os.makedirs(directory)

        # Save it as an audio file
        file_path = os.path.join(directory, f'{sid}.wav')
        audio_segment.export(file_path, format='wav')
        print('Audio file saved')
 


    


    And here is the client:

    


    <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.5.2/socket.io.js"></script>

    <script>
        var socket = io('http://127.0.0.1:5000');
        const record = document.getElementById("record")
        const stop = document.getElementById("stop")
        const soundClips = document.getElementById("sound-clips")
        const chkHearMic = document.getElementById("chk-hear-mic")

        const audioCtx = new (window.AudioContext || window.webkitAudioContext)() // define the audio context

        const analyser = audioCtx.createAnalyser()
        // const distortion = audioCtx.createWaveShaper()
        // const gainNode = audioCtx.createGain()
        // const biquadFilter = audioCtx.createBiquadFilter()

        function makeSound(stream) {
            const source = audioCtx.createMediaStreamSource(stream)
            socket.connect()
            source.connect(analyser)
            // analyser.connect(distortion)
            // distortion.connect(biquadFilter)
            // biquadFilter.connect(gainNode)
            // gainNode.connect(audioCtx.destination) // connecting the different audio graph nodes together
            analyser.connect(audioCtx.destination)
        }

        if (navigator.mediaDevices) {
            console.log('getUserMedia supported.')

            const constraints = {
                audio: true
            }
            let chunks = []

            navigator.mediaDevices.getUserMedia(constraints)
                .then(stream => {
                    const mediaRecorder = new MediaRecorder(stream)

                    chkHearMic.onchange = e => {
                        if (e.target.checked == true) {
                            audioCtx.resume()
                            makeSound(stream)
                        } else {
                            audioCtx.suspend()
                        }
                    }

                    record.onclick = () => {
                        mediaRecorder.start(1000)
                        console.log(mediaRecorder.state)
                        console.log("recorder started")
                        record.style.background = "red"
                        record.style.color = "black"
                    }

                    stop.onclick = () => {
                        mediaRecorder.stop()
                        console.log(mediaRecorder.state)
                        console.log("recorder stopped")
                        record.style.background = ""
                        record.style.color = ""
                    }

                    mediaRecorder.onstop = e => {
                        console.log("data available after MediaRecorder.stop() called.")
                        const bb = new Blob(chunks, { 'type': 'audio/wav' })
                        socket.emit('voice', bb)
                        const clipName = prompt("Enter a title for the audio file.", new Date())

                        const clipContainer = document.createElement('article')
                        const clipLabel = document.createElement('p')
                        const audio = document.createElement('audio')
                        const deleteButton = document.createElement('button')

                        clipContainer.classList.add('clip')
                        audio.setAttribute('controls', '')
                        deleteButton.innerHTML = "Delete"
                        clipLabel.innerHTML = clipName

                        clipContainer.appendChild(audio)
                        clipContainer.appendChild(clipLabel)
                        clipContainer.appendChild(deleteButton)
                        soundClips.appendChild(clipContainer)

                        audio.controls = true
                        const blob = new Blob(chunks, {
                            'type': 'audio/ogg codecs=opus'
                        })

                        chunks = []
                        const audioURL = URL.createObjectURL(blob)
                        audio.src = audioURL
                        console.log("recorder stopped")

                        deleteButton.onclick = e => {
                            evtTgt = e.target
                            evtTgt.parentNode.parentNode.removeChild(evtTgt.parentNode)
                        }
                    }

                    mediaRecorder.ondataavailable = function(e) {
                        chunks.push(e.data)
                        if (chunks.length >= 5) {
                            const bloddb = new Blob(chunks, { 'type': 'audio/wav' })
                            socket.emit('voice', bloddb)

                            chunks = []
                        }
                        mediaRecorder.sendData = function(buffer) {
                            const bloddb = new Blob(buffer, { 'type': 'audio/wav' })
                            socket.emit('voice', bloddb)
                        }
                    };
                })
                .catch(err => {
                    console.log('The following error occurred: ' + err)
                })
        }
    </script>

    Task exception was never retrieved
    future: <Task finished coro=<InstrumentedAsyncServer._handle_event_internal()> exception=CouldntDecodeError(...)>
    Traceback (most recent call last):
      File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\socketio\async_admin.py", line 276, in _handle_event_internal
        ret = await self.sio.__handle_event_internal(server, sid, eio_sid,
      File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\socketio\async_server.py", line 597, in _handle_event_internal
        r = await server._trigger_event(data[0], namespace, sid, *data[1:])
      File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\socketio\async_server.py", line 635, in _trigger_event
        ret = handler(*args)
      File "f:\fastapi-socketio-wb38\Python-Javascript-Websocket-Video-Streaming--main\poom2.py", line 153, in handle_voice
        audio_segment = AudioSegment.from_file(BytesIO(data), format="webm")
      File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\pydub\audio_segment.py", line 773, in from_file
        raise CouldntDecodeError(
    pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 3199971767

    Output from ffmpeg/avlib:

    ffmpeg version 6.1.1-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
      built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
      configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
      libavutil      58. 29.100 / 58. 29.100
      libavcodec     60. 31.102 / 60. 31.102
      libavformat    60. 16.100 / 60. 16.100
      libavdevice    60.  3.100 / 60.  3.100
      libavfilter     9. 12.100 /  9. 12.100
      libswscale      7.  5.100 /  7.  5.100
      libswresample   4. 12.100 /  4. 12.100
      libpostproc    57.  3.100 / 57.  3.100
    [cache @ 000001d9828efe40] Inner protocol failed to seekback end : -40
    [matroska,webm @ 000001d9828efa00] EBML header parsing failed
    [cache @ 000001d9828efe40] Statistics, cache hits:0 cache misses:3
    [in#0 @ 000001d9828da3c0] Error opening input: Invalid data found when processing input
    Error opening input file cache:pipe:0.
    Error opening input files: Invalid data found when processing input


    I am using ffmpeg-6.1.1-full_build. I don't know why this error occurs: the stop button sends its event correctly, but the chunk data is not handled correctly on the Python server. My English is not good, sorry.
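
    The "EBML header parsing failed" line suggests that the bytes handed to AudioSegment.from_file are not a complete WebM file. With MediaRecorder, only the first ondataavailable chunk carries the container header, so a blob assembled only from later chunks (as happens here once chunks is reset after every batch of five) cannot be decoded on its own. A sketch of how to check this outside Python, assuming one of the received blobs has been written to a hypothetical file chunk.webm:

    # A standalone WebM file probes cleanly; a mid-stream MediaRecorder chunk saved
    # on its own fails with the same "EBML header parsing failed" error as above.
    ffprobe -hide_banner chunk.webm

    A commonly suggested workaround is to restart the MediaRecorder for each interval so that every emitted blob is a complete file, or to keep the very first chunk and prepend it to each batch before sending.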
