
Other articles (82)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as OGV and WebM (supported by HTML5) and as MP4 (supported by Flash).
    Audio files are encoded as Ogg (supported by HTML5) and as MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
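
    A conversion pipeline of this kind could be driven by ffmpeg. As a purely illustrative sketch (the article does not show MediaSPIP's actual commands, and the input name upload.avi is hypothetical):

     # Hypothetical commands producing the three target formats
     ffmpeg -i upload.avi -c:v libtheora -c:a libvorbis upload.ogv
     ffmpeg -i upload.avi -c:v libvpx -c:a libvorbis upload.webm
     ffmpeg -i upload.avi -c:v libx264 -c:a aac upload.mp4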

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work in Internet Explorer
    In Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may lie in the configuration of Apache's mod_deflate module.
    If that Apache module's configuration contains a line resembling the one below, try removing or commenting it out to see whether the player then works correctly: (...)
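
    For illustration, a mod_deflate directive of the kind that can interfere with Flash players might look like the following (a hypothetical example; the article's exact line is truncated above):

     # Hypothetical: compressing Flash content can break Flowplayer in IE
     AddOutputFilterByType DEFLATE text/html text/css application/x-shockwave-flash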

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (5882)

  • ffmpeg: Render webm from stdin using NodeJS

    2 June 2015, by Vinicius Tavares

    I’m having an issue trying to pipe JPEG frames, created on the fly in NodeJS, into ffmpeg in order to create a WebM video.

    The script attempts to do the following:

    • Fork a new ffmpeg process on initialization.
    • Render a canvas.
    • Once the data in the canvas is updated, grab JPEG data from it.
    • Pipe the JPEG data into ffmpeg's stdin.
    • ffmpeg takes care of appending it to a WebM video file.
    • This goes on forever; ffmpeg should never stop.

    It should be an ever-growing video, broadcast live to all connected clients, but the result I get is just a single-frame WebM.

    Here is the ffmpeg fork

    var args = '-f image2pipe -r 15 -vcodec mjpeg -s 160x144 -i - -f webm -r 15 test.webm'.split(' ');
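     // spawn ffmpeg reading MJPEG frames from stdin at 15 fps, encoding to WebM in test.webm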
    var encoder = spawn('ffmpeg', args);
    encoder.stderr.pipe(process.stdout);

    Here is the canvas update and pipe

    theCanvas.on('draw', function () {
       var readStream = self.canvas.jpegStream();
       readStream.pipe(self.encoder.stdin);
    });
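
    A note on a likely culprit (my assumption, not something stated in the original post): by default readable.pipe() ends the destination when the source stream ends, so when the first jpegStream finishes it also closes ffmpeg's stdin, and ffmpeg sees end-of-input after a single frame. Passing { end: false } keeps stdin open across frames:

    theCanvas.on('draw', function () {
       // end: false stops pipe() from closing ffmpeg's stdin when this
       // jpegStream ends, so later frames can still be written
       var readStream = self.canvas.jpegStream();
       readStream.pipe(self.encoder.stdin, { end: false });
    });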

    ffmpeg output

    ffmpeg version 1.2.6-7:1.2.6-1~trusty1 Copyright (c) 2000-2014 the FFmpeg developers
     built on Apr 26 2014 18:52:58 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
     configuration: --arch=amd64 --disable-stripping --enable-avresample --enable-pthreads --enable-runtime-cpudetect --extra-version='7:1.2.6-1~trusty1' --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --enable-bzlib --enable-libdc1394 --enable-libfreetype --enable-frei0r --enable-gnutls --enable-libgsm --enable-libmp3lame --enable-librtmp --enable-libopencv --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-vaapi --enable-vdpau --enable-libvorbis --enable-libvpx --enable-zlib --enable-gpl --enable-postproc --enable-libcdio --enable-x11grab --enable-libx264 --shlibdir=/usr/lib/x86_64-linux-gnu --enable-shared --disable-static
     libavutil      52. 18.100 / 52. 18.100
     libavcodec     54. 92.100 / 54. 92.100
     libavformat    54. 63.104 / 54. 63.104
     libavdevice    53.  5.103 / 53.  5.103
     libavfilter     3. 42.103 /  3. 42.103
     libswscale      2.  2.100 /  2.  2.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  2.100 / 52.  2.100
    [image2pipe @ 0xee0740] Estimating duration from bitrate, this may be inaccurate
    Input #0, image2pipe, from 'pipe:':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: mjpeg, yuvj420p, 160x144 [SAR 1:1 DAR 10:9], 15 tbr, 15 tbn, 15 tbc
    [libvpx @ 0xec5d00] v1.3.0
    Output #0, webm, to 'test.webm':
     Metadata:
       encoder         : Lavf54.63.104
       Stream #0:0: Video: vp8, yuv420p, 160x144 [SAR 1:1 DAR 10:9], q=-1--1, 200 kb/s, 1k tbn, 15 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (mjpeg -> libvpx)
    pipe:: Input/output error
    frame=    1 fps=0.0 q=0.0 Lsize=      12kB time=00:00:00.06 bitrate=1441.1kbits/s    
    video:11kB audio:0kB subtitle:0 global headers:0kB muxing overhead 4.195804%

    What can I do?

    Thanks,
    Vinicius

  • "Critical error detected c0000374" when running av_packet_unref or av_frame_unref

    15 May 2021, by Shivang Sharma

    I am trying to read and decode frames, which works fine, but when execution reaches the part that un-references the frame or packet with av_packet_unref and av_frame_unref, it crashes, sometimes on the second frame and sometimes on the third.

    Error (copied from the Visual Studio output window):

    Critical error detected c0000374
    Libav.exe has triggered a breakpoint

    Here is the reading/decoding code that produces the error:

    void Decoder::streamNextFrame(int type = 0)
{
    while (av_read_frame(this->fileFormatCtx, this->latestpacket) >= 0) {
        if (this->audioDecoder->activeAudioStream != nullptr) {
            if (this->latestpacket->stream_index == this->audioDecoder->activeAudioStream->index) {
                avcodec_send_packet(this->audioDecoder->activeStreamDecoder, this->latestpacket);
                err = avcodec_receive_frame(this->audioDecoder->activeStreamDecoder, this->decodedFrame);
                if (err == AVERROR(EAGAIN)) {
                    av_frame_unref(this->decodedFrame);
                    av_packet_unref(this->latestpacket);
                    continue;
                }
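                // NOTE: err is re-checked for AVERROR_EOF only further below,
                // after the frame has already been consumed; on EOF or another
                // error, the stale frame is still processed by the block below.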

                {
                    int currentIndex = (int)this->audioFrames->size();
                    this->audioFrames->resize((int)this->audioFrames->size() + 1);
                    int nb = 0;
                    this->audioFrames->at(currentIndex).pts = (int)this->decodedFrame->pts;
                    if (this->audioDecoder->activeStreamDecoder->sample_fmt != AV_SAMPLE_FMT_S16) {
                        nb = 2048 * this->audioDecoder->activeStreamDecoder->channels;
                        printf("%i\n", nb);
                        this->audioFrames->at(currentIndex).data.resize(nb);
                        if (!swr_is_initialized(swr)) {

                            swr_alloc_set_opts(swr, this->audioDecoder->activeStreamDecoder->channel_layout, AV_SAMPLE_FMT_S16, this->audioDecoder->activeStreamDecoder->sample_rate, this->audioDecoder->activeStreamDecoder->channel_layout, this->audioDecoder->activeStreamDecoder->sample_fmt, this->audioDecoder->activeStreamDecoder->sample_rate, 0, nullptr);
                            swr_init(swr);
                        }

                        uint8_t* buffer = this->audioFrames->at(currentIndex).data.data();
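                        // NOTE: swr_convert()'s third argument is the output
                        // capacity in samples per channel, but nb here is a
                        // byte count (2048 * channels), so swr_convert() can
                        // write past the end of `data`; a plausible source of
                        // the c0000374 heap corruption.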
                        swr_convert(swr, &buffer, nb, (const uint8_t**)this->decodedFrame->extended_data, this->decodedFrame->nb_samples);
                    }
                    else {
                        nb = this->decodedFrame->nb_samples * this->audioDecoder->activeStreamDecoder->channels;
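                        // NOTE: the (uint8_t)nb cast below truncates nb to the
                        // range 0..255, so the wrong number of bytes is copied
                        // (and for packed S16 the byte count is nb * 2 anyway).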
                        this->audioFrames->at(currentIndex).data = std::vector(*this->decodedFrame->extended_data, *this->decodedFrame->extended_data + (uint8_t)nb);
                    }

                    this->audioFrames->at(currentIndex).buffersize = nb;
                }

                if (err == AVERROR_EOF) {
                    this->audioDecoder->streamEnded = true;
                    av_frame_unref(this->decodedFrame);
                    av_packet_unref(this->latestpacket);
                    break;
                }
                else if (err >= 0) {
                    this->audioDecoder->streamEnded = false;
                }

                if (type != 0) {
                    av_packet_unref(this->latestpacket);
                    av_frame_unref(this->decodedFrame);
                    break;
                }
                av_packet_unref(this->latestpacket);
                av_frame_unref(this->decodedFrame);
            }
        }
        else {
            printf("No active audio stream is set\n");
            if (type != 0)
                break;
        }
    }

}

    I have removed some of the code, which concerned the video stream and, I think, was not causing the problem.

    Some information about the code above:

    this->audioFrames is a pointer to a vector of type std::vector<AudioFrameFormat>*, where AudioFrameFormat is a struct defined as follows:

    struct AudioFrameFormat {
        std::vector<uint8_t> data = {};
        int pts = 0;
        int buffersize = 0;
    };

    swr is a private class member allocated in the constructor.

    &#xA;

    The call stack looks like this:

    [Screenshot: call stack]

    What I gather from the call stack is that I am not taking care of my heap memory.

    Can someone please explain where the problem is, and why it sometimes runs until the third frame and sometimes only until the second frame of the audio stream?

    And please tell me how I can improve this code.
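
    If the capacity mismatch flagged in the comments above is the culprit, one possible shape of a fix (a sketch under that assumption, using only documented FFmpeg calls, not a drop-in replacement):

    // Size the output from the frame's sample count, then pass the
    // per-channel capacity (in samples, not bytes) to swr_convert().
    int out_bytes = av_samples_get_buffer_size(
        nullptr, this->audioDecoder->activeStreamDecoder->channels,
        this->decodedFrame->nb_samples, AV_SAMPLE_FMT_S16, 1);
    this->audioFrames->at(currentIndex).data.resize(out_bytes);
    uint8_t* buffer = this->audioFrames->at(currentIndex).data.data();
    int converted = swr_convert(
        swr, &buffer, this->decodedFrame->nb_samples,
        (const uint8_t**)this->decodedFrame->extended_data,
        this->decodedFrame->nb_samples);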


  • Running ffmpeg (WASM/NodeJS) on multiple input files in a React App

    17 September 2024, by FlipFloop

    I recently followed a tutorial by Fireship.io on making a React app that lets a user upload a video file and convert it into a GIF. Here is the source GitHub repo.

    The packages used by the project are @ffmpeg/ffmpeg and @ffmpeg/core, which take care of converting the video into a GIF (although the target format can be changed to whatever you like, as with the FFmpeg CLI tool).

    I wanted to take this a step further and make it possible to convert multiple videos at once, each into its own separate GIF. However, I am having trouble starting the next task when the first one has finished.

    Here is the documentation I found for the ffmpeg WASM package. I also read this example, provided by the package authors, of producing multiple outputs from a single input file.

    Here is my code (App.jsx):

    import { useState, useEffect } from 'react';
    import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

    const ffmpeg = createFFmpeg({ log: true });

    function App() {
        const [ready, setReady] = useState(false);
        const [videos, setVideos] = useState([]);
        const [gifs, setGifs] = useState([]);

        const load = async () => {
            await ffmpeg.load();
            setReady(true);
        };

        useEffect(() => {
            load();
        }, []);

        const onInputChange = (e) => {
            for (let i = 0; i < e.target.files.length; i++) {
                const newVideo = e.target.files[i];
                setVideos((videos) => [...videos, newVideo]);
            }
        };

        const batchConvert = async (video) => {
            const name = video.name.split('.mp4').join('');

            ffmpeg.FS('writeFile', name + '.mp4', await fetchFile(video));
            await ffmpeg.run(
                '-i',
                name + '.mp4',
                '-f',
                'gif',
                name + '.gif',
            );

            const data = ffmpeg.FS('readFile', name + '.gif');

            const url = URL.createObjectURL(
                new Blob([data.buffer], { type: 'image/gif' }),
            );

            setGifs((gifs) => [...gifs, url]);
        };

        const convertToGif = async () => {
            videos.forEach((video) => {
                batchConvert(video);
            });
        };

        return ready ? (
            <div className="App">
                {videos &&
                    videos.map((video) => (
                        <video controls width="250" src={URL.createObjectURL(video)}></video>
                    ))}

                <input type="file" multiple onChange={onInputChange} />

                {videos && <button onClick={convertToGif}>Convert to Gif</button>}

                {gifs && (
                    <div>
                        <h3>Result</h3>
                        {gifs.map((gif) => (
                            <img src={gif} width="250" />
                        ))}
                    </div>
                )}
            </div>
        ) : (
            <p>Loading...</p>
        );
    }

    export default App;


    The error I am getting is along the lines of "Cannot run multiple instances of FFmpeg at once", which I understand; however, I have no idea how to make batchConvert run only one instance at a time, whether that is handled outside or inside the function.
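
    One way to enforce that (a sketch of the general approach, not tested against these exact packages): make convertToGif await each conversion before starting the next, so only one ffmpeg.run() is ever in flight:

    const convertToGif = async () => {
        // Awaiting inside a for...of loop serializes the conversions;
        // forEach fires all the async calls at once and awaits none of them.
        for (const video of videos) {
            await batchConvert(video);
        }
    };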

    Thank you!