
Media (16)


Other articles (55)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The journey of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (6458)

  • Can I use the file buffer or stream as input for fluent-ffmpeg? I am trying to avoid saving the video locally just to get a path and then removing it

    22 April 2023, by Moath Thawahreh

    I receive the file via an API. I tried to pass file.buffer as the input to FFmpeg, but it did not work, so I had to save the video locally first, process the saved path, and remove the saved video afterwards.
I find it hard to believe there is no other way around this; I have been looking for solutions and workarounds, but everything I found takes a file path as the FFmpeg input.

    


    I would love a solution using fluent-ffmpeg, because it has some other great features, but I am open to suggestions for compressing the video with a different approach if that is more efficient.

    


    Again, my code below works fine, but I have to save the video and then remove it; I am hoping for a more efficient solution:

    


    fs.writeFileSync('temp.mp4', file.buffer);

    // Resize the temporary file using ffmpeg
    ffmpeg('temp.mp4') // here I tried to pass file.buffer as a readable stream, it receives paths only
      .format('mp4')
      .size('50%')
      .save('resized.mp4')
      .on('end', async () => {
        // Upload the resized file to Firebase
        const resizedFileStream = bucket.file(`video/${uniqueId}`).createWriteStream();
        fs.createReadStream('resized.mp4').pipe(resizedFileStream);

        await new Promise<void>((resolve, reject) => {
          resizedFileStream
            .on('finish', () => {
              // Remove the local files after they have been uploaded
              fs.unlinkSync('temp.mp4');
              fs.unlinkSync('resized.mp4');
              resolve();
            })
            .on('error', reject);
        });

        // Get the URL of the uploaded resized version
        const resizedFile = bucket.file(`video/${uniqueId}`);
        const url = await resizedFile.getSignedUrl({
          action: 'read',
          expires: '03-17-2025', // Change this to a reasonable expiration date
        });

        console.log('Resized file uploaded successfully.');
      })
      .on('error', (err) => {
        console.log('An error occurred: ' + err.message);
      });


  • FFmpeg get frame rate

    22 September 2021, by zhin dins

    I have several images and I am showing each one for 78.7 ms, recreating a sort of 80s video effect. But I cannot find the correct per-frame duration, and the images are out of sync with the original video.


    I dumped the video to images using this command: ffmpeg -i *.mp4 the80effect/img-%d.jpg. I now have 48622 frames. The video FPS is 24.


    So, 48622/24 = 2025 or so. I cannot use 2025 ms, since those images would load very slowly, and the approximate value I am using is 78.7 ms per frame/image.


    How can I find the correct value? The video duration is 2026 seconds. I have tried all the maths to find this but I am failing. How many images (one frame) per ms? Could you help me? Thank you.
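    As a sanity check, the arithmetic above can be worked through directly (a minimal sketch using only the figures quoted in the question):

```python
frames = 48622      # frames dumped by ffmpeg
fps = 24            # video frame rate
duration_s = 2026   # reported video duration in seconds

# The frame count is consistent with the reported duration:
print(frames / fps)        # ~2025.9 seconds

# The display time per frame is 1000 ms divided by the frame rate:
ms_per_frame = 1000 / fps
print(ms_per_frame)        # ~41.67 ms per frame
```

    At 24 fps each frame should be shown for roughly 41.67 ms; a 78.7 ms interval plays the frames at close to half speed, which would explain the drift against the original video.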


  • NumPy array of a video changes from the original after writing into the same video

    29 March 2021, by Rashiq

    I have a video (test.mkv) that I have converted into a 4D NumPy array (frame, height, width, color_channel). I have even managed to convert that array back into a video of the same kind (test_2.mkv) without altering anything. However, after reading this new test_2.mkv back into a new NumPy array, the first video's array differs from the second's: their hashes don't match and numpy.array_equal() returns False. I have tried both python-ffmpeg and scikit-video but cannot get the arrays to match.
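    One step worth ruling out first (a minimal self-contained sketch with synthetic data, independent of the files above): the NumPy array-to-bytes round trip itself is exact, so any mismatch has to be introduced by the encode/decode stage, since writing an mkv re-encodes the frames with a codec that is typically lossy by default.

```python
import numpy as np

# Synthetic "video": 4 frames of 8x8 RGB, same dtype/layout as the real data
video = np.random.randint(0, 256, size=(4, 8, 8, 3), dtype=np.uint8)

# The same flatten/tobytes/frombuffer/reshape round trip used below
buf = np.ndarray.flatten(video).tobytes()
video_back = np.frombuffer(buf, np.uint8).reshape([-1, 8, 8, 3])

print(np.array_equal(video, video_back))  # True: this step is lossless
```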


    Python-ffmpeg attempt:


    import ffmpeg
    import numpy as np
    import hashlib

    file_name = 'test.mkv'

    # Get video dimensions and framerate
    probe = ffmpeg.probe(file_name)
    video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
    width = int(video_stream['width'])
    height = int(video_stream['height'])
    frame_rate = video_stream['avg_frame_rate']

    # Read video into buffer
    out, error = (
        ffmpeg
            .input(file_name, threads=120)
            .output("pipe:", format='rawvideo', pix_fmt='rgb24')
            .run(capture_stdout=True)
    )

    # Convert video buffer to array
    video = (
        np
            .frombuffer(out, np.uint8)
            .reshape([-1, height, width, 3])
    )

    # Convert array to buffer
    video_buffer = (
        np.ndarray
            .flatten(video)
            .tobytes()
    )

    # Write buffer back into a video
    process = (
        ffmpeg
            .input('pipe:', format='rawvideo', s='{}x{}'.format(width, height))
            .output("test_2.mkv", r=frame_rate)
            .overwrite_output()
            .run_async(pipe_stdin=True)
    )
    process.communicate(input=video_buffer)

    # Read the newly written video
    out_2, error = (
        ffmpeg
            .input("test_2.mkv", threads=40)
            .output("pipe:", format='rawvideo', pix_fmt='rgb24')
            .run(capture_stdout=True)
    )

    # Convert new video into array
    video_2 = (
        np
            .frombuffer(out_2, np.uint8)
            .reshape([-1, height, width, 3])
    )

    # Video dimensions change
    print(f'{video.shape} vs {video_2.shape}') # (844, 1080, 608, 3) vs (2025, 1080, 608, 3)
    print(f'{np.array_equal(video, video_2)}') # False

    # Hashes don't match
    print(hashlib.sha256(bytes(video_2)).digest()) # b'\x88\x00\xc8\x0ed\x84!\x01\x9e\x08 \xd0U\x9a(\x02\x0b-\xeeA\xecU\xf7\xad0xa\x9e\\\xbck\xc3'
    print(hashlib.sha256(bytes(video)).digest()) # b'\x9d\xc1\x07xh\x1b\x04I\xed\x906\xe57\xba\xf3\xf1k\x08\xfa\xf1\xfaM\x9a\xcf\xa9\t8\xf0\xc9\t\xa9\xb7'


    Scikit-video attempt:


    import skvideo.io as sk
    import numpy as np
    import hashlib

    video_data = sk.vread('test.mkv')

    sk.vwrite('test_2_ski.mkv', video_data)

    video_data_2 = sk.vread('test_2_ski.mkv')

    # Dimensions match but...
    print(video_data.shape) # (844, 1080, 608, 3)
    print(video_data_2.shape) # (844, 1080, 608, 3)

    # ...array elements don't
    print(np.array_equal(video_data, video_data_2)) # False

    # Hashes don't match either
    print(hashlib.sha256(bytes(video_data_2)).digest()) # b'\x8b?]\x8epD:\xd9B\x14\xc7\xba\xect\x15G\xfaRP\xde\xad&EC\x15\xc3\x07\n{a[\x80'
    print(hashlib.sha256(bytes(video_data)).digest()) # b'\x9d\xc1\x07xh\x1b\x04I\xed\x906\xe57\xba\xf3\xf1k\x08\xfa\xf1\xfaM\x9a\xcf\xa9\t8\xf0\xc9\t\xa9\xb7'


    I don't understand where I'm going wrong, and neither library's documentation highlights how to do this particular task. Any help is appreciated. Thank you.
