Advanced search

Media (91)

Other articles (8)

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphic-design applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it handles a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
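    As a hedged illustration (not part of the article, which presumably works in PHP): an XMP packet is plain XML embedded in the host file, so the raw packet can often be recovered simply by scanning for the <x:xmpmeta ...> ... </x:xmpmeta> markers. The Python sketch below assumes that layout; the file name is illustrative.

def extract_xmp_packet(path):
    """Return the first raw XMP packet found in the file, or None."""
    with open(path, "rb") as f:
        data = f.read()
    start = data.find(b"<x:xmpmeta")
    if start == -1:
        return None
    end = data.find(b"</x:xmpmeta>", start)
    if end == -1:
        return None
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", errors="replace")

# Illustrative usage:
# print(extract_xmp_packet("photo.jpg"))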

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (4151)

  • How to hold the mpegts stream for a few seconds?

    8 August 2024, by Aven

    How can I hold the mpegts stream for 5 seconds without changing the frames' PTS?

    


    I tried the tsduck command below:

    


    aven.cheng@tgcusers-MacBook-Pro ~ % tsp -I ip 235.35.3.5:3535 -P timeshift --time 5000 -O ip 235.35.3.5:3536

* Warning: timeshift: unknown initial bitrate, discarding packets until a valid bitrate can set the buffer size


    


    The stream on 235.35.3.5:3536 did come out 5 seconds after I executed the above command, but the frames' PTS on 235.35.3.5:3536 are no longer the same. For example, the PTS of my target frame is 10935000 on 235.35.3.5:3535, but that frame's PTS has changed to 10830542 on 235.35.3.5:3536.

    


    Is it possible to hold the stream for a few seconds while keeping the PTS unchanged? I don't necessarily have to use tsduck; I am open to using other tools.
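    For what it's worth, here is a minimal sketch (not from the question, and untested against this stream) of a transport-level delay: buffer the raw multicast datagrams for five seconds and resend them byte-for-byte, so the PTS inside the TS packets cannot change. The addresses and ports mirror those in the question; the multicast-join details assume a typical IPv4 setup and a socket buffer large enough for the stream's bitrate.

import socket
import struct
import time
from collections import deque

SRC_GROUP, SRC_PORT = "235.35.3.5", 3535   # input stream, as in the question
DST_GROUP, DST_PORT = "235.35.3.5", 3536   # delayed output, as in the question
DELAY_SECONDS = 5.0

# Receive socket joined to the source multicast group.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", SRC_PORT))
mreq = struct.pack("4s4s", socket.inet_aton(SRC_GROUP), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.setblocking(False)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)

buffer = deque()  # (arrival time, raw datagram)
while True:
    # Drain whatever is currently queued on the receive socket.
    while True:
        try:
            data, _ = rx.recvfrom(65536)
        except BlockingIOError:
            break
        buffer.append((time.monotonic(), data))
    # Re-emit each datagram untouched once it is DELAY_SECONDS old,
    # so the TS payload (and therefore every PTS/DTS) is unchanged.
    while buffer and time.monotonic() - buffer[0][0] >= DELAY_SECONDS:
        _, payload = buffer.popleft()
        tx.sendto(payload, (DST_GROUP, DST_PORT))
    time.sleep(0.001)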

    


  • FFmpeg "Non-monotonous DTS in output stream" error when processing video from Safari's MediaRecorder

    17 July 2024, by Hackermon

    I'm recording a video stream in Safari with MediaRecorder, then sending it to a remote server, which uses ffmpeg to reencode the video. When reencoding with FFmpeg, I get a lot of warnings and the final video is broken: frames are glitching and out of sync, but the audio sounds fine.

    


    Here's my MediaRecorder script:

    


    const camera = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
const recorder = new MediaRecorder(camera, {
    mimeType: 'video/mp4', // Safari only supports MP4
    bitsPerSecond: 1_000_000,
});

recorder.ondataavailable = async ({ data: blob }) => {
    // open the recorded chunk in a new tab
    const fileURL = URL.createObjectURL(blob);
    window.open(fileURL, '_blank');
};

recorder.start();
setTimeout(() => recorder.stop(), 5000);


    


    I download the video blob from Safari and use this command to reencode it:

    


    ffmpeg -i ./blob.mp4 -preset ultrafast -strict -2 -threads 10 -c copy ./output.mp4


    


    Logs:

    


    ffmpeg version 4.2.7-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
  configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x559826616000] DTS 29 < 313 out of order
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './chunk1.mp4':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    creation_time   : 2024-07-17T14:30:47.000000Z
  Duration: 00:00:01.00, start: 0.000000, bitrate: 3937 kb/s
    Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], 6218 kb/s, 33.36 fps, 600 tbr, 600 tbn, 1200 tbc (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Video
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 420 kb/s (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Audio
File './safari3.mp4' already exists. Overwrite ? [y/N] Output #0, mp4, to './safari3.mp4':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    encoder         : Lavf58.29.100
    Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], q=2-31, 6218 kb/s, 33.36 fps, 600 tbr, 19200 tbn, 600 tbc (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Video
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 420 kb/s (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Audio
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x559826644340] Non-monotonous DTS in output stream 0:0; previous: 10016, current: 928; changing to 10017. This may result in incorrect timestamps in the output file.
[mp4 @ 0x559826644340] Non-monotonous DTS in output stream 0:0; previous: 10017, current: 1568; changing to 10018. This may result in incorrect timestamps in the output file.
[mp4 @ 0x559826644340] Non-monotonous DTS in output stream 0:0; previous: 10018, current: 2208; changing to 10019. This may result in incorrect timestamps in the output file.
...x100
frame=  126 fps=0.0 q=-1.0 Lsize=     479kB time=00:00:00.97 bitrate=4026.2kbits/s speed= 130x    
video:425kB audio:51kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.652696%


    


    I'm not sure what's happening or how to fix it. This issue only happens in Safari; videos from Chrome are perfectly fine.

    


    I've tried various flags:

    


    -fflags +igndts
-bsf:a aac_adtstoasc
-c:v libvpx-vp9 -c:a libopus
etc


    


    None of them seem to fix the issue.
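    One point worth noting, as a hedged sketch rather than a verified fix: with -c copy the packets are only remuxed, so the out-of-order DTS coming from Safari's fragmented MP4 is carried straight into the output. Forcing an actual re-encode lets ffmpeg regenerate the timestamps. The flags below are standard ffmpeg options; the file names are illustrative.

import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-fflags", "+genpts",   # regenerate timestamps while reading the input
        "-i", "blob.mp4",       # illustrative input name
        "-c:v", "libx264",      # real re-encode instead of "-c copy"
        "-preset", "ultrafast",
        "-c:a", "aac",
        "output.mp4",
    ],
    check=True,
)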

    


  • FFmpeg eating up RAM in Railway deployment [Flask app]

    23 June 2024, by Eshan Das

    I created a Flask app (a meme generator) and host it on Railway using gunicorn. I suspect FFmpeg is eating most of the RAM.
    What I am doing is generating a TTS clip, adding text to one of the parts of the video, and then finally combining everything together.

    


from flask import Flask, request, jsonify, send_file, url_for
from moviepy.editor import VideoFileClip, ImageClip, CompositeVideoClip, AudioFileClip, concatenate_videoclips
from pydub import AudioSegment
from PIL import Image, ImageDraw, ImageFont
from gtts import gTTS
from flask_cors import CORS
import os

app = Flask(__name__)
CORS(app)
app.config['UPLOAD_FOLDER'] = 'uploads'

def generate_video(name, profile_image_path, song_path, start_time):
    first_video = VideoFileClip("first.mp4")
    second_video = VideoFileClip("second.mp4")
    third_video = VideoFileClip("third.mp4")

    #font_path = os.path.join("fonts", "arial.ttf")  # Updated font path
    font_size = 70
    font = ImageFont.load_default()
    text = name
    image_size = (second_video.w, second_video.h)
    text_image = Image.new('RGBA', image_size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(text_image)
    text_width, text_height = draw.textsize(text, font=font)
    text_position = ((image_size[0] - text_width) // 2, (image_size[1] - text_height) // 2)
    draw.text(text_position, text, font=font, fill="black")

    text_image_path = os.path.join(app.config['UPLOAD_FOLDER'], f"text_{name}.png")
    text_image.save(text_image_path)

    txt_clip = ImageClip(text_image_path, duration=second_video.duration)

    tts = gTTS(text=name, lang='en')
    audio_path = os.path.join(app.config['UPLOAD_FOLDER'], f"audio_{name}.wav")
    tts.save(audio_path)

    sound = AudioSegment.from_file(audio_path)
    chipmunk_sound = sound._spawn(sound.raw_data, overrides={
        "frame_rate": int(sound.frame_rate * 1.5)
    }).set_frame_rate(sound.frame_rate)

    chipmunk_audio_path = os.path.join(app.config['UPLOAD_FOLDER'], f"chipmunk_audio_{name}.wav")
    chipmunk_sound.export(chipmunk_audio_path, format="wav")

    audio_clip_second = AudioFileClip(chipmunk_audio_path)

    second_video = CompositeVideoClip([second_video, txt_clip.set_position((45, 170))])
    second_video = second_video.set_audio(audio_clip_second)

    song = AudioSegment.from_file(song_path)
    start_ms = start_time * 1000
    cropped_song = song[start_ms:start_ms + 20000]

    chipmunk_song = cropped_song._spawn(cropped_song.raw_data, overrides={
        "frame_rate": int(cropped_song.frame_rate * 1.5)
    }).set_frame_rate(cropped_song.frame_rate)

    chipmunk_song_path = os.path.join(app.config['UPLOAD_FOLDER'], f"chipmunk_song_{name}.wav")
    chipmunk_song.export(chipmunk_song_path, format="wav")

    audio_clip_third = AudioFileClip(chipmunk_song_path)
    third_video = third_video.set_audio(audio_clip_third)

    profile_image = ImageClip(profile_image_path).set_duration(first_video.duration).resize(height=first_video.h / 8).set_position((950, 500))
    first_video = CompositeVideoClip([first_video, profile_image])

    profile_image = ImageClip(profile_image_path).set_duration(second_video.duration).resize(height=second_video.h / 8).set_position((950, 500))
    second_video = CompositeVideoClip([second_video, profile_image])

    profile_image = ImageClip(profile_image_path).set_duration(third_video.duration).resize(height=third_video.h / 8).set_position((950, 500))
    third_video = CompositeVideoClip([third_video, profile_image])

    final_video = concatenate_videoclips([first_video, second_video, third_video])
    final_video = final_video.subclip(0, 10)

    output_path = os.path.join(app.config['UPLOAD_FOLDER'], f"output_{name}.mp4")
    final_video.write_videofile(output_path, codec="libx264", audio_codec='aac')

    final_video.close()
    first_video.close()
    second_video.close()
    third_video.close()
    audio_clip_second.close()
    audio_clip_third.close()

    os.remove(audio_path)
    os.remove(text_image_path)
    os.remove(chipmunk_song_path)
    os.remove(chipmunk_audio_path)

    return output_path

@app.route('/generate', methods=['POST'])
async def generate():
    if not os.path.exists(app.config['UPLOAD_FOLDER']):
        os.makedirs(app.config['UPLOAD_FOLDER'])

    name = request.form['name']
    start_time = float(request.form['start_time'])

    profile_image = request.files['profile_image']
    profile_image_path = os.path.join(app.config['UPLOAD_FOLDER'], profile_image.filename)
    profile_image.save(profile_image_path)

    song = request.files['song']
    song_path = os.path.join(app.config['UPLOAD_FOLDER'], song.filename)
    song.save(song_path)

    video_path = generate_video(name, profile_image_path, song_path, start_time)
    return jsonify({"video_url": url_for('uploaded_file', filename=os.path.basename(video_path))})

@app.route('/uploads/<filename>')
def uploaded_file(filename):
    path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
    return send_file(path, as_attachment=True)

if __name__ == "__main__":
    if not os.path.exists(app.config['UPLOAD_FOLDER']):
        os.makedirs(app.config['UPLOAD_FOLDER'])
    app.run(host='0.0.0.0', port=5000)


    I am trying to solve this issue. Is there any way to optimize my code, or any alternative to FFmpeg I could use, to reduce the RAM consumption?
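    A minimal sketch of one possible direction, assuming the three parts can be rendered one at a time with the same codec settings: write each composited part to disk on its own, close its clips immediately to release MoviePy's readers, and then join the parts with ffmpeg's concat demuxer (stream copy, so no second encode). The helper below and its file names are illustrative, not from the post, and stream copy requires the parts to share identical codec parameters.

import os
import subprocess
import tempfile

def concat_with_ffmpeg(part_paths, output_path):
    """Join already-encoded MP4 parts with ffmpeg's concat demuxer (stream copy)."""
    # The concat demuxer reads a small text file listing the inputs.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for p in part_paths:
            f.write(f"file '{os.path.abspath(p)}'\n")
        list_path = f.name
    try:
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
             "-i", list_path, "-c", "copy", output_path],
            check=True,
        )
    finally:
        os.remove(list_path)

# Illustrative usage: render part1/part2/part3 one at a time with
# write_videofile(..., codec="libx264", audio_codec="aac"), close each clip,
# then:
# concat_with_ffmpeg(["part1.mp4", "part2.mp4", "part3.mp4"], "output_final.mp4")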
